From: Lu Baolu <baolu.lu@linux.intel.com>
To: Jason Gunthorpe <jgg@ziepe.ca>, Kevin Tian <kevin.tian@intel.com>,
Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
Robin Murphy <robin.murphy@arm.com>,
Jean-Philippe Brucker <jean-philippe@linaro.org>,
Nicolin Chen <nicolinc@nvidia.com>, Yi Liu <yi.l.liu@intel.com>,
Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: iommu@lists.linux.dev, linux-kselftest@vger.kernel.org,
virtualization@lists.linux-foundation.org,
linux-kernel@vger.kernel.org, Lu Baolu <baolu.lu@linux.intel.com>
Subject: [RFC PATCHES 02/17] iommu: Support asynchronous I/O page fault response
Date: Tue, 30 May 2023 13:37:09 +0800
Message-ID: <20230530053724.232765-3-baolu.lu@linux.intel.com>
In-Reply-To: <20230530053724.232765-1-baolu.lu@linux.intel.com>
Add a new page response code, IOMMU_PAGE_RESP_ASYNC, to indicate that
the domain's page fault handler doesn't respond to the hardware
immediately, but does so asynchronously.
The use case for this response code is nested translation, where the
first-stage page table is owned by the VM guest. Any page fault on it
should be propagated to the guest, and the response will be delivered
later from a different thread context.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
include/linux/iommu.h | 2 ++
drivers/iommu/io-pgfault.c | 3 ++-
2 files changed, 4 insertions(+), 1 deletion(-)
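
As an illustration of the intended flow (not part of this patch): a
domain's iopf handler could forward the fault toward the guest and
return the new code, and a different thread would complete the fault
group later through iommu_page_response(). In the sketch below, only
IOMMU_PAGE_RESP_ASYNC and iommu_page_response() come from this series;
guest_iopf_handler(), guest_iopf_respond() and forward_fault_to_guest()
are made-up names for illustration.

	/*
	 * Sketch only: defer the hardware response.  Returning
	 * IOMMU_PAGE_RESP_ASYNC makes iopf_handler() skip
	 * iopf_complete_group() for this fault group.
	 */
	static enum iommu_page_response_code
	guest_iopf_handler(struct iommu_fault *fault, void *data)
	{
		/* Queue the fault toward the guest (placeholder). */
		forward_fault_to_guest(fault, data);
		return IOMMU_PAGE_RESP_ASYNC;
	}

	/* Later, in another thread, once the guest has resolved it: */
	static void guest_iopf_respond(struct device *dev,
				       struct iommu_fault *fault,
				       enum iommu_page_response_code code)
	{
		struct iommu_page_response resp = {
			.argsz	 = sizeof(resp),
			.version = IOMMU_PAGE_RESP_VERSION_1,
			.pasid	 = fault->prm.pasid,
			.grpid	 = fault->prm.grpid,
			.code	 = code,
		};

		if (fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID)
			resp.flags = IOMMU_PAGE_RESP_PASID_VALID;

		iommu_page_response(dev, &resp);
	}
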
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index d6a93de7d1dd..fce7ad81206f 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -164,11 +164,13 @@ struct iommu_fault {
  *	this device if possible. This is "Response Failure" in PCI PRI.
  * @IOMMU_PAGE_RESP_INVALID: Could not handle this fault, don't retry the
  *	access. This is "Invalid Request" in PCI PRI.
+ * @IOMMU_PAGE_RESP_ASYNC: Will respond later by calling iommu_page_response().
  */
 enum iommu_page_response_code {
 	IOMMU_PAGE_RESP_SUCCESS = 0,
 	IOMMU_PAGE_RESP_INVALID,
 	IOMMU_PAGE_RESP_FAILURE,
+	IOMMU_PAGE_RESP_ASYNC,
 };
 
 /**
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index e5b8b9110c13..83f8055a0e09 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -96,7 +96,8 @@ static void iopf_handler(struct work_struct *work)
 			kfree(iopf);
 	}
 
-	iopf_complete_group(group->dev, &group->last_fault, status);
+	if (status != IOMMU_PAGE_RESP_ASYNC)
+		iopf_complete_group(group->dev, &group->last_fault, status);
 	kfree(group);
 }
--
2.34.1