* [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device
@ 2024-07-18 8:16 Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec Zhenzhong Duan
` (16 more replies)
0 siblings, 17 replies; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan
Hi,
Per Jason Wang's suggestion, the iommufd nesting series [1] has been split
into an "Enable stage-1 translation for emulated device" series and an
"Enable stage-1 translation for passthrough device" series.
This series enables stage-1 translation support for emulated devices in
the Intel IOMMU, in what we call "modern" mode.
PATCH1-5: Preparatory work before supporting stage-1 translation
PATCH6-8: Implement stage-1 translation for emulated device
PATCH9-14: Emulate iotlb invalidation of stage-1 mapping
PATCH15: Set default aw_bits to 48 in scalable modern mode
PATCH16: Introduce "modern" mode to distinguish from legacy mode
PATCH17: Add qtest
Note that spec revision 3.4 renames "First-level" to "First-stage" and
"Second-level" to "Second-stage", but scalable mode was added before that
change. So we keep the old flavor (First-level/fl, Second-level/sl) in
code, while using stage-1/stage-2 in commit logs.
Keep in mind that First-level/fl/stage-1 all have the same meaning, and
likewise Second-level/sl/stage-2.
QEMU code can be found at [2].
[1] https://lists.gnu.org/archive/html/qemu-devel/2024-01/msg02740.html
[2] https://github.com/yiliu1765/qemu/tree/zhenzhong/iommufd_stage1_emu_v1
Thanks
Zhenzhong
Changelog:
v1:
- define VTD_HOST_AW_AUTO (Clement)
- passing pgtt as a parameter to vtd_update_iotlb (Clement)
- prefix sl_/fl_ to second/first level specific functions (Clement)
- pick reserved bit check from Clement, add his Co-developed-by
- Update test without using libqtest-single.h (Thomas)
rfcv2:
- split from nesting series (Jason)
- merged some commits from Clement
- add qtest (Jason)
Clément Mathieu--Drif (5):
intel_iommu: Check if the input address is canonical
intel_iommu: Set accessed and dirty bits during first stage
translation
intel_iommu: Extract device IOTLB invalidation logic
intel_iommu: Add an internal API to find an address space with PASID
intel_iommu: Add support for PASID-based device IOTLB invalidation
Yi Liu (3):
intel_iommu: Rename slpte to pte
intel_iommu: Implement stage-1 translation
intel_iommu: Modify x-scalable-mode to be string option
Yu Zhang (1):
intel_iommu: Use the latest fault reasons defined by spec
Zhenzhong Duan (8):
intel_iommu: Make pasid entry type check accurate
intel_iommu: Add a placeholder variable for scalable modern mode
intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb
invalidation
intel_iommu: Flush stage-1 cache in iotlb invalidation
intel_iommu: Process PASID-based iotlb invalidation
intel_iommu: piotlb invalidation should notify unmap
intel_iommu: Set default aw_bits to 48 in scalable modern mode
tests/qtest: Add intel-iommu test
MAINTAINERS | 1 +
hw/i386/intel_iommu_internal.h | 90 +++-
include/hw/i386/intel_iommu.h | 8 +-
hw/i386/intel_iommu.c | 742 +++++++++++++++++++++++++++------
tests/qtest/intel-iommu-test.c | 71 ++++
tests/qtest/meson.build | 1 +
6 files changed, 764 insertions(+), 149 deletions(-)
create mode 100644 tests/qtest/intel-iommu-test.c
--
2.34.1
^ permalink raw reply [flat|nested] 50+ messages in thread
* [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
2024-07-29 7:39 ` Yi Liu
2024-07-18 8:16 ` [PATCH v1 02/17] intel_iommu: Make pasid entry type check accurate Zhenzhong Duan
` (15 subsequent siblings)
16 siblings, 2 replies; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Yu Zhang, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
From: Yu Zhang <yu.c.zhang@linux.intel.com>
Spec revision 3.0 and above define more detailed fault reasons for
scalable mode, so introduce them into the emulation code; see spec
section 7.1.2 for details.
Note that the spec revision has no relation to the VERSION register; a
guest kernel should not use that register to judge which features are
supported. Instead, the CAP/ECAP bits should be checked.
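A minimal sketch (not part of this patch) of the intended guest-side check:
decide scalable-mode support from ECAP.SMTS (bit 43, cf. VTD_ECAP_SMTS in the
emulation) rather than from the VERSION register. The helper name is
hypothetical.

#include <stdbool.h>
#include <stdint.h>

/* Probe scalable-mode support from the ECAP register value, not VER_REG. */
static bool vtd_guest_has_scalable_mode(uint64_t ecap)
{
    return !!(ecap & (1ULL << 43));   /* ECAP.SMTS */
}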
Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 9 ++++++++-
hw/i386/intel_iommu.c | 25 ++++++++++++++++---------
2 files changed, 24 insertions(+), 10 deletions(-)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index f8cf99bddf..c0ca7b372f 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -311,7 +311,14 @@ typedef enum VTDFaultReason {
* request while disabled */
VTD_FR_IR_SID_ERR = 0x26, /* Invalid Source-ID */
- VTD_FR_PASID_TABLE_INV = 0x58, /*Invalid PASID table entry */
+ /* PASID directory entry access failure */
+ VTD_FR_PASID_DIR_ACCESS_ERR = 0x50,
+ /* The Present(P) field of pasid directory entry is 0 */
+ VTD_FR_PASID_DIR_ENTRY_P = 0x51,
+ VTD_FR_PASID_TABLE_ACCESS_ERR = 0x58, /* PASID table entry access failure */
+ /* The Present(P) field of pasid table entry is 0 */
+ VTD_FR_PASID_ENTRY_P = 0x59,
+ VTD_FR_PASID_TABLE_ENTRY_INV = 0x5b, /*Invalid PASID table entry */
/* Output address in the interrupt address range for scalable mode */
VTD_FR_SM_INTERRUPT_ADDR = 0x87,
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 37c21a0aec..e65f5b29a5 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -796,7 +796,7 @@ static int vtd_get_pdire_from_pdir_table(dma_addr_t pasid_dir_base,
addr = pasid_dir_base + index * entry_size;
if (dma_memory_read(&address_space_memory, addr,
pdire, entry_size, MEMTXATTRS_UNSPECIFIED)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_DIR_ACCESS_ERR;
}
pdire->val = le64_to_cpu(pdire->val);
@@ -814,6 +814,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
dma_addr_t addr,
VTDPASIDEntry *pe)
{
+ uint8_t pgtt;
uint32_t index;
dma_addr_t entry_size;
X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
@@ -823,7 +824,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
addr = addr + index * entry_size;
if (dma_memory_read(&address_space_memory, addr,
pe, entry_size, MEMTXATTRS_UNSPECIFIED)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_TABLE_ACCESS_ERR;
}
for (size_t i = 0; i < ARRAY_SIZE(pe->val); i++) {
pe->val[i] = le64_to_cpu(pe->val[i]);
@@ -831,11 +832,13 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
/* Do translation type check */
if (!vtd_pe_type_check(x86_iommu, pe)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_TABLE_ENTRY_INV;
}
- if (!vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
- return -VTD_FR_PASID_TABLE_INV;
+ pgtt = VTD_PE_GET_TYPE(pe);
+ if (pgtt == VTD_SM_PASID_ENTRY_SLT &&
+ !vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
+ return -VTD_FR_PASID_TABLE_ENTRY_INV;
}
return 0;
@@ -876,7 +879,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
}
if (!vtd_pdire_present(&pdire)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_DIR_ENTRY_P;
}
ret = vtd_get_pe_from_pdire(s, pasid, &pdire, pe);
@@ -885,7 +888,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
}
if (!vtd_pe_present(pe)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_ENTRY_P;
}
return 0;
@@ -938,7 +941,7 @@ static int vtd_ce_get_pasid_fpd(IntelIOMMUState *s,
}
if (!vtd_pdire_present(&pdire)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_DIR_ENTRY_P;
}
/*
@@ -1795,7 +1798,11 @@ static const bool vtd_qualified_faults[] = {
[VTD_FR_ROOT_ENTRY_RSVD] = false,
[VTD_FR_PAGING_ENTRY_RSVD] = true,
[VTD_FR_CONTEXT_ENTRY_TT] = true,
- [VTD_FR_PASID_TABLE_INV] = false,
+ [VTD_FR_PASID_DIR_ACCESS_ERR] = false,
+ [VTD_FR_PASID_DIR_ENTRY_P] = true,
+ [VTD_FR_PASID_TABLE_ACCESS_ERR] = false,
+ [VTD_FR_PASID_ENTRY_P] = true,
+ [VTD_FR_PASID_TABLE_ENTRY_INV] = true,
[VTD_FR_SM_INTERRUPT_ADDR] = true,
[VTD_FR_MAX] = false,
};
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 02/17] intel_iommu: Make pasid entry type check accurate
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-18 9:06 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode Zhenzhong Duan
` (14 subsequent siblings)
16 siblings, 1 reply; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
When the guest configures Nested Translation (011b) or First-stage
Translation only (001b), the type check passes when it should not.
Fail the type check in those cases, as their emulation isn't supported yet.
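For reference, the PGTT encodings involved (scalable-mode PASID table entry
PGTT field in the VT-d spec); the enum below is only an illustration that
mirrors the VTD_SM_PASID_ENTRY_* names used in the diff, not code added by
this patch.

enum pgtt_encoding {
    PGTT_FLT    = 1, /* 001b: first-stage only  - reject until emulated */
    PGTT_SLT    = 2, /* 010b: second-stage only - emulated today */
    PGTT_NESTED = 3, /* 011b: nested            - reject until emulated */
    PGTT_PT     = 4, /* 100b: pass-through      - allowed if PT is supported */
};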
Fixes: fb43cf739e1 ("intel_iommu: scalable mode emulation")
Suggested-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index e65f5b29a5..1cff8b00ae 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -759,20 +759,16 @@ static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
VTDPASIDEntry *pe)
{
switch (VTD_PE_GET_TYPE(pe)) {
- case VTD_SM_PASID_ENTRY_FLT:
case VTD_SM_PASID_ENTRY_SLT:
- case VTD_SM_PASID_ENTRY_NESTED:
- break;
+ return true;
case VTD_SM_PASID_ENTRY_PT:
- if (!x86_iommu->pt_supported) {
- return false;
- }
- break;
+ return x86_iommu->pt_supported;
+ case VTD_SM_PASID_ENTRY_FLT:
+ case VTD_SM_PASID_ENTRY_NESTED:
default:
/* Unknown type */
return false;
}
- return true;
}
static inline bool vtd_pdire_present(VTDPASIDDirEntry *pdire)
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 02/17] intel_iommu: Make pasid entry type check accurate Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-18 9:02 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation Zhenzhong Duan
` (13 subsequent siblings)
16 siblings, 1 reply; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
Add a new element, scalable_modern, to IntelIOMMUState to mark scalable
modern mode; this element will eventually be exposed as an intel_iommu
property.
For now it is only a placeholder, used for cap/ecap initialization and
compatibility checks, and to block host device passthrough until nesting
is supported.
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 2 ++
include/hw/i386/intel_iommu.h | 1 +
hw/i386/intel_iommu.c | 34 +++++++++++++++++++++++-----------
3 files changed, 26 insertions(+), 11 deletions(-)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index c0ca7b372f..4e0331caba 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -195,6 +195,7 @@
#define VTD_ECAP_PASID (1ULL << 40)
#define VTD_ECAP_SMTS (1ULL << 43)
#define VTD_ECAP_SLTS (1ULL << 46)
+#define VTD_ECAP_FLTS (1ULL << 47)
/* CAP_REG */
/* (offset >> 4) << 24 */
@@ -211,6 +212,7 @@
#define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
#define VTD_CAP_DRAIN_WRITE (1ULL << 54)
#define VTD_CAP_DRAIN_READ (1ULL << 55)
+#define VTD_CAP_FS1GP (1ULL << 56)
#define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ | VTD_CAP_DRAIN_WRITE)
#define VTD_CAP_CM (1ULL << 7)
#define VTD_PASID_ID_SHIFT 20
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index 1eb05c29fc..788ed42477 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -262,6 +262,7 @@ struct IntelIOMMUState {
bool caching_mode; /* RO - is cap CM enabled? */
bool scalable_mode; /* RO - is Scalable Mode supported? */
+ bool scalable_modern; /* RO - is modern SM supported? */
bool snoop_control; /* RO - is SNP filed supported? */
dma_addr_t root; /* Current root table pointer */
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 1cff8b00ae..40cbd4a0f4 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -755,16 +755,20 @@ static inline bool vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
}
/* Return true if check passed, otherwise false */
-static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
- VTDPASIDEntry *pe)
+static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
{
+ X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
+
switch (VTD_PE_GET_TYPE(pe)) {
+ case VTD_SM_PASID_ENTRY_FLT:
+ return s->scalable_modern;
case VTD_SM_PASID_ENTRY_SLT:
- return true;
+ return !s->scalable_modern;
+ case VTD_SM_PASID_ENTRY_NESTED:
+ /* Not support NESTED page table type yet */
+ return false;
case VTD_SM_PASID_ENTRY_PT:
return x86_iommu->pt_supported;
- case VTD_SM_PASID_ENTRY_FLT:
- case VTD_SM_PASID_ENTRY_NESTED:
default:
/* Unknown type */
return false;
@@ -813,7 +817,6 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
uint8_t pgtt;
uint32_t index;
dma_addr_t entry_size;
- X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
index = VTD_PASID_TABLE_INDEX(pasid);
entry_size = VTD_PASID_ENTRY_SIZE;
@@ -827,7 +830,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
}
/* Do translation type check */
- if (!vtd_pe_type_check(x86_iommu, pe)) {
+ if (!vtd_pe_type_check(s, pe)) {
return -VTD_FR_PASID_TABLE_ENTRY_INV;
}
@@ -3861,7 +3864,13 @@ static bool vtd_check_hiod(IntelIOMMUState *s, HostIOMMUDevice *hiod,
return false;
}
- return true;
+ if (!s->scalable_modern) {
+ /* All checks requested by VTD non-modern mode pass */
+ return true;
+ }
+
+ error_setg(errp, "host device is unsupported in scalable modern mode yet");
+ return false;
}
static bool vtd_dev_set_iommu_device(PCIBus *bus, void *opaque, int devfn,
@@ -4084,7 +4093,10 @@ static void vtd_cap_init(IntelIOMMUState *s)
}
/* TODO: read cap/ecap from host to decide which cap to be exposed. */
- if (s->scalable_mode) {
+ if (s->scalable_modern) {
+ s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
+ s->cap |= VTD_CAP_FS1GP;
+ } else if (s->scalable_mode) {
s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
}
@@ -4251,9 +4263,9 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
}
}
- /* Currently only address widths supported are 39 and 48 bits */
if ((s->aw_bits != VTD_HOST_AW_39BIT) &&
- (s->aw_bits != VTD_HOST_AW_48BIT)) {
+ (s->aw_bits != VTD_HOST_AW_48BIT) &&
+ !s->scalable_modern) {
error_setg(errp, "Supported values for aw-bits are: %d, %d",
VTD_HOST_AW_39BIT, VTD_HOST_AW_48BIT);
return false;
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (2 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-23 16:02 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 05/17] intel_iommu: Rename slpte to pte Zhenzhong Duan
` (12 subsequent siblings)
16 siblings, 1 reply; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
Per spec section 6.5.2.4, a PASID-selective PASID-based iotlb invalidation
flushes stage-2 iotlb entries with a matching domain id and pasid.
With scalable modern mode introduced, the guest can send a PASID-selective
PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
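A minimal sketch (illustration only, not part of the patch) of how the
fields used below sit in the low qword of the P_IOTLB invalidation
descriptor, mirroring the VTD_INV_DESC_PIOTLB_* macros; the helper name and
the 16-bit domain-id width are assumptions of this example.

#include <stdint.h>

static void piotlb_desc_decode(uint64_t lo, uint8_t *granularity,
                               uint16_t *did, uint32_t *pasid)
{
    *granularity = (lo >> 4) & 0x3;      /* 2: all pages in PASID, 3: page-selective */
    *did         = (lo >> 16) & 0xffff;  /* domain id (16 bits assumed) */
    *pasid       = (lo >> 32) & 0xfffff; /* 20-bit PASID */
}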
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 10 +++++
hw/i386/intel_iommu.c | 78 ++++++++++++++++++++++++++++++++++
2 files changed, 88 insertions(+)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 4e0331caba..f71fc91234 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -440,6 +440,16 @@ typedef union VTDInvDesc VTDInvDesc;
(0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM | VTD_SL_TM)) : \
(0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
+#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
+#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
+
+#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000ffc0ULL
+#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
+
+#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
+#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & \
+ VTD_DOMAIN_ID_MASK)
+
/* Information about page-selective IOTLB invalidate */
struct VTDIOTLBPageInvInfo {
uint16_t domain_id;
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 40cbd4a0f4..075a27adac 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -2659,6 +2659,80 @@ static bool vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
return true;
}
+static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
+ gpointer user_data)
+{
+ VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
+ VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
+
+ return ((entry->domain_id == info->domain_id) &&
+ (entry->pasid == info->pasid));
+}
+
+static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
+ uint16_t domain_id, uint32_t pasid)
+{
+ VTDIOTLBPageInvInfo info;
+ VTDAddressSpace *vtd_as;
+ VTDContextEntry ce;
+
+ info.domain_id = domain_id;
+ info.pasid = pasid;
+
+ vtd_iommu_lock(s);
+ g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
+ &info);
+ vtd_iommu_unlock(s);
+
+ QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
+ if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
+ vtd_as->devfn, &ce) &&
+ domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
+ uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
+
+ if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
+ vtd_as->pasid != pasid) {
+ continue;
+ }
+
+ if (!s->scalable_modern) {
+ vtd_address_space_sync(vtd_as);
+ }
+ }
+ }
+}
+
+static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
+ VTDInvDesc *inv_desc)
+{
+ uint16_t domain_id;
+ uint32_t pasid;
+
+ if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
+ (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1)) {
+ error_report_once("non-zero-field-in-piotlb_inv_desc hi: 0x%" PRIx64
+ " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
+ return false;
+ }
+
+ domain_id = VTD_INV_DESC_PIOTLB_DID(inv_desc->val[0]);
+ pasid = VTD_INV_DESC_PIOTLB_PASID(inv_desc->val[0]);
+ switch (inv_desc->val[0] & VTD_INV_DESC_IOTLB_G) {
+ case VTD_INV_DESC_PIOTLB_ALL_IN_PASID:
+ vtd_piotlb_pasid_invalidate(s, domain_id, pasid);
+ break;
+
+ case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
+ break;
+
+ default:
+ error_report_once("Invalid granularity in P-IOTLB desc hi: 0x%" PRIx64
+ " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
+ return false;
+ }
+ return true;
+}
+
static bool vtd_process_inv_iec_desc(IntelIOMMUState *s,
VTDInvDesc *inv_desc)
{
@@ -2769,6 +2843,10 @@ static bool vtd_process_inv_desc(IntelIOMMUState *s)
break;
case VTD_INV_DESC_PIOTLB:
+ trace_vtd_inv_desc("p-iotlb", inv_desc.val[1], inv_desc.val[0]);
+ if (!vtd_process_piotlb_desc(s, &inv_desc)) {
+ return false;
+ }
break;
case VTD_INV_DESC_WAIT:
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 05/17] intel_iommu: Rename slpte to pte
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (3 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 06/17] intel_iommu: Implement stage-1 translation Zhenzhong Duan
` (11 subsequent siblings)
16 siblings, 0 replies; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Yi Sun, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Yi Liu <yi.l.liu@intel.com>
Because we will support both FST (a.k.a. FLT) and SST (a.k.a. SLT)
translation, rename variables and functions from slpte to pte wherever
possible. Those that are SST-only are renamed with an sl_ prefix.
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Co-developed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 24 +++---
include/hw/i386/intel_iommu.h | 2 +-
hw/i386/intel_iommu.c | 129 +++++++++++++++++----------------
3 files changed, 78 insertions(+), 77 deletions(-)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index f71fc91234..1875c2ddd6 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -536,24 +536,24 @@ typedef struct VTDRootEntry VTDRootEntry;
/* Second Level Page Translation Pointer*/
#define VTD_SM_PASID_ENTRY_SLPTPTR (~0xfffULL)
-/* Paging Structure common */
-#define VTD_SL_PT_PAGE_SIZE_MASK (1ULL << 7)
-/* Bits to decide the offset for each level */
-#define VTD_SL_LEVEL_BITS 9
-
/* Second Level Paging Structure */
-#define VTD_SL_PML4_LEVEL 4
-#define VTD_SL_PDP_LEVEL 3
-#define VTD_SL_PD_LEVEL 2
-#define VTD_SL_PT_LEVEL 1
-#define VTD_SL_PT_ENTRY_NR 512
-
/* Masks for Second Level Paging Entry */
#define VTD_SL_RW_MASK 3ULL
#define VTD_SL_R 1ULL
#define VTD_SL_W (1ULL << 1)
-#define VTD_SL_PT_BASE_ADDR_MASK(aw) (~(VTD_PAGE_SIZE - 1) & VTD_HAW_MASK(aw))
#define VTD_SL_IGN_COM 0xbff0000000000000ULL
#define VTD_SL_TM (1ULL << 62)
+/* Common for both First Level and Second Level */
+#define VTD_PML4_LEVEL 4
+#define VTD_PDP_LEVEL 3
+#define VTD_PD_LEVEL 2
+#define VTD_PT_LEVEL 1
+#define VTD_PT_ENTRY_NR 512
+#define VTD_PT_PAGE_SIZE_MASK (1ULL << 7)
+#define VTD_PT_BASE_ADDR_MASK(aw) (~(VTD_PAGE_SIZE - 1) & VTD_HAW_MASK(aw))
+
+/* Bits to decide the offset for each level */
+#define VTD_LEVEL_BITS 9
+
#endif
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index 788ed42477..fe9057c50d 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -152,7 +152,7 @@ struct VTDIOTLBEntry {
uint64_t gfn;
uint16_t domain_id;
uint32_t pasid;
- uint64_t slpte;
+ uint64_t pte;
uint64_t mask;
uint8_t access_flags;
};
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 075a27adac..94f6532935 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -48,7 +48,8 @@
/* pe operations */
#define VTD_PE_GET_TYPE(pe) ((pe)->val[0] & VTD_SM_PASID_ENTRY_PGTT)
-#define VTD_PE_GET_LEVEL(pe) (2 + (((pe)->val[0] >> 2) & VTD_SM_PASID_ENTRY_AW))
+#define VTD_PE_GET_SL_LEVEL(pe) \
+ (2 + (((pe)->val[0] >> 2) & VTD_SM_PASID_ENTRY_AW))
/*
* PCI bus number (or SID) is not reliable since the device is usaully
@@ -284,15 +285,15 @@ static gboolean vtd_hash_remove_by_domain(gpointer key, gpointer value,
}
/* The shift of an addr for a certain level of paging structure */
-static inline uint32_t vtd_slpt_level_shift(uint32_t level)
+static inline uint32_t vtd_pt_level_shift(uint32_t level)
{
assert(level != 0);
- return VTD_PAGE_SHIFT_4K + (level - 1) * VTD_SL_LEVEL_BITS;
+ return VTD_PAGE_SHIFT_4K + (level - 1) * VTD_LEVEL_BITS;
}
-static inline uint64_t vtd_slpt_level_page_mask(uint32_t level)
+static inline uint64_t vtd_pt_level_page_mask(uint32_t level)
{
- return ~((1ULL << vtd_slpt_level_shift(level)) - 1);
+ return ~((1ULL << vtd_pt_level_shift(level)) - 1);
}
static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
@@ -349,7 +350,7 @@ static void vtd_reset_caches(IntelIOMMUState *s)
static uint64_t vtd_get_iotlb_gfn(hwaddr addr, uint32_t level)
{
- return (addr & vtd_slpt_level_page_mask(level)) >> VTD_PAGE_SHIFT_4K;
+ return (addr & vtd_pt_level_page_mask(level)) >> VTD_PAGE_SHIFT_4K;
}
/* Must be called with IOMMU lock held */
@@ -360,7 +361,7 @@ static VTDIOTLBEntry *vtd_lookup_iotlb(IntelIOMMUState *s, uint16_t source_id,
VTDIOTLBEntry *entry;
int level;
- for (level = VTD_SL_PT_LEVEL; level < VTD_SL_PML4_LEVEL; level++) {
+ for (level = VTD_PT_LEVEL; level < VTD_PML4_LEVEL; level++) {
key.gfn = vtd_get_iotlb_gfn(addr, level);
key.level = level;
key.sid = source_id;
@@ -377,7 +378,7 @@ out:
/* Must be with IOMMU lock held */
static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
- uint16_t domain_id, hwaddr addr, uint64_t slpte,
+ uint16_t domain_id, hwaddr addr, uint64_t pte,
uint8_t access_flags, uint32_t level,
uint32_t pasid)
{
@@ -385,7 +386,7 @@ static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
struct vtd_iotlb_key *key = g_malloc(sizeof(*key));
uint64_t gfn = vtd_get_iotlb_gfn(addr, level);
- trace_vtd_iotlb_page_update(source_id, addr, slpte, domain_id);
+ trace_vtd_iotlb_page_update(source_id, addr, pte, domain_id);
if (g_hash_table_size(s->iotlb) >= VTD_IOTLB_MAX_SIZE) {
trace_vtd_iotlb_reset("iotlb exceeds size limit");
vtd_reset_iotlb_locked(s);
@@ -393,9 +394,9 @@ static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
entry->gfn = gfn;
entry->domain_id = domain_id;
- entry->slpte = slpte;
+ entry->pte = pte;
entry->access_flags = access_flags;
- entry->mask = vtd_slpt_level_page_mask(level);
+ entry->mask = vtd_pt_level_page_mask(level);
entry->pasid = pasid;
key->gfn = gfn;
@@ -710,32 +711,32 @@ static inline dma_addr_t vtd_ce_get_slpt_base(VTDContextEntry *ce)
return ce->lo & VTD_CONTEXT_ENTRY_SLPTPTR;
}
-static inline uint64_t vtd_get_slpte_addr(uint64_t slpte, uint8_t aw)
+static inline uint64_t vtd_get_pte_addr(uint64_t pte, uint8_t aw)
{
- return slpte & VTD_SL_PT_BASE_ADDR_MASK(aw);
+ return pte & VTD_PT_BASE_ADDR_MASK(aw);
}
/* Whether the pte indicates the address of the page frame */
-static inline bool vtd_is_last_slpte(uint64_t slpte, uint32_t level)
+static inline bool vtd_is_last_pte(uint64_t pte, uint32_t level)
{
- return level == VTD_SL_PT_LEVEL || (slpte & VTD_SL_PT_PAGE_SIZE_MASK);
+ return level == VTD_PT_LEVEL || (pte & VTD_PT_PAGE_SIZE_MASK);
}
-/* Get the content of a spte located in @base_addr[@index] */
-static uint64_t vtd_get_slpte(dma_addr_t base_addr, uint32_t index)
+/* Get the content of a pte located in @base_addr[@index] */
+static uint64_t vtd_get_pte(dma_addr_t base_addr, uint32_t index)
{
- uint64_t slpte;
+ uint64_t pte;
- assert(index < VTD_SL_PT_ENTRY_NR);
+ assert(index < VTD_PT_ENTRY_NR);
if (dma_memory_read(&address_space_memory,
- base_addr + index * sizeof(slpte),
- &slpte, sizeof(slpte), MEMTXATTRS_UNSPECIFIED)) {
- slpte = (uint64_t)-1;
- return slpte;
+ base_addr + index * sizeof(pte),
+ &pte, sizeof(pte), MEMTXATTRS_UNSPECIFIED)) {
+ pte = (uint64_t)-1;
+ return pte;
}
- slpte = le64_to_cpu(slpte);
- return slpte;
+ pte = le64_to_cpu(pte);
+ return pte;
}
/* Given an iova and the level of paging structure, return the offset
@@ -743,12 +744,12 @@ static uint64_t vtd_get_slpte(dma_addr_t base_addr, uint32_t index)
*/
static inline uint32_t vtd_iova_level_offset(uint64_t iova, uint32_t level)
{
- return (iova >> vtd_slpt_level_shift(level)) &
- ((1ULL << VTD_SL_LEVEL_BITS) - 1);
+ return (iova >> vtd_pt_level_shift(level)) &
+ ((1ULL << VTD_LEVEL_BITS) - 1);
}
/* Check Capability Register to see if the @level of page-table is supported */
-static inline bool vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
+static inline bool vtd_is_sl_level_supported(IntelIOMMUState *s, uint32_t level)
{
return VTD_CAP_SAGAW_MASK & s->cap &
(1ULL << (level - 2 + VTD_CAP_SAGAW_SHIFT));
@@ -836,7 +837,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
pgtt = VTD_PE_GET_TYPE(pe);
if (pgtt == VTD_SM_PASID_ENTRY_SLT &&
- !vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
+ !vtd_is_sl_level_supported(s, VTD_PE_GET_SL_LEVEL(pe))) {
return -VTD_FR_PASID_TABLE_ENTRY_INV;
}
@@ -975,7 +976,7 @@ static uint32_t vtd_get_iova_level(IntelIOMMUState *s,
if (s->root_scalable) {
vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
- return VTD_PE_GET_LEVEL(&pe);
+ return VTD_PE_GET_SL_LEVEL(&pe);
}
return vtd_ce_get_level(ce);
@@ -1043,9 +1044,9 @@ static inline uint64_t vtd_iova_limit(IntelIOMMUState *s,
}
/* Return true if IOVA passes range check, otherwise false. */
-static inline bool vtd_iova_range_check(IntelIOMMUState *s,
- uint64_t iova, VTDContextEntry *ce,
- uint8_t aw, uint32_t pasid)
+static inline bool vtd_iova_sl_range_check(IntelIOMMUState *s,
+ uint64_t iova, VTDContextEntry *ce,
+ uint8_t aw, uint32_t pasid)
{
/*
* Check if @iova is above 2^X-1, where X is the minimum of MGAW
@@ -1086,17 +1087,17 @@ static bool vtd_slpte_nonzero_rsvd(uint64_t slpte, uint32_t level)
/*
* We should have caught a guest-mis-programmed level earlier,
- * via vtd_is_level_supported.
+ * via vtd_is_sl_level_supported.
*/
assert(level < VTD_SPTE_RSVD_LEN);
/*
- * Zero level doesn't exist. The smallest level is VTD_SL_PT_LEVEL=1 and
- * checked by vtd_is_last_slpte().
+ * Zero level doesn't exist. The smallest level is VTD_PT_LEVEL=1 and
+ * checked by vtd_is_last_pte().
*/
assert(level);
- if ((level == VTD_SL_PD_LEVEL || level == VTD_SL_PDP_LEVEL) &&
- (slpte & VTD_SL_PT_PAGE_SIZE_MASK)) {
+ if ((level == VTD_PD_LEVEL || level == VTD_PDP_LEVEL) &&
+ (slpte & VTD_PT_PAGE_SIZE_MASK)) {
/* large page */
rsvd_mask = vtd_spte_rsvd_large[level];
} else {
@@ -1122,7 +1123,7 @@ static int vtd_iova_to_slpte(IntelIOMMUState *s, VTDContextEntry *ce,
uint64_t access_right_check;
uint64_t xlat, size;
- if (!vtd_iova_range_check(s, iova, ce, aw_bits, pasid)) {
+ if (!vtd_iova_sl_range_check(s, iova, ce, aw_bits, pasid)) {
error_report_once("%s: detected IOVA overflow (iova=0x%" PRIx64 ","
"pasid=0x%" PRIx32 ")", __func__, iova, pasid);
return -VTD_FR_ADDR_BEYOND_MGAW;
@@ -1133,7 +1134,7 @@ static int vtd_iova_to_slpte(IntelIOMMUState *s, VTDContextEntry *ce,
while (true) {
offset = vtd_iova_level_offset(iova, level);
- slpte = vtd_get_slpte(addr, offset);
+ slpte = vtd_get_pte(addr, offset);
if (slpte == (uint64_t)-1) {
error_report_once("%s: detected read error on DMAR slpte "
@@ -1164,17 +1165,17 @@ static int vtd_iova_to_slpte(IntelIOMMUState *s, VTDContextEntry *ce,
return -VTD_FR_PAGING_ENTRY_RSVD;
}
- if (vtd_is_last_slpte(slpte, level)) {
+ if (vtd_is_last_pte(slpte, level)) {
*slptep = slpte;
*slpte_level = level;
break;
}
- addr = vtd_get_slpte_addr(slpte, aw_bits);
+ addr = vtd_get_pte_addr(slpte, aw_bits);
level--;
}
- xlat = vtd_get_slpte_addr(*slptep, aw_bits);
- size = ~vtd_slpt_level_page_mask(level) + 1;
+ xlat = vtd_get_pte_addr(*slptep, aw_bits);
+ size = ~vtd_pt_level_page_mask(level) + 1;
/*
* From VT-d spec 3.14: Untranslated requests and translation
@@ -1325,14 +1326,14 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
trace_vtd_page_walk_level(addr, level, start, end);
- subpage_size = 1ULL << vtd_slpt_level_shift(level);
- subpage_mask = vtd_slpt_level_page_mask(level);
+ subpage_size = 1ULL << vtd_pt_level_shift(level);
+ subpage_mask = vtd_pt_level_page_mask(level);
while (iova < end) {
iova_next = (iova & subpage_mask) + subpage_size;
offset = vtd_iova_level_offset(iova, level);
- slpte = vtd_get_slpte(addr, offset);
+ slpte = vtd_get_pte(addr, offset);
if (slpte == (uint64_t)-1) {
trace_vtd_page_walk_skip_read(iova, iova_next);
@@ -1355,12 +1356,12 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
*/
entry_valid = read_cur | write_cur;
- if (!vtd_is_last_slpte(slpte, level) && entry_valid) {
+ if (!vtd_is_last_pte(slpte, level) && entry_valid) {
/*
* This is a valid PDE (or even bigger than PDE). We need
* to walk one further level.
*/
- ret = vtd_page_walk_level(vtd_get_slpte_addr(slpte, info->aw),
+ ret = vtd_page_walk_level(vtd_get_pte_addr(slpte, info->aw),
iova, MIN(iova_next, end), level - 1,
read_cur, write_cur, info);
} else {
@@ -1377,7 +1378,7 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
event.entry.perm = IOMMU_ACCESS_FLAG(read_cur, write_cur);
event.entry.addr_mask = ~subpage_mask;
/* NOTE: this is only meaningful if entry_valid == true */
- event.entry.translated_addr = vtd_get_slpte_addr(slpte, info->aw);
+ event.entry.translated_addr = vtd_get_pte_addr(slpte, info->aw);
event.type = event.entry.perm ? IOMMU_NOTIFIER_MAP :
IOMMU_NOTIFIER_UNMAP;
ret = vtd_page_walk_one(&event, info);
@@ -1411,11 +1412,11 @@ static int vtd_page_walk(IntelIOMMUState *s, VTDContextEntry *ce,
dma_addr_t addr = vtd_get_iova_pgtbl_base(s, ce, pasid);
uint32_t level = vtd_get_iova_level(s, ce, pasid);
- if (!vtd_iova_range_check(s, start, ce, info->aw, pasid)) {
+ if (!vtd_iova_sl_range_check(s, start, ce, info->aw, pasid)) {
return -VTD_FR_ADDR_BEYOND_MGAW;
}
- if (!vtd_iova_range_check(s, end, ce, info->aw, pasid)) {
+ if (!vtd_iova_sl_range_check(s, end, ce, info->aw, pasid)) {
/* Fix end so that it reaches the maximum */
end = vtd_iova_limit(s, ce, info->aw, pasid);
}
@@ -1530,7 +1531,7 @@ static int vtd_dev_to_context_entry(IntelIOMMUState *s, uint8_t bus_num,
/* Check if the programming of context-entry is valid */
if (!s->root_scalable &&
- !vtd_is_level_supported(s, vtd_ce_get_level(ce))) {
+ !vtd_is_sl_level_supported(s, vtd_ce_get_level(ce))) {
error_report_once("%s: invalid context entry: hi=%"PRIx64
", lo=%"PRIx64" (level %d not supported)",
__func__, ce->hi, ce->lo,
@@ -1900,7 +1901,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
VTDContextEntry ce;
uint8_t bus_num = pci_bus_num(bus);
VTDContextCacheEntry *cc_entry;
- uint64_t slpte, page_mask;
+ uint64_t pte, page_mask;
uint32_t level, pasid = vtd_as->pasid;
uint16_t source_id = PCI_BUILD_BDF(bus_num, devfn);
int ret_fr;
@@ -1921,13 +1922,13 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
cc_entry = &vtd_as->context_cache_entry;
- /* Try to fetch slpte form IOTLB, we don't need RID2PASID logic */
+ /* Try to fetch pte form IOTLB, we don't need RID2PASID logic */
if (!rid2pasid) {
iotlb_entry = vtd_lookup_iotlb(s, source_id, pasid, addr);
if (iotlb_entry) {
- trace_vtd_iotlb_page_hit(source_id, addr, iotlb_entry->slpte,
+ trace_vtd_iotlb_page_hit(source_id, addr, iotlb_entry->pte,
iotlb_entry->domain_id);
- slpte = iotlb_entry->slpte;
+ pte = iotlb_entry->pte;
access_flags = iotlb_entry->access_flags;
page_mask = iotlb_entry->mask;
goto out;
@@ -1999,20 +2000,20 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
return true;
}
- /* Try to fetch slpte form IOTLB for RID2PASID slow path */
+ /* Try to fetch pte form IOTLB for RID2PASID slow path */
if (rid2pasid) {
iotlb_entry = vtd_lookup_iotlb(s, source_id, pasid, addr);
if (iotlb_entry) {
- trace_vtd_iotlb_page_hit(source_id, addr, iotlb_entry->slpte,
+ trace_vtd_iotlb_page_hit(source_id, addr, iotlb_entry->pte,
iotlb_entry->domain_id);
- slpte = iotlb_entry->slpte;
+ pte = iotlb_entry->pte;
access_flags = iotlb_entry->access_flags;
page_mask = iotlb_entry->mask;
goto out;
}
}
- ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &slpte, &level,
+ ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
&reads, &writes, s->aw_bits, pasid);
if (ret_fr) {
vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
@@ -2020,14 +2021,14 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
goto error;
}
- page_mask = vtd_slpt_level_page_mask(level);
+ page_mask = vtd_pt_level_page_mask(level);
access_flags = IOMMU_ACCESS_FLAG(reads, writes);
vtd_update_iotlb(s, source_id, vtd_get_domain_id(s, &ce, pasid),
- addr, slpte, access_flags, level, pasid);
+ addr, pte, access_flags, level, pasid);
out:
vtd_iommu_unlock(s);
entry->iova = addr & page_mask;
- entry->translated_addr = vtd_get_slpte_addr(slpte, s->aw_bits) & page_mask;
+ entry->translated_addr = vtd_get_pte_addr(pte, s->aw_bits) & page_mask;
entry->addr_mask = ~page_mask;
entry->perm = access_flags;
return true;
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 06/17] intel_iommu: Implement stage-1 translation
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (4 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 05/17] intel_iommu: Rename slpte to pte Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 07/17] intel_iommu: Check if the input address is canonical Zhenzhong Duan
` (10 subsequent siblings)
16 siblings, 0 replies; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Yi Sun, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Yi Liu <yi.l.liu@intel.com>
This adds stage-1 page table walking to support stage-1-only translation
in scalable modern mode.
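As a rough illustration (not part of the patch): with 4-level first-stage
tables and 9 index bits per level (VTD_LEVEL_BITS), the index into each
paging level for a given IOVA is computed as below, mirroring
vtd_iova_level_offset(); the helper name is hypothetical.

#include <stdint.h>

static uint32_t fl_level_index(uint64_t iova, uint32_t level /* 4..1 */)
{
    /* 12 is the 4KiB page shift; each higher level adds another 9 bits. */
    return (uint32_t)(iova >> (12 + (level - 1) * 9)) & 0x1ff;
}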
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Co-developed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 26 ++++++
hw/i386/intel_iommu.c | 146 ++++++++++++++++++++++++++++++++-
2 files changed, 168 insertions(+), 4 deletions(-)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 1875c2ddd6..36fcc6bb5e 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -440,6 +440,24 @@ typedef union VTDInvDesc VTDInvDesc;
(0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM | VTD_SL_TM)) : \
(0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
+/* Rsvd field masks for fpte */
+#define VTD_FS_UPPER_IGNORED 0xfff0000000000000ULL
+#define VTD_FPTE_PAGE_L1_RSVD_MASK(aw) (~(VTD_HAW_MASK(aw)) & \
+ (~VTD_FS_UPPER_IGNORED))
+#define VTD_FPTE_PAGE_L2_RSVD_MASK(aw) (~(VTD_HAW_MASK(aw)) & \
+ (~VTD_FS_UPPER_IGNORED))
+#define VTD_FPTE_PAGE_L3_RSVD_MASK(aw) (~(VTD_HAW_MASK(aw)) & \
+ (~VTD_FS_UPPER_IGNORED))
+#define VTD_FPTE_PAGE_L3_FS1GP_RSVD_MASK(aw) ((0x3fffe000ULL | \
+ ~(VTD_HAW_MASK(aw))) \
+ & (~VTD_FS_UPPER_IGNORED))
+#define VTD_FPTE_PAGE_L2_FS2MP_RSVD_MASK(aw) ((0x1fe000ULL | \
+ ~(VTD_HAW_MASK(aw))) \
+ & (~VTD_FS_UPPER_IGNORED))
+#define VTD_FPTE_PAGE_L4_RSVD_MASK(aw) ((0x80ULL | \
+ ~(VTD_HAW_MASK(aw))) \
+ & (~VTD_FS_UPPER_IGNORED))
+
#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
@@ -533,6 +551,14 @@ typedef struct VTDRootEntry VTDRootEntry;
#define VTD_SM_PASID_ENTRY_AW 7ULL /* Adjusted guest-address-width */
#define VTD_SM_PASID_ENTRY_DID(val) ((val) & VTD_DOMAIN_ID_MASK)
+#define VTD_SM_PASID_ENTRY_FLPM 3ULL
+#define VTD_SM_PASID_ENTRY_FLPTPTR (~0xfffULL)
+
+/* First Level Paging Structure */
+/* Masks for First Level Paging Entry */
+#define VTD_FL_P 1ULL
+#define VTD_FL_RW_MASK (1ULL << 1)
+
/* Second Level Page Translation Pointer*/
#define VTD_SM_PASID_ENTRY_SLPTPTR (~0xfffULL)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 94f6532935..287741b687 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -48,6 +48,8 @@
/* pe operations */
#define VTD_PE_GET_TYPE(pe) ((pe)->val[0] & VTD_SM_PASID_ENTRY_PGTT)
+#define VTD_PE_GET_FL_LEVEL(pe) \
+ (4 + (((pe)->val[2] >> 2) & VTD_SM_PASID_ENTRY_FLPM))
#define VTD_PE_GET_SL_LEVEL(pe) \
(2 + (((pe)->val[0] >> 2) & VTD_SM_PASID_ENTRY_AW))
@@ -755,6 +757,11 @@ static inline bool vtd_is_sl_level_supported(IntelIOMMUState *s, uint32_t level)
(1ULL << (level - 2 + VTD_CAP_SAGAW_SHIFT));
}
+static inline bool vtd_is_fl_level_supported(IntelIOMMUState *s, uint32_t level)
+{
+ return level == VTD_PML4_LEVEL;
+}
+
/* Return true if check passed, otherwise false */
static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
{
@@ -841,6 +848,11 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
return -VTD_FR_PASID_TABLE_ENTRY_INV;
}
+ if (pgtt == VTD_SM_PASID_ENTRY_FLT &&
+ !vtd_is_fl_level_supported(s, VTD_PE_GET_FL_LEVEL(pe))) {
+ return -VTD_FR_PASID_TABLE_ENTRY_INV;
+ }
+
return 0;
}
@@ -976,7 +988,11 @@ static uint32_t vtd_get_iova_level(IntelIOMMUState *s,
if (s->root_scalable) {
vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
- return VTD_PE_GET_SL_LEVEL(&pe);
+ if (s->scalable_modern) {
+ return VTD_PE_GET_FL_LEVEL(&pe);
+ } else {
+ return VTD_PE_GET_SL_LEVEL(&pe);
+ }
}
return vtd_ce_get_level(ce);
@@ -1063,7 +1079,11 @@ static dma_addr_t vtd_get_iova_pgtbl_base(IntelIOMMUState *s,
if (s->root_scalable) {
vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
- return pe.val[0] & VTD_SM_PASID_ENTRY_SLPTPTR;
+ if (s->scalable_modern) {
+ return pe.val[2] & VTD_SM_PASID_ENTRY_FLPTPTR;
+ } else {
+ return pe.val[0] & VTD_SM_PASID_ENTRY_SLPTPTR;
+ }
}
return vtd_ce_get_slpt_base(ce);
@@ -1865,6 +1885,104 @@ out:
trace_vtd_pt_enable_fast_path(source_id, success);
}
+/*
+ * Rsvd field masks for fpte:
+ * vtd_fpte_rsvd 4k pages
+ * vtd_fpte_rsvd_large large pages
+ *
+ * We support only 4-level page tables.
+ */
+#define VTD_FPTE_RSVD_LEN 5
+static uint64_t vtd_fpte_rsvd[VTD_FPTE_RSVD_LEN];
+static uint64_t vtd_fpte_rsvd_large[VTD_FPTE_RSVD_LEN];
+
+static bool vtd_flpte_nonzero_rsvd(uint64_t flpte, uint32_t level)
+{
+ uint64_t rsvd_mask;
+
+ /*
+ * We should have caught a guest-mis-programmed level earlier,
+ * via vtd_is_fl_level_supported.
+ */
+ assert(level < VTD_SPTE_RSVD_LEN);
+ /*
+ * Zero level doesn't exist. The smallest level is VTD_PT_LEVEL=1 and
+ * checked by vtd_is_last_pte().
+ */
+ assert(level);
+
+ if ((level == VTD_PD_LEVEL || level == VTD_PDP_LEVEL) &&
+ (flpte & VTD_PT_PAGE_SIZE_MASK)) {
+ /* large page */
+ rsvd_mask = vtd_fpte_rsvd_large[level];
+ } else {
+ rsvd_mask = vtd_fpte_rsvd[level];
+ }
+
+ return flpte & rsvd_mask;
+}
+
+static inline bool vtd_flpte_present(uint64_t flpte)
+{
+ return !!(flpte & VTD_FL_P);
+}
+
+/*
+ * Given the @iova, get relevant @flptep. @flpte_level will be the last level
+ * of the translation, can be used for deciding the size of large page.
+ */
+static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
+ uint64_t iova, bool is_write,
+ uint64_t *flptep, uint32_t *flpte_level,
+ bool *reads, bool *writes, uint8_t aw_bits,
+ uint32_t pasid)
+{
+ dma_addr_t addr = vtd_get_iova_pgtbl_base(s, ce, pasid);
+ uint32_t level = vtd_get_iova_level(s, ce, pasid);
+ uint32_t offset;
+ uint64_t flpte;
+
+ while (true) {
+ offset = vtd_iova_level_offset(iova, level);
+ flpte = vtd_get_pte(addr, offset);
+
+ if (flpte == (uint64_t)-1) {
+ if (level == vtd_get_iova_level(s, ce, pasid)) {
+ /* Invalid programming of context-entry */
+ return -VTD_FR_CONTEXT_ENTRY_INV;
+ } else {
+ return -VTD_FR_PAGING_ENTRY_INV;
+ }
+ }
+ if (!vtd_flpte_present(flpte)) {
+ *reads = false;
+ *writes = false;
+ return -VTD_FR_PAGING_ENTRY_INV;
+ }
+ *reads = true;
+ *writes = (*writes) && (flpte & VTD_FL_RW_MASK);
+ if (is_write && !(flpte & VTD_FL_RW_MASK)) {
+ return -VTD_FR_WRITE;
+ }
+ if (vtd_flpte_nonzero_rsvd(flpte, level)) {
+ error_report_once("%s: detected flpte reserved non-zero "
+ "iova=0x%" PRIx64 ", level=0x%" PRIx32
+ "flpte=0x%" PRIx64 ", pasid=0x%" PRIX32 ")",
+ __func__, iova, level, flpte, pasid);
+ return -VTD_FR_PAGING_ENTRY_RSVD;
+ }
+
+ if (vtd_is_last_pte(flpte, level)) {
+ *flptep = flpte;
+ *flpte_level = level;
+ return 0;
+ }
+
+ addr = vtd_get_pte_addr(flpte, aw_bits);
+ level--;
+ }
+}
+
static void vtd_report_fault(IntelIOMMUState *s,
int err, bool is_fpd_set,
uint16_t source_id,
@@ -2013,8 +2131,13 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
}
}
- ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
- &reads, &writes, s->aw_bits, pasid);
+ if (s->scalable_modern && s->root_scalable) {
+ ret_fr = vtd_iova_to_flpte(s, &ce, addr, is_write, &pte, &level,
+ &reads, &writes, s->aw_bits, pasid);
+ } else {
+ ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
+ &reads, &writes, s->aw_bits, pasid);
+ }
if (ret_fr) {
vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
addr, is_write, pasid != PCI_NO_PASID, pasid);
@@ -4231,6 +4354,21 @@ static void vtd_init(IntelIOMMUState *s)
vtd_spte_rsvd_large[3] = VTD_SPTE_LPAGE_L3_RSVD_MASK(s->aw_bits,
x86_iommu->dt_supported);
+ /*
+ * Rsvd field masks for fpte
+ */
+ vtd_fpte_rsvd[0] = ~0ULL;
+ vtd_fpte_rsvd[1] = VTD_FPTE_PAGE_L1_RSVD_MASK(s->aw_bits);
+ vtd_fpte_rsvd[2] = VTD_FPTE_PAGE_L2_RSVD_MASK(s->aw_bits);
+ vtd_fpte_rsvd[3] = VTD_FPTE_PAGE_L3_RSVD_MASK(s->aw_bits);
+ vtd_fpte_rsvd[4] = VTD_FPTE_PAGE_L4_RSVD_MASK(s->aw_bits);
+
+ vtd_fpte_rsvd_large[0] = ~0ULL;
+ vtd_fpte_rsvd_large[1] = ~0ULL;
+ vtd_fpte_rsvd_large[2] = VTD_FPTE_PAGE_L2_FS2MP_RSVD_MASK(s->aw_bits);
+ vtd_fpte_rsvd_large[3] = VTD_FPTE_PAGE_L3_FS1GP_RSVD_MASK(s->aw_bits);
+ vtd_fpte_rsvd_large[4] = ~0ULL;
+
if (s->scalable_mode || s->snoop_control) {
vtd_spte_rsvd[1] &= ~VTD_SPTE_SNP;
vtd_spte_rsvd_large[2] &= ~VTD_SPTE_SNP;
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 07/17] intel_iommu: Check if the input address is canonical
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (5 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 06/17] intel_iommu: Implement stage-1 translation Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 08/17] intel_iommu: Set accessed and dirty bits during first stage translation Zhenzhong Duan
` (9 subsequent siblings)
16 siblings, 0 replies; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
First-stage translation must fail if the address to translate is not
canonical.
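A minimal sketch of the canonicality rule enforced below, assuming an
address width of aw_bits (e.g. 48): bits 63 down to aw_bits-1 of the IOVA
must all be copies of bit aw_bits-1. Helper name and parameters are
illustrative only.

#include <stdbool.h>
#include <stdint.h>

static bool iova_is_canonical(uint64_t iova, unsigned int aw_bits)
{
    uint64_t upper = iova >> (aw_bits - 1);   /* bits 63 .. aw_bits-1 */

    return upper == 0 || upper == (UINT64_MAX >> (aw_bits - 1));
}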
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 2 ++
hw/i386/intel_iommu.c | 21 +++++++++++++++++++++
2 files changed, 23 insertions(+)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 36fcc6bb5e..168185b850 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -322,6 +322,8 @@ typedef enum VTDFaultReason {
VTD_FR_PASID_ENTRY_P = 0x59,
VTD_FR_PASID_TABLE_ENTRY_INV = 0x5b, /*Invalid PASID table entry */
+ VTD_FR_FS_NON_CANONICAL = 0x80, /* SNG.1 : Address for FS not canonical.*/
+
/* Output address in the interrupt address range for scalable mode */
VTD_FR_SM_INTERRUPT_ADDR = 0x87,
VTD_FR_MAX, /* Guard */
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 287741b687..495a41cf80 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -1824,6 +1824,7 @@ static const bool vtd_qualified_faults[] = {
[VTD_FR_PASID_ENTRY_P] = true,
[VTD_FR_PASID_TABLE_ENTRY_INV] = true,
[VTD_FR_SM_INTERRUPT_ADDR] = true,
+ [VTD_FR_FS_NON_CANONICAL] = true,
[VTD_FR_MAX] = false,
};
@@ -1927,6 +1928,20 @@ static inline bool vtd_flpte_present(uint64_t flpte)
return !!(flpte & VTD_FL_P);
}
+/* Return true if IOVA is canonical, otherwise false. */
+static bool vtd_iova_fl_check_canonical(IntelIOMMUState *s, uint64_t iova,
+ VTDContextEntry *ce, uint32_t pasid)
+{
+ uint64_t iova_limit = vtd_iova_limit(s, ce, s->aw_bits, pasid);
+ uint64_t upper_bits_mask = ~(iova_limit - 1);
+ uint64_t upper_bits = iova & upper_bits_mask;
+ bool msb = ((iova & (iova_limit >> 1)) != 0);
+ return !(
+ (!msb && (upper_bits != 0)) ||
+ (msb && (upper_bits != upper_bits_mask))
+ );
+}
+
/*
* Given the @iova, get relevant @flptep. @flpte_level will be the last level
* of the translation, can be used for deciding the size of large page.
@@ -1942,6 +1957,12 @@ static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
uint32_t offset;
uint64_t flpte;
+ if (!vtd_iova_fl_check_canonical(s, iova, ce, pasid)) {
+ error_report_once("%s: detected non canonical IOVA (iova=0x%" PRIx64 ","
+ "pasid=0x%" PRIx32 ")", __func__, iova, pasid);
+ return -VTD_FR_FS_NON_CANONICAL;
+ }
+
while (true) {
offset = vtd_iova_level_offset(iova, level);
flpte = vtd_get_pte(addr, offset);
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 08/17] intel_iommu: Set accessed and dirty bits during first stage translation
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (6 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 07/17] intel_iommu: Check if the input address is canonical Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation Zhenzhong Duan
` (8 subsequent siblings)
16 siblings, 0 replies; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 3 +++
hw/i386/intel_iommu.c | 24 ++++++++++++++++++++++++
2 files changed, 27 insertions(+)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 168185b850..cf0f176e06 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -326,6 +326,7 @@ typedef enum VTDFaultReason {
/* Output address in the interrupt address range for scalable mode */
VTD_FR_SM_INTERRUPT_ADDR = 0x87,
+ VTD_FR_FS_BIT_UPDATE_FAILED = 0x91, /* SFS.10 */
VTD_FR_MAX, /* Guard */
} VTDFaultReason;
@@ -560,6 +561,8 @@ typedef struct VTDRootEntry VTDRootEntry;
/* Masks for First Level Paging Entry */
#define VTD_FL_P 1ULL
#define VTD_FL_RW_MASK (1ULL << 1)
+#define VTD_FL_A 0x20
+#define VTD_FL_D 0x40
/* Second Level Page Translation Pointer*/
#define VTD_SM_PASID_ENTRY_SLPTPTR (~0xfffULL)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 495a41cf80..210df32f01 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -1825,6 +1825,7 @@ static const bool vtd_qualified_faults[] = {
[VTD_FR_PASID_TABLE_ENTRY_INV] = true,
[VTD_FR_SM_INTERRUPT_ADDR] = true,
[VTD_FR_FS_NON_CANONICAL] = true,
+ [VTD_FR_FS_BIT_UPDATE_FAILED] = true,
[VTD_FR_MAX] = false,
};
@@ -1942,6 +1943,20 @@ static bool vtd_iova_fl_check_canonical(IntelIOMMUState *s, uint64_t iova,
);
}
+static MemTxResult vtd_set_flag_in_pte(dma_addr_t base_addr, uint32_t index,
+ uint64_t pte, uint64_t flag)
+{
+ if (pte & flag) {
+ return MEMTX_OK;
+ }
+ pte |= flag;
+ pte = cpu_to_le64(pte);
+ return dma_memory_write(&address_space_memory,
+ base_addr + index * sizeof(pte),
+ &pte, sizeof(pte),
+ MEMTXATTRS_UNSPECIFIED);
+}
+
/*
* Given the @iova, get relevant @flptep. @flpte_level will be the last level
* of the translation, can be used for deciding the size of large page.
@@ -1993,7 +2008,16 @@ static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
return -VTD_FR_PAGING_ENTRY_RSVD;
}
+ if (vtd_set_flag_in_pte(addr, offset, flpte, VTD_FL_A) != MEMTX_OK) {
+ return -VTD_FR_FS_BIT_UPDATE_FAILED;
+ }
+
if (vtd_is_last_pte(flpte, level)) {
+ if (is_write &&
+ (vtd_set_flag_in_pte(addr, offset, flpte, VTD_FL_D) !=
+ MEMTX_OK)) {
+ return -VTD_FR_FS_BIT_UPDATE_FAILED;
+ }
*flptep = flpte;
*flpte_level = level;
return 0;
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (7 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 08/17] intel_iommu: Set accessed and dirty bits during first stage translation Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 10/17] intel_iommu: Process PASID-based " Zhenzhong Duan
` (7 subsequent siblings)
16 siblings, 1 reply; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
According to the spec, for Page-Selective-within-Domain Invalidation (11b):
1. IOTLB entries caching second-stage (PGTT=010b) or pass-through
(PGTT=100b) mappings associated with the specified domain-id and the
input-address range are invalidated.
2. IOTLB entries caching first-stage (PGTT=001b) or nested (PGTT=011b)
mappings associated with the specified domain-id are invalidated.
So per the spec definition, Page-Selective-within-Domain Invalidation
needs to flush first-stage and nested cached IOTLB entries as well.
We don't support nested yet, and pass-through mappings are never cached,
so the IOTLB cache holds only first-stage and second-stage mappings.
Add a pgtt tag to VTDIOTLBEntry to mark the PGTT type of the mapping and
invalidate entries based on that type.
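A condensed sketch (illustration only, not code from this patch) of the
match rule that vtd_hash_remove_by_page() implements below: first-stage
entries are dropped on a domain-id match alone, while second-stage entries
must also fall within the invalidated page range. Names are illustrative,
and the large-page variant of the range check is omitted for brevity.

#include <stdbool.h>
#include <stdint.h>

static bool entry_hit_by_psi(uint16_t entry_did, uint8_t entry_pgtt,
                             uint64_t entry_gfn, uint16_t inv_did,
                             uint64_t inv_gfn, uint64_t inv_gfn_mask)
{
    if (entry_did != inv_did) {
        return false;            /* different domain: keep the entry */
    }
    if (entry_pgtt == 1) {       /* PGTT=001b, first-stage mapping */
        return true;             /* domain match alone invalidates it */
    }
    /* second-stage mapping: must also match the invalidated page range */
    return (entry_gfn & inv_gfn_mask) == inv_gfn;
}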
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
include/hw/i386/intel_iommu.h | 1 +
hw/i386/intel_iommu.c | 27 +++++++++++++++++++++------
2 files changed, 22 insertions(+), 6 deletions(-)
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index fe9057c50d..b843d069cc 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -155,6 +155,7 @@ struct VTDIOTLBEntry {
uint64_t pte;
uint64_t mask;
uint8_t access_flags;
+ uint8_t pgtt;
};
/* VT-d Source-ID Qualifier types */
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 210df32f01..8d47e5ba78 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -305,9 +305,21 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
- return (entry->domain_id == info->domain_id) &&
- (((entry->gfn & info->mask) == gfn) ||
- (entry->gfn == gfn_tlb));
+
+ if (entry->domain_id != info->domain_id) {
+ return false;
+ }
+
+ /*
+ * According to spec, IOTLB entries caching first-stage (PGTT=001b) or
+ * nested (PGTT=011b) mapping associated with specified domain-id are
+ * invalidated. Nested isn't supported yet, so only need to check 001b.
+ */
+ if (entry->pgtt == VTD_SM_PASID_ENTRY_FLT) {
+ return true;
+ }
+
+ return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
}
/* Reset all the gen of VTDAddressSpace to zero and set the gen of
@@ -382,7 +394,7 @@ out:
static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
uint16_t domain_id, hwaddr addr, uint64_t pte,
uint8_t access_flags, uint32_t level,
- uint32_t pasid)
+ uint32_t pasid, uint8_t pgtt)
{
VTDIOTLBEntry *entry = g_malloc(sizeof(*entry));
struct vtd_iotlb_key *key = g_malloc(sizeof(*key));
@@ -400,6 +412,7 @@ static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
entry->access_flags = access_flags;
entry->mask = vtd_pt_level_page_mask(level);
entry->pasid = pasid;
+ entry->pgtt = pgtt;
key->gfn = gfn;
key->sid = source_id;
@@ -2071,7 +2084,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
bool is_fpd_set = false;
bool reads = true;
bool writes = true;
- uint8_t access_flags;
+ uint8_t access_flags, pgtt;
bool rid2pasid = (pasid == PCI_NO_PASID) && s->root_scalable;
VTDIOTLBEntry *iotlb_entry;
@@ -2179,9 +2192,11 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
if (s->scalable_modern && s->root_scalable) {
ret_fr = vtd_iova_to_flpte(s, &ce, addr, is_write, &pte, &level,
&reads, &writes, s->aw_bits, pasid);
+ pgtt = VTD_SM_PASID_ENTRY_FLT;
} else {
ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
&reads, &writes, s->aw_bits, pasid);
+ pgtt = VTD_SM_PASID_ENTRY_SLT;
}
if (ret_fr) {
vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
@@ -2192,7 +2207,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
page_mask = vtd_pt_level_page_mask(level);
access_flags = IOMMU_ACCESS_FLAG(reads, writes);
vtd_update_iotlb(s, source_id, vtd_get_domain_id(s, &ce, pasid),
- addr, pte, access_flags, level, pasid);
+ addr, pte, access_flags, level, pasid, pgtt);
out:
vtd_iommu_unlock(s);
entry->iova = addr & page_mask;
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 10/17] intel_iommu: Process PASID-based iotlb invalidation
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (8 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-23 16:18 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 11/17] intel_iommu: Extract device IOTLB invalidation logic Zhenzhong Duan
` (6 subsequent siblings)
16 siblings, 1 reply; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
PASID-based iotlb (piotlb) is used when walking the Intel
VT-d stage-1 page table.
This emulates the stage-1 page table iotlb invalidation requested
by a PASID-based IOTLB Invalidate Descriptor (P_IOTLB).
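As an illustration only (the descriptor value below is made up), the new
masks decode the second qword of a P_IOTLB descriptor along these lines:
    uint64_t val1 = 0x0000000000a80047ULL;  /* hypothetical inv_desc->val[1] */
    hwaddr   addr = val1 & ~0xfffULL;       /* VTD_INV_DESC_PIOTLB_ADDR -> 0xa80000 */
    uint8_t  am   = val1 & 0x3fULL;         /* VTD_INV_DESC_PIOTLB_AM   -> 7 */
    bool     ih   = (val1 >> 6) & 0x1;      /* VTD_INV_DESC_PIOTLB_IH   -> 1 */
    /* the request covers (1 << am) 4K pages: 0xa80000 .. 0xafffff */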
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 3 +++
hw/i386/intel_iommu.c | 45 ++++++++++++++++++++++++++++++++++
2 files changed, 48 insertions(+)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index cf0f176e06..7dd8176e86 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -470,6 +470,9 @@ typedef union VTDInvDesc VTDInvDesc;
#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & \
VTD_DOMAIN_ID_MASK)
+#define VTD_INV_DESC_PIOTLB_ADDR(val) ((val) & ~0xfffULL)
+#define VTD_INV_DESC_PIOTLB_AM(val) ((val) & 0x3fULL)
+#define VTD_INV_DESC_PIOTLB_IH(val) (((val) >> 6) & 0x1)
/* Information about page-selective IOTLB invalidate */
struct VTDIOTLBPageInvInfo {
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 8d47e5ba78..8ebb6dbd7d 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -322,6 +322,28 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
}
+static gboolean vtd_hash_remove_by_page_piotlb(gpointer key, gpointer value,
+ gpointer user_data)
+{
+ VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
+ VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
+ uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
+ uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
+
+ /*
+ * According to spec, PASID-based-IOTLB Invalidation in page granularity
+ * doesn't invalidate IOTLB entries caching second-stage (PGTT=010b)
+ * or pass-through (PGTT=100b) mappings. Nested isn't supported yet,
+ * so only need to check first-stage (PGTT=001b) mappings.
+ */
+ if (entry->pgtt != VTD_SM_PASID_ENTRY_FLT) {
+ return false;
+ }
+
+ return entry->domain_id == info->domain_id && entry->pasid == info->pasid &&
+ ((entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb);
+}
+
/* Reset all the gen of VTDAddressSpace to zero and set the gen of
* IntelIOMMUState to 1. Must be called with IOMMU lock held.
*/
@@ -2886,11 +2908,30 @@ static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
}
}
+static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
+ uint32_t pasid, hwaddr addr, uint8_t am,
+ bool ih)
+{
+ VTDIOTLBPageInvInfo info;
+
+ info.domain_id = domain_id;
+ info.pasid = pasid;
+ info.addr = addr;
+ info.mask = ~((1 << am) - 1);
+
+ vtd_iommu_lock(s);
+ g_hash_table_foreach_remove(s->iotlb,
+ vtd_hash_remove_by_page_piotlb, &info);
+ vtd_iommu_unlock(s);
+}
+
static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
VTDInvDesc *inv_desc)
{
uint16_t domain_id;
uint32_t pasid;
+ uint8_t am;
+ hwaddr addr;
if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
(inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1)) {
@@ -2907,6 +2948,10 @@ static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
break;
case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
+ am = VTD_INV_DESC_PIOTLB_AM(inv_desc->val[1]);
+ addr = (hwaddr) VTD_INV_DESC_PIOTLB_ADDR(inv_desc->val[1]);
+ vtd_piotlb_page_invalidate(s, domain_id, pasid, addr, am,
+ VTD_INV_DESC_PIOTLB_IH(inv_desc->val[1]));
break;
default:
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 11/17] intel_iommu: Extract device IOTLB invalidation logic
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (9 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 10/17] intel_iommu: Process PASID-based " Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-24 8:35 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 12/17] intel_iommu: Add an internal API to find an address space with PASID Zhenzhong Duan
` (5 subsequent siblings)
16 siblings, 1 reply; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Philippe Mathieu-Daudé,
Zhenzhong Duan, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost
From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
This piece of code can be shared by both IOTLB invalidation and
PASID-based IOTLB invalidation.
No functional changes intended.
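As a worked example of the ATS 'S' encoding the extracted helper keeps
handling (the address below is made up):
    hwaddr   addr = 0x12347000;     /* S = 1, address bits 15:12 = 0111b */
    uint64_t sz   = (VTD_PAGE_SIZE * 2) << cto64(addr >> VTD_PAGE_SHIFT);
    /* cto64(0x12347) = 3 trailing one bits, so sz = 8K << 3 = 64K */
    addr &= ~(sz - 1);              /* aligned down to 0x12340000 */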
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu.c | 57 +++++++++++++++++++++++++------------------
1 file changed, 33 insertions(+), 24 deletions(-)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 8ebb6dbd7d..4d5a457f92 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -2975,13 +2975,43 @@ static bool vtd_process_inv_iec_desc(IntelIOMMUState *s,
return true;
}
+static void do_invalidate_device_tlb(VTDAddressSpace *vtd_dev_as,
+ bool size, hwaddr addr)
+{
+ /*
+ * According to ATS spec table 2.4:
+ * S = 0, bits 15:12 = xxxx range size: 4K
+ * S = 1, bits 15:12 = xxx0 range size: 8K
+ * S = 1, bits 15:12 = xx01 range size: 16K
+ * S = 1, bits 15:12 = x011 range size: 32K
+ * S = 1, bits 15:12 = 0111 range size: 64K
+ * ...
+ */
+
+ IOMMUTLBEvent event;
+ uint64_t sz;
+
+ if (size) {
+ sz = (VTD_PAGE_SIZE * 2) << cto64(addr >> VTD_PAGE_SHIFT);
+ addr &= ~(sz - 1);
+ } else {
+ sz = VTD_PAGE_SIZE;
+ }
+
+ event.type = IOMMU_NOTIFIER_DEVIOTLB_UNMAP;
+ event.entry.target_as = &vtd_dev_as->as;
+ event.entry.addr_mask = sz - 1;
+ event.entry.iova = addr;
+ event.entry.perm = IOMMU_NONE;
+ event.entry.translated_addr = 0;
+ memory_region_notify_iommu(&vtd_dev_as->iommu, 0, event);
+}
+
static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
VTDInvDesc *inv_desc)
{
VTDAddressSpace *vtd_dev_as;
- IOMMUTLBEvent event;
hwaddr addr;
- uint64_t sz;
uint16_t sid;
bool size;
@@ -3006,28 +3036,7 @@ static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
goto done;
}
- /* According to ATS spec table 2.4:
- * S = 0, bits 15:12 = xxxx range size: 4K
- * S = 1, bits 15:12 = xxx0 range size: 8K
- * S = 1, bits 15:12 = xx01 range size: 16K
- * S = 1, bits 15:12 = x011 range size: 32K
- * S = 1, bits 15:12 = 0111 range size: 64K
- * ...
- */
- if (size) {
- sz = (VTD_PAGE_SIZE * 2) << cto64(addr >> VTD_PAGE_SHIFT);
- addr &= ~(sz - 1);
- } else {
- sz = VTD_PAGE_SIZE;
- }
-
- event.type = IOMMU_NOTIFIER_DEVIOTLB_UNMAP;
- event.entry.target_as = &vtd_dev_as->as;
- event.entry.addr_mask = sz - 1;
- event.entry.iova = addr;
- event.entry.perm = IOMMU_NONE;
- event.entry.translated_addr = 0;
- memory_region_notify_iommu(&vtd_dev_as->iommu, 0, event);
+ do_invalidate_device_tlb(vtd_dev_as, size, addr);
done:
return true;
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 12/17] intel_iommu: Add an internal API to find an address space with PASID
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (10 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 11/17] intel_iommu: Extract device IOTLB invalidation logic Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 13/17] intel_iommu: Add support for PASID-based device IOTLB invalidation Zhenzhong Duan
` (4 subsequent siblings)
16 siblings, 0 replies; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
This will be used to implement PASID-based device IOTLB invalidation.
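A minimal usage sketch (hypothetical caller, roughly what the later
device-TLB invalidation patch does with it):
    VTDAddressSpace *vtd_dev_as = vtd_get_as_by_sid_and_pasid(s, sid, pasid);
    if (!vtd_dev_as) {
        return true;   /* unknown SID/PASID pair: nothing to invalidate */
    }
    do_invalidate_device_tlb(vtd_dev_as, size, addr);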
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu.c | 39 ++++++++++++++++++++++++---------------
1 file changed, 24 insertions(+), 15 deletions(-)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 4d5a457f92..a17ce2b1f1 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -70,6 +70,11 @@ struct vtd_hiod_key {
uint8_t devfn;
};
+struct vtd_as_raw_key {
+ uint16_t sid;
+ uint32_t pasid;
+};
+
struct vtd_iotlb_key {
uint64_t gfn;
uint32_t pasid;
@@ -1878,29 +1883,33 @@ static inline bool vtd_is_interrupt_addr(hwaddr addr)
return VTD_INTERRUPT_ADDR_FIRST <= addr && addr <= VTD_INTERRUPT_ADDR_LAST;
}
-static gboolean vtd_find_as_by_sid(gpointer key, gpointer value,
- gpointer user_data)
+static gboolean vtd_find_as_by_sid_and_pasid(gpointer key, gpointer value,
+ gpointer user_data)
{
struct vtd_as_key *as_key = (struct vtd_as_key *)key;
- uint16_t target_sid = *(uint16_t *)user_data;
+ struct vtd_as_raw_key target = *(struct vtd_as_raw_key *)user_data;
uint16_t sid = PCI_BUILD_BDF(pci_bus_num(as_key->bus), as_key->devfn);
- return sid == target_sid;
+
+ return (as_key->pasid == target.pasid) &&
+ (sid == target.sid);
}
-static VTDAddressSpace *vtd_get_as_by_sid(IntelIOMMUState *s, uint16_t sid)
+static VTDAddressSpace *vtd_get_as_by_sid_and_pasid(IntelIOMMUState *s,
+ uint16_t sid,
+ uint32_t pasid)
{
- uint8_t bus_num = PCI_BUS_NUM(sid);
- VTDAddressSpace *vtd_as = s->vtd_as_cache[bus_num];
-
- if (vtd_as &&
- (sid == PCI_BUILD_BDF(pci_bus_num(vtd_as->bus), vtd_as->devfn))) {
- return vtd_as;
- }
+ struct vtd_as_raw_key key = {
+ .sid = sid,
+ .pasid = pasid
+ };
- vtd_as = g_hash_table_find(s->vtd_address_spaces, vtd_find_as_by_sid, &sid);
- s->vtd_as_cache[bus_num] = vtd_as;
+ return g_hash_table_find(s->vtd_address_spaces,
+ vtd_find_as_by_sid_and_pasid, &key);
+}
- return vtd_as;
+static VTDAddressSpace *vtd_get_as_by_sid(IntelIOMMUState *s, uint16_t sid)
+{
+ return vtd_get_as_by_sid_and_pasid(s, sid, PCI_NO_PASID);
}
static void vtd_pt_enable_fast_path(IntelIOMMUState *s, uint16_t source_id)
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 13/17] intel_iommu: Add support for PASID-based device IOTLB invalidation
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (11 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 12/17] intel_iommu: Add an internal API to find an address space with PASID Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 14/17] intel_iommu: piotlb invalidation should notify unmap Zhenzhong Duan
` (3 subsequent siblings)
16 siblings, 0 replies; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 11 ++++++++
hw/i386/intel_iommu.c | 50 ++++++++++++++++++++++++++++++++++
2 files changed, 61 insertions(+)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 7dd8176e86..ed358aa763 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -377,6 +377,7 @@ typedef union VTDInvDesc VTDInvDesc;
#define VTD_INV_DESC_WAIT 0x5 /* Invalidation Wait Descriptor */
#define VTD_INV_DESC_PIOTLB 0x6 /* PASID-IOTLB Invalidate Desc */
#define VTD_INV_DESC_PC 0x7 /* PASID-cache Invalidate Desc */
+#define VTD_INV_DESC_DEV_PIOTLB 0x8 /* PASID-based-DIOTLB inv_desc*/
#define VTD_INV_DESC_NONE 0 /* Not an Invalidate Descriptor */
/* Masks for Invalidation Wait Descriptor*/
@@ -420,6 +421,16 @@ typedef union VTDInvDesc VTDInvDesc;
#define VTD_INV_DESC_DEVICE_IOTLB_RSVD_HI 0xffeULL
#define VTD_INV_DESC_DEVICE_IOTLB_RSVD_LO 0xffff0000ffe0fff8
+/* Mask for PASID Device IOTLB Invalidate Descriptor */
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_ADDR(val) ((val) & \
+ 0xfffffffffffff000ULL)
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_SIZE(val) ((val >> 11) & 0x1)
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_GLOBAL(val) ((val) & 0x1)
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_SID(val) (((val) >> 16) & 0xffffULL)
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_PASID(val) ((val >> 32) & 0xfffffULL)
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_HI 0x7feULL
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_LO 0xfff000000000f000ULL
+
/* Rsvd field masks for spte */
#define VTD_SPTE_SNP 0x800ULL
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index a17ce2b1f1..8b66d6cfa5 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3016,6 +3016,49 @@ static void do_invalidate_device_tlb(VTDAddressSpace *vtd_dev_as,
memory_region_notify_iommu(&vtd_dev_as->iommu, 0, event);
}
+static bool vtd_process_device_piotlb_desc(IntelIOMMUState *s,
+ VTDInvDesc *inv_desc)
+{
+ uint16_t sid;
+ VTDAddressSpace *vtd_dev_as;
+ bool size;
+ bool global;
+ hwaddr addr;
+ uint32_t pasid;
+
+ if ((inv_desc->hi & VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_HI) ||
+ (inv_desc->lo & VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_LO)) {
+ error_report_once("%s: invalid pasid-based dev iotlb inv desc:"
+ "hi=%"PRIx64 "(reserved nonzero)",
+ __func__, inv_desc->hi);
+ return false;
+ }
+
+ global = VTD_INV_DESC_PASID_DEVICE_IOTLB_GLOBAL(inv_desc->hi);
+ size = VTD_INV_DESC_PASID_DEVICE_IOTLB_SIZE(inv_desc->hi);
+ addr = VTD_INV_DESC_PASID_DEVICE_IOTLB_ADDR(inv_desc->hi);
+ sid = VTD_INV_DESC_PASID_DEVICE_IOTLB_SID(inv_desc->lo);
+ if (global) {
+ QLIST_FOREACH(vtd_dev_as, &s->vtd_as_with_notifiers, next) {
+ if ((vtd_dev_as->pasid != PCI_NO_PASID) &&
+ (PCI_BUILD_BDF(pci_bus_num(vtd_dev_as->bus),
+ vtd_dev_as->devfn) == sid)) {
+ do_invalidate_device_tlb(vtd_dev_as, size, addr);
+ }
+ }
+ } else {
+ pasid = VTD_INV_DESC_PASID_DEVICE_IOTLB_PASID(inv_desc->lo);
+ vtd_dev_as = vtd_get_as_by_sid_and_pasid(s, sid, pasid);
+ if (!vtd_dev_as) {
+ return true;
+ }
+
+ do_invalidate_device_tlb(vtd_dev_as, size, addr);
+ }
+
+ return true;
+}
+
static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
VTDInvDesc *inv_desc)
{
@@ -3110,6 +3153,13 @@ static bool vtd_process_inv_desc(IntelIOMMUState *s)
}
break;
+ case VTD_INV_DESC_DEV_PIOTLB:
+ trace_vtd_inv_desc("device-piotlb", inv_desc.hi, inv_desc.lo);
+ if (!vtd_process_device_piotlb_desc(s, &inv_desc)) {
+ return false;
+ }
+ break;
+
case VTD_INV_DESC_DEVICE:
trace_vtd_inv_desc("device", inv_desc.hi, inv_desc.lo);
if (!vtd_process_device_iotlb_desc(s, &inv_desc)) {
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 14/17] intel_iommu: piotlb invalidation should notify unmap
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (12 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 13/17] intel_iommu: Add support for PASID-based device IOTLB invalidation Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-24 5:45 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 15/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode Zhenzhong Duan
` (2 subsequent siblings)
16 siblings, 1 reply; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Yi Sun, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
This is used by emulated devices which cache address
translation results. When a piotlb invalidation is issued in the guest,
those caches should be refreshed.
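For context, a device model that wants to see these events would register
an UNMAP notifier roughly as below; this is only a sketch, and MyDev plus
my_dev_flush_cached_translations are made-up names:
    typedef struct MyDev {
        IOMMUNotifier n;
        /* ... device state, including its cached translations ... */
    } MyDev;
    static void my_dev_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
    {
        MyDev *dev = container_of(n, MyDev, n);
        /* iotlb->iova .. iotlb->iova + iotlb->addr_mask is now stale */
        my_dev_flush_cached_translations(dev, iotlb->iova, iotlb->addr_mask);
    }
    /* at realize time, against the vIOMMU memory region seen by the device */
    iommu_notifier_init(&dev->n, my_dev_unmap_notify, IOMMU_NOTIFIER_UNMAP,
                        0, HWADDR_MAX, 0);
    memory_region_register_iommu_notifier(iommu_mr, &dev->n, &error_fatal);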
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu.c | 35 ++++++++++++++++++++++++++++++++++-
1 file changed, 34 insertions(+), 1 deletion(-)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 8b66d6cfa5..c0116497b1 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -2910,7 +2910,7 @@ static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
continue;
}
- if (!s->scalable_modern) {
+ if (!s->scalable_modern || !vtd_as_has_map_notifier(vtd_as)) {
vtd_address_space_sync(vtd_as);
}
}
@@ -2922,6 +2922,9 @@ static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
bool ih)
{
VTDIOTLBPageInvInfo info;
+ VTDAddressSpace *vtd_as;
+ VTDContextEntry ce;
+ hwaddr size = (1 << am) * VTD_PAGE_SIZE;
info.domain_id = domain_id;
info.pasid = pasid;
@@ -2932,6 +2935,36 @@ static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
g_hash_table_foreach_remove(s->iotlb,
vtd_hash_remove_by_page_piotlb, &info);
vtd_iommu_unlock(s);
+
+ QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
+ if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
+ vtd_as->devfn, &ce) &&
+ domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
+ uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
+ IOMMUTLBEvent event;
+
+ if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
+ vtd_as->pasid != pasid) {
+ continue;
+ }
+
+ /*
+ * Page-Selective-within-PASID PASID-based-IOTLB Invalidation
+ * does not flush stage-2 entries. See spec section 6.5.2.4
+ */
+ if (!s->scalable_modern) {
+ continue;
+ }
+
+ event.type = IOMMU_NOTIFIER_UNMAP;
+ event.entry.target_as = &address_space_memory;
+ event.entry.iova = addr;
+ event.entry.perm = IOMMU_NONE;
+ event.entry.addr_mask = size - 1;
+ event.entry.translated_addr = 0;
+ memory_region_notify_iommu(&vtd_as->iommu, 0, event);
+ }
+ }
}
static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 15/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (13 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 14/17] intel_iommu: piotlb invalidation should notify unmap Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-18 9:14 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be string option Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 17/17] tests/qtest: Add intel-iommu test Zhenzhong Duan
16 siblings, 1 reply; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
According to VTD spec, stage-1 page table could support 4-level and
5-level paging.
However, 5-level paging translation emulation is not supported yet.
That means the only supported value for aw_bits is 48.
So default aw_bits to 48 in scalable modern mode. In other cases,
it still defaults to 39 for compatibility.
Add a check to ensure the user-specified value is 48 in modern mode
for now.
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
include/hw/i386/intel_iommu.h | 2 +-
hw/i386/intel_iommu.c | 16 +++++++++++++++-
2 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index b843d069cc..48134bda11 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState, INTEL_IOMMU_DEVICE)
#define DMAR_REG_SIZE 0x230
#define VTD_HOST_AW_39BIT 39
#define VTD_HOST_AW_48BIT 48
-#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
+#define VTD_HOST_AW_AUTO 0xff
#define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
#define DMAR_REPORT_F_INTR (1)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index c0116497b1..2804c3628a 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3768,7 +3768,7 @@ static Property vtd_properties[] = {
ON_OFF_AUTO_AUTO),
DEFINE_PROP_BOOL("x-buggy-eim", IntelIOMMUState, buggy_eim, false),
DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
- VTD_HOST_ADDRESS_WIDTH),
+ VTD_HOST_AW_AUTO),
DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode, FALSE),
DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode, FALSE),
DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control, false),
@@ -4686,6 +4686,14 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
}
}
+ if (s->aw_bits == VTD_HOST_AW_AUTO) {
+ if (s->scalable_modern) {
+ s->aw_bits = VTD_HOST_AW_48BIT;
+ } else {
+ s->aw_bits = VTD_HOST_AW_39BIT;
+ }
+ }
+
if ((s->aw_bits != VTD_HOST_AW_39BIT) &&
(s->aw_bits != VTD_HOST_AW_48BIT) &&
!s->scalable_modern) {
@@ -4694,6 +4702,12 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
return false;
}
+ if ((s->aw_bits != VTD_HOST_AW_48BIT) && s->scalable_modern) {
+ error_setg(errp, "Supported values for aw-bits are: %d",
+ VTD_HOST_AW_48BIT);
+ return false;
+ }
+
if (s->scalable_mode && !s->dma_drain) {
error_setg(errp, "Need to set dma_drain for scalable mode");
return false;
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be string option
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (14 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 15/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-18 9:25 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 17/17] tests/qtest: Add intel-iommu test Zhenzhong Duan
16 siblings, 1 reply; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Yi Sun, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Yi Liu <yi.l.liu@intel.com>
Intel VT-d 3.0 introduces scalable mode, which has a bunch of capabilities
related to scalable mode translation, so there are multiple possible combinations.
This vIOMMU implementation wants to simplify that for the user by providing
typical combinations. The user can configure it with the "x-scalable-mode"
option. The usage is as below:
"-device intel-iommu,x-scalable-mode=["legacy"|"modern"|"off"]"
- "legacy": gives support for stage-2 page table
- "modern": gives support for stage-1 page table
- "off": no scalable mode support
- if not configured, means no scalable mode support, if not proper
configured, will throw error
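An illustrative invocation (mirroring what the qtest added later in this
series uses):
    qemu-system-x86_64 -M q35 -device intel-iommu,x-scalable-mode=modern ...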
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
include/hw/i386/intel_iommu.h | 1 +
hw/i386/intel_iommu.c | 24 +++++++++++++++++++++++-
2 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index 48134bda11..650641544c 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -263,6 +263,7 @@ struct IntelIOMMUState {
bool caching_mode; /* RO - is cap CM enabled? */
bool scalable_mode; /* RO - is Scalable Mode supported? */
+ char *scalable_mode_str; /* RO - admin's Scalable Mode config */
bool scalable_modern; /* RO - is modern SM supported? */
bool snoop_control; /* RO - is SNP filed supported? */
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 2804c3628a..14d05fce1d 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3770,7 +3770,7 @@ static Property vtd_properties[] = {
DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
VTD_HOST_AW_AUTO),
DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode, FALSE),
- DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode, FALSE),
+ DEFINE_PROP_STRING("x-scalable-mode", IntelIOMMUState, scalable_mode_str),
DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control, false),
DEFINE_PROP_BOOL("x-pasid-mode", IntelIOMMUState, pasid, false),
DEFINE_PROP_BOOL("dma-drain", IntelIOMMUState, dma_drain, true),
@@ -4686,6 +4686,28 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
}
}
+ if (s->scalable_mode_str &&
+ (strcmp(s->scalable_mode_str, "off") &&
+ strcmp(s->scalable_mode_str, "modern") &&
+ strcmp(s->scalable_mode_str, "legacy"))) {
+ error_setg(errp, "Invalid x-scalable-mode config,"
+ "Please use \"modern\", \"legacy\" or \"off\"");
+ return false;
+ }
+
+ if (s->scalable_mode_str &&
+ !strcmp(s->scalable_mode_str, "legacy")) {
+ s->scalable_mode = true;
+ s->scalable_modern = false;
+ } else if (s->scalable_mode_str &&
+ !strcmp(s->scalable_mode_str, "modern")) {
+ s->scalable_mode = true;
+ s->scalable_modern = true;
+ } else {
+ s->scalable_mode = false;
+ s->scalable_modern = false;
+ }
+
if (s->aw_bits == VTD_HOST_AW_AUTO) {
if (s->scalable_modern) {
s->aw_bits = VTD_HOST_AW_48BIT;
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH v1 17/17] tests/qtest: Add intel-iommu test
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (15 preceding siblings ...)
2024-07-18 8:16 ` [PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be string option Zhenzhong Duan
@ 2024-07-18 8:16 ` Zhenzhong Duan
2024-07-24 5:58 ` CLEMENT MATHIEU--DRIF
16 siblings, 1 reply; 50+ messages in thread
From: Zhenzhong Duan @ 2024-07-18 8:16 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Thomas Huth, Laurent Vivier, Paolo Bonzini
Add the framework to test the intel-iommu device.
Currently it only tests cap/ecap bit correctness in scalable
modern mode, and also checks that cap/ecap stay consistent across
a system reset.
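For reference, the test can be run directly against a built binary roughly
like this (paths assume an in-tree build directory), or through the usual
check-qtest harness:
    cd build
    QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/intel-iommu-test
    make check-qtest-x86_64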
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
MAINTAINERS | 1 +
include/hw/i386/intel_iommu.h | 1 +
tests/qtest/intel-iommu-test.c | 71 ++++++++++++++++++++++++++++++++++
tests/qtest/meson.build | 1 +
4 files changed, 74 insertions(+)
create mode 100644 tests/qtest/intel-iommu-test.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 7d9811458c..ec765bf3d3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3662,6 +3662,7 @@ S: Supported
F: hw/i386/intel_iommu.c
F: hw/i386/intel_iommu_internal.h
F: include/hw/i386/intel_iommu.h
+F: tests/qtest/intel-iommu-test.c
AMD-Vi Emulation
S: Orphan
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index 650641544c..b1848dbec6 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -47,6 +47,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState, INTEL_IOMMU_DEVICE)
#define VTD_HOST_AW_48BIT 48
#define VTD_HOST_AW_AUTO 0xff
#define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
+#define VTD_MGAW_FROM_CAP(cap) ((cap >> 16) & 0x3fULL)
#define DMAR_REPORT_F_INTR (1)
diff --git a/tests/qtest/intel-iommu-test.c b/tests/qtest/intel-iommu-test.c
new file mode 100644
index 0000000000..8e07034f6f
--- /dev/null
+++ b/tests/qtest/intel-iommu-test.c
@@ -0,0 +1,71 @@
+/*
+ * QTest testcase for intel-iommu
+ *
+ * Copyright (c) 2024 Intel, Inc.
+ *
+ * Author: Zhenzhong Duan <zhenzhong.duan@intel.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "libqtest.h"
+#include "hw/i386/intel_iommu_internal.h"
+
+#define CAP_MODERN_FIXED1 (VTD_CAP_FRO | VTD_CAP_NFR | VTD_CAP_ND | \
+ VTD_CAP_MAMV | VTD_CAP_PSI | VTD_CAP_SLLPS)
+#define ECAP_MODERN_FIXED1 (VTD_ECAP_QI | VTD_ECAP_IRO | VTD_ECAP_MHMV | \
+ VTD_ECAP_SMTS | VTD_ECAP_FLTS)
+
+static inline uint32_t vtd_reg_readl(QTestState *s, uint64_t offset)
+{
+ return qtest_readl(s, Q35_HOST_BRIDGE_IOMMU_ADDR + offset);
+}
+
+static inline uint64_t vtd_reg_readq(QTestState *s, uint64_t offset)
+{
+ return qtest_readq(s, Q35_HOST_BRIDGE_IOMMU_ADDR + offset);
+}
+
+static void test_intel_iommu_modern(void)
+{
+ uint8_t init_csr[DMAR_REG_SIZE]; /* register values */
+ uint8_t post_reset_csr[DMAR_REG_SIZE]; /* register values */
+ uint64_t cap, ecap, tmp;
+ QTestState *s;
+
+ s = qtest_init("-M q35 -device intel-iommu,x-scalable-mode=modern");
+
+ cap = vtd_reg_readq(s, DMAR_CAP_REG);
+ g_assert((cap & CAP_MODERN_FIXED1) == CAP_MODERN_FIXED1);
+
+ tmp = cap & VTD_CAP_SAGAW_MASK;
+ g_assert(tmp == (VTD_CAP_SAGAW_39bit | VTD_CAP_SAGAW_48bit));
+
+ tmp = VTD_MGAW_FROM_CAP(cap);
+ g_assert(tmp == VTD_HOST_AW_48BIT - 1);
+
+ ecap = vtd_reg_readq(s, DMAR_ECAP_REG);
+ g_assert((ecap & ECAP_MODERN_FIXED1) == ECAP_MODERN_FIXED1);
+ g_assert(ecap & VTD_ECAP_IR);
+
+ qtest_memread(s, Q35_HOST_BRIDGE_IOMMU_ADDR, init_csr, DMAR_REG_SIZE);
+
+ qobject_unref(qtest_qmp(s, "{ 'execute': 'system_reset' }"));
+ qtest_qmp_eventwait(s, "RESET");
+
+ qtest_memread(s, Q35_HOST_BRIDGE_IOMMU_ADDR, post_reset_csr, DMAR_REG_SIZE);
+ /* Ensure registers are consistent after hard reset */
+ g_assert(!memcmp(init_csr, post_reset_csr, DMAR_REG_SIZE));
+
+ qtest_quit(s);
+}
+
+int main(int argc, char **argv)
+{
+ g_test_init(&argc, &argv, NULL);
+ qtest_add_func("/q35/intel-iommu/modern", test_intel_iommu_modern);
+
+ return g_test_run();
+}
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 6508bfb1a2..20d05d471b 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -79,6 +79,7 @@ qtests_i386 = \
(config_all_devices.has_key('CONFIG_SB16') ? ['fuzz-sb16-test'] : []) + \
(config_all_devices.has_key('CONFIG_SDHCI_PCI') ? ['fuzz-sdcard-test'] : []) + \
(config_all_devices.has_key('CONFIG_ESP_PCI') ? ['am53c974-test'] : []) + \
+ (config_all_devices.has_key('CONFIG_VTD') ? ['intel-iommu-test'] : []) + \
(host_os != 'windows' and \
config_all_devices.has_key('CONFIG_ACPI_ERST') ? ['erst-test'] : []) + \
(config_all_devices.has_key('CONFIG_PCIE_PORT') and \
--
2.34.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-07-18 8:16 ` [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode Zhenzhong Duan
@ 2024-07-18 9:02 ` CLEMENT MATHIEU--DRIF
2024-07-19 2:47 ` Duan, Zhenzhong
0 siblings, 1 reply; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-18 9:02 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> Add a new element scalable_mode in IntelIOMMUState to mark scalable
> modern mode, this element will be exposed as an intel_iommu property
> finally.
>
> For now, it's only a placeholder and used for cap/ecap initialization,
> compatibility check and block host device passthrough until nesting
> is supported.
>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/i386/intel_iommu_internal.h | 2 ++
> include/hw/i386/intel_iommu.h | 1 +
> hw/i386/intel_iommu.c | 34 +++++++++++++++++++++++-----------
> 3 files changed, 26 insertions(+), 11 deletions(-)
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index c0ca7b372f..4e0331caba 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -195,6 +195,7 @@
> #define VTD_ECAP_PASID (1ULL << 40)
> #define VTD_ECAP_SMTS (1ULL << 43)
> #define VTD_ECAP_SLTS (1ULL << 46)
> +#define VTD_ECAP_FLTS (1ULL << 47)
>
> /* CAP_REG */
> /* (offset >> 4) << 24 */
> @@ -211,6 +212,7 @@
> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
> #define VTD_CAP_DRAIN_READ (1ULL << 55)
> +#define VTD_CAP_FS1GP (1ULL << 56)
> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ | VTD_CAP_DRAIN_WRITE)
> #define VTD_CAP_CM (1ULL << 7)
> #define VTD_PASID_ID_SHIFT 20
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index 1eb05c29fc..788ed42477 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>
> bool caching_mode; /* RO - is cap CM enabled? */
> bool scalable_mode; /* RO - is Scalable Mode supported? */
> + bool scalable_modern; /* RO - is modern SM supported? */
> bool snoop_control; /* RO - is SNP filed supported? */
>
> dma_addr_t root; /* Current root table pointer */
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 1cff8b00ae..40cbd4a0f4 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -755,16 +755,20 @@ static inline bool vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
> }
>
> /* Return true if check passed, otherwise false */
> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
> - VTDPASIDEntry *pe)
> +static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
> {
What about using the cap/ecap registers to know if the translation types
are supported or not.
Otherwise, we could add a comment to explain why we expect
s->scalable_modern to give us enough information.
> + X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
> +
> switch (VTD_PE_GET_TYPE(pe)) {
> + case VTD_SM_PASID_ENTRY_FLT:
> + return s->scalable_modern;
> case VTD_SM_PASID_ENTRY_SLT:
> - return true;
> + return !s->scalable_modern;
> + case VTD_SM_PASID_ENTRY_NESTED:
> + /* Not support NESTED page table type yet */
> + return false;
> case VTD_SM_PASID_ENTRY_PT:
> return x86_iommu->pt_supported;
> - case VTD_SM_PASID_ENTRY_FLT:
> - case VTD_SM_PASID_ENTRY_NESTED:
> default:
> /* Unknown type */
> return false;
> @@ -813,7 +817,6 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> uint8_t pgtt;
> uint32_t index;
> dma_addr_t entry_size;
> - X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>
> index = VTD_PASID_TABLE_INDEX(pasid);
> entry_size = VTD_PASID_ENTRY_SIZE;
> @@ -827,7 +830,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> }
>
> /* Do translation type check */
> - if (!vtd_pe_type_check(x86_iommu, pe)) {
> + if (!vtd_pe_type_check(s, pe)) {
> return -VTD_FR_PASID_TABLE_ENTRY_INV;
> }
>
> @@ -3861,7 +3864,13 @@ static bool vtd_check_hiod(IntelIOMMUState *s, HostIOMMUDevice *hiod,
> return false;
> }
>
> - return true;
> + if (!s->scalable_modern) {
> + /* All checks requested by VTD non-modern mode pass */
> + return true;
> + }
> +
> + error_setg(errp, "host device is unsupported in scalable modern mode yet");
> + return false;
> }
>
> static bool vtd_dev_set_iommu_device(PCIBus *bus, void *opaque, int devfn,
> @@ -4084,7 +4093,10 @@ static void vtd_cap_init(IntelIOMMUState *s)
> }
>
> /* TODO: read cap/ecap from host to decide which cap to be exposed. */
> - if (s->scalable_mode) {
> + if (s->scalable_modern) {
> + s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
> + s->cap |= VTD_CAP_FS1GP;
> + } else if (s->scalable_mode) {
> s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
> }
>
> @@ -4251,9 +4263,9 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
> }
> }
>
> - /* Currently only address widths supported are 39 and 48 bits */
> if ((s->aw_bits != VTD_HOST_AW_39BIT) &&
> - (s->aw_bits != VTD_HOST_AW_48BIT)) {
> + (s->aw_bits != VTD_HOST_AW_48BIT) &&
> + !s->scalable_modern) {
> error_setg(errp, "Supported values for aw-bits are: %d, %d",
> VTD_HOST_AW_39BIT, VTD_HOST_AW_48BIT);
> return false;
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 02/17] intel_iommu: Make pasid entry type check accurate
2024-07-18 8:16 ` [PATCH v1 02/17] intel_iommu: Make pasid entry type check accurate Zhenzhong Duan
@ 2024-07-18 9:06 ` CLEMENT MATHIEU--DRIF
0 siblings, 0 replies; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-18 9:06 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> When guest configures Nested Translation(011b) or First-stage Translation only
> (001b), type check passed unaccurately.
>
> Fails the type check in those cases as their simulation isn't supported yet.
>
> Fixes: fb43cf739e1 ("intel_iommu: scalable mode emulation")
> Suggested-by: Yi Liu <yi.l.liu@intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/i386/intel_iommu.c | 12 ++++--------
> 1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index e65f5b29a5..1cff8b00ae 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -759,20 +759,16 @@ static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
> VTDPASIDEntry *pe)
> {
> switch (VTD_PE_GET_TYPE(pe)) {
> - case VTD_SM_PASID_ENTRY_FLT:
> case VTD_SM_PASID_ENTRY_SLT:
> - case VTD_SM_PASID_ENTRY_NESTED:
> - break;
> + return true;
> case VTD_SM_PASID_ENTRY_PT:
> - if (!x86_iommu->pt_supported) {
> - return false;
> - }
> - break;
> + return x86_iommu->pt_supported;
> + case VTD_SM_PASID_ENTRY_FLT:
> + case VTD_SM_PASID_ENTRY_NESTED:
> default:
> /* Unknown type */
> return false;
> }
> - return true;
> }
>
> static inline bool vtd_pdire_present(VTDPASIDDirEntry *pdire)
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 15/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-07-18 8:16 ` [PATCH v1 15/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode Zhenzhong Duan
@ 2024-07-18 9:14 ` CLEMENT MATHIEU--DRIF
0 siblings, 0 replies; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-18 9:14 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> According to VTD spec, stage-1 page table could support 4-level and
> 5-level paging.
>
> However, 5-level paging translation emulation is unsupported yet.
> That means the only supported value for aw_bits is 48.
>
> So default aw_bits to 48 in scalable modern mode. In other cases,
> it is still default to 39 for compatibility.
>
> Add a check to ensure user specified value is 48 in modern mode
> for now.
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> include/hw/i386/intel_iommu.h | 2 +-
> hw/i386/intel_iommu.c | 16 +++++++++++++++-
> 2 files changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index b843d069cc..48134bda11 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState, INTEL_IOMMU_DEVICE)
> #define DMAR_REG_SIZE 0x230
> #define VTD_HOST_AW_39BIT 39
> #define VTD_HOST_AW_48BIT 48
> -#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
> +#define VTD_HOST_AW_AUTO 0xff
> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
>
> #define DMAR_REPORT_F_INTR (1)
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index c0116497b1..2804c3628a 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -3768,7 +3768,7 @@ static Property vtd_properties[] = {
> ON_OFF_AUTO_AUTO),
> DEFINE_PROP_BOOL("x-buggy-eim", IntelIOMMUState, buggy_eim, false),
> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
> - VTD_HOST_ADDRESS_WIDTH),
> + VTD_HOST_AW_AUTO),
> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode, FALSE),
> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode, FALSE),
> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control, false),
> @@ -4686,6 +4686,14 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
> }
> }
>
> + if (s->aw_bits == VTD_HOST_AW_AUTO) {
> + if (s->scalable_modern) {
> + s->aw_bits = VTD_HOST_AW_48BIT;
> + } else {
> + s->aw_bits = VTD_HOST_AW_39BIT;
> + }
> + }
> +
> if ((s->aw_bits != VTD_HOST_AW_39BIT) &&
> (s->aw_bits != VTD_HOST_AW_48BIT) &&
> !s->scalable_modern) {
> @@ -4694,6 +4702,12 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
> return false;
> }
>
> + if ((s->aw_bits != VTD_HOST_AW_48BIT) && s->scalable_modern) {
> + error_setg(errp, "Supported values for aw-bits are: %d",
> + VTD_HOST_AW_48BIT);
> + return false;
> + }
> +
> if (s->scalable_mode && !s->dma_drain) {
> error_setg(errp, "Need to set dma_drain for scalable mode");
> return false;
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be string option
2024-07-18 8:16 ` [PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be string option Zhenzhong Duan
@ 2024-07-18 9:25 ` CLEMENT MATHIEU--DRIF
2024-07-19 2:53 ` Duan, Zhenzhong
0 siblings, 1 reply; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-18 9:25 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Yi Sun, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> From: Yi Liu <yi.l.liu@intel.com>
>
> Intel VT-d 3.0 introduces scalable mode, and it has a bunch of capabilities
> related to scalable mode translation, thus there are multiple combinations.
> While this vIOMMU implementation wants to simplify it for user by providing
> typical combinations. User could config it by "x-scalable-mode" option. The
> usage is as below:
>
> "-device intel-iommu,x-scalable-mode=["legacy"|"modern"|"off"]"
>
> - "legacy": gives support for stage-2 page table
> - "modern": gives support for stage-1 page table
> - "off": no scalable mode support
> - if not configured, means no scalable mode support, if not proper
> configured, will throw error
s/proper/properly
Maybe we could split and rephrase the last bullet point to make it clear
that "off" is equivalent to not using the option at all
>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> include/hw/i386/intel_iommu.h | 1 +
> hw/i386/intel_iommu.c | 24 +++++++++++++++++++++++-
> 2 files changed, 24 insertions(+), 1 deletion(-)
>
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index 48134bda11..650641544c 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -263,6 +263,7 @@ struct IntelIOMMUState {
>
> bool caching_mode; /* RO - is cap CM enabled? */
> bool scalable_mode; /* RO - is Scalable Mode supported? */
> + char *scalable_mode_str; /* RO - admin's Scalable Mode config */
> bool scalable_modern; /* RO - is modern SM supported? */
> bool snoop_control; /* RO - is SNP filed supported? */
>
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 2804c3628a..14d05fce1d 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -3770,7 +3770,7 @@ static Property vtd_properties[] = {
> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
> VTD_HOST_AW_AUTO),
> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode, FALSE),
> - DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode, FALSE),
> + DEFINE_PROP_STRING("x-scalable-mode", IntelIOMMUState, scalable_mode_str),
> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control, false),
> DEFINE_PROP_BOOL("x-pasid-mode", IntelIOMMUState, pasid, false),
> DEFINE_PROP_BOOL("dma-drain", IntelIOMMUState, dma_drain, true),
> @@ -4686,6 +4686,28 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
> }
> }
>
> + if (s->scalable_mode_str &&
> + (strcmp(s->scalable_mode_str, "off") &&
> + strcmp(s->scalable_mode_str, "modern") &&
> + strcmp(s->scalable_mode_str, "legacy"))) {
> + error_setg(errp, "Invalid x-scalable-mode config,"
> + "Please use \"modern\", \"legacy\" or \"off\"");
> + return false;
> + }
> +
> + if (s->scalable_mode_str &&
> + !strcmp(s->scalable_mode_str, "legacy")) {
> + s->scalable_mode = true;
> + s->scalable_modern = false;
> + } else if (s->scalable_mode_str &&
> + !strcmp(s->scalable_mode_str, "modern")) {
> + s->scalable_mode = true;
> + s->scalable_modern = true;
> + } else {
> + s->scalable_mode = false;
> + s->scalable_modern = false;
> + }
> +
> if (s->aw_bits == VTD_HOST_AW_AUTO) {
> if (s->scalable_modern) {
> s->aw_bits = VTD_HOST_AW_48BIT;
> --
> 2.34.1
>
LGTM
^ permalink raw reply [flat|nested] 50+ messages in thread
* RE: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-07-18 9:02 ` CLEMENT MATHIEU--DRIF
@ 2024-07-19 2:47 ` Duan, Zhenzhong
2024-07-19 3:22 ` Yi Liu
2024-07-19 4:21 ` CLEMENT MATHIEU--DRIF
0 siblings, 2 replies; 50+ messages in thread
From: Duan, Zhenzhong @ 2024-07-19 2:47 UTC (permalink / raw)
To: CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
>-----Original Message-----
>From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>scalable modern mode
>
>
>
>On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>
>>
>> Add a new element scalable_mode in IntelIOMMUState to mark scalable
>> modern mode, this element will be exposed as an intel_iommu property
>> finally.
>>
>> For now, it's only a placeholder and used for cap/ecap initialization,
>> compatibility check and block host device passthrough until nesting
>> is supported.
>>
>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> ---
>> hw/i386/intel_iommu_internal.h | 2 ++
>> include/hw/i386/intel_iommu.h | 1 +
>> hw/i386/intel_iommu.c | 34 +++++++++++++++++++++++-----------
>> 3 files changed, 26 insertions(+), 11 deletions(-)
>>
>> diff --git a/hw/i386/intel_iommu_internal.h
>b/hw/i386/intel_iommu_internal.h
>> index c0ca7b372f..4e0331caba 100644
>> --- a/hw/i386/intel_iommu_internal.h
>> +++ b/hw/i386/intel_iommu_internal.h
>> @@ -195,6 +195,7 @@
>> #define VTD_ECAP_PASID (1ULL << 40)
>> #define VTD_ECAP_SMTS (1ULL << 43)
>> #define VTD_ECAP_SLTS (1ULL << 46)
>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>
>> /* CAP_REG */
>> /* (offset >> 4) << 24 */
>> @@ -211,6 +212,7 @@
>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>> +#define VTD_CAP_FS1GP (1ULL << 56)
>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>VTD_CAP_DRAIN_WRITE)
>> #define VTD_CAP_CM (1ULL << 7)
>> #define VTD_PASID_ID_SHIFT 20
>> diff --git a/include/hw/i386/intel_iommu.h
>b/include/hw/i386/intel_iommu.h
>> index 1eb05c29fc..788ed42477 100644
>> --- a/include/hw/i386/intel_iommu.h
>> +++ b/include/hw/i386/intel_iommu.h
>> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>>
>> bool caching_mode; /* RO - is cap CM enabled? */
>> bool scalable_mode; /* RO - is Scalable Mode supported? */
>> + bool scalable_modern; /* RO - is modern SM supported? */
>> bool snoop_control; /* RO - is SNP filed supported? */
>>
>> dma_addr_t root; /* Current root table pointer */
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 1cff8b00ae..40cbd4a0f4 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -755,16 +755,20 @@ static inline bool
>vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
>> }
>>
>> /* Return true if check passed, otherwise false */
>> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>> - VTDPASIDEntry *pe)
>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s,
>VTDPASIDEntry *pe)
>> {
>What about using the cap/ecap registers to know if the translation types
>are supported or not.
>Otherwise, we could add a comment to explain why we expect
>s->scalable_modern to give us enough information.
What about the below:
/*
 * VTD_ECAP_FLTS is set in ecap if s->scalable_modern is true; otherwise
 * VTD_ECAP_SLTS may or may not be set, depending on s->scalable_mode.
 * So it's simpler to check s->scalable_modern directly for the PASID
 * entry type check instead of the ecap bits.
 */
Thanks
Zhenzhong
>> + X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>> +
>> switch (VTD_PE_GET_TYPE(pe)) {
>> + case VTD_SM_PASID_ENTRY_FLT:
>> + return s->scalable_modern;
>> case VTD_SM_PASID_ENTRY_SLT:
>> - return true;
>> + return !s->scalable_modern;
>> + case VTD_SM_PASID_ENTRY_NESTED:
>> + /* Not support NESTED page table type yet */
>> + return false;
>> case VTD_SM_PASID_ENTRY_PT:
>> return x86_iommu->pt_supported;
>> - case VTD_SM_PASID_ENTRY_FLT:
>> - case VTD_SM_PASID_ENTRY_NESTED:
>> default:
>> /* Unknown type */
>> return false;
>> @@ -813,7 +817,6 @@ static int
>vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>> uint8_t pgtt;
>> uint32_t index;
>> dma_addr_t entry_size;
>> - X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>>
>> index = VTD_PASID_TABLE_INDEX(pasid);
>> entry_size = VTD_PASID_ENTRY_SIZE;
>> @@ -827,7 +830,7 @@ static int
>vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>> }
>>
>> /* Do translation type check */
>> - if (!vtd_pe_type_check(x86_iommu, pe)) {
>> + if (!vtd_pe_type_check(s, pe)) {
>> return -VTD_FR_PASID_TABLE_ENTRY_INV;
>> }
>>
>> @@ -3861,7 +3864,13 @@ static bool vtd_check_hiod(IntelIOMMUState
>*s, HostIOMMUDevice *hiod,
>> return false;
>> }
>>
>> - return true;
>> + if (!s->scalable_modern) {
>> + /* All checks requested by VTD non-modern mode pass */
>> + return true;
>> + }
>> +
>> + error_setg(errp, "host device is unsupported in scalable modern mode
>yet");
>> + return false;
>> }
>>
>> static bool vtd_dev_set_iommu_device(PCIBus *bus, void *opaque, int
>devfn,
>> @@ -4084,7 +4093,10 @@ static void vtd_cap_init(IntelIOMMUState *s)
>> }
>>
>> /* TODO: read cap/ecap from host to decide which cap to be exposed.
>*/
>> - if (s->scalable_mode) {
>> + if (s->scalable_modern) {
>> + s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
>> + s->cap |= VTD_CAP_FS1GP;
>> + } else if (s->scalable_mode) {
>> s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
>> }
>>
>> @@ -4251,9 +4263,9 @@ static bool vtd_decide_config(IntelIOMMUState
>*s, Error **errp)
>> }
>> }
>>
>> - /* Currently only address widths supported are 39 and 48 bits */
>> if ((s->aw_bits != VTD_HOST_AW_39BIT) &&
>> - (s->aw_bits != VTD_HOST_AW_48BIT)) {
>> + (s->aw_bits != VTD_HOST_AW_48BIT) &&
>> + !s->scalable_modern) {
>> error_setg(errp, "Supported values for aw-bits are: %d, %d",
>> VTD_HOST_AW_39BIT, VTD_HOST_AW_48BIT);
>> return false;
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 50+ messages in thread
* RE: [PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be string option
2024-07-18 9:25 ` CLEMENT MATHIEU--DRIF
@ 2024-07-19 2:53 ` Duan, Zhenzhong
2024-07-19 4:23 ` CLEMENT MATHIEU--DRIF
0 siblings, 1 reply; 50+ messages in thread
From: Duan, Zhenzhong @ 2024-07-19 2:53 UTC (permalink / raw)
To: CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Yi Sun, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
>-----Original Message-----
>From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>Subject: Re: [PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be
>string option
>
>
>
>On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>
>>
>> From: Yi Liu <yi.l.liu@intel.com>
>>
>> Intel VT-d 3.0 introduces scalable mode, and it has a bunch of capabilities
>> related to scalable mode translation, thus there are multiple combinations.
>> While this vIOMMU implementation wants to simplify it for user by
>providing
>> typical combinations. User could config it by "x-scalable-mode" option. The
>> usage is as below:
>>
>> "-device intel-iommu,x-scalable-mode=["legacy"|"modern"|"off"]"
>>
>> - "legacy": gives support for stage-2 page table
>> - "modern": gives support for stage-1 page table
>> - "off": no scalable mode support
>> - if not configured, means no scalable mode support, if not proper
>> configured, will throw error
>s/proper/properly
>Maybe we could split and rephrase the last bullet point to make it clear
>that "off" is equivalent to not using the option at all
You mean splitting the last bullet out into a separate paragraph?
Then what about below:
- "legacy": gives support for stage-2 page table
- "modern": gives support for stage-1 page table
- "off": no scalable mode support
- any other string, will throw error
If x-scalable-mode is not configured, it is equivalent to x-scalable-mode=off.
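For illustration, a sketch of the resulting command lines (matching the option values above):
  -device intel-iommu,x-scalable-mode=legacy    -> scalable mode, stage-2 page table
  -device intel-iommu,x-scalable-mode=modern    -> scalable mode, stage-1 page table
  -device intel-iommu,x-scalable-mode=off       -> scalable mode disabled
  -device intel-iommu                           -> option omitted, same as "off"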
Thanks
Zhenzhong
>>
>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> ---
>> include/hw/i386/intel_iommu.h | 1 +
>> hw/i386/intel_iommu.c | 24 +++++++++++++++++++++++-
>> 2 files changed, 24 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/hw/i386/intel_iommu.h
>b/include/hw/i386/intel_iommu.h
>> index 48134bda11..650641544c 100644
>> --- a/include/hw/i386/intel_iommu.h
>> +++ b/include/hw/i386/intel_iommu.h
>> @@ -263,6 +263,7 @@ struct IntelIOMMUState {
>>
>> bool caching_mode; /* RO - is cap CM enabled? */
>> bool scalable_mode; /* RO - is Scalable Mode supported? */
>> + char *scalable_mode_str; /* RO - admin's Scalable Mode config */
>> bool scalable_modern; /* RO - is modern SM supported? */
>> bool snoop_control; /* RO - is SNP filed supported? */
>>
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 2804c3628a..14d05fce1d 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -3770,7 +3770,7 @@ static Property vtd_properties[] = {
>> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
>> VTD_HOST_AW_AUTO),
>> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode,
>FALSE),
>> - DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState,
>scalable_mode, FALSE),
>> + DEFINE_PROP_STRING("x-scalable-mode", IntelIOMMUState,
>scalable_mode_str),
>> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState,
>snoop_control, false),
>> DEFINE_PROP_BOOL("x-pasid-mode", IntelIOMMUState, pasid, false),
>> DEFINE_PROP_BOOL("dma-drain", IntelIOMMUState, dma_drain, true),
>> @@ -4686,6 +4686,28 @@ static bool
>vtd_decide_config(IntelIOMMUState *s, Error **errp)
>> }
>> }
>>
>> + if (s->scalable_mode_str &&
>> + (strcmp(s->scalable_mode_str, "off") &&
>> + strcmp(s->scalable_mode_str, "modern") &&
>> + strcmp(s->scalable_mode_str, "legacy"))) {
>> + error_setg(errp, "Invalid x-scalable-mode config,"
>> + "Please use \"modern\", \"legacy\" or \"off\"");
>> + return false;
>> + }
>> +
>> + if (s->scalable_mode_str &&
>> + !strcmp(s->scalable_mode_str, "legacy")) {
>> + s->scalable_mode = true;
>> + s->scalable_modern = false;
>> + } else if (s->scalable_mode_str &&
>> + !strcmp(s->scalable_mode_str, "modern")) {
>> + s->scalable_mode = true;
>> + s->scalable_modern = true;
>> + } else {
>> + s->scalable_mode = false;
>> + s->scalable_modern = false;
>> + }
>> +
>> if (s->aw_bits == VTD_HOST_AW_AUTO) {
>> if (s->scalable_modern) {
>> s->aw_bits = VTD_HOST_AW_48BIT;
>> --
>> 2.34.1
>>
>LGTM
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-07-19 2:47 ` Duan, Zhenzhong
@ 2024-07-19 3:22 ` Yi Liu
2024-07-19 3:37 ` Duan, Zhenzhong
2024-07-19 4:21 ` CLEMENT MATHIEU--DRIF
1 sibling, 1 reply; 50+ messages in thread
From: Yi Liu @ 2024-07-19 3:22 UTC (permalink / raw)
To: Duan, Zhenzhong, CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Peng, Chao P, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
On 2024/7/19 10:47, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>> Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>> scalable modern mode
>>
>>
>>
>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>
>>>
>>> Add an new element scalable_mode in IntelIOMMUState to mark scalable
>>> modern mode, this element will be exposed as an intel_iommu property
>>> finally.
>>>
>>> For now, it's only a placehholder and used for cap/ecap initialization,
>>> compatibility check and block host device passthrough until nesting
>>> is supported.
>>>
>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> ---
>>> hw/i386/intel_iommu_internal.h | 2 ++
>>> include/hw/i386/intel_iommu.h | 1 +
>>> hw/i386/intel_iommu.c | 34 +++++++++++++++++++++++-----------
>>> 3 files changed, 26 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/hw/i386/intel_iommu_internal.h
>> b/hw/i386/intel_iommu_internal.h
>>> index c0ca7b372f..4e0331caba 100644
>>> --- a/hw/i386/intel_iommu_internal.h
>>> +++ b/hw/i386/intel_iommu_internal.h
>>> @@ -195,6 +195,7 @@
>>> #define VTD_ECAP_PASID (1ULL << 40)
>>> #define VTD_ECAP_SMTS (1ULL << 43)
>>> #define VTD_ECAP_SLTS (1ULL << 46)
>>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>>
>>> /* CAP_REG */
>>> /* (offset >> 4) << 24 */
>>> @@ -211,6 +212,7 @@
>>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>>> +#define VTD_CAP_FS1GP (1ULL << 56)
>>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>> VTD_CAP_DRAIN_WRITE)
>>> #define VTD_CAP_CM (1ULL << 7)
>>> #define VTD_PASID_ID_SHIFT 20
>>> diff --git a/include/hw/i386/intel_iommu.h
>> b/include/hw/i386/intel_iommu.h
>>> index 1eb05c29fc..788ed42477 100644
>>> --- a/include/hw/i386/intel_iommu.h
>>> +++ b/include/hw/i386/intel_iommu.h
>>> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>>>
>>> bool caching_mode; /* RO - is cap CM enabled? */
>>> bool scalable_mode; /* RO - is Scalable Mode supported? */
>>> + bool scalable_modern; /* RO - is modern SM supported? */
>>> bool snoop_control; /* RO - is SNP filed supported? */
>>>
>>> dma_addr_t root; /* Current root table pointer */
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 1cff8b00ae..40cbd4a0f4 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -755,16 +755,20 @@ static inline bool
>> vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
>>> }
>>>
>>> /* Return true if check passed, otherwise false */
>>> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>>> - VTDPASIDEntry *pe)
>>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s,
>> VTDPASIDEntry *pe)
>>> {
>> What about using the cap/ecap registers to know if the translation types
>> are supported or not.
>> Otherwise, we could add a comment to explain why we expect
>> s->scalable_modern to give us enough information.
>
> What about below:
>
> /*
> *VTD_ECAP_FLTS in ecap is set if s->scalable_modern is true, or else VTD_ECAP_SLTS can be set or not depending on s->scalable_mode.
> *So it's simpler to check s->scalable_modern directly for a PASID entry type instead ecap bits.
> */
Since this helper is for the pasid entry check, you can just return false
if the pe's PGTT is SS-only.
It might make more sense to check the ecap/cap bits here, as the capability
is listed in ecap/cap anyway. This may also bring us some convenience.
Say in the future we want to add a new mode (e.g. scalable mode 2.0) that
supports both FS and SS for the guest: we would need to update this helper
as well if we check scalable_modern. But if we check the ecap/cap, then the
future change just needs to control the ecap/cap setting at the beginning
of the vIOMMU init. To keep the code aligned, you may need to check the
ecap.PT bit for VTD_SM_PASID_ENTRY_PT. :)
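To illustrate the point, with ecap/cap based checks a future mode would only
need to touch the capability setup. A hypothetical sketch (the scalable_v2
flag and its branch are made up for illustration and do not exist in this
series), based on the vtd_cap_init() hunk quoted further down in this mail:
    if (s->scalable_v2) {         /* hypothetical "scalable mode 2.0" knob */
        /* Expose both first-stage and second-stage translation. */
        s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS | VTD_ECAP_SLTS;
        s->cap |= VTD_CAP_FS1GP;
    } else if (s->scalable_modern) {
        s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
        s->cap |= VTD_CAP_FS1GP;
    } else if (s->scalable_mode) {
        s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
    }
vtd_pe_type_check() would then keep working unmodified, since it only looks
at the ecap bits.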
>
>>> + X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>>> +
>>> switch (VTD_PE_GET_TYPE(pe)) {
>>> + case VTD_SM_PASID_ENTRY_FLT:
>>> + return s->scalable_modern;
>>> case VTD_SM_PASID_ENTRY_SLT:
>>> - return true;
>>> + return !s->scalable_modern;
>>> + case VTD_SM_PASID_ENTRY_NESTED:
>>> + /* Not support NESTED page table type yet */
>>> + return false;
>>> case VTD_SM_PASID_ENTRY_PT:
>>> return x86_iommu->pt_supported;
>>> - case VTD_SM_PASID_ENTRY_FLT:
>>> - case VTD_SM_PASID_ENTRY_NESTED:
>>> default:
>>> /* Unknown type */
>>> return false;
>>> @@ -813,7 +817,6 @@ static int
>> vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>> uint8_t pgtt;
>>> uint32_t index;
>>> dma_addr_t entry_size;
>>> - X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>>>
>>> index = VTD_PASID_TABLE_INDEX(pasid);
>>> entry_size = VTD_PASID_ENTRY_SIZE;
>>> @@ -827,7 +830,7 @@ static int
>> vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>> }
>>>
>>> /* Do translation type check */
>>> - if (!vtd_pe_type_check(x86_iommu, pe)) {
>>> + if (!vtd_pe_type_check(s, pe)) {
>>> return -VTD_FR_PASID_TABLE_ENTRY_INV;
>>> }
>>>
>>> @@ -3861,7 +3864,13 @@ static bool vtd_check_hiod(IntelIOMMUState
>> *s, HostIOMMUDevice *hiod,
>>> return false;
>>> }
>>>
>>> - return true;
>>> + if (!s->scalable_modern) {
>>> + /* All checks requested by VTD non-modern mode pass */
>>> + return true;
>>> + }
>>> +
>>> + error_setg(errp, "host device is unsupported in scalable modern mode
>> yet");
>>> + return false;
>>> }
>>>
>>> static bool vtd_dev_set_iommu_device(PCIBus *bus, void *opaque, int
>> devfn,
>>> @@ -4084,7 +4093,10 @@ static void vtd_cap_init(IntelIOMMUState *s)
>>> }
>>>
>>> /* TODO: read cap/ecap from host to decide which cap to be exposed.
>> */
>>> - if (s->scalable_mode) {
>>> + if (s->scalable_modern) {
>>> + s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
>>> + s->cap |= VTD_CAP_FS1GP;
>>> + } else if (s->scalable_mode) {
>>> s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
>>> }
>>>
>>> @@ -4251,9 +4263,9 @@ static bool vtd_decide_config(IntelIOMMUState
>> *s, Error **errp)
>>> }
>>> }
>>>
>>> - /* Currently only address widths supported are 39 and 48 bits */
>>> if ((s->aw_bits != VTD_HOST_AW_39BIT) &&
>>> - (s->aw_bits != VTD_HOST_AW_48BIT)) {
>>> + (s->aw_bits != VTD_HOST_AW_48BIT) &&
>>> + !s->scalable_modern) {
>>> error_setg(errp, "Supported values for aw-bits are: %d, %d",
>>> VTD_HOST_AW_39BIT, VTD_HOST_AW_48BIT);
>>> return false;
>>> --
>>> 2.34.1
>>>
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 50+ messages in thread
* RE: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-07-19 3:22 ` Yi Liu
@ 2024-07-19 3:37 ` Duan, Zhenzhong
2024-07-19 3:39 ` Duan, Zhenzhong
0 siblings, 1 reply; 50+ messages in thread
From: Duan, Zhenzhong @ 2024-07-19 3:37 UTC (permalink / raw)
To: Liu, Yi L, CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Peng, Chao P, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>scalable modern mode
>
>On 2024/7/19 10:47, Duan, Zhenzhong wrote:
>>
>>
>>> -----Original Message-----
>>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>>> Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable
>for
>>> scalable modern mode
>>>
>>>
>>>
>>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>>
>>>>
>>>> Add an new element scalable_mode in IntelIOMMUState to mark
>scalable
>>>> modern mode, this element will be exposed as an intel_iommu property
>>>> finally.
>>>>
>>>> For now, it's only a placehholder and used for cap/ecap initialization,
>>>> compatibility check and block host device passthrough until nesting
>>>> is supported.
>>>>
>>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>>> ---
>>>> hw/i386/intel_iommu_internal.h | 2 ++
>>>> include/hw/i386/intel_iommu.h | 1 +
>>>> hw/i386/intel_iommu.c | 34 +++++++++++++++++++++++---------
>--
>>>> 3 files changed, 26 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/hw/i386/intel_iommu_internal.h
>>> b/hw/i386/intel_iommu_internal.h
>>>> index c0ca7b372f..4e0331caba 100644
>>>> --- a/hw/i386/intel_iommu_internal.h
>>>> +++ b/hw/i386/intel_iommu_internal.h
>>>> @@ -195,6 +195,7 @@
>>>> #define VTD_ECAP_PASID (1ULL << 40)
>>>> #define VTD_ECAP_SMTS (1ULL << 43)
>>>> #define VTD_ECAP_SLTS (1ULL << 46)
>>>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>>>
>>>> /* CAP_REG */
>>>> /* (offset >> 4) << 24 */
>>>> @@ -211,6 +212,7 @@
>>>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>>>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>>>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>>>> +#define VTD_CAP_FS1GP (1ULL << 56)
>>>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>>> VTD_CAP_DRAIN_WRITE)
>>>> #define VTD_CAP_CM (1ULL << 7)
>>>> #define VTD_PASID_ID_SHIFT 20
>>>> diff --git a/include/hw/i386/intel_iommu.h
>>> b/include/hw/i386/intel_iommu.h
>>>> index 1eb05c29fc..788ed42477 100644
>>>> --- a/include/hw/i386/intel_iommu.h
>>>> +++ b/include/hw/i386/intel_iommu.h
>>>> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>>>>
>>>> bool caching_mode; /* RO - is cap CM enabled? */
>>>> bool scalable_mode; /* RO - is Scalable Mode supported? */
>>>> + bool scalable_modern; /* RO - is modern SM supported? */
>>>> bool snoop_control; /* RO - is SNP filed supported? */
>>>>
>>>> dma_addr_t root; /* Current root table pointer */
>>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>>> index 1cff8b00ae..40cbd4a0f4 100644
>>>> --- a/hw/i386/intel_iommu.c
>>>> +++ b/hw/i386/intel_iommu.c
>>>> @@ -755,16 +755,20 @@ static inline bool
>>> vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
>>>> }
>>>>
>>>> /* Return true if check passed, otherwise false */
>>>> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>>>> - VTDPASIDEntry *pe)
>>>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s,
>>> VTDPASIDEntry *pe)
>>>> {
>>> What about using the cap/ecap registers to know if the translation types
>>> are supported or not.
>>> Otherwise, we could add a comment to explain why we expect
>>> s->scalable_modern to give us enough information.
>>
>> What about below:
>>
>> /*
>> *VTD_ECAP_FLTS in ecap is set if s->scalable_modern is true, or else
>VTD_ECAP_SLTS can be set or not depending on s->scalable_mode.
>> *So it's simpler to check s->scalable_modern directly for a PASID entry
>type instead ecap bits.
>> */
>
>Since this helper is for pasid entry check, so you can just return false
>if the pe's PGTT is SS-only.
It depends on which scalable mode is chosen.
In scalable legacy mode, PGTT is SS-only and we should return true.
>
>It might make more sense to check the ecap/cap here as anyhow the
>capability is listed in ecap/cap. This may also bring us some convenience.
>
>Say in the future, if we want to add a new mode (e.g. scalable mode 2.0)
>that supports both FS and SS for guest, we may need to update this helper
>as well if we check the scalable_modern. But if we check the ecap/cap, then
>the future change just needs to control the ecap/cap setting at the
>beginning of the vIOMMU init. To keep the code aligned, you may need to
>check ecap.PT bit for VTD_SM_PASID_ENTRY_PT. :)
OK, will be like below:
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -826,14 +826,14 @@ static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
switch (VTD_PE_GET_TYPE(pe)) {
case VTD_SM_PASID_ENTRY_FLT:
- return s->scalable_modern;
+ return !!(s->ecap & VTD_ECAP_FLTS);
case VTD_SM_PASID_ENTRY_SLT:
- return !s->scalable_modern;
+ return !!(s->ecap & VTD_ECAP_FLTS) || !(s->ecap & VTD_ECAP_SMTS);
case VTD_SM_PASID_ENTRY_NESTED:
/* Not support NESTED page table type yet */
return false;
case VTD_SM_PASID_ENTRY_PT:
- return x86_iommu->pt_supported;
+ return !!(s->ecap & VTD_ECAP_PT);
default:
/* Unknown type */
return false;
Thanks
Zhenzhong
^ permalink raw reply [flat|nested] 50+ messages in thread
* RE: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-07-19 3:37 ` Duan, Zhenzhong
@ 2024-07-19 3:39 ` Duan, Zhenzhong
2024-07-19 4:26 ` CLEMENT MATHIEU--DRIF
2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
0 siblings, 2 replies; 50+ messages in thread
From: Duan, Zhenzhong @ 2024-07-19 3:39 UTC (permalink / raw)
To: Liu, Yi L, CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Peng, Chao P, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
>-----Original Message-----
>From: Duan, Zhenzhong
>Subject: RE: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>scalable modern mode
>
>
>
>>-----Original Message-----
>>From: Liu, Yi L <yi.l.liu@intel.com>
>>Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>>scalable modern mode
>>
>>On 2024/7/19 10:47, Duan, Zhenzhong wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>>>> Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable
>>for
>>>> scalable modern mode
>>>>
>>>>
>>>>
>>>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>>>
>>>>>
>>>>> Add an new element scalable_mode in IntelIOMMUState to mark
>>scalable
>>>>> modern mode, this element will be exposed as an intel_iommu
>property
>>>>> finally.
>>>>>
>>>>> For now, it's only a placehholder and used for cap/ecap initialization,
>>>>> compatibility check and block host device passthrough until nesting
>>>>> is supported.
>>>>>
>>>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>>>> ---
>>>>> hw/i386/intel_iommu_internal.h | 2 ++
>>>>> include/hw/i386/intel_iommu.h | 1 +
>>>>> hw/i386/intel_iommu.c | 34 +++++++++++++++++++++++-------
>--
>>--
>>>>> 3 files changed, 26 insertions(+), 11 deletions(-)
>>>>>
>>>>> diff --git a/hw/i386/intel_iommu_internal.h
>>>> b/hw/i386/intel_iommu_internal.h
>>>>> index c0ca7b372f..4e0331caba 100644
>>>>> --- a/hw/i386/intel_iommu_internal.h
>>>>> +++ b/hw/i386/intel_iommu_internal.h
>>>>> @@ -195,6 +195,7 @@
>>>>> #define VTD_ECAP_PASID (1ULL << 40)
>>>>> #define VTD_ECAP_SMTS (1ULL << 43)
>>>>> #define VTD_ECAP_SLTS (1ULL << 46)
>>>>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>>>>
>>>>> /* CAP_REG */
>>>>> /* (offset >> 4) << 24 */
>>>>> @@ -211,6 +212,7 @@
>>>>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>>>>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>>>>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>>>>> +#define VTD_CAP_FS1GP (1ULL << 56)
>>>>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>>>> VTD_CAP_DRAIN_WRITE)
>>>>> #define VTD_CAP_CM (1ULL << 7)
>>>>> #define VTD_PASID_ID_SHIFT 20
>>>>> diff --git a/include/hw/i386/intel_iommu.h
>>>> b/include/hw/i386/intel_iommu.h
>>>>> index 1eb05c29fc..788ed42477 100644
>>>>> --- a/include/hw/i386/intel_iommu.h
>>>>> +++ b/include/hw/i386/intel_iommu.h
>>>>> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>>>>>
>>>>> bool caching_mode; /* RO - is cap CM enabled? */
>>>>> bool scalable_mode; /* RO - is Scalable Mode supported?
>*/
>>>>> + bool scalable_modern; /* RO - is modern SM supported? */
>>>>> bool snoop_control; /* RO - is SNP filed supported? */
>>>>>
>>>>> dma_addr_t root; /* Current root table pointer */
>>>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>>>> index 1cff8b00ae..40cbd4a0f4 100644
>>>>> --- a/hw/i386/intel_iommu.c
>>>>> +++ b/hw/i386/intel_iommu.c
>>>>> @@ -755,16 +755,20 @@ static inline bool
>>>> vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
>>>>> }
>>>>>
>>>>> /* Return true if check passed, otherwise false */
>>>>> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>>>>> - VTDPASIDEntry *pe)
>>>>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s,
>>>> VTDPASIDEntry *pe)
>>>>> {
>>>> What about using the cap/ecap registers to know if the translation types
>>>> are supported or not.
>>>> Otherwise, we could add a comment to explain why we expect
>>>> s->scalable_modern to give us enough information.
>>>
>>> What about below:
>>>
>>> /*
>>> *VTD_ECAP_FLTS in ecap is set if s->scalable_modern is true, or else
>>VTD_ECAP_SLTS can be set or not depending on s->scalable_mode.
>>> *So it's simpler to check s->scalable_modern directly for a PASID entry
>>type instead ecap bits.
>>> */
>>
>>Since this helper is for pasid entry check, so you can just return false
>>if the pe's PGTT is SS-only.
>
>It depends on which scalable mode is chosed.
>In scalable legacy mode, PGTT is SS-only and we should return true.
>
>>
>>It might make more sense to check the ecap/cap here as anyhow the
>>capability is listed in ecap/cap. This may also bring us some convenience.
>>
>>Say in the future, if we want to add a new mode (e.g. scalable mode 2.0)
>>that supports both FS and SS for guest, we may need to update this helper
>>as well if we check the scalable_modern. But if we check the ecap/cap, then
>>the future change just needs to control the ecap/cap setting at the
>>beginning of the vIOMMU init. To keep the code aligned, you may need to
>>check ecap.PT bit for VTD_SM_PASID_ENTRY_PT. :)
>
>OK, will be like below:
>
>--- a/hw/i386/intel_iommu.c
>+++ b/hw/i386/intel_iommu.c
>@@ -826,14 +826,14 @@ static inline bool
>vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
>
> switch (VTD_PE_GET_TYPE(pe)) {
> case VTD_SM_PASID_ENTRY_FLT:
>- return s->scalable_modern;
>+ return !!(s->ecap & VTD_ECAP_FLTS);
> case VTD_SM_PASID_ENTRY_SLT:
>- return !s->scalable_modern;
>+ return !!(s->ecap & VTD_ECAP_FLTS) || !(s->ecap & VTD_ECAP_SMTS);
Sorry, typo error, it should be:
+ return !!(s->ecap & VTD_ECAP_SLTS) || !(s->ecap & VTD_ECAP_SMTS);
> case VTD_SM_PASID_ENTRY_NESTED:
> /* Not support NESTED page table type yet */
> return false;
> case VTD_SM_PASID_ENTRY_PT:
>- return x86_iommu->pt_supported;
>+ return !!(s->ecap & VTD_ECAP_PT);
> default:
> /* Unknown type */
> return false;
>
>Thanks
>Zhenzhong
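Putting the hunks together with that fix, the corrected helper would read as
below (a consolidated sketch of the planned change, not a posted patch):
    /* Return true if check passed, otherwise false */
    static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
    {
        switch (VTD_PE_GET_TYPE(pe)) {
        case VTD_SM_PASID_ENTRY_FLT:
            return !!(s->ecap & VTD_ECAP_FLTS);
        case VTD_SM_PASID_ENTRY_SLT:
            return !!(s->ecap & VTD_ECAP_SLTS) || !(s->ecap & VTD_ECAP_SMTS);
        case VTD_SM_PASID_ENTRY_NESTED:
            /* Not support NESTED page table type yet */
            return false;
        case VTD_SM_PASID_ENTRY_PT:
            return !!(s->ecap & VTD_ECAP_PT);
        default:
            /* Unknown type */
            return false;
        }
    }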
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-07-19 2:47 ` Duan, Zhenzhong
2024-07-19 3:22 ` Yi Liu
@ 2024-07-19 4:21 ` CLEMENT MATHIEU--DRIF
1 sibling, 0 replies; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-19 4:21 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
On 19/07/2024 04:47, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>> Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>> scalable modern mode
>>
>>
>>
>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>
>>> Add an new element scalable_mode in IntelIOMMUState to mark scalable
>>> modern mode, this element will be exposed as an intel_iommu property
>>> finally.
>>>
>>> For now, it's only a placehholder and used for cap/ecap initialization,
>>> compatibility check and block host device passthrough until nesting
>>> is supported.
>>>
>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> ---
>>> hw/i386/intel_iommu_internal.h | 2 ++
>>> include/hw/i386/intel_iommu.h | 1 +
>>> hw/i386/intel_iommu.c | 34 +++++++++++++++++++++++-----------
>>> 3 files changed, 26 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/hw/i386/intel_iommu_internal.h
>> b/hw/i386/intel_iommu_internal.h
>>> index c0ca7b372f..4e0331caba 100644
>>> --- a/hw/i386/intel_iommu_internal.h
>>> +++ b/hw/i386/intel_iommu_internal.h
>>> @@ -195,6 +195,7 @@
>>> #define VTD_ECAP_PASID (1ULL << 40)
>>> #define VTD_ECAP_SMTS (1ULL << 43)
>>> #define VTD_ECAP_SLTS (1ULL << 46)
>>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>>
>>> /* CAP_REG */
>>> /* (offset >> 4) << 24 */
>>> @@ -211,6 +212,7 @@
>>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>>> +#define VTD_CAP_FS1GP (1ULL << 56)
>>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>> VTD_CAP_DRAIN_WRITE)
>>> #define VTD_CAP_CM (1ULL << 7)
>>> #define VTD_PASID_ID_SHIFT 20
>>> diff --git a/include/hw/i386/intel_iommu.h
>> b/include/hw/i386/intel_iommu.h
>>> index 1eb05c29fc..788ed42477 100644
>>> --- a/include/hw/i386/intel_iommu.h
>>> +++ b/include/hw/i386/intel_iommu.h
>>> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>>>
>>> bool caching_mode; /* RO - is cap CM enabled? */
>>> bool scalable_mode; /* RO - is Scalable Mode supported? */
>>> + bool scalable_modern; /* RO - is modern SM supported? */
>>> bool snoop_control; /* RO - is SNP filed supported? */
>>>
>>> dma_addr_t root; /* Current root table pointer */
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 1cff8b00ae..40cbd4a0f4 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -755,16 +755,20 @@ static inline bool
>> vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
>>> }
>>>
>>> /* Return true if check passed, otherwise false */
>>> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>>> - VTDPASIDEntry *pe)
>>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s,
>> VTDPASIDEntry *pe)
>>> {
>> What about using the cap/ecap registers to know if the translation types
>> are supported or not.
>> Otherwise, we could add a comment to explain why we expect
>> s->scalable_modern to give us enough information.
> What about below:
>
> /*
> *VTD_ECAP_FLTS in ecap is set if s->scalable_modern is true, or else VTD_ECAP_SLTS can be set or not depending on s->scalable_mode.
> *So it's simpler to check s->scalable_modern directly for a PASID entry type instead ecap bits.
> */
Fine ;)
>
> Thanks
> Zhenzhong
>
>>> + X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>>> +
>>> switch (VTD_PE_GET_TYPE(pe)) {
>>> + case VTD_SM_PASID_ENTRY_FLT:
>>> + return s->scalable_modern;
>>> case VTD_SM_PASID_ENTRY_SLT:
>>> - return true;
>>> + return !s->scalable_modern;
>>> + case VTD_SM_PASID_ENTRY_NESTED:
>>> + /* Not support NESTED page table type yet */
>>> + return false;
>>> case VTD_SM_PASID_ENTRY_PT:
>>> return x86_iommu->pt_supported;
>>> - case VTD_SM_PASID_ENTRY_FLT:
>>> - case VTD_SM_PASID_ENTRY_NESTED:
>>> default:
>>> /* Unknown type */
>>> return false;
>>> @@ -813,7 +817,6 @@ static int
>> vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>> uint8_t pgtt;
>>> uint32_t index;
>>> dma_addr_t entry_size;
>>> - X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>>>
>>> index = VTD_PASID_TABLE_INDEX(pasid);
>>> entry_size = VTD_PASID_ENTRY_SIZE;
>>> @@ -827,7 +830,7 @@ static int
>> vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>> }
>>>
>>> /* Do translation type check */
>>> - if (!vtd_pe_type_check(x86_iommu, pe)) {
>>> + if (!vtd_pe_type_check(s, pe)) {
>>> return -VTD_FR_PASID_TABLE_ENTRY_INV;
>>> }
>>>
>>> @@ -3861,7 +3864,13 @@ static bool vtd_check_hiod(IntelIOMMUState
>> *s, HostIOMMUDevice *hiod,
>>> return false;
>>> }
>>>
>>> - return true;
>>> + if (!s->scalable_modern) {
>>> + /* All checks requested by VTD non-modern mode pass */
>>> + return true;
>>> + }
>>> +
>>> + error_setg(errp, "host device is unsupported in scalable modern mode
>> yet");
>>> + return false;
>>> }
>>>
>>> static bool vtd_dev_set_iommu_device(PCIBus *bus, void *opaque, int
>> devfn,
>>> @@ -4084,7 +4093,10 @@ static void vtd_cap_init(IntelIOMMUState *s)
>>> }
>>>
>>> /* TODO: read cap/ecap from host to decide which cap to be exposed.
>> */
>>> - if (s->scalable_mode) {
>>> + if (s->scalable_modern) {
>>> + s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
>>> + s->cap |= VTD_CAP_FS1GP;
>>> + } else if (s->scalable_mode) {
>>> s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
>>> }
>>>
>>> @@ -4251,9 +4263,9 @@ static bool vtd_decide_config(IntelIOMMUState
>> *s, Error **errp)
>>> }
>>> }
>>>
>>> - /* Currently only address widths supported are 39 and 48 bits */
>>> if ((s->aw_bits != VTD_HOST_AW_39BIT) &&
>>> - (s->aw_bits != VTD_HOST_AW_48BIT)) {
>>> + (s->aw_bits != VTD_HOST_AW_48BIT) &&
>>> + !s->scalable_modern) {
>>> error_setg(errp, "Supported values for aw-bits are: %d, %d",
>>> VTD_HOST_AW_39BIT, VTD_HOST_AW_48BIT);
>>> return false;
>>> --
>>> 2.34.1
>>>
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be string option
2024-07-19 2:53 ` Duan, Zhenzhong
@ 2024-07-19 4:23 ` CLEMENT MATHIEU--DRIF
0 siblings, 0 replies; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-19 4:23 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Yi Sun, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
On 19/07/2024 04:53, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>> Subject: Re: [PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be
>> string option
>>
>>
>>
>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>
>>> From: Yi Liu <yi.l.liu@intel.com>
>>>
>>> Intel VT-d 3.0 introduces scalable mode, and it has a bunch of capabilities
>>> related to scalable mode translation, thus there are multiple combinations.
>>> While this vIOMMU implementation wants to simplify it for user by
>> providing
>>> typical combinations. User could config it by "x-scalable-mode" option. The
>>> usage is as below:
>>>
>>> "-device intel-iommu,x-scalable-mode=["legacy"|"modern"|"off"]"
>>>
>>> - "legacy": gives support for stage-2 page table
>>> - "modern": gives support for stage-1 page table
>>> - "off": no scalable mode support
>>> - if not configured, means no scalable mode support, if not proper
>>> configured, will throw error
>> s/proper/properly
>> Maybe we could split and rephrase the last bullet point to make it clear
>> that "off" is equivalent to not using the option at all
> You mean split last bullet as a separate paragraph?
> Then what about below:
>
> - "legacy": gives support for stage-2 page table
> - "modern": gives support for stage-1 page table
> - "off": no scalable mode support
> - any other string, will throw error
>
> If x-scalable-mode is not configured, it is equivalent to x-scalable-mode=off.
Yes, lgtm
>
> Thanks
> Zhenzhong
>
>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> ---
>>> include/hw/i386/intel_iommu.h | 1 +
>>> hw/i386/intel_iommu.c | 24 +++++++++++++++++++++++-
>>> 2 files changed, 24 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/include/hw/i386/intel_iommu.h
>> b/include/hw/i386/intel_iommu.h
>>> index 48134bda11..650641544c 100644
>>> --- a/include/hw/i386/intel_iommu.h
>>> +++ b/include/hw/i386/intel_iommu.h
>>> @@ -263,6 +263,7 @@ struct IntelIOMMUState {
>>>
>>> bool caching_mode; /* RO - is cap CM enabled? */
>>> bool scalable_mode; /* RO - is Scalable Mode supported? */
>>> + char *scalable_mode_str; /* RO - admin's Scalable Mode config */
>>> bool scalable_modern; /* RO - is modern SM supported? */
>>> bool snoop_control; /* RO - is SNP filed supported? */
>>>
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 2804c3628a..14d05fce1d 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -3770,7 +3770,7 @@ static Property vtd_properties[] = {
>>> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
>>> VTD_HOST_AW_AUTO),
>>> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode,
>> FALSE),
>>> - DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState,
>> scalable_mode, FALSE),
>>> + DEFINE_PROP_STRING("x-scalable-mode", IntelIOMMUState,
>> scalable_mode_str),
>>> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState,
>> snoop_control, false),
>>> DEFINE_PROP_BOOL("x-pasid-mode", IntelIOMMUState, pasid, false),
>>> DEFINE_PROP_BOOL("dma-drain", IntelIOMMUState, dma_drain, true),
>>> @@ -4686,6 +4686,28 @@ static bool
>> vtd_decide_config(IntelIOMMUState *s, Error **errp)
>>> }
>>> }
>>>
>>> + if (s->scalable_mode_str &&
>>> + (strcmp(s->scalable_mode_str, "off") &&
>>> + strcmp(s->scalable_mode_str, "modern") &&
>>> + strcmp(s->scalable_mode_str, "legacy"))) {
>>> + error_setg(errp, "Invalid x-scalable-mode config,"
>>> + "Please use \"modern\", \"legacy\" or \"off\"");
>>> + return false;
>>> + }
>>> +
>>> + if (s->scalable_mode_str &&
>>> + !strcmp(s->scalable_mode_str, "legacy")) {
>>> + s->scalable_mode = true;
>>> + s->scalable_modern = false;
>>> + } else if (s->scalable_mode_str &&
>>> + !strcmp(s->scalable_mode_str, "modern")) {
>>> + s->scalable_mode = true;
>>> + s->scalable_modern = true;
>>> + } else {
>>> + s->scalable_mode = false;
>>> + s->scalable_modern = false;
>>> + }
>>> +
>>> if (s->aw_bits == VTD_HOST_AW_AUTO) {
>>> if (s->scalable_modern) {
>>> s->aw_bits = VTD_HOST_AW_48BIT;
>>> --
>>> 2.34.1
>>>
>> LGTM
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-07-19 3:39 ` Duan, Zhenzhong
@ 2024-07-19 4:26 ` CLEMENT MATHIEU--DRIF
2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
1 sibling, 0 replies; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-19 4:26 UTC (permalink / raw)
To: Duan, Zhenzhong, Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Peng, Chao P, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
On 19/07/2024 05:39, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: Duan, Zhenzhong
>> Subject: RE: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>> scalable modern mode
>>
>>
>>
>>> -----Original Message-----
>>> From: Liu, Yi L <yi.l.liu@intel.com>
>>> Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>>> scalable modern mode
>>>
>>> On 2024/7/19 10:47, Duan, Zhenzhong wrote:
>>>>
>>>>> -----Original Message-----
>>>>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>>>>> Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable
>>> for
>>>>> scalable modern mode
>>>>>
>>>>>
>>>>>
>>>>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>>>>
>>>>>> Add an new element scalable_mode in IntelIOMMUState to mark
>>> scalable
>>>>>> modern mode, this element will be exposed as an intel_iommu
>> property
>>>>>> finally.
>>>>>>
>>>>>> For now, it's only a placehholder and used for cap/ecap initialization,
>>>>>> compatibility check and block host device passthrough until nesting
>>>>>> is supported.
>>>>>>
>>>>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>>>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>>>>> ---
>>>>>> hw/i386/intel_iommu_internal.h | 2 ++
>>>>>> include/hw/i386/intel_iommu.h | 1 +
>>>>>> hw/i386/intel_iommu.c | 34 +++++++++++++++++++++++-------
>> --
>>> --
>>>>>> 3 files changed, 26 insertions(+), 11 deletions(-)
>>>>>>
>>>>>> diff --git a/hw/i386/intel_iommu_internal.h
>>>>> b/hw/i386/intel_iommu_internal.h
>>>>>> index c0ca7b372f..4e0331caba 100644
>>>>>> --- a/hw/i386/intel_iommu_internal.h
>>>>>> +++ b/hw/i386/intel_iommu_internal.h
>>>>>> @@ -195,6 +195,7 @@
>>>>>> #define VTD_ECAP_PASID (1ULL << 40)
>>>>>> #define VTD_ECAP_SMTS (1ULL << 43)
>>>>>> #define VTD_ECAP_SLTS (1ULL << 46)
>>>>>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>>>>>
>>>>>> /* CAP_REG */
>>>>>> /* (offset >> 4) << 24 */
>>>>>> @@ -211,6 +212,7 @@
>>>>>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>>>>>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>>>>>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>>>>>> +#define VTD_CAP_FS1GP (1ULL << 56)
>>>>>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>>>>> VTD_CAP_DRAIN_WRITE)
>>>>>> #define VTD_CAP_CM (1ULL << 7)
>>>>>> #define VTD_PASID_ID_SHIFT 20
>>>>>> diff --git a/include/hw/i386/intel_iommu.h
>>>>> b/include/hw/i386/intel_iommu.h
>>>>>> index 1eb05c29fc..788ed42477 100644
>>>>>> --- a/include/hw/i386/intel_iommu.h
>>>>>> +++ b/include/hw/i386/intel_iommu.h
>>>>>> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>>>>>>
>>>>>> bool caching_mode; /* RO - is cap CM enabled? */
>>>>>> bool scalable_mode; /* RO - is Scalable Mode supported?
>> */
>>>>>> + bool scalable_modern; /* RO - is modern SM supported? */
>>>>>> bool snoop_control; /* RO - is SNP filed supported? */
>>>>>>
>>>>>> dma_addr_t root; /* Current root table pointer */
>>>>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>>>>> index 1cff8b00ae..40cbd4a0f4 100644
>>>>>> --- a/hw/i386/intel_iommu.c
>>>>>> +++ b/hw/i386/intel_iommu.c
>>>>>> @@ -755,16 +755,20 @@ static inline bool
>>>>> vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
>>>>>> }
>>>>>>
>>>>>> /* Return true if check passed, otherwise false */
>>>>>> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>>>>>> - VTDPASIDEntry *pe)
>>>>>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s,
>>>>> VTDPASIDEntry *pe)
>>>>>> {
>>>>> What about using the cap/ecap registers to know if the translation types
>>>>> are supported or not.
>>>>> Otherwise, we could add a comment to explain why we expect
>>>>> s->scalable_modern to give us enough information.
>>>> What about below:
>>>>
>>>> /*
>>>> *VTD_ECAP_FLTS in ecap is set if s->scalable_modern is true, or else
>>> VTD_ECAP_SLTS can be set or not depending on s->scalable_mode.
>>>> *So it's simpler to check s->scalable_modern directly for a PASID entry
>>> type instead ecap bits.
>>>> */
>>> Since this helper is for pasid entry check, so you can just return false
>>> if the pe's PGTT is SS-only.
>> It depends on which scalable mode is chosed.
>> In scalable legacy mode, PGTT is SS-only and we should return true.
>>
>>> It might make more sense to check the ecap/cap here as anyhow the
>>> capability is listed in ecap/cap. This may also bring us some convenience.
>>>
>>> Say in the future, if we want to add a new mode (e.g. scalable mode 2.0)
>>> that supports both FS and SS for guest, we may need to update this helper
>>> as well if we check the scalable_modern. But if we check the ecap/cap, then
>>> the future change just needs to control the ecap/cap setting at the
>>> beginning of the vIOMMU init. To keep the code aligned, you may need to
>>> check ecap.PT bit for VTD_SM_PASID_ENTRY_PT. :)
>> OK, will be like below:
>>
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -826,14 +826,14 @@ static inline bool
>> vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
>>
>> switch (VTD_PE_GET_TYPE(pe)) {
>> case VTD_SM_PASID_ENTRY_FLT:
>> - return s->scalable_modern;
>> + return !!(s->ecap & VTD_ECAP_FLTS);
>> case VTD_SM_PASID_ENTRY_SLT:
>> - return !s->scalable_modern;
>> + return !!(s->ecap & VTD_ECAP_FLTS) || !(s->ecap & VTD_ECAP_SMTS);
> Sorry typo err, should be:
>
> + return !!(s->ecap & VTD_ECAP_SLTS) || !(s->ecap & VTD_ECAP_SMTS);
>
I agree with Yi's point of view; this version looks good.
>> case VTD_SM_PASID_ENTRY_NESTED:
>> /* Not support NESTED page table type yet */
>> return false;
>> case VTD_SM_PASID_ENTRY_PT:
>> - return x86_iommu->pt_supported;
>> + return !!(s->ecap & VTD_ECAP_PT);
>> default:
>> /* Unknown type */
>> return false;
>>
>> Thanks
>> Zhenzhong
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation
2024-07-18 8:16 ` [PATCH v1 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation Zhenzhong Duan
@ 2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
0 siblings, 0 replies; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-23 7:12 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> According to spec, Page-Selective-within-Domain Invalidation (11b):
>
> 1. IOTLB entries caching second-stage mappings (PGTT=010b) or pass-through
> (PGTT=100b) mappings associated with the specified domain-id and the
> input-address range are invalidated.
> 2. IOTLB entries caching first-stage (PGTT=001b) or nested (PGTT=011b)
> mapping associated with specified domain-id are invalidated.
>
> So per spec definition the Page-Selective-within-Domain Invalidation
> needs to flush first stage and nested cached IOTLB enties as well.
>
> We don't support nested yet and pass-through mapping is never cached,
> so what in iotlb cache are only first-stage and second-stage mappings.
>
> Add a tag pgtt in VTDIOTLBEntry to mark PGTT type of the mapping and
> invalidate entries based on PGTT type.
>
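As a side note, the decision logic this patch introduces boils down to the
following simplified standalone sketch (made-up types and a made-up PGTT
constant, for illustration only; see vtd_hash_remove_by_page() in the hunk
below for the real predicate, which additionally handles the gfn_tlb case):
    #include <stdbool.h>
    #include <stdint.h>
    /* Simplified stand-ins for VTDIOTLBEntry / VTDIOTLBPageInvInfo. */
    struct tlb_entry { uint16_t domain_id; uint64_t gfn; uint8_t pgtt; };
    struct inv_info  { uint16_t domain_id; uint64_t gfn; uint64_t mask; };
    #define PGTT_FLT 1  /* illustrative value for a first-stage mapping */
    static bool should_invalidate(const struct tlb_entry *e,
                                  const struct inv_info *i)
    {
        if (e->domain_id != i->domain_id) {
            return false;                    /* other domain: keep */
        }
        if (e->pgtt == PGTT_FLT) {
            return true;                     /* first-stage: flush whole domain */
        }
        return (e->gfn & i->mask) == i->gfn; /* second-stage: range check */
    }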
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> include/hw/i386/intel_iommu.h | 1 +
> hw/i386/intel_iommu.c | 27 +++++++++++++++++++++------
> 2 files changed, 22 insertions(+), 6 deletions(-)
>
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index fe9057c50d..b843d069cc 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -155,6 +155,7 @@ struct VTDIOTLBEntry {
> uint64_t pte;
> uint64_t mask;
> uint8_t access_flags;
> + uint8_t pgtt;
> };
>
> /* VT-d Source-ID Qualifier types */
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 210df32f01..8d47e5ba78 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -305,9 +305,21 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
> VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
> uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
> uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
> - return (entry->domain_id == info->domain_id) &&
> - (((entry->gfn & info->mask) == gfn) ||
> - (entry->gfn == gfn_tlb));
> +
> + if (entry->domain_id != info->domain_id) {
> + return false;
> + }
> +
> + /*
> + * According to spec, IOTLB entries caching first-stage (PGTT=001b) or
> + * nested (PGTT=011b) mapping associated with specified domain-id are
> + * invalidated. Nested isn't supported yet, so only need to check 001b.
> + */
> + if (entry->pgtt == VTD_SM_PASID_ENTRY_FLT) {
> + return true;
> + }
> +
> + return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
> }
>
> /* Reset all the gen of VTDAddressSpace to zero and set the gen of
> @@ -382,7 +394,7 @@ out:
> static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
> uint16_t domain_id, hwaddr addr, uint64_t pte,
> uint8_t access_flags, uint32_t level,
> - uint32_t pasid)
> + uint32_t pasid, uint8_t pgtt)
> {
> VTDIOTLBEntry *entry = g_malloc(sizeof(*entry));
> struct vtd_iotlb_key *key = g_malloc(sizeof(*key));
> @@ -400,6 +412,7 @@ static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
> entry->access_flags = access_flags;
> entry->mask = vtd_pt_level_page_mask(level);
> entry->pasid = pasid;
> + entry->pgtt = pgtt;
>
> key->gfn = gfn;
> key->sid = source_id;
> @@ -2071,7 +2084,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
> bool is_fpd_set = false;
> bool reads = true;
> bool writes = true;
> - uint8_t access_flags;
> + uint8_t access_flags, pgtt;
> bool rid2pasid = (pasid == PCI_NO_PASID) && s->root_scalable;
> VTDIOTLBEntry *iotlb_entry;
>
> @@ -2179,9 +2192,11 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
> if (s->scalable_modern && s->root_scalable) {
> ret_fr = vtd_iova_to_flpte(s, &ce, addr, is_write, &pte, &level,
> &reads, &writes, s->aw_bits, pasid);
> + pgtt = VTD_SM_PASID_ENTRY_FLT;
> } else {
> ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
> &reads, &writes, s->aw_bits, pasid);
> + pgtt = VTD_SM_PASID_ENTRY_SLT;
> }
> if (ret_fr) {
> vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
> @@ -2192,7 +2207,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
> page_mask = vtd_pt_level_page_mask(level);
> access_flags = IOMMU_ACCESS_FLAG(reads, writes);
> vtd_update_iotlb(s, source_id, vtd_get_domain_id(s, &ce, pasid),
> - addr, pte, access_flags, level, pasid);
> + addr, pte, access_flags, level, pasid, pgtt);
> out:
> vtd_iommu_unlock(s);
> entry->iova = addr & page_mask;
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec
2024-07-18 8:16 ` [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec Zhenzhong Duan
@ 2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
2024-07-29 7:39 ` Yi Liu
1 sibling, 0 replies; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-23 7:12 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Yu Zhang, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> From: Yu Zhang <yu.c.zhang@linux.intel.com>
>
> Spec revision 3.0 or above defines more detailed fault reasons for
> scalable mode. So introduce them into emulation code, see spec
> section 7.1.2 for details.
>
> Note spec revision has no relation with VERSION register, Guest
> kernel should not use that register to judge what features are
> supported. Instead cap/ecap bits should be checked.
>
> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/i386/intel_iommu_internal.h | 9 ++++++++-
> hw/i386/intel_iommu.c | 25 ++++++++++++++++---------
> 2 files changed, 24 insertions(+), 10 deletions(-)
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index f8cf99bddf..c0ca7b372f 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -311,7 +311,14 @@ typedef enum VTDFaultReason {
> * request while disabled */
> VTD_FR_IR_SID_ERR = 0x26, /* Invalid Source-ID */
>
> - VTD_FR_PASID_TABLE_INV = 0x58, /*Invalid PASID table entry */
> + /* PASID directory entry access failure */
> + VTD_FR_PASID_DIR_ACCESS_ERR = 0x50,
> + /* The Present(P) field of pasid directory entry is 0 */
> + VTD_FR_PASID_DIR_ENTRY_P = 0x51,
> + VTD_FR_PASID_TABLE_ACCESS_ERR = 0x58, /* PASID table entry access failure */
> + /* The Present(P) field of pasid table entry is 0 */
> + VTD_FR_PASID_ENTRY_P = 0x59,
> + VTD_FR_PASID_TABLE_ENTRY_INV = 0x5b, /*Invalid PASID table entry */
>
> /* Output address in the interrupt address range for scalable mode */
> VTD_FR_SM_INTERRUPT_ADDR = 0x87,
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 37c21a0aec..e65f5b29a5 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -796,7 +796,7 @@ static int vtd_get_pdire_from_pdir_table(dma_addr_t pasid_dir_base,
> addr = pasid_dir_base + index * entry_size;
> if (dma_memory_read(&address_space_memory, addr,
> pdire, entry_size, MEMTXATTRS_UNSPECIFIED)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_DIR_ACCESS_ERR;
> }
>
> pdire->val = le64_to_cpu(pdire->val);
> @@ -814,6 +814,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> dma_addr_t addr,
> VTDPASIDEntry *pe)
> {
> + uint8_t pgtt;
> uint32_t index;
> dma_addr_t entry_size;
> X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
> @@ -823,7 +824,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> addr = addr + index * entry_size;
> if (dma_memory_read(&address_space_memory, addr,
> pe, entry_size, MEMTXATTRS_UNSPECIFIED)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_TABLE_ACCESS_ERR;
> }
> for (size_t i = 0; i < ARRAY_SIZE(pe->val); i++) {
> pe->val[i] = le64_to_cpu(pe->val[i]);
> @@ -831,11 +832,13 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>
> /* Do translation type check */
> if (!vtd_pe_type_check(x86_iommu, pe)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_TABLE_ENTRY_INV;
> }
>
> - if (!vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
> - return -VTD_FR_PASID_TABLE_INV;
> + pgtt = VTD_PE_GET_TYPE(pe);
> + if (pgtt == VTD_SM_PASID_ENTRY_SLT &&
> + !vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
> + return -VTD_FR_PASID_TABLE_ENTRY_INV;
> }
>
> return 0;
> @@ -876,7 +879,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
> }
>
> if (!vtd_pdire_present(&pdire)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_DIR_ENTRY_P;
> }
>
> ret = vtd_get_pe_from_pdire(s, pasid, &pdire, pe);
> @@ -885,7 +888,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
> }
>
> if (!vtd_pe_present(pe)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_ENTRY_P;
> }
>
> return 0;
> @@ -938,7 +941,7 @@ static int vtd_ce_get_pasid_fpd(IntelIOMMUState *s,
> }
>
> if (!vtd_pdire_present(&pdire)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_DIR_ENTRY_P;
> }
>
> /*
> @@ -1795,7 +1798,11 @@ static const bool vtd_qualified_faults[] = {
> [VTD_FR_ROOT_ENTRY_RSVD] = false,
> [VTD_FR_PAGING_ENTRY_RSVD] = true,
> [VTD_FR_CONTEXT_ENTRY_TT] = true,
> - [VTD_FR_PASID_TABLE_INV] = false,
> + [VTD_FR_PASID_DIR_ACCESS_ERR] = false,
> + [VTD_FR_PASID_DIR_ENTRY_P] = true,
> + [VTD_FR_PASID_TABLE_ACCESS_ERR] = false,
> + [VTD_FR_PASID_ENTRY_P] = true,
> + [VTD_FR_PASID_TABLE_ENTRY_INV] = true,
> [VTD_FR_SM_INTERRUPT_ADDR] = true,
> [VTD_FR_MAX] = false,
> };
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-07-19 3:39 ` Duan, Zhenzhong
2024-07-19 4:26 ` CLEMENT MATHIEU--DRIF
@ 2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
2024-07-23 8:50 ` Duan, Zhenzhong
1 sibling, 1 reply; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-23 7:12 UTC (permalink / raw)
To: Duan, Zhenzhong, Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Peng, Chao P, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
On 19/07/2024 05:39, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: Duan, Zhenzhong
>> Subject: RE: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>> scalable modern mode
>>
>>
>>
>>> -----Original Message-----
>>> From: Liu, Yi L <yi.l.liu@intel.com>
>>> Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>>> scalable modern mode
>>>
>>> On 2024/7/19 10:47, Duan, Zhenzhong wrote:
>>>>
>>>>> -----Original Message-----
>>>>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>>>>> Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable
>>> for
>>>>> scalable modern mode
>>>>>
>>>>>
>>>>>
>>>>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>>>>
>>>>>> Add an new element scalable_mode in IntelIOMMUState to mark
>>> scalable
>>>>>> modern mode, this element will be exposed as an intel_iommu
>> property
>>>>>> finally.
>>>>>>
>>>>>> For now, it's only a placehholder and used for cap/ecap initialization,
>>>>>> compatibility check and block host device passthrough until nesting
>>>>>> is supported.
>>>>>>
>>>>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>>>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>>>>> ---
>>>>>> hw/i386/intel_iommu_internal.h | 2 ++
>>>>>> include/hw/i386/intel_iommu.h | 1 +
>>>>>> hw/i386/intel_iommu.c | 34 +++++++++++++++++++++++-------
>> --
>>> --
>>>>>> 3 files changed, 26 insertions(+), 11 deletions(-)
>>>>>>
>>>>>> diff --git a/hw/i386/intel_iommu_internal.h
>>>>> b/hw/i386/intel_iommu_internal.h
>>>>>> index c0ca7b372f..4e0331caba 100644
>>>>>> --- a/hw/i386/intel_iommu_internal.h
>>>>>> +++ b/hw/i386/intel_iommu_internal.h
>>>>>> @@ -195,6 +195,7 @@
>>>>>> #define VTD_ECAP_PASID (1ULL << 40)
>>>>>> #define VTD_ECAP_SMTS (1ULL << 43)
>>>>>> #define VTD_ECAP_SLTS (1ULL << 46)
>>>>>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>>>>>
>>>>>> /* CAP_REG */
>>>>>> /* (offset >> 4) << 24 */
>>>>>> @@ -211,6 +212,7 @@
>>>>>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>>>>>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>>>>>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>>>>>> +#define VTD_CAP_FS1GP (1ULL << 56)
>>>>>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>>>>> VTD_CAP_DRAIN_WRITE)
>>>>>> #define VTD_CAP_CM (1ULL << 7)
>>>>>> #define VTD_PASID_ID_SHIFT 20
>>>>>> diff --git a/include/hw/i386/intel_iommu.h
>>>>> b/include/hw/i386/intel_iommu.h
>>>>>> index 1eb05c29fc..788ed42477 100644
>>>>>> --- a/include/hw/i386/intel_iommu.h
>>>>>> +++ b/include/hw/i386/intel_iommu.h
>>>>>> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>>>>>>
>>>>>> bool caching_mode; /* RO - is cap CM enabled? */
>>>>>> bool scalable_mode; /* RO - is Scalable Mode supported?
>> */
>>>>>> + bool scalable_modern; /* RO - is modern SM supported? */
>>>>>> bool snoop_control; /* RO - is SNP filed supported? */
>>>>>>
>>>>>> dma_addr_t root; /* Current root table pointer */
>>>>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>>>>> index 1cff8b00ae..40cbd4a0f4 100644
>>>>>> --- a/hw/i386/intel_iommu.c
>>>>>> +++ b/hw/i386/intel_iommu.c
>>>>>> @@ -755,16 +755,20 @@ static inline bool
>>>>> vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
>>>>>> }
>>>>>>
>>>>>> /* Return true if check passed, otherwise false */
>>>>>> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>>>>>> - VTDPASIDEntry *pe)
>>>>>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s,
>>>>> VTDPASIDEntry *pe)
>>>>>> {
>>>>> What about using the cap/ecap registers to know if the translation types
>>>>> are supported or not.
>>>>> Otherwise, we could add a comment to explain why we expect
>>>>> s->scalable_modern to give us enough information.
>>>> What about below:
>>>>
>>>> /*
>>>> *VTD_ECAP_FLTS in ecap is set if s->scalable_modern is true, or else
>>> VTD_ECAP_SLTS can be set or not depending on s->scalable_mode.
>>>> *So it's simpler to check s->scalable_modern directly for a PASID entry
>>> type instead of ecap bits.
>>>> */
>>> Since this helper is for pasid entry check, you can just return false
>>> if the pe's PGTT is SS-only.
>> It depends on which scalable mode is chosen.
>> In scalable legacy mode, PGTT is SS-only and we should return true.
>>
>>> It might make more sense to check the ecap/cap here as anyhow the
>>> capability is listed in ecap/cap. This may also bring us some convenience.
>>>
>>> Say in the future, if we want to add a new mode (e.g. scalable mode 2.0)
>>> that supports both FS and SS for guest, we may need to update this helper
>>> as well if we check the scalable_modern. But if we check the ecap/cap, then
>>> the future change just needs to control the ecap/cap setting at the
>>> beginning of the vIOMMU init. To keep the code aligned, you may need to
>>> check ecap.PT bit for VTD_SM_PASID_ENTRY_PT. :)
>> OK, will be like below:
>>
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -826,14 +826,14 @@ static inline bool
>> vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
>>
>> switch (VTD_PE_GET_TYPE(pe)) {
>> case VTD_SM_PASID_ENTRY_FLT:
>> - return s->scalable_modern;
>> + return !!(s->ecap & VTD_ECAP_FLTS);
>> case VTD_SM_PASID_ENTRY_SLT:
>> - return !s->scalable_modern;
>> + return !!(s->ecap & VTD_ECAP_FLTS) || !(s->ecap & VTD_ECAP_SMTS);
> Sorry typo err, should be:
>
> + return !!(s->ecap & VTD_ECAP_SLTS) || !(s->ecap & VTD_ECAP_SMTS);
>
Moreover, shouldn't we declare the capabilities after the feature is
implemented?
I think FLTS and FS1GP should not be declared that early.
>> case VTD_SM_PASID_ENTRY_NESTED:
>> /* Not support NESTED page table type yet */
>> return false;
>> case VTD_SM_PASID_ENTRY_PT:
>> - return x86_iommu->pt_supported;
>> + return !!(s->ecap & VTD_ECAP_PT);
>> default:
>> /* Unknown type */
>> return false;
>>
>> Thanks
>> Zhenzhong
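
Pulling the pieces of this exchange together, here is a minimal sketch of how
vtd_pe_type_check() would look with the ecap-based checks and the corrected
VTD_ECAP_SLTS test above. It is only an illustration of the discussion, not
the final patch; the types and macros are assumed to come from
hw/i386/intel_iommu.c and intel_iommu_internal.h as quoted in the hunks.

    /* Sketch: validate the PASID entry type against the advertised ecap bits */
    static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
    {
        switch (VTD_PE_GET_TYPE(pe)) {
        case VTD_SM_PASID_ENTRY_FLT:
            /* first-stage only allowed when FLTS is advertised */
            return !!(s->ecap & VTD_ECAP_FLTS);
        case VTD_SM_PASID_ENTRY_SLT:
            /* second-stage allowed when SLTS is advertised, or when SMTS
             * is not advertised at all (per the correction above) */
            return !!(s->ecap & VTD_ECAP_SLTS) || !(s->ecap & VTD_ECAP_SMTS);
        case VTD_SM_PASID_ENTRY_NESTED:
            /* nested page table type is not supported yet */
            return false;
        case VTD_SM_PASID_ENTRY_PT:
            return !!(s->ecap & VTD_ECAP_PT);
        default:
            /* unknown type */
            return false;
        }
    }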
^ permalink raw reply [flat|nested] 50+ messages in thread
* RE: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
@ 2024-07-23 8:50 ` Duan, Zhenzhong
0 siblings, 0 replies; 50+ messages in thread
From: Duan, Zhenzhong @ 2024-07-23 8:50 UTC (permalink / raw)
To: CLEMENT MATHIEU--DRIF, Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Peng, Chao P, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
>-----Original Message-----
>From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable for
>scalable modern mode
>
>
>
>On 19/07/2024 05:39, Duan, Zhenzhong wrote:
>>
>>
>>> -----Original Message-----
>>> From: Duan, Zhenzhong
>>> Subject: RE: [PATCH v1 03/17] intel_iommu: Add a placeholder variable
>for
>>> scalable modern mode
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Liu, Yi L <yi.l.liu@intel.com>
>>>> Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder variable
>for
>>>> scalable modern mode
>>>>
>>>> On 2024/7/19 10:47, Duan, Zhenzhong wrote:
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>>>>>> Subject: Re: [PATCH v1 03/17] intel_iommu: Add a placeholder
>variable
>>>> for
>>>>>> scalable modern mode
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>>>>>
>>>>>>> Add a new element scalable_modern in IntelIOMMUState to mark
>>>> scalable
>>>>>>> modern mode, this element will be exposed as an intel_iommu
>>> property
>>>>>>> finally.
>>>>>>>
>>>>>>> For now, it's only a placeholder and used for cap/ecap initialization,
>>>>>>> compatibility check and block host device passthrough until nesting
>>>>>>> is supported.
>>>>>>>
>>>>>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>>>>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>>>>>> ---
>>>>>>> hw/i386/intel_iommu_internal.h | 2 ++
>>>>>>> include/hw/i386/intel_iommu.h | 1 +
>>>>>>> hw/i386/intel_iommu.c | 34 +++++++++++++++++++++++--
>-----
>>> --
>>>> --
>>>>>>> 3 files changed, 26 insertions(+), 11 deletions(-)
>>>>>>>
>>>>>>> diff --git a/hw/i386/intel_iommu_internal.h
>>>>>> b/hw/i386/intel_iommu_internal.h
>>>>>>> index c0ca7b372f..4e0331caba 100644
>>>>>>> --- a/hw/i386/intel_iommu_internal.h
>>>>>>> +++ b/hw/i386/intel_iommu_internal.h
>>>>>>> @@ -195,6 +195,7 @@
>>>>>>> #define VTD_ECAP_PASID (1ULL << 40)
>>>>>>> #define VTD_ECAP_SMTS (1ULL << 43)
>>>>>>> #define VTD_ECAP_SLTS (1ULL << 46)
>>>>>>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>>>>>>
>>>>>>> /* CAP_REG */
>>>>>>> /* (offset >> 4) << 24 */
>>>>>>> @@ -211,6 +212,7 @@
>>>>>>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>>>>>>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>>>>>>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>>>>>>> +#define VTD_CAP_FS1GP (1ULL << 56)
>>>>>>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>>>>>> VTD_CAP_DRAIN_WRITE)
>>>>>>> #define VTD_CAP_CM (1ULL << 7)
>>>>>>> #define VTD_PASID_ID_SHIFT 20
>>>>>>> diff --git a/include/hw/i386/intel_iommu.h
>>>>>> b/include/hw/i386/intel_iommu.h
>>>>>>> index 1eb05c29fc..788ed42477 100644
>>>>>>> --- a/include/hw/i386/intel_iommu.h
>>>>>>> +++ b/include/hw/i386/intel_iommu.h
>>>>>>> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>>>>>>>
>>>>>>> bool caching_mode; /* RO - is cap CM enabled? */
>>>>>>> bool scalable_mode; /* RO - is Scalable Mode supported?
>>> */
>>>>>>> + bool scalable_modern; /* RO - is modern SM supported?
>*/
>>>>>>> bool snoop_control; /* RO - is SNP filed supported? */
>>>>>>>
>>>>>>> dma_addr_t root; /* Current root table pointer */
>>>>>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>>>>>> index 1cff8b00ae..40cbd4a0f4 100644
>>>>>>> --- a/hw/i386/intel_iommu.c
>>>>>>> +++ b/hw/i386/intel_iommu.c
>>>>>>> @@ -755,16 +755,20 @@ static inline bool
>>>>>> vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
>>>>>>> }
>>>>>>>
>>>>>>> /* Return true if check passed, otherwise false */
>>>>>>> -static inline bool vtd_pe_type_check(X86IOMMUState
>*x86_iommu,
>>>>>>> - VTDPASIDEntry *pe)
>>>>>>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s,
>>>>>> VTDPASIDEntry *pe)
>>>>>>> {
>>>>>> What about using the cap/ecap registers to know if the translation
>types
>>>>>> are supported or not.
>>>>>> Otherwise, we could add a comment to explain why we expect
>>>>>> s->scalable_modern to give us enough information.
>>>>> What about below:
>>>>>
>>>>> /*
>>>>> *VTD_ECAP_FLTS in ecap is set if s->scalable_modern is true, or else
>>>> VTD_ECAP_SLTS can be set or not depending on s->scalable_mode.
>>>>> *So it's simpler to check s->scalable_modern directly for a PASID
>entry
>>>> type instead of ecap bits.
>>>>> */
>>>> Since this helper is for pasid entry check, you can just return false
>>>> if the pe's PGTT is SS-only.
>>> It depends on which scalable mode is chosen.
>>> In scalable legacy mode, PGTT is SS-only and we should return true.
>>>
>>>> It might make more sense to check the ecap/cap here as anyhow the
>>>> capability is listed in ecap/cap. This may also bring us some convenience.
>>>>
>>>> Say in the future, if we want to add a new mode (e.g. scalable mode 2.0)
>>>> that supports both FS and SS for guest, we may need to update this
>helper
>>>> as well if we check the scalable_modern. But if we check the ecap/cap,
>then
>>>> the future change just needs to control the ecap/cap setting at the
>>>> beginning of the vIOMMU init. To keep the code aligned, you may need
>to
>>>> check ecap.PT bit for VTD_SM_PASID_ENTRY_PT. :)
>>> OK, will be like below:
>>>
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -826,14 +826,14 @@ static inline bool
>>> vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
>>>
>>> switch (VTD_PE_GET_TYPE(pe)) {
>>> case VTD_SM_PASID_ENTRY_FLT:
>>> - return s->scalable_modern;
>>> + return !!(s->ecap & VTD_ECAP_FLTS);
>>> case VTD_SM_PASID_ENTRY_SLT:
>>> - return !s->scalable_modern;
>>> + return !!(s->ecap & VTD_ECAP_FLTS) || !(s->ecap &
>VTD_ECAP_SMTS);
>> Sorry typo err, should be:
>>
>> + return !!(s->ecap & VTD_ECAP_SLTS) || !(s->ecap &
>VTD_ECAP_SMTS);
>>
>Moreover, shouldn't we declare the capabilities after the feature is
>implemented?
>I think FLTS and FS1GP should not be declared that early.
OK, I can move it to "[PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be string option".
In fact, before patch 16 there is no way to enable s->scalable_modern, so those caps can never be enabled.
Thanks
Zhenzhong
>>> case VTD_SM_PASID_ENTRY_NESTED:
>>> /* Not support NESTED page table type yet */
>>> return false;
>>> case VTD_SM_PASID_ENTRY_PT:
>>> - return x86_iommu->pt_supported;
>>> + return !!(s->ecap & VTD_ECAP_PT);
>>> default:
>>> /* Unknown type */
>>> return false;
>>>
>>> Thanks
>>> Zhenzhong
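
To illustrate the point about only advertising a capability once the feature
behind it can actually be enabled, the cap/ecap initialization could simply be
gated on the mode flags, roughly as below. This is a sketch of the idea only
(assuming it sits where s->cap and s->ecap are built up during vIOMMU init),
not code taken from the series.

    /* Sketch: advertise first-stage bits only in scalable modern mode and
     * second-stage bits only in scalable legacy mode, so that helpers like
     * vtd_pe_type_check() can rely purely on cap/ecap. */
    if (s->scalable_mode) {
        s->ecap |= VTD_ECAP_SMTS;
        if (s->scalable_modern) {
            s->ecap |= VTD_ECAP_FLTS;
            s->cap  |= VTD_CAP_FS1GP;
        } else {
            s->ecap |= VTD_ECAP_SLTS;
        }
    }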
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PADID-selective PASID-based iotlb invalidation
2024-07-18 8:16 ` [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PADID-selective PASID-based iotlb invalidation Zhenzhong Duan
@ 2024-07-23 16:02 ` CLEMENT MATHIEU--DRIF
2024-07-24 2:59 ` Duan, Zhenzhong
0 siblings, 1 reply; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-23 16:02 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> Per spec 6.5.2.4, PADID-selective PASID-based iotlb invalidation will
> flush stage-2 iotlb entries with matching domain id and pasid.
>
> With scalable modern mode introduced, guest could send PASID-selective
> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/i386/intel_iommu_internal.h | 10 +++++
> hw/i386/intel_iommu.c | 78 ++++++++++++++++++++++++++++++++++
> 2 files changed, 88 insertions(+)
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index 4e0331caba..f71fc91234 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -440,6 +440,16 @@ typedef union VTDInvDesc VTDInvDesc;
> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM | VTD_SL_TM)) : \
> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>
> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
> +
> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000ffc0ULL
> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
> +
> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & \
> + VTD_DOMAIN_ID_MASK)
> +
> /* Information about page-selective IOTLB invalidate */
> struct VTDIOTLBPageInvInfo {
> uint16_t domain_id;
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 40cbd4a0f4..075a27adac 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -2659,6 +2659,80 @@ static bool vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
> return true;
> }
>
> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
> + gpointer user_data)
> +{
> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
> +
> + return ((entry->domain_id == info->domain_id) &&
> + (entry->pasid == info->pasid));
> +}
> +
> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
> + uint16_t domain_id, uint32_t pasid)
> +{
> + VTDIOTLBPageInvInfo info;
> + VTDAddressSpace *vtd_as;
> + VTDContextEntry ce;
> +
> + info.domain_id = domain_id;
> + info.pasid = pasid;
> +
> + vtd_iommu_lock(s);
> + g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
> + &info);
> + vtd_iommu_unlock(s);
> +
> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
> + vtd_as->devfn, &ce) &&
> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
> +
> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
> + vtd_as->pasid != pasid) {
> + continue;
> + }
> +
> + if (!s->scalable_modern) {
> + vtd_address_space_sync(vtd_as);
> + }
> + }
> + }
> +}
> +
> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
> + VTDInvDesc *inv_desc)
> +{
> + uint16_t domain_id;
> + uint32_t pasid;
> +
> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1)) {
> + error_report_once("non-zero-field-in-piotlb_inv_desc hi: 0x%" PRIx64
> + " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
This error is not formatted as the other similar messages we print when
reserved bits are non-zero.
Here is what we've done in vtd_process_iotlb_desc:
error_report_once("%s: invalid iotlb inv desc: hi=0x%"PRIx64
", lo=0x%"PRIx64" (reserved bits unzero)",
__func__, inv_desc->hi, inv_desc->lo);
> + return false;
> + }
> +
> + domain_id = VTD_INV_DESC_PIOTLB_DID(inv_desc->val[0]);
> + pasid = VTD_INV_DESC_PIOTLB_PASID(inv_desc->val[0]);
> + switch (inv_desc->val[0] & VTD_INV_DESC_IOTLB_G) {
Not critical but why don't we have VTD_INV_DESC_PIOTLB_G?
> + case VTD_INV_DESC_PIOTLB_ALL_IN_PASID:
> + vtd_piotlb_pasid_invalidate(s, domain_id, pasid);
> + break;
> +
> + case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
> + break;
> +
> + default:
> + error_report_once("Invalid granularity in P-IOTLB desc hi: 0x%" PRIx64
> + " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
Same comment, I think we should make the messages consistent across
descriptor handlers.
> + return false;
> + }
> + return true;
> +}
> +
> static bool vtd_process_inv_iec_desc(IntelIOMMUState *s,
> VTDInvDesc *inv_desc)
> {
> @@ -2769,6 +2843,10 @@ static bool vtd_process_inv_desc(IntelIOMMUState *s)
> break;
>
> case VTD_INV_DESC_PIOTLB:
> + trace_vtd_inv_desc("p-iotlb", inv_desc.val[1], inv_desc.val[0]);
> + if (!vtd_process_piotlb_desc(s, &inv_desc)) {
> + return false;
> + }
> break;
>
> case VTD_INV_DESC_WAIT:
> --
> 2.34.1
>
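
For readers following the handler above, here is an illustrative decode of the
PASID-based IOTLB invalidate descriptor using the macros from the hunk; the
granularity field at bits 5:4 takes value 2 for PASID-selective and 3 for
page-selective-within-PASID, matching VTD_INV_DESC_PIOTLB_ALL_IN_PASID and
VTD_INV_DESC_PIOTLB_PSI_IN_PASID. This snippet is explanatory only and is not
part of the patch.

    uint64_t val0      = inv_desc->val[0];
    uint16_t domain_id = VTD_INV_DESC_PIOTLB_DID(val0);    /* bits 31:16 */
    uint32_t pasid     = VTD_INV_DESC_PIOTLB_PASID(val0);  /* bits 51:32 */
    uint64_t gran      = val0 & VTD_INV_DESC_IOTLB_G;      /* bits 5:4   */

    if (gran == VTD_INV_DESC_PIOTLB_ALL_IN_PASID) {
        /* flush all entries tagged with (domain_id, pasid), as
         * vtd_piotlb_pasid_invalidate() does */
    } else if (gran == VTD_INV_DESC_PIOTLB_PSI_IN_PASID) {
        /* page-selective within PASID, filled in by a later patch */
    }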
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 10/17] intel_iommu: Process PASID-based iotlb invalidation
2024-07-18 8:16 ` [PATCH v1 10/17] intel_iommu: Process PASID-based " Zhenzhong Duan
@ 2024-07-23 16:18 ` CLEMENT MATHIEU--DRIF
0 siblings, 0 replies; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-23 16:18 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> PASID-based iotlb (piotlb) is used during walking Intel
> VT-d stage-1 page table.
>
> This emulates the stage-1 page table iotlb invalidation requested
> by a PASID-based IOTLB Invalidate Descriptor (P_IOTLB).
>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/i386/intel_iommu_internal.h | 3 +++
> hw/i386/intel_iommu.c | 45 ++++++++++++++++++++++++++++++++++
> 2 files changed, 48 insertions(+)
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index cf0f176e06..7dd8176e86 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -470,6 +470,9 @@ typedef union VTDInvDesc VTDInvDesc;
> #define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
> #define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & \
> VTD_DOMAIN_ID_MASK)
> +#define VTD_INV_DESC_PIOTLB_ADDR(val) ((val) & ~0xfffULL)
> +#define VTD_INV_DESC_PIOTLB_AM(val) ((val) & 0x3fULL)
> +#define VTD_INV_DESC_PIOTLB_IH(val) (((val) >> 6) & 0x1)
>
> /* Information about page-selective IOTLB invalidate */
> struct VTDIOTLBPageInvInfo {
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 8d47e5ba78..8ebb6dbd7d 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -322,6 +322,28 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
> return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
> }
>
> +static gboolean vtd_hash_remove_by_page_piotlb(gpointer key, gpointer value,
> + gpointer user_data)
> +{
> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
> + uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
> + uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
> +
> + /*
> + * According to spec, PASID-based-IOTLB Invalidation in page granularity
> + * doesn't invalidate IOTLB entries caching second-stage (PGTT=010b)
> + * or pass-through (PGTT=100b) mappings. Nested isn't supported yet,
> + * so only need to check first-stage (PGTT=001b) mappings.
> + */
> + if (entry->pgtt != VTD_SM_PASID_ENTRY_FLT) {
> + return false;
> + }
> +
> + return entry->domain_id == info->domain_id && entry->pasid == info->pasid &&
> + ((entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb);
> +}
> +
> /* Reset all the gen of VTDAddressSpace to zero and set the gen of
> * IntelIOMMUState to 1. Must be called with IOMMU lock held.
> */
> @@ -2886,11 +2908,30 @@ static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
> }
> }
>
> +static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
> + uint32_t pasid, hwaddr addr, uint8_t am,
> + bool ih)
> +{
> + VTDIOTLBPageInvInfo info;
> +
> + info.domain_id = domain_id;
> + info.pasid = pasid;
> + info.addr = addr;
> + info.mask = ~((1 << am) - 1);
> +
> + vtd_iommu_lock(s);
> + g_hash_table_foreach_remove(s->iotlb,
> + vtd_hash_remove_by_page_piotlb, &info);
> + vtd_iommu_unlock(s);
> +}
> +
> static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
> VTDInvDesc *inv_desc)
> {
> uint16_t domain_id;
> uint32_t pasid;
> + uint8_t am;
> + hwaddr addr;
>
> if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
> (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1)) {
> @@ -2907,6 +2948,10 @@ static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
> break;
>
> case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
> + am = VTD_INV_DESC_PIOTLB_AM(inv_desc->val[1]);
> + addr = (hwaddr) VTD_INV_DESC_PIOTLB_ADDR(inv_desc->val[1]);
> + vtd_piotlb_page_invalidate(s, domain_id, pasid, addr, am,
> + VTD_INV_DESC_PIOTLB_IH(inv_desc->val[1]));
> break;
>
> default:
> --
> 2.34.1
>
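
As a quick worked example of the page-selective matching above (numbers chosen
arbitrarily, not taken from the patch): with am = 2 the request covers four 4K
pages, and a cached 4K first-stage entry is dropped when its gfn falls inside
that aligned range.

    uint8_t  am   = 2;
    hwaddr   addr = 0x12345000ULL;
    uint64_t mask = ~((1ULL << am) - 1);                 /* ...fffffffc */
    uint64_t gfn  = (addr >> VTD_PAGE_SHIFT_4K) & mask;  /* 0x12344     */

    /* entries with entry->gfn in 0x12344..0x12347 satisfy
     * (entry->gfn & mask) == gfn and are removed; larger pages are matched
     * via (addr & entry->mask) >> VTD_PAGE_SHIFT_4K instead */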
^ permalink raw reply [flat|nested] 50+ messages in thread
* RE: [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PADID-selective PASID-based iotlb invalidation
2024-07-23 16:02 ` CLEMENT MATHIEU--DRIF
@ 2024-07-24 2:59 ` Duan, Zhenzhong
2024-07-24 5:16 ` CLEMENT MATHIEU--DRIF
0 siblings, 1 reply; 50+ messages in thread
From: Duan, Zhenzhong @ 2024-07-24 2:59 UTC (permalink / raw)
To: CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
>-----Original Message-----
>From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>Subject: Re: [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PADID-
>selective PASID-based iotlb invalidation
>
>
>
>On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>
>>
>> Per spec 6.5.2.4, PADID-selective PASID-based iotlb invalidation will
>> flush stage-2 iotlb entries with matching domain id and pasid.
>>
>> With scalable modern mode introduced, guest could send PASID-selective
>> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
>>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> ---
>> hw/i386/intel_iommu_internal.h | 10 +++++
>> hw/i386/intel_iommu.c | 78
>++++++++++++++++++++++++++++++++++
>> 2 files changed, 88 insertions(+)
>>
>> diff --git a/hw/i386/intel_iommu_internal.h
>b/hw/i386/intel_iommu_internal.h
>> index 4e0331caba..f71fc91234 100644
>> --- a/hw/i386/intel_iommu_internal.h
>> +++ b/hw/i386/intel_iommu_internal.h
>> @@ -440,6 +440,16 @@ typedef union VTDInvDesc VTDInvDesc;
>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM |
>VTD_SL_TM)) : \
>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>>
>> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>> +
>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000ffc0ULL
>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>> +
>> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
>> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & \
>> + VTD_DOMAIN_ID_MASK)
>> +
>> /* Information about page-selective IOTLB invalidate */
>> struct VTDIOTLBPageInvInfo {
>> uint16_t domain_id;
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 40cbd4a0f4..075a27adac 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -2659,6 +2659,80 @@ static bool
>vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
>> return true;
>> }
>>
>> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
>> + gpointer user_data)
>> +{
>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>> +
>> + return ((entry->domain_id == info->domain_id) &&
>> + (entry->pasid == info->pasid));
>> +}
>> +
>> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>> + uint16_t domain_id, uint32_t pasid)
>> +{
>> + VTDIOTLBPageInvInfo info;
>> + VTDAddressSpace *vtd_as;
>> + VTDContextEntry ce;
>> +
>> + info.domain_id = domain_id;
>> + info.pasid = pasid;
>> +
>> + vtd_iommu_lock(s);
>> + g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
>> + &info);
>> + vtd_iommu_unlock(s);
>> +
>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>> + vtd_as->devfn, &ce) &&
>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>> +
>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>> + vtd_as->pasid != pasid) {
>> + continue;
>> + }
>> +
>> + if (!s->scalable_modern) {
>> + vtd_address_space_sync(vtd_as);
>> + }
>> + }
>> + }
>> +}
>> +
>> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>> + VTDInvDesc *inv_desc)
>> +{
>> + uint16_t domain_id;
>> + uint32_t pasid;
>> +
>> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
>> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1)) {
>> + error_report_once("non-zero-field-in-piotlb_inv_desc hi: 0x%"
>PRIx64
>> + " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
>This error is not formatted as the other similar messages we print when
>reserved bits are non-zero.
>Here is what we've done in vtd_process_iotlb_desc:
Sure, will change as below,
>
> error_report_once("%s: invalid iotlb inv desc: hi=0x%"PRIx64
> ", lo=0x%"PRIx64" (reserved bits unzero)",
> __func__, inv_desc->hi, inv_desc->lo);
>> + return false;
>> + }
>> +
>> + domain_id = VTD_INV_DESC_PIOTLB_DID(inv_desc->val[0]);
>> + pasid = VTD_INV_DESC_PIOTLB_PASID(inv_desc->val[0]);
>> + switch (inv_desc->val[0] & VTD_INV_DESC_IOTLB_G) {
>Not critical but why don't we have VTD_INV_DESC_PIOTLB_G?
Will add.
>> + case VTD_INV_DESC_PIOTLB_ALL_IN_PASID:
>> + vtd_piotlb_pasid_invalidate(s, domain_id, pasid);
>> + break;
>> +
>> + case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
>> + break;
>> +
>> + default:
>> + error_report_once("Invalid granularity in P-IOTLB desc hi: 0x%"
>PRIx64
>> + " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
>Same comment, I think we should make the messages consistent across
>descriptor handlers.
What about below:
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 3290761595..e76fd9d377 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -479,9 +479,10 @@ typedef union VTDInvDesc VTDInvDesc;
#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
+/* Masks for IOTLB Invalidate Descriptor */
+#define VTD_INV_DESC_IOTLB_G (3ULL << 4)
#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000ffc0ULL
#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
-
#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & \
VTD_DOMAIN_ID_MASK)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 0733180501..9dd41b835b 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3708,8 +3708,9 @@ static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
(inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1)) {
- error_report_once("non-zero-field-in-piotlb_inv_desc hi: 0x%" PRIx64
- " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
+ error_report_once("%s: invalid piotlb inv desc hi=0x%"PRIx64
+ " lo=0x%"PRIx64" (reserved bits unzero)",
+ __func__, inv_desc->val[1], inv_desc->val[0]);
return false;
}
@@ -3728,8 +3729,10 @@ static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
break;
default:
- error_report_once("Invalid granularity in P-IOTLB desc hi: 0x%" PRIx64
- " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
+ error_report_once("%s: invalid piotlb inv desc: hi=0x%"PRIx64
+ ", lo=0x%"PRIx64" (type mismatch: 0x%llx)",
+ __func__, inv_desc->val[1], inv_desc->val[0],
+ inv_desc->val[0] & VTD_INV_DESC_IOTLB_G);
return false;
}
return true;
Thanks
Zhenzhong
^ permalink raw reply related [flat|nested] 50+ messages in thread
* Re: [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PADID-selective PASID-based iotlb invalidation
2024-07-24 2:59 ` Duan, Zhenzhong
@ 2024-07-24 5:16 ` CLEMENT MATHIEU--DRIF
2024-07-24 5:19 ` Duan, Zhenzhong
0 siblings, 1 reply; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-24 5:16 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
On 24/07/2024 04:59, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>> Subject: Re: [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PADID-
>> selective PASID-based iotlb invalidation
>>
>>
>>
>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>
>>> Per spec 6.5.2.4, PADID-selective PASID-based iotlb invalidation will
>>> flush stage-2 iotlb entries with matching domain id and pasid.
>>>
>>> With scalable modern mode introduced, guest could send PASID-selective
>>> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
>>>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> ---
>>> hw/i386/intel_iommu_internal.h | 10 +++++
>>> hw/i386/intel_iommu.c | 78
>> ++++++++++++++++++++++++++++++++++
>>> 2 files changed, 88 insertions(+)
>>>
>>> diff --git a/hw/i386/intel_iommu_internal.h
>> b/hw/i386/intel_iommu_internal.h
>>> index 4e0331caba..f71fc91234 100644
>>> --- a/hw/i386/intel_iommu_internal.h
>>> +++ b/hw/i386/intel_iommu_internal.h
>>> @@ -440,6 +440,16 @@ typedef union VTDInvDesc VTDInvDesc;
>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM |
>> VTD_SL_TM)) : \
>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>>>
>>> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>>> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>>> +
>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000ffc0ULL
>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>>> +
>>> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
>>> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & \
>>> + VTD_DOMAIN_ID_MASK)
>>> +
>>> /* Information about page-selective IOTLB invalidate */
>>> struct VTDIOTLBPageInvInfo {
>>> uint16_t domain_id;
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 40cbd4a0f4..075a27adac 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -2659,6 +2659,80 @@ static bool
>> vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
>>> return true;
>>> }
>>>
>>> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
>>> + gpointer user_data)
>>> +{
>>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
>>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>>> +
>>> + return ((entry->domain_id == info->domain_id) &&
>>> + (entry->pasid == info->pasid));
>>> +}
>>> +
>>> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>>> + uint16_t domain_id, uint32_t pasid)
>>> +{
>>> + VTDIOTLBPageInvInfo info;
>>> + VTDAddressSpace *vtd_as;
>>> + VTDContextEntry ce;
>>> +
>>> + info.domain_id = domain_id;
>>> + info.pasid = pasid;
>>> +
>>> + vtd_iommu_lock(s);
>>> + g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
>>> + &info);
>>> + vtd_iommu_unlock(s);
>>> +
>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>>> + vtd_as->devfn, &ce) &&
>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>>> +
>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>>> + vtd_as->pasid != pasid) {
>>> + continue;
>>> + }
>>> +
>>> + if (!s->scalable_modern) {
>>> + vtd_address_space_sync(vtd_as);
>>> + }
>>> + }
>>> + }
>>> +}
>>> +
>>> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>>> + VTDInvDesc *inv_desc)
>>> +{
>>> + uint16_t domain_id;
>>> + uint32_t pasid;
>>> +
>>> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
>>> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1)) {
>>> + error_report_once("non-zero-field-in-piotlb_inv_desc hi: 0x%"
>> PRIx64
>>> + " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
>> This error is not formatted as the other similar messages we print when
>> reserved bits are non-zero.
>> Here is what we've done in vtd_process_iotlb_desc:
> Sure, will change as below,
>
>> error_report_once("%s: invalid iotlb inv desc: hi=0x%"PRIx64
>> ", lo=0x%"PRIx64" (reserved bits unzero)",
>> __func__, inv_desc->hi, inv_desc->lo);
>>> + return false;
>>> + }
>>> +
>>> + domain_id = VTD_INV_DESC_PIOTLB_DID(inv_desc->val[0]);
>>> + pasid = VTD_INV_DESC_PIOTLB_PASID(inv_desc->val[0]);
>>> + switch (inv_desc->val[0] & VTD_INV_DESC_IOTLB_G) {
>> Not critical but why don't we have VTD_INV_DESC_PIOTLB_G?
> Will add.
>
>>> + case VTD_INV_DESC_PIOTLB_ALL_IN_PASID:
>>> + vtd_piotlb_pasid_invalidate(s, domain_id, pasid);
>>> + break;
>>> +
>>> + case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
>>> + break;
>>> +
>>> + default:
>>> + error_report_once("Invalid granularity in P-IOTLB desc hi: 0x%"
>> PRIx64
>>> + " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
>> Same comment, I think we should make the messages consistent across
>> descriptor handlers.
> What about below:
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index 3290761595..e76fd9d377 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -479,9 +479,10 @@ typedef union VTDInvDesc VTDInvDesc;
> #define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
> #define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>
> +/* Masks for IOTLB Invalidate Descriptor */
> +#define VTD_INV_DESC_IOTLB_G (3ULL << 4)
This one is already defined
> #define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000ffc0ULL
> #define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
> -
> #define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
> #define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & \
> VTD_DOMAIN_ID_MASK)
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 0733180501..9dd41b835b 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -3708,8 +3708,9 @@ static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>
> if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
> (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1)) {
> - error_report_once("non-zero-field-in-piotlb_inv_desc hi: 0x%" PRIx64
> - " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
> + error_report_once("%s: invalid piotlb inv desc hi=0x%"PRIx64
> + " lo=0x%"PRIx64" (reserved bits unzero)",
> + __func__, inv_desc->val[1], inv_desc->val[0]);
> return false;
> }
lgtm
>
> @@ -3728,8 +3729,10 @@ static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
> break;
>
> default:
> - error_report_once("Invalid granularity in P-IOTLB desc hi: 0x%" PRIx64
> - " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
> + error_report_once("%s: invalid piotlb inv desc: hi=0x%"PRIx64
> + ", lo=0x%"PRIx64" (type mismatch: 0x%llx)",
> + __func__, inv_desc->val[1], inv_desc->val[0],
> + inv_desc->val[0] & VTD_INV_DESC_IOTLB_G);
> return false;
> }
> return true;
lgtm
>
> Thanks
> Zhenzhong
Thanks
>cmd
^ permalink raw reply [flat|nested] 50+ messages in thread
* RE: [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PADID-selective PASID-based iotlb invalidation
2024-07-24 5:16 ` CLEMENT MATHIEU--DRIF
@ 2024-07-24 5:19 ` Duan, Zhenzhong
0 siblings, 0 replies; 50+ messages in thread
From: Duan, Zhenzhong @ 2024-07-24 5:19 UTC (permalink / raw)
To: CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
>-----Original Message-----
>From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>Subject: Re: [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PADID-
>selective PASID-based iotlb invalidation
>
>
>
>On 24/07/2024 04:59, Duan, Zhenzhong wrote:
>>
>>
>>> -----Original Message-----
>>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>>> Subject: Re: [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in
>PADID-
>>> selective PASID-based iotlb invalidation
>>>
>>>
>>>
>>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>>
>>>> Per spec 6.5.2.4, PADID-selective PASID-based iotlb invalidation will
>>>> flush stage-2 iotlb entries with matching domain id and pasid.
>>>>
>>>> With scalable modern mode introduced, guest could send PASID-
>selective
>>>> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
>>>>
>>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>>> ---
>>>> hw/i386/intel_iommu_internal.h | 10 +++++
>>>> hw/i386/intel_iommu.c | 78
>>> ++++++++++++++++++++++++++++++++++
>>>> 2 files changed, 88 insertions(+)
>>>>
>>>> diff --git a/hw/i386/intel_iommu_internal.h
>>> b/hw/i386/intel_iommu_internal.h
>>>> index 4e0331caba..f71fc91234 100644
>>>> --- a/hw/i386/intel_iommu_internal.h
>>>> +++ b/hw/i386/intel_iommu_internal.h
>>>> @@ -440,6 +440,16 @@ typedef union VTDInvDesc VTDInvDesc;
>>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM |
>>> VTD_SL_TM)) : \
>>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>>>>
>>>> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>>>> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>>>> +
>>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000ffc0ULL
>>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>>>> +
>>>> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
>>>> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & \
>>>> + VTD_DOMAIN_ID_MASK)
>>>> +
>>>> /* Information about page-selective IOTLB invalidate */
>>>> struct VTDIOTLBPageInvInfo {
>>>> uint16_t domain_id;
>>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>>> index 40cbd4a0f4..075a27adac 100644
>>>> --- a/hw/i386/intel_iommu.c
>>>> +++ b/hw/i386/intel_iommu.c
>>>> @@ -2659,6 +2659,80 @@ static bool
>>> vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
>>>> return true;
>>>> }
>>>>
>>>> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer
>value,
>>>> + gpointer user_data)
>>>> +{
>>>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
>>>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>>>> +
>>>> + return ((entry->domain_id == info->domain_id) &&
>>>> + (entry->pasid == info->pasid));
>>>> +}
>>>> +
>>>> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>>>> + uint16_t domain_id, uint32_t pasid)
>>>> +{
>>>> + VTDIOTLBPageInvInfo info;
>>>> + VTDAddressSpace *vtd_as;
>>>> + VTDContextEntry ce;
>>>> +
>>>> + info.domain_id = domain_id;
>>>> + info.pasid = pasid;
>>>> +
>>>> + vtd_iommu_lock(s);
>>>> + g_hash_table_foreach_remove(s->iotlb,
>vtd_hash_remove_by_pasid,
>>>> + &info);
>>>> + vtd_iommu_unlock(s);
>>>> +
>>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>>>> + vtd_as->devfn, &ce) &&
>>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>>>> +
>>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>>>> + vtd_as->pasid != pasid) {
>>>> + continue;
>>>> + }
>>>> +
>>>> + if (!s->scalable_modern) {
>>>> + vtd_address_space_sync(vtd_as);
>>>> + }
>>>> + }
>>>> + }
>>>> +}
>>>> +
>>>> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>>>> + VTDInvDesc *inv_desc)
>>>> +{
>>>> + uint16_t domain_id;
>>>> + uint32_t pasid;
>>>> +
>>>> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
>>>> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1)) {
>>>> + error_report_once("non-zero-field-in-piotlb_inv_desc hi: 0x%"
>>> PRIx64
>>>> + " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
>>> This error is not formatted as the other similar messages we print when
>>> reserved bits are non-zero.
>>> Here is what we've done in vtd_process_iotlb_desc:
>> Sure, will change as below,
>>
>>> error_report_once("%s: invalid iotlb inv desc: hi=0x%"PRIx64
>>> ", lo=0x%"PRIx64" (reserved bits unzero)",
>>> __func__, inv_desc->hi, inv_desc->lo);
>>>> + return false;
>>>> + }
>>>> +
>>>> + domain_id = VTD_INV_DESC_PIOTLB_DID(inv_desc->val[0]);
>>>> + pasid = VTD_INV_DESC_PIOTLB_PASID(inv_desc->val[0]);
>>>> + switch (inv_desc->val[0] & VTD_INV_DESC_IOTLB_G) {
>>> Not critical but why don't we have VTD_INV_DESC_PIOTLB_G?
>> Will add.
>>
>>>> + case VTD_INV_DESC_PIOTLB_ALL_IN_PASID:
>>>> + vtd_piotlb_pasid_invalidate(s, domain_id, pasid);
>>>> + break;
>>>> +
>>>> + case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
>>>> + break;
>>>> +
>>>> + default:
>>>> + error_report_once("Invalid granularity in P-IOTLB desc hi: 0x%"
>>> PRIx64
>>>> + " lo: 0x%" PRIx64, inv_desc->val[1], inv_desc->val[0]);
>>> Same comment, I think we should make the messages consistent across
>>> descriptor handlers.
>> What about below:
>>
>> diff --git a/hw/i386/intel_iommu_internal.h
>b/hw/i386/intel_iommu_internal.h
>> index 3290761595..e76fd9d377 100644
>> --- a/hw/i386/intel_iommu_internal.h
>> +++ b/hw/i386/intel_iommu_internal.h
>> @@ -479,9 +479,10 @@ typedef union VTDInvDesc VTDInvDesc;
>> #define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>> #define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>>
>> +/* Masks for IOTLB Invalidate Descriptor */
>> +#define VTD_INV_DESC_IOTLB_G (3ULL << 4)
>This one is already defined
Ah, typo, I mean:
+#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
Thanks
Zhenzhong
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 14/17] intel_iommu: piotlb invalidation should notify unmap
2024-07-18 8:16 ` [PATCH v1 14/17] intel_iommu: piotlb invalidation should notify unmap Zhenzhong Duan
@ 2024-07-24 5:45 ` CLEMENT MATHIEU--DRIF
2024-07-24 6:04 ` CLEMENT MATHIEU--DRIF
0 siblings, 1 reply; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-24 5:45 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Yi Sun, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
Maybe I'm missing something but why do we invalidate device IOTLB
upon receipt of a regular piotlb inv desc?
I don't get why we don't wait for a device IOTLB inv desc?
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> This is used by some emulated devices which cache address
> translation results. When a piotlb invalidation is issued in the guest,
> those caches should be refreshed.
>
> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/i386/intel_iommu.c | 35 ++++++++++++++++++++++++++++++++++-
> 1 file changed, 34 insertions(+), 1 deletion(-)
>
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 8b66d6cfa5..c0116497b1 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -2910,7 +2910,7 @@ static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
> continue;
> }
>
> - if (!s->scalable_modern) {
> + if (!s->scalable_modern || !vtd_as_has_map_notifier(vtd_as)) {
> vtd_address_space_sync(vtd_as);
> }
> }
> @@ -2922,6 +2922,9 @@ static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
> bool ih)
> {
> VTDIOTLBPageInvInfo info;
> + VTDAddressSpace *vtd_as;
> + VTDContextEntry ce;
> + hwaddr size = (1 << am) * VTD_PAGE_SIZE;
>
> info.domain_id = domain_id;
> info.pasid = pasid;
> @@ -2932,6 +2935,36 @@ static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
> g_hash_table_foreach_remove(s->iotlb,
> vtd_hash_remove_by_page_piotlb, &info);
> vtd_iommu_unlock(s);
> +
> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
> + vtd_as->devfn, &ce) &&
> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
> + IOMMUTLBEvent event;
> +
> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
> + vtd_as->pasid != pasid) {
> + continue;
> + }
> +
> + /*
> + * Page-Selective-within-PASID PASID-based-IOTLB Invalidation
> + * does not flush stage-2 entries. See spec section 6.5.2.4
> + */
> + if (!s->scalable_modern) {
> + continue;
> + }
> +
> + event.type = IOMMU_NOTIFIER_UNMAP;
> + event.entry.target_as = &address_space_memory;
> + event.entry.iova = addr;
> + event.entry.perm = IOMMU_NONE;
> + event.entry.addr_mask = size - 1;
> + event.entry.translated_addr = 0;
> + memory_region_notify_iommu(&vtd_as->iommu, 0, event);
> + }
> + }
> }
>
> static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 17/17] tests/qtest: Add intel-iommu test
2024-07-18 8:16 ` [PATCH v1 17/17] tests/qtest: Add intel-iommu test Zhenzhong Duan
@ 2024-07-24 5:58 ` CLEMENT MATHIEU--DRIF
2024-07-24 6:14 ` Duan, Zhenzhong
0 siblings, 1 reply; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-24 5:58 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Marcel Apfelbaum, Thomas Huth, Laurent Vivier, Paolo Bonzini
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> Add the framework to test the intel-iommu device.
>
> Currently only tested cap/ecap bits correctness in scalable
> modern mode. Also tested cap/ecap bits consistency before
> and after system reset.
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> MAINTAINERS | 1 +
> include/hw/i386/intel_iommu.h | 1 +
> tests/qtest/intel-iommu-test.c | 71 ++++++++++++++++++++++++++++++++++
> tests/qtest/meson.build | 1 +
> 4 files changed, 74 insertions(+)
> create mode 100644 tests/qtest/intel-iommu-test.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 7d9811458c..ec765bf3d3 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -3662,6 +3662,7 @@ S: Supported
> F: hw/i386/intel_iommu.c
> F: hw/i386/intel_iommu_internal.h
> F: include/hw/i386/intel_iommu.h
> +F: tests/qtest/intel-iommu-test.c
>
> AMD-Vi Emulation
> S: Orphan
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index 650641544c..b1848dbec6 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -47,6 +47,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState, INTEL_IOMMU_DEVICE)
> #define VTD_HOST_AW_48BIT 48
> #define VTD_HOST_AW_AUTO 0xff
> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
> +#define VTD_MGAW_FROM_CAP(cap) ((cap >> 16) & 0x3fULL)
>
> #define DMAR_REPORT_F_INTR (1)
>
> diff --git a/tests/qtest/intel-iommu-test.c b/tests/qtest/intel-iommu-test.c
> new file mode 100644
> index 0000000000..8e07034f6f
> --- /dev/null
> +++ b/tests/qtest/intel-iommu-test.c
> @@ -0,0 +1,71 @@
> +/*
> + * QTest testcase for intel-iommu
> + *
> + * Copyright (c) 2024 Intel, Inc.
> + *
> + * Author: Zhenzhong Duan <zhenzhong.duan@intel.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "libqtest.h"
> +#include "hw/i386/intel_iommu_internal.h"
> +
> +#define CAP_MODERN_FIXED1 (VTD_CAP_FRO | VTD_CAP_NFR | VTD_CAP_ND | \
> + VTD_CAP_MAMV | VTD_CAP_PSI | VTD_CAP_SLLPS)
> +#define ECAP_MODERN_FIXED1 (VTD_ECAP_QI | VTD_ECAP_IRO | VTD_ECAP_MHMV | \
> + VTD_ECAP_SMTS | VTD_ECAP_FLTS)
> +
> +static inline uint32_t vtd_reg_readl(QTestState *s, uint64_t offset)
> +{
> + return qtest_readl(s, Q35_HOST_BRIDGE_IOMMU_ADDR + offset);
> +}
> +
> +static inline uint64_t vtd_reg_readq(QTestState *s, uint64_t offset)
> +{
> + return qtest_readq(s, Q35_HOST_BRIDGE_IOMMU_ADDR + offset);
> +}
> +
> +static void test_intel_iommu_modern(void)
> +{
> + uint8_t init_csr[DMAR_REG_SIZE]; /* register values */
> + uint8_t post_reset_csr[DMAR_REG_SIZE]; /* register values */
> + uint64_t cap, ecap, tmp;
> + QTestState *s;
> +
> + s = qtest_init("-M q35 -device intel-iommu,x-scalable-mode=modern");
> +
> + cap = vtd_reg_readq(s, DMAR_CAP_REG);
> + g_assert((cap & CAP_MODERN_FIXED1) == CAP_MODERN_FIXED1);
> +
> + tmp = cap & VTD_CAP_SAGAW_MASK;
> + g_assert(tmp == (VTD_CAP_SAGAW_39bit | VTD_CAP_SAGAW_48bit));
> +
> + tmp = VTD_MGAW_FROM_CAP(cap);
> + g_assert(tmp == VTD_HOST_AW_48BIT - 1);
> +
> + ecap = vtd_reg_readq(s, DMAR_ECAP_REG);
> + g_assert((ecap & ECAP_MODERN_FIXED1) == ECAP_MODERN_FIXED1);
> + g_assert(ecap & VTD_ECAP_IR);
Can we add VTD_ECAP_IR to ECAP_MODERN_FIXED1?
> +
> + qtest_memread(s, Q35_HOST_BRIDGE_IOMMU_ADDR, init_csr, DMAR_REG_SIZE);
> +
> + qobject_unref(qtest_qmp(s, "{ 'execute': 'system_reset' }"));
> + qtest_qmp_eventwait(s, "RESET");
> +
> + qtest_memread(s, Q35_HOST_BRIDGE_IOMMU_ADDR, post_reset_csr, DMAR_REG_SIZE);
> + /* Ensure registers are consistent after hard reset */
> + g_assert(!memcmp(init_csr, post_reset_csr, DMAR_REG_SIZE));
> +
> + qtest_quit(s);
> +}
> +
> +int main(int argc, char **argv)
> +{
> + g_test_init(&argc, &argv, NULL);
> + qtest_add_func("/q35/intel-iommu/modern", test_intel_iommu_modern);
> +
> + return g_test_run();
> +}
> diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
> index 6508bfb1a2..20d05d471b 100644
> --- a/tests/qtest/meson.build
> +++ b/tests/qtest/meson.build
> @@ -79,6 +79,7 @@ qtests_i386 = \
> (config_all_devices.has_key('CONFIG_SB16') ? ['fuzz-sb16-test'] : []) + \
> (config_all_devices.has_key('CONFIG_SDHCI_PCI') ? ['fuzz-sdcard-test'] : []) + \
> (config_all_devices.has_key('CONFIG_ESP_PCI') ? ['am53c974-test'] : []) + \
> + (config_all_devices.has_key('CONFIG_VTD') ? ['intel-iommu-test'] : []) + \
> (host_os != 'windows' and \
> config_all_devices.has_key('CONFIG_ACPI_ERST') ? ['erst-test'] : []) + \
> (config_all_devices.has_key('CONFIG_PCIE_PORT') and \
> --
> 2.34.1
>
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
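
A minimal sketch of the suggestion above about folding VTD_ECAP_IR into
ECAP_MODERN_FIXED1, assuming interrupt remapping is always advertised with
x-scalable-mode=modern (which is what the separate assertion already relies
on); this is only the reviewer's idea spelled out, not the final test code.

    /* fold IR into the fixed ecap mask so the extra
     * g_assert(ecap & VTD_ECAP_IR) check becomes unnecessary */
    #define ECAP_MODERN_FIXED1 (VTD_ECAP_QI | VTD_ECAP_IRO | VTD_ECAP_MHMV | \
                                VTD_ECAP_IR | VTD_ECAP_SMTS | VTD_ECAP_FLTS)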
^ permalink raw reply [flat|nested] 50+ messages in thread
* Re: [PATCH v1 14/17] intel_iommu: piotlb invalidation should notify unmap
2024-07-24 5:45 ` CLEMENT MATHIEU--DRIF
@ 2024-07-24 6:04 ` CLEMENT MATHIEU--DRIF
2024-07-24 6:07 ` Duan, Zhenzhong
0 siblings, 1 reply; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-24 6:04 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Yi Sun, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 24/07/2024 07:45, CLEMENT MATHIEU--DRIF wrote:
> Maybe I'm missing something but why do we invalidate device IOTLB
> upon receipt of a regular piotlb inv desc?
> I don't get why we don't wait for a device IOTLB inv desc?
I thought you were planning to remove that after the last rfc version
>
> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>
>>
>> This is used by some emulated devices which cache address
>> translation results. When a piotlb invalidation is issued in the guest,
>> those caches should be refreshed.
>>
>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> ---
>> hw/i386/intel_iommu.c | 35 ++++++++++++++++++++++++++++++++++-
>> 1 file changed, 34 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 8b66d6cfa5..c0116497b1 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -2910,7 +2910,7 @@ static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>> continue;
>> }
>>
>> - if (!s->scalable_modern) {
>> + if (!s->scalable_modern || !vtd_as_has_map_notifier(vtd_as)) {
>> vtd_address_space_sync(vtd_as);
>> }
>> }
>> @@ -2922,6 +2922,9 @@ static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
>> bool ih)
>> {
>> VTDIOTLBPageInvInfo info;
>> + VTDAddressSpace *vtd_as;
>> + VTDContextEntry ce;
>> + hwaddr size = (1 << am) * VTD_PAGE_SIZE;
>>
>> info.domain_id = domain_id;
>> info.pasid = pasid;
>> @@ -2932,6 +2935,36 @@ static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
>> g_hash_table_foreach_remove(s->iotlb,
>> vtd_hash_remove_by_page_piotlb, &info);
>> vtd_iommu_unlock(s);
>> +
>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>> + vtd_as->devfn, &ce) &&
>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>> + IOMMUTLBEvent event;
>> +
>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>> + vtd_as->pasid != pasid) {
>> + continue;
>> + }
>> +
>> + /*
>> + * Page-Selective-within-PASID PASID-based-IOTLB Invalidation
>> + * does not flush stage-2 entries. See spec section 6.5.2.4
>> + */
>> + if (!s->scalable_modern) {
>> + continue;
>> + }
>> +
>> + event.type = IOMMU_NOTIFIER_UNMAP;
>> + event.entry.target_as = &address_space_memory;
>> + event.entry.iova = addr;
>> + event.entry.perm = IOMMU_NONE;
>> + event.entry.addr_mask = size - 1;
>> + event.entry.translated_addr = 0;
>> + memory_region_notify_iommu(&vtd_as->iommu, 0, event);
>> + }
>> + }
>> }
>>
>> static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 50+ messages in thread
* RE: [PATCH v1 14/17] intel_iommu: piotlb invalidation should notify unmap
2024-07-24 6:04 ` CLEMENT MATHIEU--DRIF
@ 2024-07-24 6:07 ` Duan, Zhenzhong
2024-07-24 6:11 ` CLEMENT MATHIEU--DRIF
0 siblings, 1 reply; 50+ messages in thread
From: Duan, Zhenzhong @ 2024-07-24 6:07 UTC (permalink / raw)
To: CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Yi Sun, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
>-----Original Message-----
>From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>Subject: Re: [PATCH v1 14/17] intel_iommu: piotlb invalidation should
>notify unmap
>
>
>
>On 24/07/2024 07:45, CLEMENT MATHIEU--DRIF wrote:
>> Maybe I'm missing something but why do we invalidate device IOTLB
>> upon receipt of a regular piotlb inv desc?
>> I don't get why we don't wait for a device IOTLB inv desc?
>I thought you were planning to remove that after the last rfc version
Look at vtd_iotlb_page_invalidate(); it does the same thing.
The reason is that even if we don't enable device IOTLB, devices such as vhost may still cache IOTLB entries. So we need to flush those stale IOTLB entries in this case.
Thanks
Zhenzhong
>>
>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>
>>>
>>> This is used by some emulated devices which cache address
>>> translation results. When a piotlb invalidation is issued in the guest,
>>> those caches should be refreshed.
>>>
>>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> ---
>>> hw/i386/intel_iommu.c | 35
>++++++++++++++++++++++++++++++++++-
>>> 1 file changed, 34 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 8b66d6cfa5..c0116497b1 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -2910,7 +2910,7 @@ static void
>vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>>> continue;
>>> }
>>>
>>> - if (!s->scalable_modern) {
>>> + if (!s->scalable_modern || !vtd_as_has_map_notifier(vtd_as)) {
>>> vtd_address_space_sync(vtd_as);
>>> }
>>> }
>>> @@ -2922,6 +2922,9 @@ static void
>vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
>>> bool ih)
>>> {
>>> VTDIOTLBPageInvInfo info;
>>> + VTDAddressSpace *vtd_as;
>>> + VTDContextEntry ce;
>>> + hwaddr size = (1 << am) * VTD_PAGE_SIZE;
>>>
>>> info.domain_id = domain_id;
>>> info.pasid = pasid;
>>> @@ -2932,6 +2935,36 @@ static void
>vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
>>> g_hash_table_foreach_remove(s->iotlb,
>>> vtd_hash_remove_by_page_piotlb, &info);
>>> vtd_iommu_unlock(s);
>>> +
>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>>> + vtd_as->devfn, &ce) &&
>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>>> + IOMMUTLBEvent event;
>>> +
>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>>> + vtd_as->pasid != pasid) {
>>> + continue;
>>> + }
>>> +
>>> + /*
>>> + * Page-Selective-within-PASID PASID-based-IOTLB Invalidation
>>> + * does not flush stage-2 entries. See spec section 6.5.2.4
>>> + */
>>> + if (!s->scalable_modern) {
>>> + continue;
>>> + }
>>> +
>>> + event.type = IOMMU_NOTIFIER_UNMAP;
>>> + event.entry.target_as = &address_space_memory;
>>> + event.entry.iova = addr;
>>> + event.entry.perm = IOMMU_NONE;
>>> + event.entry.addr_mask = size - 1;
>>> + event.entry.translated_addr = 0;
>>> + memory_region_notify_iommu(&vtd_as->iommu, 0, event);
>>> + }
>>> + }
>>> }
>>>
>>> static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>>> --
>>> 2.34.1
>>>
* Re: [PATCH v1 14/17] intel_iommu: piotlb invalidation should notify unmap
2024-07-24 6:07 ` Duan, Zhenzhong
@ 2024-07-24 6:11 ` CLEMENT MATHIEU--DRIF
0 siblings, 0 replies; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-24 6:11 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Yi Sun, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
On 24/07/2024 08:07, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>> Subject: Re: [PATCH v1 14/17] intel_iommu: piotlb invalidation should
>> notify unmap
>>
>>
>>
>> On 24/07/2024 07:45, CLEMENT MATHIEU--DRIF wrote:
>>> Maybe I'm missing something, but why do we invalidate the device IOTLB
>>> upon receipt of a regular piotlb inv desc?
>>> I don't get why we don't wait for a device IOTLB inv desc?
>> I thought you were planning to remove that after the last rfc version
> Look at vtd_iotlb_page_invalidate(), it has the same operation.
> The reason is that even if we don't enable device IOTLB, devices such as vhost may still cache IOTLB entries, so we need to flush those stale IOTLB entries in this case.
>
> Thanks
> Zhenzhong
Ok, fine.
Reviewed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>
>>> On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>>>
>>>> This is used by some emulated devices which cache address
>>>> translation results. When a piotlb invalidation is issued in the guest,
>>>> those caches should be refreshed.
>>>>
>>>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>>> ---
>>>> hw/i386/intel_iommu.c | 35
>> ++++++++++++++++++++++++++++++++++-
>>>> 1 file changed, 34 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>>> index 8b66d6cfa5..c0116497b1 100644
>>>> --- a/hw/i386/intel_iommu.c
>>>> +++ b/hw/i386/intel_iommu.c
>>>> @@ -2910,7 +2910,7 @@ static void
>> vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>>>> continue;
>>>> }
>>>>
>>>> - if (!s->scalable_modern) {
>>>> + if (!s->scalable_modern || !vtd_as_has_map_notifier(vtd_as)) {
>>>> vtd_address_space_sync(vtd_as);
>>>> }
>>>> }
>>>> @@ -2922,6 +2922,9 @@ static void
>> vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
>>>> bool ih)
>>>> {
>>>> VTDIOTLBPageInvInfo info;
>>>> + VTDAddressSpace *vtd_as;
>>>> + VTDContextEntry ce;
>>>> + hwaddr size = (1 << am) * VTD_PAGE_SIZE;
>>>>
>>>> info.domain_id = domain_id;
>>>> info.pasid = pasid;
>>>> @@ -2932,6 +2935,36 @@ static void
>> vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
>>>> g_hash_table_foreach_remove(s->iotlb,
>>>> vtd_hash_remove_by_page_piotlb, &info);
>>>> vtd_iommu_unlock(s);
>>>> +
>>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>>>> + vtd_as->devfn, &ce) &&
>>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>>>> + IOMMUTLBEvent event;
>>>> +
>>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>>>> + vtd_as->pasid != pasid) {
>>>> + continue;
>>>> + }
>>>> +
>>>> + /*
>>>> + * Page-Selective-within-PASID PASID-based-IOTLB Invalidation
>>>> + * does not flush stage-2 entries. See spec section 6.5.2.4
>>>> + */
>>>> + if (!s->scalable_modern) {
>>>> + continue;
>>>> + }
>>>> +
>>>> + event.type = IOMMU_NOTIFIER_UNMAP;
>>>> + event.entry.target_as = &address_space_memory;
>>>> + event.entry.iova = addr;
>>>> + event.entry.perm = IOMMU_NONE;
>>>> + event.entry.addr_mask = size - 1;
>>>> + event.entry.translated_addr = 0;
>>>> + memory_region_notify_iommu(&vtd_as->iommu, 0, event);
>>>> + }
>>>> + }
>>>> }
>>>>
>>>> static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>>>> --
>>>> 2.34.1
>>>>
* RE: [PATCH v1 17/17] tests/qtest: Add intel-iommu test
2024-07-24 5:58 ` CLEMENT MATHIEU--DRIF
@ 2024-07-24 6:14 ` Duan, Zhenzhong
0 siblings, 0 replies; 50+ messages in thread
From: Duan, Zhenzhong @ 2024-07-24 6:14 UTC (permalink / raw)
To: CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Marcel Apfelbaum,
Thomas Huth, Laurent Vivier, Paolo Bonzini
>-----Original Message-----
>From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>Subject: Re: [PATCH v1 17/17] tests/qtest: Add intel-iommu test
>
>
>
>On 18/07/2024 10:16, Zhenzhong Duan wrote:
>>
>>
>> Add the framework to test the intel-iommu device.
>>
>> Currently this only tests cap/ecap bit correctness in scalable
>> modern mode. It also tests cap/ecap bit consistency before
>> and after system reset.
>>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> ---
>> MAINTAINERS | 1 +
>> include/hw/i386/intel_iommu.h | 1 +
>> tests/qtest/intel-iommu-test.c | 71
>++++++++++++++++++++++++++++++++++
>> tests/qtest/meson.build | 1 +
>> 4 files changed, 74 insertions(+)
>> create mode 100644 tests/qtest/intel-iommu-test.c
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index 7d9811458c..ec765bf3d3 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -3662,6 +3662,7 @@ S: Supported
>> F: hw/i386/intel_iommu.c
>> F: hw/i386/intel_iommu_internal.h
>> F: include/hw/i386/intel_iommu.h
>> +F: tests/qtest/intel-iommu-test.c
>>
>> AMD-Vi Emulation
>> S: Orphan
>> diff --git a/include/hw/i386/intel_iommu.h
>b/include/hw/i386/intel_iommu.h
>> index 650641544c..b1848dbec6 100644
>> --- a/include/hw/i386/intel_iommu.h
>> +++ b/include/hw/i386/intel_iommu.h
>> @@ -47,6 +47,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState,
>INTEL_IOMMU_DEVICE)
>> #define VTD_HOST_AW_48BIT 48
>> #define VTD_HOST_AW_AUTO 0xff
>> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
>> +#define VTD_MGAW_FROM_CAP(cap) ((cap >> 16) & 0x3fULL)
>>
>> #define DMAR_REPORT_F_INTR (1)
>>
>> diff --git a/tests/qtest/intel-iommu-test.c b/tests/qtest/intel-iommu-test.c
>> new file mode 100644
>> index 0000000000..8e07034f6f
>> --- /dev/null
>> +++ b/tests/qtest/intel-iommu-test.c
>> @@ -0,0 +1,71 @@
>> +/*
>> + * QTest testcase for intel-iommu
>> + *
>> + * Copyright (c) 2024 Intel, Inc.
>> + *
>> + * Author: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> + *
>> + * This work is licensed under the terms of the GNU GPL, version 2 or
>later.
>> + * See the COPYING file in the top-level directory.
>> + */
>> +
>> +#include "qemu/osdep.h"
>> +#include "libqtest.h"
>> +#include "hw/i386/intel_iommu_internal.h"
>> +
>> +#define CAP_MODERN_FIXED1 (VTD_CAP_FRO | VTD_CAP_NFR |
>VTD_CAP_ND | \
>> + VTD_CAP_MAMV | VTD_CAP_PSI | VTD_CAP_SLLPS)
>> +#define ECAP_MODERN_FIXED1 (VTD_ECAP_QI | VTD_ECAP_IRO |
>VTD_ECAP_MHMV | \
>> + VTD_ECAP_SMTS | VTD_ECAP_FLTS)
>> +
>> +static inline uint32_t vtd_reg_readl(QTestState *s, uint64_t offset)
>> +{
>> + return qtest_readl(s, Q35_HOST_BRIDGE_IOMMU_ADDR + offset);
>> +}
>> +
>> +static inline uint64_t vtd_reg_readq(QTestState *s, uint64_t offset)
>> +{
>> + return qtest_readq(s, Q35_HOST_BRIDGE_IOMMU_ADDR + offset);
>> +}
>> +
>> +static void test_intel_iommu_modern(void)
>> +{
>> + uint8_t init_csr[DMAR_REG_SIZE]; /* register values */
>> + uint8_t post_reset_csr[DMAR_REG_SIZE]; /* register values */
>> + uint64_t cap, ecap, tmp;
>> + QTestState *s;
>> +
>> + s = qtest_init("-M q35 -device intel-iommu,x-scalable-mode=modern");
>> +
>> + cap = vtd_reg_readq(s, DMAR_CAP_REG);
>> + g_assert((cap & CAP_MODERN_FIXED1) == CAP_MODERN_FIXED1);
>> +
>> + tmp = cap & VTD_CAP_SAGAW_MASK;
>> + g_assert(tmp == (VTD_CAP_SAGAW_39bit | VTD_CAP_SAGAW_48bit));
>> +
>> + tmp = VTD_MGAW_FROM_CAP(cap);
>> + g_assert(tmp == VTD_HOST_AW_48BIT - 1);
>> +
>> + ecap = vtd_reg_readq(s, DMAR_ECAP_REG);
>> + g_assert((ecap & ECAP_MODERN_FIXED1) == ECAP_MODERN_FIXED1);
>> + g_assert(ecap & VTD_ECAP_IR);
>Can we add VTD_ECAP_IR to ECAP_MODERN_FIXED1?
Will do.
Thanks
Zhenzhong
>> +
>> + qtest_memread(s, Q35_HOST_BRIDGE_IOMMU_ADDR, init_csr,
>DMAR_REG_SIZE);
>> +
>> + qobject_unref(qtest_qmp(s, "{ 'execute': 'system_reset' }"));
>> + qtest_qmp_eventwait(s, "RESET");
>> +
>> + qtest_memread(s, Q35_HOST_BRIDGE_IOMMU_ADDR, post_reset_csr,
>DMAR_REG_SIZE);
>> + /* Ensure registers are consistent after hard reset */
>> + g_assert(!memcmp(init_csr, post_reset_csr, DMAR_REG_SIZE));
>> +
>> + qtest_quit(s);
>> +}
>> +
>> +int main(int argc, char **argv)
>> +{
>> + g_test_init(&argc, &argv, NULL);
>> + qtest_add_func("/q35/intel-iommu/modern",
>test_intel_iommu_modern);
>> +
>> + return g_test_run();
>> +}
>> diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
>> index 6508bfb1a2..20d05d471b 100644
>> --- a/tests/qtest/meson.build
>> +++ b/tests/qtest/meson.build
>> @@ -79,6 +79,7 @@ qtests_i386 = \
>> (config_all_devices.has_key('CONFIG_SB16') ? ['fuzz-sb16-test'] : []) +
>\
>> (config_all_devices.has_key('CONFIG_SDHCI_PCI') ? ['fuzz-sdcard-test'] :
>[]) + \
>> (config_all_devices.has_key('CONFIG_ESP_PCI') ? ['am53c974-test'] : [])
>+ \
>> + (config_all_devices.has_key('CONFIG_VTD') ? ['intel-iommu-test'] : []) +
>\
>> (host_os != 'windows' and \
>> config_all_devices.has_key('CONFIG_ACPI_ERST') ? ['erst-test'] : []) +
>\
>> (config_all_devices.has_key('CONFIG_PCIE_PORT') and
>\
>> --
>> 2.34.1
>>
>
>Reviewed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
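On the MGAW assertion in the quoted test, a tiny standalone restatement of the arithmetic behind the expected VTD_HOST_AW_48BIT - 1 value (the helper name below is ours, mirroring the quoted VTD_MGAW_FROM_CAP() macro):

#include <assert.h>
#include <stdint.h>

/* CAP_REG bits 21:16 report MGAW as (maximum guest address width - 1). */
static unsigned mgaw_from_cap(uint64_t cap)
{
    return (cap >> 16) & 0x3f;
}

int main(void)
{
    uint64_t cap = 47ULL << 16;            /* only the MGAW bits set */
    assert(mgaw_from_cap(cap) + 1 == 48);  /* aw_bits 48 reads back as 47 */
    return 0;
}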
* Re: [PATCH v1 11/17] intel_iommu: Extract device IOTLB invalidation logic
2024-07-18 8:16 ` [PATCH v1 11/17] intel_iommu: Extract device IOTLB invalidation logic Zhenzhong Duan
@ 2024-07-24 8:35 ` CLEMENT MATHIEU--DRIF
2024-07-24 8:42 ` Duan, Zhenzhong
0 siblings, 1 reply; 50+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-07-24 8:35 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Philippe Mathieu-Daudé, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost
Hi Zhenzhong,
This patch was merged into staging this morning, so be careful when
re-sending your series.
Here is the link:
https://github.com/qemu/qemu/commit/6410f877f5ed535acd01bbfaa4baec379e44d0ef#diff-c19adbf518f644e9b651b67266802e14787292ab9d6cd4210b4f974585be6009
On 18/07/2024 10:16, Zhenzhong Duan wrote:
>
>
> From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>
> This piece of code can be shared by both IOTLB invalidation and
> PASID-based IOTLB invalidation.
>
> No functional changes intended.
>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/i386/intel_iommu.c | 57 +++++++++++++++++++++++++------------------
> 1 file changed, 33 insertions(+), 24 deletions(-)
>
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 8ebb6dbd7d..4d5a457f92 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -2975,13 +2975,43 @@ static bool vtd_process_inv_iec_desc(IntelIOMMUState *s,
> return true;
> }
>
> +static void do_invalidate_device_tlb(VTDAddressSpace *vtd_dev_as,
> + bool size, hwaddr addr)
> +{
> + /*
> + * According to ATS spec table 2.4:
> + * S = 0, bits 15:12 = xxxx range size: 4K
> + * S = 1, bits 15:12 = xxx0 range size: 8K
> + * S = 1, bits 15:12 = xx01 range size: 16K
> + * S = 1, bits 15:12 = x011 range size: 32K
> + * S = 1, bits 15:12 = 0111 range size: 64K
> + * ...
> + */
> +
> + IOMMUTLBEvent event;
> + uint64_t sz;
> +
> + if (size) {
> + sz = (VTD_PAGE_SIZE * 2) << cto64(addr >> VTD_PAGE_SHIFT);
> + addr &= ~(sz - 1);
> + } else {
> + sz = VTD_PAGE_SIZE;
> + }
> +
> + event.type = IOMMU_NOTIFIER_DEVIOTLB_UNMAP;
> + event.entry.target_as = &vtd_dev_as->as;
> + event.entry.addr_mask = sz - 1;
> + event.entry.iova = addr;
> + event.entry.perm = IOMMU_NONE;
> + event.entry.translated_addr = 0;
> + memory_region_notify_iommu(&vtd_dev_as->iommu, 0, event);
> +}
> +
> static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
> VTDInvDesc *inv_desc)
> {
> VTDAddressSpace *vtd_dev_as;
> - IOMMUTLBEvent event;
> hwaddr addr;
> - uint64_t sz;
> uint16_t sid;
> bool size;
>
> @@ -3006,28 +3036,7 @@ static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
> goto done;
> }
>
> - /* According to ATS spec table 2.4:
> - * S = 0, bits 15:12 = xxxx range size: 4K
> - * S = 1, bits 15:12 = xxx0 range size: 8K
> - * S = 1, bits 15:12 = xx01 range size: 16K
> - * S = 1, bits 15:12 = x011 range size: 32K
> - * S = 1, bits 15:12 = 0111 range size: 64K
> - * ...
> - */
> - if (size) {
> - sz = (VTD_PAGE_SIZE * 2) << cto64(addr >> VTD_PAGE_SHIFT);
> - addr &= ~(sz - 1);
> - } else {
> - sz = VTD_PAGE_SIZE;
> - }
> -
> - event.type = IOMMU_NOTIFIER_DEVIOTLB_UNMAP;
> - event.entry.target_as = &vtd_dev_as->as;
> - event.entry.addr_mask = sz - 1;
> - event.entry.iova = addr;
> - event.entry.perm = IOMMU_NONE;
> - event.entry.translated_addr = 0;
> - memory_region_notify_iommu(&vtd_dev_as->iommu, 0, event);
> + do_invalidate_device_tlb(vtd_dev_as, size, addr);
>
> done:
> return true;
> --
> 2.34.1
>
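To make the S-bit size encoding quoted in the comment above concrete, a small standalone sketch of the computation (plain C, with a local cto64() re-implemented for illustration; it mirrors QEMU's count-trailing-ones helper used in the hunk):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

/* Count trailing one bits. */
static unsigned cto64(uint64_t v)
{
    unsigned n = 0;
    while (v & 1) {
        n++;
        v >>= 1;
    }
    return n;
}

int main(void)
{
    /* S = 1 and address bits 15:12 = 0111b: three trailing ones above
     * the page offset, so the range is (2 * 4K) << 3 = 64K, and the
     * base is the address rounded down to that size. */
    uint64_t addr = 0x7000;
    uint64_t sz = (PAGE_SIZE * 2) << cto64(addr >> 12);
    printf("size=%lluK base=0x%llx\n",
           (unsigned long long)(sz / 1024),
           (unsigned long long)(addr & ~(sz - 1)));
    return 0;
}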
* RE: [PATCH v1 11/17] intel_iommu: Extract device IOTLB invalidation logic
2024-07-24 8:35 ` CLEMENT MATHIEU--DRIF
@ 2024-07-24 8:42 ` Duan, Zhenzhong
0 siblings, 0 replies; 50+ messages in thread
From: Duan, Zhenzhong @ 2024-07-24 8:42 UTC (permalink / raw)
To: CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Liu, Yi L, Peng, Chao P, Philippe Mathieu-Daudé,
Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
Sure, thanks for reminding.
BRs.
Zhenzhong
>-----Original Message-----
>From: CLEMENT MATHIEU--DRIF <clement.mathieu--drif@eviden.com>
>Subject: Re: [PATCH v1 11/17] intel_iommu: Extract device IOTLB
>invalidation logic
>
>Hi Zhenzhong,
>
>This patch was merged into staging this morning, so be careful when
>re-sending your series.
>Here is the link:
>https://github.com/qemu/qemu/commit/6410f877f5ed535acd01bbfaa4baec379e44d0ef#diff-c19adbf518f644e9b651b67266802e14787292ab9d6cd4210b4f974585be6009
Sure, thanks for reminding😊
BRs.
Zhenzhong
* Re: [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec
2024-07-18 8:16 ` [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec Zhenzhong Duan
2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
@ 2024-07-29 7:39 ` Yi Liu
2024-07-29 8:42 ` Michael S. Tsirkin
1 sibling, 1 reply; 50+ messages in thread
From: Yi Liu @ 2024-07-29 7:39 UTC (permalink / raw)
To: jasowang, mst
Cc: alex.williamson, clg, eric.auger, peterx, jgg, nicolinc,
joao.m.martins, clement.mathieu--drif, kevin.tian, chao.p.peng,
Yu Zhang, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum, Zhenzhong Duan, qemu-devel
On 2024/7/18 16:16, Zhenzhong Duan wrote:
> From: Yu Zhang <yu.c.zhang@linux.intel.com>
>
> Spec revision 3.0 or above defines more detailed fault reasons for
> scalable mode, so introduce them into the emulation code; see spec
> section 7.1.2 for details.
>
> Note that the spec revision has no relation to the VERSION register; the guest
> kernel should not use that register to judge which features are
> supported. Instead, cap/ecap bits should be checked.
>
> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/i386/intel_iommu_internal.h | 9 ++++++++-
> hw/i386/intel_iommu.c | 25 ++++++++++++++++---------
> 2 files changed, 24 insertions(+), 10 deletions(-)
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index f8cf99bddf..c0ca7b372f 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -311,7 +311,14 @@ typedef enum VTDFaultReason {
> * request while disabled */
> VTD_FR_IR_SID_ERR = 0x26, /* Invalid Source-ID */
>
> - VTD_FR_PASID_TABLE_INV = 0x58, /*Invalid PASID table entry */
> + /* PASID directory entry access failure */
> + VTD_FR_PASID_DIR_ACCESS_ERR = 0x50,
> + /* The Present(P) field of pasid directory entry is 0 */
> + VTD_FR_PASID_DIR_ENTRY_P = 0x51,
> + VTD_FR_PASID_TABLE_ACCESS_ERR = 0x58, /* PASID table entry access failure */
> + /* The Present(P) field of pasid table entry is 0 */
> + VTD_FR_PASID_ENTRY_P = 0x59,
> + VTD_FR_PASID_TABLE_ENTRY_INV = 0x5b, /*Invalid PASID table entry */
>
> /* Output address in the interrupt address range for scalable mode */
> VTD_FR_SM_INTERRUPT_ADDR = 0x87,
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 37c21a0aec..e65f5b29a5 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -796,7 +796,7 @@ static int vtd_get_pdire_from_pdir_table(dma_addr_t pasid_dir_base,
> addr = pasid_dir_base + index * entry_size;
> if (dma_memory_read(&address_space_memory, addr,
> pdire, entry_size, MEMTXATTRS_UNSPECIFIED)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_DIR_ACCESS_ERR;
> }
>
> pdire->val = le64_to_cpu(pdire->val);
> @@ -814,6 +814,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> dma_addr_t addr,
> VTDPASIDEntry *pe)
> {
> + uint8_t pgtt;
> uint32_t index;
> dma_addr_t entry_size;
> X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
> @@ -823,7 +824,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> addr = addr + index * entry_size;
> if (dma_memory_read(&address_space_memory, addr,
> pe, entry_size, MEMTXATTRS_UNSPECIFIED)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_TABLE_ACCESS_ERR;
> }
> for (size_t i = 0; i < ARRAY_SIZE(pe->val); i++) {
> pe->val[i] = le64_to_cpu(pe->val[i]);
> @@ -831,11 +832,13 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>
> /* Do translation type check */
> if (!vtd_pe_type_check(x86_iommu, pe)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_TABLE_ENTRY_INV;
> }
>
> - if (!vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
> - return -VTD_FR_PASID_TABLE_INV;
> + pgtt = VTD_PE_GET_TYPE(pe);
> + if (pgtt == VTD_SM_PASID_ENTRY_SLT &&
> + !vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
> + return -VTD_FR_PASID_TABLE_ENTRY_INV;
> }
>
> return 0;
> @@ -876,7 +879,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
> }
>
> if (!vtd_pdire_present(&pdire)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_DIR_ENTRY_P;
> }
>
> ret = vtd_get_pe_from_pdire(s, pasid, &pdire, pe);
> @@ -885,7 +888,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
> }
>
> if (!vtd_pe_present(pe)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_ENTRY_P;
> }
>
> return 0;
> @@ -938,7 +941,7 @@ static int vtd_ce_get_pasid_fpd(IntelIOMMUState *s,
> }
>
> if (!vtd_pdire_present(&pdire)) {
> - return -VTD_FR_PASID_TABLE_INV;
> + return -VTD_FR_PASID_DIR_ENTRY_P;
> }
>
> /*
> @@ -1795,7 +1798,11 @@ static const bool vtd_qualified_faults[] = {
> [VTD_FR_ROOT_ENTRY_RSVD] = false,
> [VTD_FR_PAGING_ENTRY_RSVD] = true,
> [VTD_FR_CONTEXT_ENTRY_TT] = true,
> - [VTD_FR_PASID_TABLE_INV] = false,
> + [VTD_FR_PASID_DIR_ACCESS_ERR] = false,
> + [VTD_FR_PASID_DIR_ENTRY_P] = true,
> + [VTD_FR_PASID_TABLE_ACCESS_ERR] = false,
> + [VTD_FR_PASID_ENTRY_P] = true,
> + [VTD_FR_PASID_TABLE_ENTRY_INV] = true,
> [VTD_FR_SM_INTERRUPT_ADDR] = true,
> [VTD_FR_MAX] = false,
> };
@Jason, @Michael,
Do you know the rule for setting this table? I noticed it was introduced
on day one[1], and I didn't see any historical discussion of it on lore, so I'm not
quite sure about its purpose. From its usage, it acts
as a filter when the iommu driver has set the FPD bit: if FPD is set, some
errors need not trigger a trace, which is mostly for debug purposes.
I noticed Peter asked about it as well[2], but I don't think it was clarified
clearly. May we have a clarification for it here? BTW, I didn't see that the VT-d
spec has any definition of it, so it should just be a software trick. :)
[1]
https://lore.kernel.org/qemu-devel/1408168544-28605-3-git-send-email-tamlokveer@gmail.com/
[2] https://lore.kernel.org/qemu-devel/20190301065219.GA22229@xz-x1/
--
Regards,
Yi Liu
* Re: [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec
2024-07-29 7:39 ` Yi Liu
@ 2024-07-29 8:42 ` Michael S. Tsirkin
2024-07-29 9:39 ` Yi Liu
0 siblings, 1 reply; 50+ messages in thread
From: Michael S. Tsirkin @ 2024-07-29 8:42 UTC (permalink / raw)
To: Yi Liu
Cc: jasowang, alex.williamson, clg, eric.auger, peterx, jgg, nicolinc,
joao.m.martins, clement.mathieu--drif, kevin.tian, chao.p.peng,
Yu Zhang, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum, Zhenzhong Duan, qemu-devel
On Mon, Jul 29, 2024 at 03:39:03PM +0800, Yi Liu wrote:
> On 2024/7/18 16:16, Zhenzhong Duan wrote:
> > From: Yu Zhang <yu.c.zhang@linux.intel.com>
> >
> > Spec revision 3.0 or above defines more detailed fault reasons for
> > scalable mode, so introduce them into the emulation code; see spec
> > section 7.1.2 for details.
> >
> > Note that the spec revision has no relation to the VERSION register; the guest
> > kernel should not use that register to judge which features are
> > supported. Instead, cap/ecap bits should be checked.
> >
> > Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> > Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> > ---
> > hw/i386/intel_iommu_internal.h | 9 ++++++++-
> > hw/i386/intel_iommu.c | 25 ++++++++++++++++---------
> > 2 files changed, 24 insertions(+), 10 deletions(-)
> >
> > diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> > index f8cf99bddf..c0ca7b372f 100644
> > --- a/hw/i386/intel_iommu_internal.h
> > +++ b/hw/i386/intel_iommu_internal.h
> > @@ -311,7 +311,14 @@ typedef enum VTDFaultReason {
> > * request while disabled */
> > VTD_FR_IR_SID_ERR = 0x26, /* Invalid Source-ID */
> > - VTD_FR_PASID_TABLE_INV = 0x58, /*Invalid PASID table entry */
> > + /* PASID directory entry access failure */
> > + VTD_FR_PASID_DIR_ACCESS_ERR = 0x50,
> > + /* The Present(P) field of pasid directory entry is 0 */
> > + VTD_FR_PASID_DIR_ENTRY_P = 0x51,
> > + VTD_FR_PASID_TABLE_ACCESS_ERR = 0x58, /* PASID table entry access failure */
> > + /* The Present(P) field of pasid table entry is 0 */
> > + VTD_FR_PASID_ENTRY_P = 0x59,
> > + VTD_FR_PASID_TABLE_ENTRY_INV = 0x5b, /*Invalid PASID table entry */
> > /* Output address in the interrupt address range for scalable mode */
> > VTD_FR_SM_INTERRUPT_ADDR = 0x87,
> > diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> > index 37c21a0aec..e65f5b29a5 100644
> > --- a/hw/i386/intel_iommu.c
> > +++ b/hw/i386/intel_iommu.c
> > @@ -796,7 +796,7 @@ static int vtd_get_pdire_from_pdir_table(dma_addr_t pasid_dir_base,
> > addr = pasid_dir_base + index * entry_size;
> > if (dma_memory_read(&address_space_memory, addr,
> > pdire, entry_size, MEMTXATTRS_UNSPECIFIED)) {
> > - return -VTD_FR_PASID_TABLE_INV;
> > + return -VTD_FR_PASID_DIR_ACCESS_ERR;
> > }
> > pdire->val = le64_to_cpu(pdire->val);
> > @@ -814,6 +814,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> > dma_addr_t addr,
> > VTDPASIDEntry *pe)
> > {
> > + uint8_t pgtt;
> > uint32_t index;
> > dma_addr_t entry_size;
> > X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
> > @@ -823,7 +824,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> > addr = addr + index * entry_size;
> > if (dma_memory_read(&address_space_memory, addr,
> > pe, entry_size, MEMTXATTRS_UNSPECIFIED)) {
> > - return -VTD_FR_PASID_TABLE_INV;
> > + return -VTD_FR_PASID_TABLE_ACCESS_ERR;
> > }
> > for (size_t i = 0; i < ARRAY_SIZE(pe->val); i++) {
> > pe->val[i] = le64_to_cpu(pe->val[i]);
> > @@ -831,11 +832,13 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> > /* Do translation type check */
> > if (!vtd_pe_type_check(x86_iommu, pe)) {
> > - return -VTD_FR_PASID_TABLE_INV;
> > + return -VTD_FR_PASID_TABLE_ENTRY_INV;
> > }
> > - if (!vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
> > - return -VTD_FR_PASID_TABLE_INV;
> > + pgtt = VTD_PE_GET_TYPE(pe);
> > + if (pgtt == VTD_SM_PASID_ENTRY_SLT &&
> > + !vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
> > + return -VTD_FR_PASID_TABLE_ENTRY_INV;
> > }
> > return 0;
> > @@ -876,7 +879,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
> > }
> > if (!vtd_pdire_present(&pdire)) {
> > - return -VTD_FR_PASID_TABLE_INV;
> > + return -VTD_FR_PASID_DIR_ENTRY_P;
> > }
> > ret = vtd_get_pe_from_pdire(s, pasid, &pdire, pe);
> > @@ -885,7 +888,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
> > }
> > if (!vtd_pe_present(pe)) {
> > - return -VTD_FR_PASID_TABLE_INV;
> > + return -VTD_FR_PASID_ENTRY_P;
> > }
> > return 0;
> > @@ -938,7 +941,7 @@ static int vtd_ce_get_pasid_fpd(IntelIOMMUState *s,
> > }
> > if (!vtd_pdire_present(&pdire)) {
> > - return -VTD_FR_PASID_TABLE_INV;
> > + return -VTD_FR_PASID_DIR_ENTRY_P;
> > }
> > /*
> > @@ -1795,7 +1798,11 @@ static const bool vtd_qualified_faults[] = {
> > [VTD_FR_ROOT_ENTRY_RSVD] = false,
> > [VTD_FR_PAGING_ENTRY_RSVD] = true,
> > [VTD_FR_CONTEXT_ENTRY_TT] = true,
> > - [VTD_FR_PASID_TABLE_INV] = false,
> > + [VTD_FR_PASID_DIR_ACCESS_ERR] = false,
> > + [VTD_FR_PASID_DIR_ENTRY_P] = true,
> > + [VTD_FR_PASID_TABLE_ACCESS_ERR] = false,
> > + [VTD_FR_PASID_ENTRY_P] = true,
> > + [VTD_FR_PASID_TABLE_ENTRY_INV] = true,
> > [VTD_FR_SM_INTERRUPT_ADDR] = true,
> > [VTD_FR_MAX] = false,
> > };
>
> @Jason, @Michael,
>
> Do you know the rule for setting this table? I noticed it was introduced
> on day one[1], and I didn't see any historical discussion of it on lore, so I'm not
> quite sure about its purpose. From its usage, it acts
> as a filter when the iommu driver has set the FPD bit: if FPD is set, some
> errors need not trigger a trace, which is mostly for debug purposes.
>
> I noticed Peter asked about it as well[2], but I don't think it was clarified
> clearly. May we have a clarification for it here? BTW, I didn't see that the VT-d
> spec has any definition of it, so it should just be a software trick. :)
>
> [1] https://lore.kernel.org/qemu-devel/1408168544-28605-3-git-send-email-tamlokveer@gmail.com/
> [2] https://lore.kernel.org/qemu-devel/20190301065219.GA22229@xz-x1/
>
> --
> Regards,
> Yi Liu
Are you asking for a definition of qualified fault conditions?
7.1.1 Non-Recoverable Address Translation Faults
Non-recoverable address translation faults can be detected by remapping hardware for many different
kinds of requests as shown by Table 26. A non-recoverable fault condition is considered “qualified” if
software can suppress reporting of the fault by setting one of the Fault Processing Disable (FPD) bits
available in one or more of the address translation structures (i.e., the context-entry, scalable-mode
context-entry, scalable-mode PASID-directory entry, scalable-mode PASID-table entry). For a request
that encounters a “qualified” non-recoverable fault condition, if the remapping hardware encountered
any translation structure entry with an FPD field value of 1, the remapping hardware must not report
the fault to software. For example, when processing a request that encounters an FPD field with a value
of 1 in the scalable-mode context-entry and encounters any “qualified” fault such as SCT.*, SPD.*, SPT.*,
SFS.*, SSS.*, or SGN.*, the remapping hardware will not report the fault to software. Memory requests
that result in non-recoverable address translation faults are blocked by hardware
--
MST
* Re: [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec
2024-07-29 8:42 ` Michael S. Tsirkin
@ 2024-07-29 9:39 ` Yi Liu
0 siblings, 0 replies; 50+ messages in thread
From: Yi Liu @ 2024-07-29 9:39 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: jasowang, alex.williamson, clg, eric.auger, peterx, jgg, nicolinc,
joao.m.martins, clement.mathieu--drif, kevin.tian, chao.p.peng,
Yu Zhang, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum, Zhenzhong Duan, qemu-devel
On 2024/7/29 16:42, Michael S. Tsirkin wrote:
> On Mon, Jul 29, 2024 at 03:39:03PM +0800, Yi Liu wrote:
>> On 2024/7/18 16:16, Zhenzhong Duan wrote:
>>> From: Yu Zhang <yu.c.zhang@linux.intel.com>
>>>
>>> Spec revision 3.0 or above defines more detailed fault reasons for
>>> scalable mode, so introduce them into the emulation code; see spec
>>> section 7.1.2 for details.
>>>
>>> Note that the spec revision has no relation to the VERSION register; the guest
>>> kernel should not use that register to judge which features are
>>> supported. Instead, cap/ecap bits should be checked.
>>>
>>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> ---
>>> hw/i386/intel_iommu_internal.h | 9 ++++++++-
>>> hw/i386/intel_iommu.c | 25 ++++++++++++++++---------
>>> 2 files changed, 24 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
>>> index f8cf99bddf..c0ca7b372f 100644
>>> --- a/hw/i386/intel_iommu_internal.h
>>> +++ b/hw/i386/intel_iommu_internal.h
>>> @@ -311,7 +311,14 @@ typedef enum VTDFaultReason {
>>> * request while disabled */
>>> VTD_FR_IR_SID_ERR = 0x26, /* Invalid Source-ID */
>>> - VTD_FR_PASID_TABLE_INV = 0x58, /*Invalid PASID table entry */
>>> + /* PASID directory entry access failure */
>>> + VTD_FR_PASID_DIR_ACCESS_ERR = 0x50,
>>> + /* The Present(P) field of pasid directory entry is 0 */
>>> + VTD_FR_PASID_DIR_ENTRY_P = 0x51,
>>> + VTD_FR_PASID_TABLE_ACCESS_ERR = 0x58, /* PASID table entry access failure */
>>> + /* The Present(P) field of pasid table entry is 0 */
>>> + VTD_FR_PASID_ENTRY_P = 0x59,
>>> + VTD_FR_PASID_TABLE_ENTRY_INV = 0x5b, /*Invalid PASID table entry */
>>> /* Output address in the interrupt address range for scalable mode */
>>> VTD_FR_SM_INTERRUPT_ADDR = 0x87,
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 37c21a0aec..e65f5b29a5 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -796,7 +796,7 @@ static int vtd_get_pdire_from_pdir_table(dma_addr_t pasid_dir_base,
>>> addr = pasid_dir_base + index * entry_size;
>>> if (dma_memory_read(&address_space_memory, addr,
>>> pdire, entry_size, MEMTXATTRS_UNSPECIFIED)) {
>>> - return -VTD_FR_PASID_TABLE_INV;
>>> + return -VTD_FR_PASID_DIR_ACCESS_ERR;
>>> }
>>> pdire->val = le64_to_cpu(pdire->val);
>>> @@ -814,6 +814,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>> dma_addr_t addr,
>>> VTDPASIDEntry *pe)
>>> {
>>> + uint8_t pgtt;
>>> uint32_t index;
>>> dma_addr_t entry_size;
>>> X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>>> @@ -823,7 +824,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>> addr = addr + index * entry_size;
>>> if (dma_memory_read(&address_space_memory, addr,
>>> pe, entry_size, MEMTXATTRS_UNSPECIFIED)) {
>>> - return -VTD_FR_PASID_TABLE_INV;
>>> + return -VTD_FR_PASID_TABLE_ACCESS_ERR;
>>> }
>>> for (size_t i = 0; i < ARRAY_SIZE(pe->val); i++) {
>>> pe->val[i] = le64_to_cpu(pe->val[i]);
>>> @@ -831,11 +832,13 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>> /* Do translation type check */
>>> if (!vtd_pe_type_check(x86_iommu, pe)) {
>>> - return -VTD_FR_PASID_TABLE_INV;
>>> + return -VTD_FR_PASID_TABLE_ENTRY_INV;
>>> }
>>> - if (!vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
>>> - return -VTD_FR_PASID_TABLE_INV;
>>> + pgtt = VTD_PE_GET_TYPE(pe);
>>> + if (pgtt == VTD_SM_PASID_ENTRY_SLT &&
>>> + !vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
>>> + return -VTD_FR_PASID_TABLE_ENTRY_INV;
>>> }
>>> return 0;
>>> @@ -876,7 +879,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
>>> }
>>> if (!vtd_pdire_present(&pdire)) {
>>> - return -VTD_FR_PASID_TABLE_INV;
>>> + return -VTD_FR_PASID_DIR_ENTRY_P;
>>> }
>>> ret = vtd_get_pe_from_pdire(s, pasid, &pdire, pe);
>>> @@ -885,7 +888,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
>>> }
>>> if (!vtd_pe_present(pe)) {
>>> - return -VTD_FR_PASID_TABLE_INV;
>>> + return -VTD_FR_PASID_ENTRY_P;
>>> }
>>> return 0;
>>> @@ -938,7 +941,7 @@ static int vtd_ce_get_pasid_fpd(IntelIOMMUState *s,
>>> }
>>> if (!vtd_pdire_present(&pdire)) {
>>> - return -VTD_FR_PASID_TABLE_INV;
>>> + return -VTD_FR_PASID_DIR_ENTRY_P;
>>> }
>>> /*
>>> @@ -1795,7 +1798,11 @@ static const bool vtd_qualified_faults[] = {
>>> [VTD_FR_ROOT_ENTRY_RSVD] = false,
>>> [VTD_FR_PAGING_ENTRY_RSVD] = true,
>>> [VTD_FR_CONTEXT_ENTRY_TT] = true,
>>> - [VTD_FR_PASID_TABLE_INV] = false,
>>> + [VTD_FR_PASID_DIR_ACCESS_ERR] = false,
>>> + [VTD_FR_PASID_DIR_ENTRY_P] = true,
>>> + [VTD_FR_PASID_TABLE_ACCESS_ERR] = false,
>>> + [VTD_FR_PASID_ENTRY_P] = true,
>>> + [VTD_FR_PASID_TABLE_ENTRY_INV] = true,
>>> [VTD_FR_SM_INTERRUPT_ADDR] = true,
>>> [VTD_FR_MAX] = false,
>>> };
>>
>> @Jason, @Michael,
>>
>> Do you know the rule for setting this table? I noticed it was introduced
>> on day one[1], and I didn't see any historical discussion of it on lore, so I'm not
>> quite sure about its purpose. From its usage, it acts
>> as a filter when the iommu driver has set the FPD bit: if FPD is set, some
>> errors need not trigger a trace, which is mostly for debug purposes.
>>
>> I noticed Peter asked about it as well[2], but I don't think it was clarified
>> clearly. May we have a clarification for it here? BTW, I didn't see that the VT-d
>> spec has any definition of it, so it should just be a software trick. :)
>>
>> [1] https://lore.kernel.org/qemu-devel/1408168544-28605-3-git-send-email-tamlokveer@gmail.com/
>> [2] https://lore.kernel.org/qemu-devel/20190301065219.GA22229@xz-x1/
>>
>> --
>> Regards,
>> Yi Liu
>
> Are you asking for a definition of qualified fault conditions?
>
>
> 7.1.1 Non-Recoverable Address Translation Faults
> Non-recoverable address translation faults can be detected by remapping hardware for many different
> kinds of requests as shown by Table 26. A non-recoverable fault condition is considered “qualified” if
> software can suppress reporting of the fault by setting one of the Fault Processing Disable (FPD) bits
> available in one or more of the address translation structures (i.e., the context-entry, scalable-mode
> context-entry, scalable-mode PASID-directory entry, scalable-mode PASID-table entry). For a request
> that encounters a “qualified” non-recoverable fault condition, if the remapping hardware encountered
> any translation structure entry with an FPD field value of 1, the remapping hardware must not report
> the fault to software. For example, when processing a request that encounters an FPD field with a value
> of 1 in the scalable-mode context-entry and encounters any “qualified” fault such as SCT.*, SPD.*, SPT.*,
> SFS.*, SSS.*, or SGN.*, the remapping hardware will not report the fault to software. Memory requests
> that result in non-recoverable address translation faults are blocked by hardware
>
I see. Table 26 has the "qualified" column. It's clear to me now. Thanks. :)
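Restated in code, the gating that the vtd_qualified_faults[] table enables looks roughly like the sketch below (illustrative, not a quote of intel_iommu.c; the extern declaration stands in for the real definition, and the real code also bounds-checks against VTD_FR_MAX):

#include <stdbool.h>

/* Indexed by VTDFaultReason; true means the fault condition is
 * "qualified", i.e. reporting can be suppressed by the FPD bit
 * (VT-d spec 7.1.1). */
extern const bool vtd_qualified_faults[];

/* Report the fault unless an FPD bit was seen on one of the walked
 * translation structures and the condition is qualified. */
static bool vtd_should_report_fault(int fault, bool is_fpd_set)
{
    return !(is_fpd_set && vtd_qualified_faults[fault]);
}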
--
Regards,
Yi Liu
Thread overview: 50+ messages
2024-07-18 8:16 [PATCH v1 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 01/17] intel_iommu: Use the latest fault reasons defined by spec Zhenzhong Duan
2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
2024-07-29 7:39 ` Yi Liu
2024-07-29 8:42 ` Michael S. Tsirkin
2024-07-29 9:39 ` Yi Liu
2024-07-18 8:16 ` [PATCH v1 02/17] intel_iommu: Make pasid entry type check accurate Zhenzhong Duan
2024-07-18 9:06 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 03/17] intel_iommu: Add a placeholder variable for scalable modern mode Zhenzhong Duan
2024-07-18 9:02 ` CLEMENT MATHIEU--DRIF
2024-07-19 2:47 ` Duan, Zhenzhong
2024-07-19 3:22 ` Yi Liu
2024-07-19 3:37 ` Duan, Zhenzhong
2024-07-19 3:39 ` Duan, Zhenzhong
2024-07-19 4:26 ` CLEMENT MATHIEU--DRIF
2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
2024-07-23 8:50 ` Duan, Zhenzhong
2024-07-19 4:21 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation Zhenzhong Duan
2024-07-23 16:02 ` CLEMENT MATHIEU--DRIF
2024-07-24 2:59 ` Duan, Zhenzhong
2024-07-24 5:16 ` CLEMENT MATHIEU--DRIF
2024-07-24 5:19 ` Duan, Zhenzhong
2024-07-18 8:16 ` [PATCH v1 05/17] intel_iommu: Rename slpte to pte Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 06/17] intel_iommu: Implement stage-1 translation Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 07/17] intel_iommu: Check if the input address is canonical Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 08/17] intel_iommu: Set accessed and dirty bits during first stage translation Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation Zhenzhong Duan
2024-07-23 7:12 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 10/17] intel_iommu: Process PASID-based iotlb invalidation Zhenzhong Duan
2024-07-23 16:18 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 11/17] intel_iommu: Extract device IOTLB invalidation logic Zhenzhong Duan
2024-07-24 8:35 ` CLEMENT MATHIEU--DRIF
2024-07-24 8:42 ` Duan, Zhenzhong
2024-07-18 8:16 ` [PATCH v1 12/17] intel_iommu: Add an internal API to find an address space with PASID Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 13/17] intel_iommu: Add support for PASID-based device IOTLB invalidation Zhenzhong Duan
2024-07-18 8:16 ` [PATCH v1 14/17] intel_iommu: piotlb invalidation should notify unmap Zhenzhong Duan
2024-07-24 5:45 ` CLEMENT MATHIEU--DRIF
2024-07-24 6:04 ` CLEMENT MATHIEU--DRIF
2024-07-24 6:07 ` Duan, Zhenzhong
2024-07-24 6:11 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 15/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode Zhenzhong Duan
2024-07-18 9:14 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 16/17] intel_iommu: Modify x-scalable-mode to be string option Zhenzhong Duan
2024-07-18 9:25 ` CLEMENT MATHIEU--DRIF
2024-07-19 2:53 ` Duan, Zhenzhong
2024-07-19 4:23 ` CLEMENT MATHIEU--DRIF
2024-07-18 8:16 ` [PATCH v1 17/17] tests/qtest: Add intel-iommu test Zhenzhong Duan
2024-07-24 5:58 ` CLEMENT MATHIEU--DRIF
2024-07-24 6:14 ` Duan, Zhenzhong