* [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device
@ 2024-09-30 9:26 Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 01/17] intel_iommu: Use the latest fault reasons defined by spec Zhenzhong Duan
` (17 more replies)
0 siblings, 18 replies; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan
Hi,
Per Jason Wang's suggestion, iommufd nesting series[1] is split into
"Enable stage-1 translation for emulated device" series and
"Enable stage-1 translation for passthrough device" series.
This series enables stage-1 translation support for emulated devices
in the Intel IOMMU, which we call "modern" mode.
PATCH1-5: Some preparatory work before supporting stage-1 translation
PATCH6-8: Implement stage-1 translation for emulated device
PATCH9-13: Emulate iotlb invalidation of stage-1 mapping
PATCH14: Set default aw_bits to 48 in scalable modern mode
PATCH15-16: Expose scalable modern mode "x-fls" and "fs1gp" to cmdline
PATCH17: Add qtest
Note that spec revision 3.4 renames "First-level" to "First-stage" and
"Second-level" to "Second-stage". But scalable mode was added before
that change, so we keep the old flavor, using First-level/fl/Second-level/sl
in code, while switching to stage-1/stage-2 in commit logs.
Keep in mind that First-level/fl/stage-1 all have the same meaning,
and likewise for Second-level/sl/stage-2.
QEMU code can be found at [2].
The whole nesting series can be found at [3].
[1] https://lists.gnu.org/archive/html/qemu-devel/2024-01/msg02740.html
[2] https://github.com/yiliu1765/qemu/tree/zhenzhong/iommufd_stage1_emu_v4
[3] https://github.com/yiliu1765/qemu/tree/zhenzhong/iommufd_nesting_rfcv2
Thanks
Zhenzhong
Changelog:
v4:
- s/Scalable legacy/Scalable in logging (Clement)
- test the mode first to make the intention clearer (Clement)
- s/x-cap-fs1gp/fs1gp and s/VTD_FL_RW_MASK/VTD_FL_RW (Jason)
- introduce x-fls instead of updating x-scalable-mode (Jason)
- Refine commit log in patch4 (Jason)
- s/tansltion/translation/ and s/VTD_SPTE_RSVD_LEN/VTD_FPTE_RSVD_LEN/ (Liuyi)
- Update the order and naming of VTD_FPTE_PAGE_* (Liuyi)
v3:
- drop unnecessary !(s->ecap & VTD_ECAP_SMTS) (Clement)
- simplify calculation of return value for vtd_iova_fl_check_canonical() (Liuyi)
- make A/D bit setting atomic (Liuyi)
- refine error msg (Clement, Liuyi)
v2:
- check ecap/cap bits instead of s->scalable_modern in vtd_pe_type_check() (Clement)
- declare VTD_ECAP_FLTS/FS1GP after the feature is implemented (Clement)
- define VTD_INV_DESC_PIOTLB_G (Clement)
- make error msg consistent in vtd_process_piotlb_desc() (Clement)
- refine commit log in patch16 (Clement)
- add VTD_ECAP_IR to ECAP_MODERN_FIXED1 (Clement)
- add a knob x-cap-fs1gp to control stage-1 1G paging capability
- collect Clement's R-B
v1:
- define VTD_HOST_AW_AUTO (Clement)
- passing pgtt as a parameter to vtd_update_iotlb (Clement)
- prefix sl_/fl_ to second/first level specific functions (Clement)
- pick reserved bit check from Clement, add his Co-developed-by
- Update test without using libqtest-single.h (Thomas)
rfcv2:
- split from nesting series (Jason)
- merged some commits from Clement
- add qtest (Jason)
Clément Mathieu--Drif (4):
intel_iommu: Check if the input address is canonical
intel_iommu: Set accessed and dirty bits during first stage
translation
intel_iommu: Add an internal API to find an address space with PASID
intel_iommu: Add support for PASID-based device IOTLB invalidation
Yi Liu (2):
intel_iommu: Rename slpte to pte
intel_iommu: Implement stage-1 translation
Yu Zhang (1):
intel_iommu: Use the latest fault reasons defined by spec
Zhenzhong Duan (10):
intel_iommu: Make pasid entry type check accurate
intel_iommu: Add a placeholder variable for scalable modern mode
intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb
invalidation
intel_iommu: Flush stage-1 cache in iotlb invalidation
intel_iommu: Process PASID-based iotlb invalidation
intel_iommu: piotlb invalidation should notify unmap
intel_iommu: Set default aw_bits to 48 in scalable modern mode
intel_iommu: Introduce a property x-fls for scalable modern mode
intel_iommu: Introduce a property to control FS1GP cap bit setting
tests/qtest: Add intel-iommu test
MAINTAINERS | 1 +
hw/i386/intel_iommu_internal.h | 92 ++++-
include/hw/i386/intel_iommu.h | 8 +-
hw/i386/intel_iommu.c | 681 +++++++++++++++++++++++++++------
tests/qtest/intel-iommu-test.c | 65 ++++
tests/qtest/meson.build | 1 +
6 files changed, 716 insertions(+), 132 deletions(-)
create mode 100644 tests/qtest/intel-iommu-test.c
--
2.34.1
^ permalink raw reply [flat|nested] 67+ messages in thread
* [PATCH v4 01/17] intel_iommu: Use the latest fault reasons defined by spec
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 02/17] intel_iommu: Make pasid entry type check accurate Zhenzhong Duan
` (16 subsequent siblings)
17 siblings, 0 replies; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Yu Zhang, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Yu Zhang <yu.c.zhang@linux.intel.com>
Spec revision 3.0 and above define more detailed fault reasons for
scalable mode, so introduce them into the emulation code; see spec
section 7.1.2 for details.
Note that the spec revision has no relation to the VERSION register; a
guest kernel should not use that register to judge which features are
supported. Instead, the cap/ecap bits should be checked.
Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
hw/i386/intel_iommu_internal.h | 9 ++++++++-
hw/i386/intel_iommu.c | 25 ++++++++++++++++---------
2 files changed, 24 insertions(+), 10 deletions(-)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index c818c819fe..d0f9d4589d 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -311,7 +311,14 @@ typedef enum VTDFaultReason {
* request while disabled */
VTD_FR_IR_SID_ERR = 0x26, /* Invalid Source-ID */
- VTD_FR_PASID_TABLE_INV = 0x58, /*Invalid PASID table entry */
+ /* PASID directory entry access failure */
+ VTD_FR_PASID_DIR_ACCESS_ERR = 0x50,
+ /* The Present(P) field of pasid directory entry is 0 */
+ VTD_FR_PASID_DIR_ENTRY_P = 0x51,
+ VTD_FR_PASID_TABLE_ACCESS_ERR = 0x58, /* PASID table entry access failure */
+ /* The Present(P) field of pasid table entry is 0 */
+ VTD_FR_PASID_ENTRY_P = 0x59,
+ VTD_FR_PASID_TABLE_ENTRY_INV = 0x5b, /*Invalid PASID table entry */
/* Output address in the interrupt address range for scalable mode */
VTD_FR_SM_INTERRUPT_ADDR = 0x87,
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index eb5aa2b2d5..378e417b27 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -796,7 +796,7 @@ static int vtd_get_pdire_from_pdir_table(dma_addr_t pasid_dir_base,
addr = pasid_dir_base + index * entry_size;
if (dma_memory_read(&address_space_memory, addr,
pdire, entry_size, MEMTXATTRS_UNSPECIFIED)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_DIR_ACCESS_ERR;
}
pdire->val = le64_to_cpu(pdire->val);
@@ -814,6 +814,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
dma_addr_t addr,
VTDPASIDEntry *pe)
{
+ uint8_t pgtt;
uint32_t index;
dma_addr_t entry_size;
X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
@@ -823,7 +824,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
addr = addr + index * entry_size;
if (dma_memory_read(&address_space_memory, addr,
pe, entry_size, MEMTXATTRS_UNSPECIFIED)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_TABLE_ACCESS_ERR;
}
for (size_t i = 0; i < ARRAY_SIZE(pe->val); i++) {
pe->val[i] = le64_to_cpu(pe->val[i]);
@@ -831,11 +832,13 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
/* Do translation type check */
if (!vtd_pe_type_check(x86_iommu, pe)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_TABLE_ENTRY_INV;
}
- if (!vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
- return -VTD_FR_PASID_TABLE_INV;
+ pgtt = VTD_PE_GET_TYPE(pe);
+ if (pgtt == VTD_SM_PASID_ENTRY_SLT &&
+ !vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
+ return -VTD_FR_PASID_TABLE_ENTRY_INV;
}
return 0;
@@ -876,7 +879,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
}
if (!vtd_pdire_present(&pdire)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_DIR_ENTRY_P;
}
ret = vtd_get_pe_from_pdire(s, pasid, &pdire, pe);
@@ -885,7 +888,7 @@ static int vtd_get_pe_from_pasid_table(IntelIOMMUState *s,
}
if (!vtd_pe_present(pe)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_ENTRY_P;
}
return 0;
@@ -938,7 +941,7 @@ static int vtd_ce_get_pasid_fpd(IntelIOMMUState *s,
}
if (!vtd_pdire_present(&pdire)) {
- return -VTD_FR_PASID_TABLE_INV;
+ return -VTD_FR_PASID_DIR_ENTRY_P;
}
/*
@@ -1795,7 +1798,11 @@ static const bool vtd_qualified_faults[] = {
[VTD_FR_ROOT_ENTRY_RSVD] = false,
[VTD_FR_PAGING_ENTRY_RSVD] = true,
[VTD_FR_CONTEXT_ENTRY_TT] = true,
- [VTD_FR_PASID_TABLE_INV] = false,
+ [VTD_FR_PASID_DIR_ACCESS_ERR] = false,
+ [VTD_FR_PASID_DIR_ENTRY_P] = true,
+ [VTD_FR_PASID_TABLE_ACCESS_ERR] = false,
+ [VTD_FR_PASID_ENTRY_P] = true,
+ [VTD_FR_PASID_TABLE_ENTRY_INV] = true,
[VTD_FR_SM_INTERRUPT_ADDR] = true,
[VTD_FR_MAX] = false,
};
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* [PATCH v4 02/17] intel_iommu: Make pasid entry type check accurate
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 01/17] intel_iommu: Use the latest fault reasons defined by spec Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 03/17] intel_iommu: Add a placeholder variable for scalable modern mode Zhenzhong Duan
` (15 subsequent siblings)
17 siblings, 0 replies; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
When the guest configures Nested Translation (011b) or First-stage
Translation Only (001b), the type check passed inaccurately.
Fail the type check in those cases, as their emulation isn't supported yet.
Fixes: fb43cf739e1 ("intel_iommu: scalable mode emulation")
Suggested-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
hw/i386/intel_iommu.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 378e417b27..be7c8a670b 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -759,20 +759,16 @@ static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
VTDPASIDEntry *pe)
{
switch (VTD_PE_GET_TYPE(pe)) {
- case VTD_SM_PASID_ENTRY_FLT:
case VTD_SM_PASID_ENTRY_SLT:
- case VTD_SM_PASID_ENTRY_NESTED:
- break;
+ return true;
case VTD_SM_PASID_ENTRY_PT:
- if (!x86_iommu->pt_supported) {
- return false;
- }
- break;
+ return x86_iommu->pt_supported;
+ case VTD_SM_PASID_ENTRY_FLT:
+ case VTD_SM_PASID_ENTRY_NESTED:
default:
/* Unknown type */
return false;
}
- return true;
}
static inline bool vtd_pdire_present(VTDPASIDDirEntry *pdire)
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* [PATCH v4 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 01/17] intel_iommu: Use the latest fault reasons defined by spec Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 02/17] intel_iommu: Make pasid entry type check accurate Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-10-04 5:22 ` CLEMENT MATHIEU--DRIF
2024-11-03 14:21 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation Zhenzhong Duan
` (14 subsequent siblings)
17 siblings, 2 replies; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
Add a new element, scalable_modern, in IntelIOMMUState to mark scalable
modern mode; this element will eventually be exposed as an intel_iommu
property.
For now, it's only a placeholder, used for the address width
compatibility check and to block host device passthrough until nesting
is supported.
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
include/hw/i386/intel_iommu.h | 1 +
hw/i386/intel_iommu.c | 23 ++++++++++++++++++-----
2 files changed, 19 insertions(+), 5 deletions(-)
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index 1eb05c29fc..788ed42477 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -262,6 +262,7 @@ struct IntelIOMMUState {
bool caching_mode; /* RO - is cap CM enabled? */
bool scalable_mode; /* RO - is Scalable Mode supported? */
+ bool scalable_modern; /* RO - is modern SM supported? */
bool snoop_control; /* RO - is SNP filed supported? */
dma_addr_t root; /* Current root table pointer */
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index be7c8a670b..9e6ef0cb99 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3872,7 +3872,13 @@ static bool vtd_check_hiod(IntelIOMMUState *s, HostIOMMUDevice *hiod,
return false;
}
- return true;
+ if (!s->scalable_modern) {
+ /* All checks requested by VTD non-modern mode pass */
+ return true;
+ }
+
+ error_setg(errp, "host device is unsupported in scalable modern mode yet");
+ return false;
}
static bool vtd_dev_set_iommu_device(PCIBus *bus, void *opaque, int devfn,
@@ -4257,14 +4263,21 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
}
}
- /* Currently only address widths supported are 39 and 48 bits */
- if ((s->aw_bits != VTD_HOST_AW_39BIT) &&
- (s->aw_bits != VTD_HOST_AW_48BIT)) {
- error_setg(errp, "Supported values for aw-bits are: %d, %d",
+ if (!s->scalable_modern && s->aw_bits != VTD_HOST_AW_39BIT &&
+ s->aw_bits != VTD_HOST_AW_48BIT) {
+ error_setg(errp, "%s mode: supported values for aw-bits are: %d, %d",
+ s->scalable_mode ? "Scalable" : "Legacy",
VTD_HOST_AW_39BIT, VTD_HOST_AW_48BIT);
return false;
}
+ if (s->scalable_modern && s->aw_bits != VTD_HOST_AW_48BIT) {
+ error_setg(errp,
+ "Scalable modern mode: supported values for aw-bits is: %d",
+ VTD_HOST_AW_48BIT);
+ return false;
+ }
+
if (s->scalable_mode && !s->dma_drain) {
error_setg(errp, "Need to set dma_drain for scalable mode");
return false;
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (2 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 03/17] intel_iommu: Add a placeholder variable for scalable modern mode Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-04 2:49 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 05/17] intel_iommu: Rename slpte to pte Zhenzhong Duan
` (13 subsequent siblings)
17 siblings, 1 reply; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
Per spec 6.5.2.4, PASID-selective PASID-based iotlb invalidation will
flush stage-2 iotlb entries with matching domain id and pasid.
With scalable modern mode introduced, the guest can send PASID-selective
PASID-based iotlb invalidations to flush both stage-1 and stage-2 entries.
Take this chance to also remove old, unused IOTLB-related definitions.
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
hw/i386/intel_iommu_internal.h | 14 ++++--
hw/i386/intel_iommu.c | 88 +++++++++++++++++++++++++++++++++-
2 files changed, 96 insertions(+), 6 deletions(-)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index d0f9d4589d..eec8090190 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -403,11 +403,6 @@ typedef union VTDInvDesc VTDInvDesc;
#define VTD_INV_DESC_IOTLB_AM(val) ((val) & 0x3fULL)
#define VTD_INV_DESC_IOTLB_RSVD_LO 0xffffffff0000f100ULL
#define VTD_INV_DESC_IOTLB_RSVD_HI 0xf80ULL
-#define VTD_INV_DESC_IOTLB_PASID_PASID (2ULL << 4)
-#define VTD_INV_DESC_IOTLB_PASID_PAGE (3ULL << 4)
-#define VTD_INV_DESC_IOTLB_PASID(val) (((val) >> 32) & VTD_PASID_ID_MASK)
-#define VTD_INV_DESC_IOTLB_PASID_RSVD_LO 0xfff00000000001c0ULL
-#define VTD_INV_DESC_IOTLB_PASID_RSVD_HI 0xf80ULL
/* Mask for Device IOTLB Invalidate Descriptor */
#define VTD_INV_DESC_DEVICE_IOTLB_ADDR(val) ((val) & 0xfffffffffffff000ULL)
@@ -433,6 +428,15 @@ typedef union VTDInvDesc VTDInvDesc;
#define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
(0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
+/* Masks for PIOTLB Invalidate Descriptor */
+#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
+#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
+#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
+#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & VTD_DOMAIN_ID_MASK)
+#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
+#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
+#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
+
/* Information about page-selective IOTLB invalidate */
struct VTDIOTLBPageInvInfo {
uint16_t domain_id;
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 9e6ef0cb99..72c9c91d4f 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -2656,6 +2656,86 @@ static bool vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
return true;
}
+static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
+ gpointer user_data)
+{
+ VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
+ VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
+
+ return ((entry->domain_id == info->domain_id) &&
+ (entry->pasid == info->pasid));
+}
+
+static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
+ uint16_t domain_id, uint32_t pasid)
+{
+ VTDIOTLBPageInvInfo info;
+ VTDAddressSpace *vtd_as;
+ VTDContextEntry ce;
+
+ info.domain_id = domain_id;
+ info.pasid = pasid;
+
+ vtd_iommu_lock(s);
+ g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
+ &info);
+ vtd_iommu_unlock(s);
+
+ QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
+ if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
+ vtd_as->devfn, &ce) &&
+ domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
+ uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
+
+ if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
+ vtd_as->pasid != pasid) {
+ continue;
+ }
+
+ if (!s->scalable_modern) {
+ vtd_address_space_sync(vtd_as);
+ }
+ }
+ }
+}
+
+static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
+ VTDInvDesc *inv_desc)
+{
+ uint16_t domain_id;
+ uint32_t pasid;
+
+ if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
+ (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
+ inv_desc->val[2] || inv_desc->val[3]) {
+ error_report_once("%s: invalid piotlb inv desc val[3]=0x%"PRIx64
+ " val[2]=0x%"PRIx64" val[1]=0x%"PRIx64
+ " val[0]=0x%"PRIx64" (reserved bits unzero)",
+ __func__, inv_desc->val[3], inv_desc->val[2],
+ inv_desc->val[1], inv_desc->val[0]);
+ return false;
+ }
+
+ domain_id = VTD_INV_DESC_PIOTLB_DID(inv_desc->val[0]);
+ pasid = VTD_INV_DESC_PIOTLB_PASID(inv_desc->val[0]);
+ switch (inv_desc->val[0] & VTD_INV_DESC_PIOTLB_G) {
+ case VTD_INV_DESC_PIOTLB_ALL_IN_PASID:
+ vtd_piotlb_pasid_invalidate(s, domain_id, pasid);
+ break;
+
+ case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
+ break;
+
+ default:
+ error_report_once("%s: invalid piotlb inv desc: hi=0x%"PRIx64
+ ", lo=0x%"PRIx64" (type mismatch: 0x%llx)",
+ __func__, inv_desc->val[1], inv_desc->val[0],
+ inv_desc->val[0] & VTD_INV_DESC_IOTLB_G);
+ return false;
+ }
+ return true;
+}
+
static bool vtd_process_inv_iec_desc(IntelIOMMUState *s,
VTDInvDesc *inv_desc)
{
@@ -2766,6 +2846,13 @@ static bool vtd_process_inv_desc(IntelIOMMUState *s)
}
break;
+ case VTD_INV_DESC_PIOTLB:
+ trace_vtd_inv_desc("p-iotlb", inv_desc.val[1], inv_desc.val[0]);
+ if (!vtd_process_piotlb_desc(s, &inv_desc)) {
+ return false;
+ }
+ break;
+
case VTD_INV_DESC_WAIT:
trace_vtd_inv_desc("wait", inv_desc.hi, inv_desc.lo);
if (!vtd_process_wait_desc(s, &inv_desc)) {
@@ -2793,7 +2880,6 @@ static bool vtd_process_inv_desc(IntelIOMMUState *s)
* iommu driver) work, just return true is enough so far.
*/
case VTD_INV_DESC_PC:
- case VTD_INV_DESC_PIOTLB:
if (s->scalable_mode) {
break;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* [PATCH v4 05/17] intel_iommu: Rename slpte to pte
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (3 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 06/17] intel_iommu: Implement stage-1 translation Zhenzhong Duan
` (12 subsequent siblings)
17 siblings, 0 replies; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Yi Sun, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Yi Liu <yi.l.liu@intel.com>
Because we will support both FST (a.k.a. FLT) and SST (a.k.a. SLT)
translation, rename variables and functions from slpte to pte wherever
possible. Some are SST-only; those are renamed with an sl_ prefix.
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Co-developed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
---
hw/i386/intel_iommu_internal.h | 24 +++---
include/hw/i386/intel_iommu.h | 2 +-
hw/i386/intel_iommu.c | 129 +++++++++++++++++----------------
3 files changed, 78 insertions(+), 77 deletions(-)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index eec8090190..20fcc73938 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -523,23 +523,23 @@ typedef struct VTDRootEntry VTDRootEntry;
/* Second Level Page Translation Pointer*/
#define VTD_SM_PASID_ENTRY_SLPTPTR (~0xfffULL)
-/* Paging Structure common */
-#define VTD_SL_PT_PAGE_SIZE_MASK (1ULL << 7)
-/* Bits to decide the offset for each level */
-#define VTD_SL_LEVEL_BITS 9
-
/* Second Level Paging Structure */
-#define VTD_SL_PML4_LEVEL 4
-#define VTD_SL_PDP_LEVEL 3
-#define VTD_SL_PD_LEVEL 2
-#define VTD_SL_PT_LEVEL 1
-#define VTD_SL_PT_ENTRY_NR 512
-
/* Masks for Second Level Paging Entry */
#define VTD_SL_RW_MASK 3ULL
#define VTD_SL_R 1ULL
#define VTD_SL_W (1ULL << 1)
-#define VTD_SL_PT_BASE_ADDR_MASK(aw) (~(VTD_PAGE_SIZE - 1) & VTD_HAW_MASK(aw))
#define VTD_SL_IGN_COM 0xbff0000000000000ULL
+/* Common for both First Level and Second Level */
+#define VTD_PML4_LEVEL 4
+#define VTD_PDP_LEVEL 3
+#define VTD_PD_LEVEL 2
+#define VTD_PT_LEVEL 1
+#define VTD_PT_ENTRY_NR 512
+#define VTD_PT_PAGE_SIZE_MASK (1ULL << 7)
+#define VTD_PT_BASE_ADDR_MASK(aw) (~(VTD_PAGE_SIZE - 1) & VTD_HAW_MASK(aw))
+
+/* Bits to decide the offset for each level */
+#define VTD_LEVEL_BITS 9
+
#endif
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index 788ed42477..fe9057c50d 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -152,7 +152,7 @@ struct VTDIOTLBEntry {
uint64_t gfn;
uint16_t domain_id;
uint32_t pasid;
- uint64_t slpte;
+ uint64_t pte;
uint64_t mask;
uint8_t access_flags;
};
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 72c9c91d4f..6f2414898c 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -48,7 +48,8 @@
/* pe operations */
#define VTD_PE_GET_TYPE(pe) ((pe)->val[0] & VTD_SM_PASID_ENTRY_PGTT)
-#define VTD_PE_GET_LEVEL(pe) (2 + (((pe)->val[0] >> 2) & VTD_SM_PASID_ENTRY_AW))
+#define VTD_PE_GET_SL_LEVEL(pe) \
+ (2 + (((pe)->val[0] >> 2) & VTD_SM_PASID_ENTRY_AW))
/*
* PCI bus number (or SID) is not reliable since the device is usaully
@@ -284,15 +285,15 @@ static gboolean vtd_hash_remove_by_domain(gpointer key, gpointer value,
}
/* The shift of an addr for a certain level of paging structure */
-static inline uint32_t vtd_slpt_level_shift(uint32_t level)
+static inline uint32_t vtd_pt_level_shift(uint32_t level)
{
assert(level != 0);
- return VTD_PAGE_SHIFT_4K + (level - 1) * VTD_SL_LEVEL_BITS;
+ return VTD_PAGE_SHIFT_4K + (level - 1) * VTD_LEVEL_BITS;
}
-static inline uint64_t vtd_slpt_level_page_mask(uint32_t level)
+static inline uint64_t vtd_pt_level_page_mask(uint32_t level)
{
- return ~((1ULL << vtd_slpt_level_shift(level)) - 1);
+ return ~((1ULL << vtd_pt_level_shift(level)) - 1);
}
static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
@@ -349,7 +350,7 @@ static void vtd_reset_caches(IntelIOMMUState *s)
static uint64_t vtd_get_iotlb_gfn(hwaddr addr, uint32_t level)
{
- return (addr & vtd_slpt_level_page_mask(level)) >> VTD_PAGE_SHIFT_4K;
+ return (addr & vtd_pt_level_page_mask(level)) >> VTD_PAGE_SHIFT_4K;
}
/* Must be called with IOMMU lock held */
@@ -360,7 +361,7 @@ static VTDIOTLBEntry *vtd_lookup_iotlb(IntelIOMMUState *s, uint16_t source_id,
VTDIOTLBEntry *entry;
unsigned level;
- for (level = VTD_SL_PT_LEVEL; level < VTD_SL_PML4_LEVEL; level++) {
+ for (level = VTD_PT_LEVEL; level < VTD_PML4_LEVEL; level++) {
key.gfn = vtd_get_iotlb_gfn(addr, level);
key.level = level;
key.sid = source_id;
@@ -377,7 +378,7 @@ out:
/* Must be with IOMMU lock held */
static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
- uint16_t domain_id, hwaddr addr, uint64_t slpte,
+ uint16_t domain_id, hwaddr addr, uint64_t pte,
uint8_t access_flags, uint32_t level,
uint32_t pasid)
{
@@ -385,7 +386,7 @@ static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
struct vtd_iotlb_key *key = g_malloc(sizeof(*key));
uint64_t gfn = vtd_get_iotlb_gfn(addr, level);
- trace_vtd_iotlb_page_update(source_id, addr, slpte, domain_id);
+ trace_vtd_iotlb_page_update(source_id, addr, pte, domain_id);
if (g_hash_table_size(s->iotlb) >= VTD_IOTLB_MAX_SIZE) {
trace_vtd_iotlb_reset("iotlb exceeds size limit");
vtd_reset_iotlb_locked(s);
@@ -393,9 +394,9 @@ static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
entry->gfn = gfn;
entry->domain_id = domain_id;
- entry->slpte = slpte;
+ entry->pte = pte;
entry->access_flags = access_flags;
- entry->mask = vtd_slpt_level_page_mask(level);
+ entry->mask = vtd_pt_level_page_mask(level);
entry->pasid = pasid;
key->gfn = gfn;
@@ -710,32 +711,32 @@ static inline dma_addr_t vtd_ce_get_slpt_base(VTDContextEntry *ce)
return ce->lo & VTD_CONTEXT_ENTRY_SLPTPTR;
}
-static inline uint64_t vtd_get_slpte_addr(uint64_t slpte, uint8_t aw)
+static inline uint64_t vtd_get_pte_addr(uint64_t pte, uint8_t aw)
{
- return slpte & VTD_SL_PT_BASE_ADDR_MASK(aw);
+ return pte & VTD_PT_BASE_ADDR_MASK(aw);
}
/* Whether the pte indicates the address of the page frame */
-static inline bool vtd_is_last_slpte(uint64_t slpte, uint32_t level)
+static inline bool vtd_is_last_pte(uint64_t pte, uint32_t level)
{
- return level == VTD_SL_PT_LEVEL || (slpte & VTD_SL_PT_PAGE_SIZE_MASK);
+ return level == VTD_PT_LEVEL || (pte & VTD_PT_PAGE_SIZE_MASK);
}
-/* Get the content of a spte located in @base_addr[@index] */
-static uint64_t vtd_get_slpte(dma_addr_t base_addr, uint32_t index)
+/* Get the content of a pte located in @base_addr[@index] */
+static uint64_t vtd_get_pte(dma_addr_t base_addr, uint32_t index)
{
- uint64_t slpte;
+ uint64_t pte;
- assert(index < VTD_SL_PT_ENTRY_NR);
+ assert(index < VTD_PT_ENTRY_NR);
if (dma_memory_read(&address_space_memory,
- base_addr + index * sizeof(slpte),
- &slpte, sizeof(slpte), MEMTXATTRS_UNSPECIFIED)) {
- slpte = (uint64_t)-1;
- return slpte;
+ base_addr + index * sizeof(pte),
+ &pte, sizeof(pte), MEMTXATTRS_UNSPECIFIED)) {
+ pte = (uint64_t)-1;
+ return pte;
}
- slpte = le64_to_cpu(slpte);
- return slpte;
+ pte = le64_to_cpu(pte);
+ return pte;
}
/* Given an iova and the level of paging structure, return the offset
@@ -743,12 +744,12 @@ static uint64_t vtd_get_slpte(dma_addr_t base_addr, uint32_t index)
*/
static inline uint32_t vtd_iova_level_offset(uint64_t iova, uint32_t level)
{
- return (iova >> vtd_slpt_level_shift(level)) &
- ((1ULL << VTD_SL_LEVEL_BITS) - 1);
+ return (iova >> vtd_pt_level_shift(level)) &
+ ((1ULL << VTD_LEVEL_BITS) - 1);
}
/* Check Capability Register to see if the @level of page-table is supported */
-static inline bool vtd_is_level_supported(IntelIOMMUState *s, uint32_t level)
+static inline bool vtd_is_sl_level_supported(IntelIOMMUState *s, uint32_t level)
{
return VTD_CAP_SAGAW_MASK & s->cap &
(1ULL << (level - 2 + VTD_CAP_SAGAW_SHIFT));
@@ -833,7 +834,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
pgtt = VTD_PE_GET_TYPE(pe);
if (pgtt == VTD_SM_PASID_ENTRY_SLT &&
- !vtd_is_level_supported(s, VTD_PE_GET_LEVEL(pe))) {
+ !vtd_is_sl_level_supported(s, VTD_PE_GET_SL_LEVEL(pe))) {
return -VTD_FR_PASID_TABLE_ENTRY_INV;
}
@@ -972,7 +973,7 @@ static uint32_t vtd_get_iova_level(IntelIOMMUState *s,
if (s->root_scalable) {
vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
- return VTD_PE_GET_LEVEL(&pe);
+ return VTD_PE_GET_SL_LEVEL(&pe);
}
return vtd_ce_get_level(ce);
@@ -1040,9 +1041,9 @@ static inline uint64_t vtd_iova_limit(IntelIOMMUState *s,
}
/* Return true if IOVA passes range check, otherwise false. */
-static inline bool vtd_iova_range_check(IntelIOMMUState *s,
- uint64_t iova, VTDContextEntry *ce,
- uint8_t aw, uint32_t pasid)
+static inline bool vtd_iova_sl_range_check(IntelIOMMUState *s,
+ uint64_t iova, VTDContextEntry *ce,
+ uint8_t aw, uint32_t pasid)
{
/*
* Check if @iova is above 2^X-1, where X is the minimum of MGAW
@@ -1083,17 +1084,17 @@ static bool vtd_slpte_nonzero_rsvd(uint64_t slpte, uint32_t level)
/*
* We should have caught a guest-mis-programmed level earlier,
- * via vtd_is_level_supported.
+ * via vtd_is_sl_level_supported.
*/
assert(level < VTD_SPTE_RSVD_LEN);
/*
- * Zero level doesn't exist. The smallest level is VTD_SL_PT_LEVEL=1 and
- * checked by vtd_is_last_slpte().
+ * Zero level doesn't exist. The smallest level is VTD_PT_LEVEL=1 and
+ * checked by vtd_is_last_pte().
*/
assert(level);
- if ((level == VTD_SL_PD_LEVEL || level == VTD_SL_PDP_LEVEL) &&
- (slpte & VTD_SL_PT_PAGE_SIZE_MASK)) {
+ if ((level == VTD_PD_LEVEL || level == VTD_PDP_LEVEL) &&
+ (slpte & VTD_PT_PAGE_SIZE_MASK)) {
/* large page */
rsvd_mask = vtd_spte_rsvd_large[level];
} else {
@@ -1119,7 +1120,7 @@ static int vtd_iova_to_slpte(IntelIOMMUState *s, VTDContextEntry *ce,
uint64_t access_right_check;
uint64_t xlat, size;
- if (!vtd_iova_range_check(s, iova, ce, aw_bits, pasid)) {
+ if (!vtd_iova_sl_range_check(s, iova, ce, aw_bits, pasid)) {
error_report_once("%s: detected IOVA overflow (iova=0x%" PRIx64 ","
"pasid=0x%" PRIx32 ")", __func__, iova, pasid);
return -VTD_FR_ADDR_BEYOND_MGAW;
@@ -1130,7 +1131,7 @@ static int vtd_iova_to_slpte(IntelIOMMUState *s, VTDContextEntry *ce,
while (true) {
offset = vtd_iova_level_offset(iova, level);
- slpte = vtd_get_slpte(addr, offset);
+ slpte = vtd_get_pte(addr, offset);
if (slpte == (uint64_t)-1) {
error_report_once("%s: detected read error on DMAR slpte "
@@ -1161,17 +1162,17 @@ static int vtd_iova_to_slpte(IntelIOMMUState *s, VTDContextEntry *ce,
return -VTD_FR_PAGING_ENTRY_RSVD;
}
- if (vtd_is_last_slpte(slpte, level)) {
+ if (vtd_is_last_pte(slpte, level)) {
*slptep = slpte;
*slpte_level = level;
break;
}
- addr = vtd_get_slpte_addr(slpte, aw_bits);
+ addr = vtd_get_pte_addr(slpte, aw_bits);
level--;
}
- xlat = vtd_get_slpte_addr(*slptep, aw_bits);
- size = ~vtd_slpt_level_page_mask(level) + 1;
+ xlat = vtd_get_pte_addr(*slptep, aw_bits);
+ size = ~vtd_pt_level_page_mask(level) + 1;
/*
* From VT-d spec 3.14: Untranslated requests and translation
@@ -1322,14 +1323,14 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
trace_vtd_page_walk_level(addr, level, start, end);
- subpage_size = 1ULL << vtd_slpt_level_shift(level);
- subpage_mask = vtd_slpt_level_page_mask(level);
+ subpage_size = 1ULL << vtd_pt_level_shift(level);
+ subpage_mask = vtd_pt_level_page_mask(level);
while (iova < end) {
iova_next = (iova & subpage_mask) + subpage_size;
offset = vtd_iova_level_offset(iova, level);
- slpte = vtd_get_slpte(addr, offset);
+ slpte = vtd_get_pte(addr, offset);
if (slpte == (uint64_t)-1) {
trace_vtd_page_walk_skip_read(iova, iova_next);
@@ -1352,12 +1353,12 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
*/
entry_valid = read_cur | write_cur;
- if (!vtd_is_last_slpte(slpte, level) && entry_valid) {
+ if (!vtd_is_last_pte(slpte, level) && entry_valid) {
/*
* This is a valid PDE (or even bigger than PDE). We need
* to walk one further level.
*/
- ret = vtd_page_walk_level(vtd_get_slpte_addr(slpte, info->aw),
+ ret = vtd_page_walk_level(vtd_get_pte_addr(slpte, info->aw),
iova, MIN(iova_next, end), level - 1,
read_cur, write_cur, info);
} else {
@@ -1374,7 +1375,7 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
event.entry.perm = IOMMU_ACCESS_FLAG(read_cur, write_cur);
event.entry.addr_mask = ~subpage_mask;
/* NOTE: this is only meaningful if entry_valid == true */
- event.entry.translated_addr = vtd_get_slpte_addr(slpte, info->aw);
+ event.entry.translated_addr = vtd_get_pte_addr(slpte, info->aw);
event.type = event.entry.perm ? IOMMU_NOTIFIER_MAP :
IOMMU_NOTIFIER_UNMAP;
ret = vtd_page_walk_one(&event, info);
@@ -1408,11 +1409,11 @@ static int vtd_page_walk(IntelIOMMUState *s, VTDContextEntry *ce,
dma_addr_t addr = vtd_get_iova_pgtbl_base(s, ce, pasid);
uint32_t level = vtd_get_iova_level(s, ce, pasid);
- if (!vtd_iova_range_check(s, start, ce, info->aw, pasid)) {
+ if (!vtd_iova_sl_range_check(s, start, ce, info->aw, pasid)) {
return -VTD_FR_ADDR_BEYOND_MGAW;
}
- if (!vtd_iova_range_check(s, end, ce, info->aw, pasid)) {
+ if (!vtd_iova_sl_range_check(s, end, ce, info->aw, pasid)) {
/* Fix end so that it reaches the maximum */
end = vtd_iova_limit(s, ce, info->aw, pasid);
}
@@ -1527,7 +1528,7 @@ static int vtd_dev_to_context_entry(IntelIOMMUState *s, uint8_t bus_num,
/* Check if the programming of context-entry is valid */
if (!s->root_scalable &&
- !vtd_is_level_supported(s, vtd_ce_get_level(ce))) {
+ !vtd_is_sl_level_supported(s, vtd_ce_get_level(ce))) {
error_report_once("%s: invalid context entry: hi=%"PRIx64
", lo=%"PRIx64" (level %d not supported)",
__func__, ce->hi, ce->lo,
@@ -1897,7 +1898,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
VTDContextEntry ce;
uint8_t bus_num = pci_bus_num(bus);
VTDContextCacheEntry *cc_entry;
- uint64_t slpte, page_mask;
+ uint64_t pte, page_mask;
uint32_t level, pasid = vtd_as->pasid;
uint16_t source_id = PCI_BUILD_BDF(bus_num, devfn);
int ret_fr;
@@ -1918,13 +1919,13 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
cc_entry = &vtd_as->context_cache_entry;
- /* Try to fetch slpte form IOTLB, we don't need RID2PASID logic */
+ /* Try to fetch pte from IOTLB, we don't need RID2PASID logic */
if (!rid2pasid) {
iotlb_entry = vtd_lookup_iotlb(s, source_id, pasid, addr);
if (iotlb_entry) {
- trace_vtd_iotlb_page_hit(source_id, addr, iotlb_entry->slpte,
+ trace_vtd_iotlb_page_hit(source_id, addr, iotlb_entry->pte,
iotlb_entry->domain_id);
- slpte = iotlb_entry->slpte;
+ pte = iotlb_entry->pte;
access_flags = iotlb_entry->access_flags;
page_mask = iotlb_entry->mask;
goto out;
@@ -1996,20 +1997,20 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
return true;
}
- /* Try to fetch slpte form IOTLB for RID2PASID slow path */
+ /* Try to fetch pte from IOTLB for RID2PASID slow path */
if (rid2pasid) {
iotlb_entry = vtd_lookup_iotlb(s, source_id, pasid, addr);
if (iotlb_entry) {
- trace_vtd_iotlb_page_hit(source_id, addr, iotlb_entry->slpte,
+ trace_vtd_iotlb_page_hit(source_id, addr, iotlb_entry->pte,
iotlb_entry->domain_id);
- slpte = iotlb_entry->slpte;
+ pte = iotlb_entry->pte;
access_flags = iotlb_entry->access_flags;
page_mask = iotlb_entry->mask;
goto out;
}
}
- ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &slpte, &level,
+ ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
&reads, &writes, s->aw_bits, pasid);
if (ret_fr) {
vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
@@ -2017,14 +2018,14 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
goto error;
}
- page_mask = vtd_slpt_level_page_mask(level);
+ page_mask = vtd_pt_level_page_mask(level);
access_flags = IOMMU_ACCESS_FLAG(reads, writes);
vtd_update_iotlb(s, source_id, vtd_get_domain_id(s, &ce, pasid),
- addr, slpte, access_flags, level, pasid);
+ addr, pte, access_flags, level, pasid);
out:
vtd_iommu_unlock(s);
entry->iova = addr & page_mask;
- entry->translated_addr = vtd_get_slpte_addr(slpte, s->aw_bits) & page_mask;
+ entry->translated_addr = vtd_get_pte_addr(pte, s->aw_bits) & page_mask;
entry->addr_mask = ~page_mask;
entry->perm = access_flags;
return true;
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* [PATCH v4 06/17] intel_iommu: Implement stage-1 translation
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (4 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 05/17] intel_iommu: Rename slpte to pte Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-03 14:21 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 07/17] intel_iommu: Check if the input address is canonical Zhenzhong Duan
` (11 subsequent siblings)
17 siblings, 1 reply; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Yi Sun, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Yi Liu <yi.l.liu@intel.com>
This adds stage-1 page table walking to support stage-1 only
translation in scalable modern mode.
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Co-developed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
hw/i386/intel_iommu_internal.h | 24 ++++++
hw/i386/intel_iommu.c | 143 ++++++++++++++++++++++++++++++++-
2 files changed, 163 insertions(+), 4 deletions(-)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 20fcc73938..38bf0c7a06 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -428,6 +428,22 @@ typedef union VTDInvDesc VTDInvDesc;
#define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
(0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
+/* Rsvd field masks for fpte */
+#define VTD_FS_UPPER_IGNORED 0xfff0000000000000ULL
+#define VTD_FPTE_PAGE_L1_RSVD_MASK(aw) \
+ (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
+#define VTD_FPTE_PAGE_L2_RSVD_MASK(aw) \
+ (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
+#define VTD_FPTE_PAGE_L3_RSVD_MASK(aw) \
+ (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
+#define VTD_FPTE_PAGE_L4_RSVD_MASK(aw) \
+ (0x80ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
+
+#define VTD_FPTE_LPAGE_L2_RSVD_MASK(aw) \
+ (0x1fe000ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
+#define VTD_FPTE_LPAGE_L3_RSVD_MASK(aw) \
+ (0x3fffe000ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
+
/* Masks for PIOTLB Invalidate Descriptor */
#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
@@ -520,6 +536,14 @@ typedef struct VTDRootEntry VTDRootEntry;
#define VTD_SM_PASID_ENTRY_AW 7ULL /* Adjusted guest-address-width */
#define VTD_SM_PASID_ENTRY_DID(val) ((val) & VTD_DOMAIN_ID_MASK)
+#define VTD_SM_PASID_ENTRY_FLPM 3ULL
+#define VTD_SM_PASID_ENTRY_FLPTPTR (~0xfffULL)
+
+/* First Level Paging Structure */
+/* Masks for First Level Paging Entry */
+#define VTD_FL_P 1ULL
+#define VTD_FL_RW (1ULL << 1)
+
/* Second Level Page Translation Pointer*/
#define VTD_SM_PASID_ENTRY_SLPTPTR (~0xfffULL)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 6f2414898c..56d5933e93 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -48,6 +48,8 @@
/* pe operations */
#define VTD_PE_GET_TYPE(pe) ((pe)->val[0] & VTD_SM_PASID_ENTRY_PGTT)
+#define VTD_PE_GET_FL_LEVEL(pe) \
+ (4 + (((pe)->val[2] >> 2) & VTD_SM_PASID_ENTRY_FLPM))
#define VTD_PE_GET_SL_LEVEL(pe) \
(2 + (((pe)->val[0] >> 2) & VTD_SM_PASID_ENTRY_AW))
@@ -755,6 +757,11 @@ static inline bool vtd_is_sl_level_supported(IntelIOMMUState *s, uint32_t level)
(1ULL << (level - 2 + VTD_CAP_SAGAW_SHIFT));
}
+static inline bool vtd_is_fl_level_supported(IntelIOMMUState *s, uint32_t level)
+{
+ return level == VTD_PML4_LEVEL;
+}
+
/* Return true if check passed, otherwise false */
static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
VTDPASIDEntry *pe)
@@ -838,6 +845,11 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
return -VTD_FR_PASID_TABLE_ENTRY_INV;
}
+ if (pgtt == VTD_SM_PASID_ENTRY_FLT &&
+ !vtd_is_fl_level_supported(s, VTD_PE_GET_FL_LEVEL(pe))) {
+ return -VTD_FR_PASID_TABLE_ENTRY_INV;
+ }
+
return 0;
}
@@ -973,7 +985,11 @@ static uint32_t vtd_get_iova_level(IntelIOMMUState *s,
if (s->root_scalable) {
vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
- return VTD_PE_GET_SL_LEVEL(&pe);
+ if (s->scalable_modern) {
+ return VTD_PE_GET_FL_LEVEL(&pe);
+ } else {
+ return VTD_PE_GET_SL_LEVEL(&pe);
+ }
}
return vtd_ce_get_level(ce);
@@ -1060,7 +1076,11 @@ static dma_addr_t vtd_get_iova_pgtbl_base(IntelIOMMUState *s,
if (s->root_scalable) {
vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
- return pe.val[0] & VTD_SM_PASID_ENTRY_SLPTPTR;
+ if (s->scalable_modern) {
+ return pe.val[2] & VTD_SM_PASID_ENTRY_FLPTPTR;
+ } else {
+ return pe.val[0] & VTD_SM_PASID_ENTRY_SLPTPTR;
+ }
}
return vtd_ce_get_slpt_base(ce);
@@ -1862,6 +1882,104 @@ out:
trace_vtd_pt_enable_fast_path(source_id, success);
}
+/*
+ * Rsvd field masks for fpte:
+ * vtd_fpte_rsvd 4k pages
+ * vtd_fpte_rsvd_large large pages
+ *
+ * We support only 4-level page tables.
+ */
+#define VTD_FPTE_RSVD_LEN 5
+static uint64_t vtd_fpte_rsvd[VTD_FPTE_RSVD_LEN];
+static uint64_t vtd_fpte_rsvd_large[VTD_FPTE_RSVD_LEN];
+
+static bool vtd_flpte_nonzero_rsvd(uint64_t flpte, uint32_t level)
+{
+ uint64_t rsvd_mask;
+
+ /*
+ * We should have caught a guest-mis-programmed level earlier,
+ * via vtd_is_fl_level_supported.
+ */
+ assert(level < VTD_FPTE_RSVD_LEN);
+ /*
+ * Zero level doesn't exist. The smallest level is VTD_PT_LEVEL=1 and
+ * checked by vtd_is_last_pte().
+ */
+ assert(level);
+
+ if ((level == VTD_PD_LEVEL || level == VTD_PDP_LEVEL) &&
+ (flpte & VTD_PT_PAGE_SIZE_MASK)) {
+ /* large page */
+ rsvd_mask = vtd_fpte_rsvd_large[level];
+ } else {
+ rsvd_mask = vtd_fpte_rsvd[level];
+ }
+
+ return flpte & rsvd_mask;
+}
+
+static inline bool vtd_flpte_present(uint64_t flpte)
+{
+ return !!(flpte & VTD_FL_P);
+}
+
+/*
+ * Given the @iova, get relevant @flptep. @flpte_level will be the last level
+ * of the translation, can be used for deciding the size of large page.
+ */
+static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
+ uint64_t iova, bool is_write,
+ uint64_t *flptep, uint32_t *flpte_level,
+ bool *reads, bool *writes, uint8_t aw_bits,
+ uint32_t pasid)
+{
+ dma_addr_t addr = vtd_get_iova_pgtbl_base(s, ce, pasid);
+ uint32_t level = vtd_get_iova_level(s, ce, pasid);
+ uint32_t offset;
+ uint64_t flpte;
+
+ while (true) {
+ offset = vtd_iova_level_offset(iova, level);
+ flpte = vtd_get_pte(addr, offset);
+
+ if (flpte == (uint64_t)-1) {
+ if (level == vtd_get_iova_level(s, ce, pasid)) {
+ /* Invalid programming of context-entry */
+ return -VTD_FR_CONTEXT_ENTRY_INV;
+ } else {
+ return -VTD_FR_PAGING_ENTRY_INV;
+ }
+ }
+ if (!vtd_flpte_present(flpte)) {
+ *reads = false;
+ *writes = false;
+ return -VTD_FR_PAGING_ENTRY_INV;
+ }
+ *reads = true;
+ *writes = (*writes) && (flpte & VTD_FL_RW);
+ if (is_write && !(flpte & VTD_FL_RW)) {
+ return -VTD_FR_WRITE;
+ }
+ if (vtd_flpte_nonzero_rsvd(flpte, level)) {
+ error_report_once("%s: detected flpte reserved non-zero "
+ "iova=0x%" PRIx64 ", level=0x%" PRIx32
+ "flpte=0x%" PRIx64 ", pasid=0x%" PRIX32 ")",
+ __func__, iova, level, flpte, pasid);
+ return -VTD_FR_PAGING_ENTRY_RSVD;
+ }
+
+ if (vtd_is_last_pte(flpte, level)) {
+ *flptep = flpte;
+ *flpte_level = level;
+ return 0;
+ }
+
+ addr = vtd_get_pte_addr(flpte, aw_bits);
+ level--;
+ }
+}
+
static void vtd_report_fault(IntelIOMMUState *s,
int err, bool is_fpd_set,
uint16_t source_id,
@@ -2010,8 +2128,13 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
}
}
- ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
- &reads, &writes, s->aw_bits, pasid);
+ if (s->scalable_modern && s->root_scalable) {
+ ret_fr = vtd_iova_to_flpte(s, &ce, addr, is_write, &pte, &level,
+ &reads, &writes, s->aw_bits, pasid);
+ } else {
+ ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
+ &reads, &writes, s->aw_bits, pasid);
+ }
if (ret_fr) {
vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
addr, is_write, pasid != PCI_NO_PASID, pasid);
@@ -4239,6 +4362,18 @@ static void vtd_init(IntelIOMMUState *s)
vtd_spte_rsvd_large[2] = VTD_SPTE_LPAGE_L2_RSVD_MASK(s->aw_bits);
vtd_spte_rsvd_large[3] = VTD_SPTE_LPAGE_L3_RSVD_MASK(s->aw_bits);
+ /*
+ * Rsvd field masks for fpte
+ */
+ vtd_fpte_rsvd[0] = ~0ULL;
+ vtd_fpte_rsvd[1] = VTD_FPTE_PAGE_L1_RSVD_MASK(s->aw_bits);
+ vtd_fpte_rsvd[2] = VTD_FPTE_PAGE_L2_RSVD_MASK(s->aw_bits);
+ vtd_fpte_rsvd[3] = VTD_FPTE_PAGE_L3_RSVD_MASK(s->aw_bits);
+ vtd_fpte_rsvd[4] = VTD_FPTE_PAGE_L4_RSVD_MASK(s->aw_bits);
+
+ vtd_fpte_rsvd_large[2] = VTD_FPTE_LPAGE_L2_RSVD_MASK(s->aw_bits);
+ vtd_fpte_rsvd_large[3] = VTD_FPTE_LPAGE_L3_RSVD_MASK(s->aw_bits);
+
if (s->scalable_mode || s->snoop_control) {
vtd_spte_rsvd[1] &= ~VTD_SPTE_SNP;
vtd_spte_rsvd_large[2] &= ~VTD_SPTE_SNP;
--
2.34.1
* [PATCH v4 07/17] intel_iommu: Check if the input address is canonical
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (5 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 06/17] intel_iommu: Implement stage-1 translation Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-03 14:22 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 08/17] intel_iommu: Set accessed and dirty bits during first stage translation Zhenzhong Duan
` (10 subsequent siblings)
17 siblings, 1 reply; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
First stage translation must fail if the address to translate is
not canonical.
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
hw/i386/intel_iommu_internal.h | 2 ++
hw/i386/intel_iommu.c | 23 +++++++++++++++++++++++
2 files changed, 25 insertions(+)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 38bf0c7a06..57c50648ce 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -320,6 +320,8 @@ typedef enum VTDFaultReason {
VTD_FR_PASID_ENTRY_P = 0x59,
VTD_FR_PASID_TABLE_ENTRY_INV = 0x5b, /*Invalid PASID table entry */
+ VTD_FR_FS_NON_CANONICAL = 0x80, /* SNG.1 : Address for FS not canonical.*/
+
/* Output address in the interrupt address range for scalable mode */
VTD_FR_SM_INTERRUPT_ADDR = 0x87,
VTD_FR_MAX, /* Guard */
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 56d5933e93..ec0596c2b2 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -1821,6 +1821,7 @@ static const bool vtd_qualified_faults[] = {
[VTD_FR_PASID_ENTRY_P] = true,
[VTD_FR_PASID_TABLE_ENTRY_INV] = true,
[VTD_FR_SM_INTERRUPT_ADDR] = true,
+ [VTD_FR_FS_NON_CANONICAL] = true,
[VTD_FR_MAX] = false,
};
@@ -1924,6 +1925,22 @@ static inline bool vtd_flpte_present(uint64_t flpte)
return !!(flpte & VTD_FL_P);
}
+/* Return true if IOVA is canonical, otherwise false. */
+static bool vtd_iova_fl_check_canonical(IntelIOMMUState *s, uint64_t iova,
+ VTDContextEntry *ce, uint32_t pasid)
+{
+ uint64_t iova_limit = vtd_iova_limit(s, ce, s->aw_bits, pasid);
+ uint64_t upper_bits_mask = ~(iova_limit - 1);
+ uint64_t upper_bits = iova & upper_bits_mask;
+ bool msb = ((iova & (iova_limit >> 1)) != 0);
+
+ if (msb) {
+ return upper_bits == upper_bits_mask;
+ } else {
+ return !upper_bits;
+ }
+}
+
/*
* Given the @iova, get relevant @flptep. @flpte_level will be the last level
* of the translation, can be used for deciding the size of large page.
@@ -1939,6 +1956,12 @@ static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
uint32_t offset;
uint64_t flpte;
+ if (!vtd_iova_fl_check_canonical(s, iova, ce, pasid)) {
+ error_report_once("%s: detected non canonical IOVA (iova=0x%" PRIx64 ","
+ "pasid=0x%" PRIx32 ")", __func__, iova, pasid);
+ return -VTD_FR_FS_NON_CANONICAL;
+ }
+
while (true) {
offset = vtd_iova_level_offset(iova, level);
flpte = vtd_get_pte(addr, offset);
--
2.34.1
* [PATCH v4 08/17] intel_iommu: Set accessed and dirty bits during first stage translation
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (6 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 07/17] intel_iommu: Check if the input address is canonical Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-04 2:49 ` Yi Liu
2024-11-08 3:15 ` Jason Wang
2024-09-30 9:26 ` [PATCH v4 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation Zhenzhong Duan
` (9 subsequent siblings)
17 siblings, 2 replies; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 3 +++
hw/i386/intel_iommu.c | 25 ++++++++++++++++++++++++-
2 files changed, 27 insertions(+), 1 deletion(-)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 57c50648ce..4c3e75e593 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -324,6 +324,7 @@ typedef enum VTDFaultReason {
/* Output address in the interrupt address range for scalable mode */
VTD_FR_SM_INTERRUPT_ADDR = 0x87,
+ VTD_FR_FS_BIT_UPDATE_FAILED = 0x91, /* SFS.10 */
VTD_FR_MAX, /* Guard */
} VTDFaultReason;
@@ -545,6 +546,8 @@ typedef struct VTDRootEntry VTDRootEntry;
/* Masks for First Level Paging Entry */
#define VTD_FL_P 1ULL
#define VTD_FL_RW (1ULL << 1)
+#define VTD_FL_A (1ULL << 5)
+#define VTD_FL_D (1ULL << 6)
/* Second Level Page Translation Pointer*/
#define VTD_SM_PASID_ENTRY_SLPTPTR (~0xfffULL)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index ec0596c2b2..99bb3f42ea 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -1822,6 +1822,7 @@ static const bool vtd_qualified_faults[] = {
[VTD_FR_PASID_TABLE_ENTRY_INV] = true,
[VTD_FR_SM_INTERRUPT_ADDR] = true,
[VTD_FR_FS_NON_CANONICAL] = true,
+ [VTD_FR_FS_BIT_UPDATE_FAILED] = true,
[VTD_FR_MAX] = false,
};
@@ -1941,6 +1942,20 @@ static bool vtd_iova_fl_check_canonical(IntelIOMMUState *s, uint64_t iova,
}
}
+static MemTxResult vtd_set_flag_in_pte(dma_addr_t base_addr, uint32_t index,
+ uint64_t pte, uint64_t flag)
+{
+ if (pte & flag) {
+ return MEMTX_OK;
+ }
+ pte |= flag;
+ pte = cpu_to_le64(pte);
+ return dma_memory_write(&address_space_memory,
+ base_addr + index * sizeof(pte),
+ &pte, sizeof(pte),
+ MEMTXATTRS_UNSPECIFIED);
+}
+
/*
* Given the @iova, get relevant @flptep. @flpte_level will be the last level
* of the translation, can be used for deciding the size of large page.
@@ -1954,7 +1969,7 @@ static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
dma_addr_t addr = vtd_get_iova_pgtbl_base(s, ce, pasid);
uint32_t level = vtd_get_iova_level(s, ce, pasid);
uint32_t offset;
- uint64_t flpte;
+ uint64_t flpte, flag_ad = VTD_FL_A;
if (!vtd_iova_fl_check_canonical(s, iova, ce, pasid)) {
error_report_once("%s: detected non canonical IOVA (iova=0x%" PRIx64 ","
@@ -1992,6 +2007,14 @@ static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
return -VTD_FR_PAGING_ENTRY_RSVD;
}
+ if (vtd_is_last_pte(flpte, level) && is_write) {
+ flag_ad |= VTD_FL_D;
+ }
+
+ if (vtd_set_flag_in_pte(addr, offset, flpte, flag_ad) != MEMTX_OK) {
+ return -VTD_FR_FS_BIT_UPDATE_FAILED;
+ }
+
if (vtd_is_last_pte(flpte, level)) {
*flptep = flpte;
*flpte_level = level;
--
2.34.1
* [PATCH v4 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (7 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 08/17] intel_iommu: Set accessed and dirty bits during first stage translation Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-04 2:50 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 10/17] intel_iommu: Process PASID-based " Zhenzhong Duan
` (8 subsequent siblings)
17 siblings, 1 reply; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
According to spec, Page-Selective-within-Domain Invalidation (11b):
1. IOTLB entries caching second-stage mappings (PGTT=010b) or pass-through
(PGTT=100b) mappings associated with the specified domain-id and the
input-address range are invalidated.
2. IOTLB entries caching first-stage (PGTT=001b) or nested (PGTT=011b)
mappings associated with the specified domain-id are invalidated.
So per spec definition the Page-Selective-within-Domain Invalidation
needs to flush cached first-stage and nested IOTLB entries as well.
We don't support nested yet, and pass-through mappings are never cached,
so the IOTLB cache holds only first-stage and second-stage mappings.
Add a pgtt field to VTDIOTLBEntry to record the PGTT type of each
mapping and invalidate entries based on that type.
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
include/hw/i386/intel_iommu.h | 1 +
hw/i386/intel_iommu.c | 27 +++++++++++++++++++++------
2 files changed, 22 insertions(+), 6 deletions(-)
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index fe9057c50d..b843d069cc 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -155,6 +155,7 @@ struct VTDIOTLBEntry {
uint64_t pte;
uint64_t mask;
uint8_t access_flags;
+ uint8_t pgtt;
};
/* VT-d Source-ID Qualifier types */
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 99bb3f42ea..46bde1ad40 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -305,9 +305,21 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
- return (entry->domain_id == info->domain_id) &&
- (((entry->gfn & info->mask) == gfn) ||
- (entry->gfn == gfn_tlb));
+
+ if (entry->domain_id != info->domain_id) {
+ return false;
+ }
+
+ /*
+ * According to spec, IOTLB entries caching first-stage (PGTT=001b) or
+ * nested (PGTT=011b) mapping associated with specified domain-id are
+ * invalidated. Nested isn't supported yet, so only need to check 001b.
+ */
+ if (entry->pgtt == VTD_SM_PASID_ENTRY_FLT) {
+ return true;
+ }
+
+ return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
}
/* Reset all the gen of VTDAddressSpace to zero and set the gen of
@@ -382,7 +394,7 @@ out:
static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
uint16_t domain_id, hwaddr addr, uint64_t pte,
uint8_t access_flags, uint32_t level,
- uint32_t pasid)
+ uint32_t pasid, uint8_t pgtt)
{
VTDIOTLBEntry *entry = g_malloc(sizeof(*entry));
struct vtd_iotlb_key *key = g_malloc(sizeof(*key));
@@ -400,6 +412,7 @@ static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
entry->access_flags = access_flags;
entry->mask = vtd_pt_level_page_mask(level);
entry->pasid = pasid;
+ entry->pgtt = pgtt;
key->gfn = gfn;
key->sid = source_id;
@@ -2069,7 +2082,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
bool is_fpd_set = false;
bool reads = true;
bool writes = true;
- uint8_t access_flags;
+ uint8_t access_flags, pgtt;
bool rid2pasid = (pasid == PCI_NO_PASID) && s->root_scalable;
VTDIOTLBEntry *iotlb_entry;
@@ -2177,9 +2190,11 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
if (s->scalable_modern && s->root_scalable) {
ret_fr = vtd_iova_to_flpte(s, &ce, addr, is_write, &pte, &level,
&reads, &writes, s->aw_bits, pasid);
+ pgtt = VTD_SM_PASID_ENTRY_FLT;
} else {
ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
&reads, &writes, s->aw_bits, pasid);
+ pgtt = VTD_SM_PASID_ENTRY_SLT;
}
if (ret_fr) {
vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
@@ -2190,7 +2205,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
page_mask = vtd_pt_level_page_mask(level);
access_flags = IOMMU_ACCESS_FLAG(reads, writes);
vtd_update_iotlb(s, source_id, vtd_get_domain_id(s, &ce, pasid),
- addr, pte, access_flags, level, pasid);
+ addr, pte, access_flags, level, pasid, pgtt);
out:
vtd_iommu_unlock(s);
entry->iova = addr & page_mask;
--
2.34.1
* [PATCH v4 10/17] intel_iommu: Process PASID-based iotlb invalidation
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (8 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-04 2:50 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 11/17] intel_iommu: Add an internal API to find an address space with PASID Zhenzhong Duan
` (7 subsequent siblings)
17 siblings, 1 reply; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
The PASID-based IOTLB (piotlb) caches translations produced while
walking the Intel VT-d stage-1 page table.
This emulates the stage-1 IOTLB invalidation requested by a
PASID-based IOTLB Invalidate Descriptor (P_IOTLB).
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
hw/i386/intel_iommu_internal.h | 3 +++
hw/i386/intel_iommu.c | 45 ++++++++++++++++++++++++++++++++++
2 files changed, 48 insertions(+)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 4c3e75e593..20d922d600 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -453,6 +453,9 @@ typedef union VTDInvDesc VTDInvDesc;
#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & VTD_DOMAIN_ID_MASK)
#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
+#define VTD_INV_DESC_PIOTLB_AM(val) ((val) & 0x3fULL)
+#define VTD_INV_DESC_PIOTLB_IH(val) (((val) >> 6) & 0x1)
+#define VTD_INV_DESC_PIOTLB_ADDR(val) ((val) & ~0xfffULL)
#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 46bde1ad40..289278ce30 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -322,6 +322,28 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
}
+static gboolean vtd_hash_remove_by_page_piotlb(gpointer key, gpointer value,
+ gpointer user_data)
+{
+ VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
+ VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
+ uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
+ uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
+
+ /*
+ * According to spec, PASID-based-IOTLB Invalidation in page granularity
+ * doesn't invalidate IOTLB entries caching second-stage (PGTT=010b)
+ * or pass-through (PGTT=100b) mappings. Nested isn't supported yet,
+ * so only need to check first-stage (PGTT=001b) mappings.
+ */
+ if (entry->pgtt != VTD_SM_PASID_ENTRY_FLT) {
+ return false;
+ }
+
+ return entry->domain_id == info->domain_id && entry->pasid == info->pasid &&
+ ((entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb);
+}
+
/* Reset all the gen of VTDAddressSpace to zero and set the gen of
* IntelIOMMUState to 1. Must be called with IOMMU lock held.
*/
@@ -2884,11 +2906,30 @@ static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
}
}
+static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
+ uint32_t pasid, hwaddr addr, uint8_t am,
+ bool ih)
+{
+ VTDIOTLBPageInvInfo info;
+
+ info.domain_id = domain_id;
+ info.pasid = pasid;
+ info.addr = addr;
+ info.mask = ~((1 << am) - 1);
+
+ vtd_iommu_lock(s);
+ g_hash_table_foreach_remove(s->iotlb,
+ vtd_hash_remove_by_page_piotlb, &info);
+ vtd_iommu_unlock(s);
+}
+
static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
VTDInvDesc *inv_desc)
{
uint16_t domain_id;
uint32_t pasid;
+ uint8_t am;
+ hwaddr addr;
if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
(inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
@@ -2909,6 +2950,10 @@ static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
break;
case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
+ am = VTD_INV_DESC_PIOTLB_AM(inv_desc->val[1]);
+ addr = (hwaddr) VTD_INV_DESC_PIOTLB_ADDR(inv_desc->val[1]);
+ vtd_piotlb_page_invalidate(s, domain_id, pasid, addr, am,
+ VTD_INV_DESC_PIOTLB_IH(inv_desc->val[1]));
break;
default:
--
2.34.1
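[Editor's note] The page-selective match added in vtd_hash_remove_by_page_piotlb() above reduces to a little bit arithmetic on the descriptor's AM (address-mask) field. A minimal standalone sketch, with illustrative function names (not QEMU API) and the large-page gfn_tlb comparison omitted:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VTD_PAGE_SHIFT_4K 12

/* AM=n means the invalidation covers 2^n contiguous 4K pages. */
static uint64_t piotlb_page_mask(uint8_t am)
{
    return ~((1ULL << am) - 1);
}

/*
 * Check whether a cached 4K translation at entry_gfn falls inside the
 * range invalidated by (addr, am) -- the first comparison done in
 * vtd_hash_remove_by_page_piotlb() (large-page entries not handled here).
 */
static bool piotlb_match(uint64_t entry_gfn, uint64_t addr, uint8_t am)
{
    uint64_t mask = piotlb_page_mask(am);
    uint64_t gfn = (addr >> VTD_PAGE_SHIFT_4K) & mask;

    return (entry_gfn & mask) == gfn;
}
```

With am=2 the descriptor covers four pages, so a cached gfn two pages past the aligned base still matches, while one four pages past does not.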
^ permalink raw reply related [flat|nested] 67+ messages in thread
* [PATCH v4 11/17] intel_iommu: Add an internal API to find an address space with PASID
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (9 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 10/17] intel_iommu: Process PASID-based " Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-04 2:50 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 12/17] intel_iommu: Add support for PASID-based device IOTLB invalidation Zhenzhong Duan
` (6 subsequent siblings)
17 siblings, 1 reply; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
This will be used to implement device IOTLB invalidation.
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
hw/i386/intel_iommu.c | 39 ++++++++++++++++++++++++---------------
1 file changed, 24 insertions(+), 15 deletions(-)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 289278ce30..a1596ba47d 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -70,6 +70,11 @@ struct vtd_hiod_key {
uint8_t devfn;
};
+struct vtd_as_raw_key {
+ uint16_t sid;
+ uint32_t pasid;
+};
+
struct vtd_iotlb_key {
uint64_t gfn;
uint32_t pasid;
@@ -1875,29 +1880,33 @@ static inline bool vtd_is_interrupt_addr(hwaddr addr)
return VTD_INTERRUPT_ADDR_FIRST <= addr && addr <= VTD_INTERRUPT_ADDR_LAST;
}
-static gboolean vtd_find_as_by_sid(gpointer key, gpointer value,
- gpointer user_data)
+static gboolean vtd_find_as_by_sid_and_pasid(gpointer key, gpointer value,
+ gpointer user_data)
{
struct vtd_as_key *as_key = (struct vtd_as_key *)key;
- uint16_t target_sid = *(uint16_t *)user_data;
+ struct vtd_as_raw_key target = *(struct vtd_as_raw_key *)user_data;
uint16_t sid = PCI_BUILD_BDF(pci_bus_num(as_key->bus), as_key->devfn);
- return sid == target_sid;
+
+ return (as_key->pasid == target.pasid) &&
+ (sid == target.sid);
}
-static VTDAddressSpace *vtd_get_as_by_sid(IntelIOMMUState *s, uint16_t sid)
+static VTDAddressSpace *vtd_get_as_by_sid_and_pasid(IntelIOMMUState *s,
+ uint16_t sid,
+ uint32_t pasid)
{
- uint8_t bus_num = PCI_BUS_NUM(sid);
- VTDAddressSpace *vtd_as = s->vtd_as_cache[bus_num];
-
- if (vtd_as &&
- (sid == PCI_BUILD_BDF(pci_bus_num(vtd_as->bus), vtd_as->devfn))) {
- return vtd_as;
- }
+ struct vtd_as_raw_key key = {
+ .sid = sid,
+ .pasid = pasid
+ };
- vtd_as = g_hash_table_find(s->vtd_address_spaces, vtd_find_as_by_sid, &sid);
- s->vtd_as_cache[bus_num] = vtd_as;
+ return g_hash_table_find(s->vtd_address_spaces,
+ vtd_find_as_by_sid_and_pasid, &key);
+}
- return vtd_as;
+static VTDAddressSpace *vtd_get_as_by_sid(IntelIOMMUState *s, uint16_t sid)
+{
+ return vtd_get_as_by_sid_and_pasid(s, sid, PCI_NO_PASID);
}
static void vtd_pt_enable_fast_path(IntelIOMMUState *s, uint16_t source_id)
--
2.34.1
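[Editor's note] The lookup key in the hunk above pairs a 16-bit source-id (bus:devfn, as built by PCI_BUILD_BDF()) with a PASID; vtd_get_as_by_sid() is then the same lookup with the PCI_NO_PASID sentinel. A hedged sketch of the predicate using plain scalars instead of QEMU's key structs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Source-ID: PCI bus number in the high byte, devfn in the low byte. */
static uint16_t build_sid(uint8_t bus, uint8_t devfn)
{
    return (uint16_t)((bus << 8) | devfn);
}

/* Same test as vtd_find_as_by_sid_and_pasid(): both SID and PASID match. */
static bool as_key_match(uint8_t bus, uint8_t devfn, uint32_t pasid,
                         uint16_t target_sid, uint32_t target_pasid)
{
    return build_sid(bus, devfn) == target_sid && pasid == target_pasid;
}
```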
* [PATCH v4 12/17] intel_iommu: Add support for PASID-based device IOTLB invalidation
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (10 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 11/17] intel_iommu: Add an internal API to find an address space with PASID Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-04 2:51 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap Zhenzhong Duan
` (5 subsequent siblings)
17 siblings, 1 reply; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 11 ++++++++
hw/i386/intel_iommu.c | 50 ++++++++++++++++++++++++++++++++++
2 files changed, 61 insertions(+)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 20d922d600..2702edd27f 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -376,6 +376,7 @@ typedef union VTDInvDesc VTDInvDesc;
#define VTD_INV_DESC_WAIT 0x5 /* Invalidation Wait Descriptor */
#define VTD_INV_DESC_PIOTLB 0x6 /* PASID-IOTLB Invalidate Desc */
#define VTD_INV_DESC_PC 0x7 /* PASID-cache Invalidate Desc */
+#define VTD_INV_DESC_DEV_PIOTLB 0x8 /* PASID-based-DIOTLB inv_desc*/
#define VTD_INV_DESC_NONE 0 /* Not an Invalidate Descriptor */
/* Masks for Invalidation Wait Descriptor*/
@@ -414,6 +415,16 @@ typedef union VTDInvDesc VTDInvDesc;
#define VTD_INV_DESC_DEVICE_IOTLB_RSVD_HI 0xffeULL
#define VTD_INV_DESC_DEVICE_IOTLB_RSVD_LO 0xffff0000ffe0f1f0
+/* Mask for PASID Device IOTLB Invalidate Descriptor */
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_ADDR(val) ((val) & \
+ 0xfffffffffffff000ULL)
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_SIZE(val) ((val >> 11) & 0x1)
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_GLOBAL(val) ((val) & 0x1)
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_SID(val) (((val) >> 16) & 0xffffULL)
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_PASID(val) ((val >> 32) & 0xfffffULL)
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_HI 0x7feULL
+#define VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_LO 0xfff000000000f000ULL
+
/* Rsvd field masks for spte */
#define VTD_SPTE_SNP 0x800ULL
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index a1596ba47d..5ea59167b3 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3020,6 +3020,49 @@ static void do_invalidate_device_tlb(VTDAddressSpace *vtd_dev_as,
memory_region_notify_iommu(&vtd_dev_as->iommu, 0, event);
}
+static bool vtd_process_device_piotlb_desc(IntelIOMMUState *s,
+ VTDInvDesc *inv_desc)
+{
+ uint16_t sid;
+ VTDAddressSpace *vtd_dev_as;
+ bool size;
+ bool global;
+ hwaddr addr;
+ uint32_t pasid;
+
+ if ((inv_desc->hi & VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_HI) ||
+ (inv_desc->lo & VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_LO)) {
+ error_report_once("%s: invalid pasid-based dev iotlb inv desc:"
+ "hi=%"PRIx64 "(reserved nonzero)",
+ __func__, inv_desc->hi);
+ return false;
+ }
+
+ global = VTD_INV_DESC_PASID_DEVICE_IOTLB_GLOBAL(inv_desc->hi);
+ size = VTD_INV_DESC_PASID_DEVICE_IOTLB_SIZE(inv_desc->hi);
+ addr = VTD_INV_DESC_PASID_DEVICE_IOTLB_ADDR(inv_desc->hi);
+ sid = VTD_INV_DESC_PASID_DEVICE_IOTLB_SID(inv_desc->lo);
+ if (global) {
+ QLIST_FOREACH(vtd_dev_as, &s->vtd_as_with_notifiers, next) {
+ if ((vtd_dev_as->pasid != PCI_NO_PASID) &&
+ (PCI_BUILD_BDF(pci_bus_num(vtd_dev_as->bus),
+ vtd_dev_as->devfn) == sid)) {
+ do_invalidate_device_tlb(vtd_dev_as, size, addr);
+ }
+ }
+ } else {
+ pasid = VTD_INV_DESC_PASID_DEVICE_IOTLB_PASID(inv_desc->lo);
+ vtd_dev_as = vtd_get_as_by_sid_and_pasid(s, sid, pasid);
+ if (!vtd_dev_as) {
+ return true;
+ }
+
+ do_invalidate_device_tlb(vtd_dev_as, size, addr);
+ }
+
+ return true;
+}
+
static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
VTDInvDesc *inv_desc)
{
@@ -3106,6 +3149,13 @@ static bool vtd_process_inv_desc(IntelIOMMUState *s)
}
break;
+ case VTD_INV_DESC_DEV_PIOTLB:
+ trace_vtd_inv_desc("device-piotlb", inv_desc.hi, inv_desc.lo);
+ if (!vtd_process_device_piotlb_desc(s, &inv_desc)) {
+ return false;
+ }
+ break;
+
case VTD_INV_DESC_DEVICE:
trace_vtd_inv_desc("device", inv_desc.hi, inv_desc.lo);
if (!vtd_process_device_iotlb_desc(s, &inv_desc)) {
--
2.34.1
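[Editor's note] The field layout of the PASID-based device-IOTLB invalidate descriptor can be seen directly from the masks added above: GLOBAL is bit 0 and SID bits 31:16 of the low quadword, PASID bits 51:32; SIZE is bit 11 and the page-aligned address the upper bits of the high quadword. A standalone decode sketch mirroring the VTD_INV_DESC_PASID_DEVICE_IOTLB_* macros:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Field extraction, mirroring the VTD_INV_DESC_PASID_DEVICE_IOTLB_* macros. */
static uint64_t dev_piotlb_addr(uint64_t hi)   { return hi & 0xfffffffffffff000ULL; }
static bool     dev_piotlb_size(uint64_t hi)   { return (hi >> 11) & 0x1; }
static bool     dev_piotlb_global(uint64_t hi) { return hi & 0x1; }
static uint16_t dev_piotlb_sid(uint64_t lo)    { return (lo >> 16) & 0xffff; }
static uint32_t dev_piotlb_pasid(uint64_t lo)  { return (lo >> 32) & 0xfffffULL; }
```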
* [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (11 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 12/17] intel_iommu: Add support for PASID-based device IOTLB invalidation Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-04 3:05 ` Yi Liu
2024-11-08 4:39 ` Jason Wang
2024-09-30 9:26 ` [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode Zhenzhong Duan
` (4 subsequent siblings)
17 siblings, 2 replies; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Yi Sun, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
This is used by some emulated devices that cache address
translation results. When a piotlb invalidation is issued in the
guest, those caches should be refreshed.
For a device that caches translation results but does not
implement the ATS capability, or has it disabled, it is better
to implement or enable ATS if there is a need to cache
translation results.
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
---
hw/i386/intel_iommu.c | 35 ++++++++++++++++++++++++++++++++++-
1 file changed, 34 insertions(+), 1 deletion(-)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 5ea59167b3..91d7b1abfa 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -2908,7 +2908,7 @@ static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
continue;
}
- if (!s->scalable_modern) {
+ if (!s->scalable_modern || !vtd_as_has_map_notifier(vtd_as)) {
vtd_address_space_sync(vtd_as);
}
}
@@ -2920,6 +2920,9 @@ static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
bool ih)
{
VTDIOTLBPageInvInfo info;
+ VTDAddressSpace *vtd_as;
+ VTDContextEntry ce;
+ hwaddr size = (1 << am) * VTD_PAGE_SIZE;
info.domain_id = domain_id;
info.pasid = pasid;
@@ -2930,6 +2933,36 @@ static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
g_hash_table_foreach_remove(s->iotlb,
vtd_hash_remove_by_page_piotlb, &info);
vtd_iommu_unlock(s);
+
+ QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
+ if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
+ vtd_as->devfn, &ce) &&
+ domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
+ uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
+ IOMMUTLBEvent event;
+
+ if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
+ vtd_as->pasid != pasid) {
+ continue;
+ }
+
+ /*
+ * Page-Selective-within-PASID PASID-based-IOTLB Invalidation
+ * does not flush stage-2 entries. See spec section 6.5.2.4
+ */
+ if (!s->scalable_modern) {
+ continue;
+ }
+
+ event.type = IOMMU_NOTIFIER_UNMAP;
+ event.entry.target_as = &address_space_memory;
+ event.entry.iova = addr;
+ event.entry.perm = IOMMU_NONE;
+ event.entry.addr_mask = size - 1;
+ event.entry.translated_addr = 0;
+ memory_region_notify_iommu(&vtd_as->iommu, 0, event);
+ }
+ }
}
static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
--
2.34.1
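[Editor's note] The UNMAP event built above describes a range of 2^am pages: `size = (1 << am) * VTD_PAGE_SIZE` and `addr_mask = size - 1`. A minimal sketch of that arithmetic (using an equivalent shift instead of the multiply):

```c
#include <assert.h>
#include <stdint.h>

#define VTD_PAGE_SIZE 4096

/* Size of the invalidated range: 2^am pages of 4K each. */
static uint64_t inv_size(uint8_t am)
{
    return (uint64_t)VTD_PAGE_SIZE << am;
}

/* addr_mask as filled into the IOMMUTLBEvent: size - 1. */
static uint64_t inv_addr_mask(uint8_t am)
{
    return inv_size(am) - 1;
}
```

For am=9 (512 pages, i.e. a 2MiB range) the notifier sees addr_mask 0x1fffff.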
* [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (12 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-04 3:16 ` Yi Liu
2024-11-08 4:41 ` Jason Wang
2024-09-30 9:26 ` [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for " Zhenzhong Duan
` (3 subsequent siblings)
17 siblings, 2 replies; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
According to the VT-d spec, the stage-1 page table supports 4-level and
5-level paging.
However, 5-level paging translation emulation is not supported yet,
which means the only supported value for aw_bits is 48.
So default aw_bits to 48 in scalable modern mode. In other cases,
it still defaults to 39 for backward compatibility.
Add a check to ensure a user-specified value is 48 in modern mode
for now.
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
---
include/hw/i386/intel_iommu.h | 2 +-
hw/i386/intel_iommu.c | 10 +++++++++-
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index b843d069cc..48134bda11 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState, INTEL_IOMMU_DEVICE)
#define DMAR_REG_SIZE 0x230
#define VTD_HOST_AW_39BIT 39
#define VTD_HOST_AW_48BIT 48
-#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
+#define VTD_HOST_AW_AUTO 0xff
#define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
#define DMAR_REPORT_F_INTR (1)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 91d7b1abfa..068a08f522 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3776,7 +3776,7 @@ static Property vtd_properties[] = {
ON_OFF_AUTO_AUTO),
DEFINE_PROP_BOOL("x-buggy-eim", IntelIOMMUState, buggy_eim, false),
DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
- VTD_HOST_ADDRESS_WIDTH),
+ VTD_HOST_AW_AUTO),
DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode, FALSE),
DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode, FALSE),
DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control, false),
@@ -4683,6 +4683,14 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
}
}
+ if (s->aw_bits == VTD_HOST_AW_AUTO) {
+ if (s->scalable_modern) {
+ s->aw_bits = VTD_HOST_AW_48BIT;
+ } else {
+ s->aw_bits = VTD_HOST_AW_39BIT;
+ }
+ }
+
if (!s->scalable_modern && s->aw_bits != VTD_HOST_AW_39BIT &&
s->aw_bits != VTD_HOST_AW_48BIT) {
error_setg(errp, "%s mode: supported values for aw-bits are: %d, %d",
--
2.34.1
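[Editor's note] The new VTD_HOST_AW_AUTO default resolves as in the hunk above: 48 bits when scalable modern mode is on, 39 bits otherwise. A standalone sketch of just that branch (illustrative function name, not QEMU API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VTD_HOST_AW_39BIT 39
#define VTD_HOST_AW_48BIT 48
#define VTD_HOST_AW_AUTO  0xff

/* Resolve the "auto" default the way vtd_decide_config() does. */
static uint8_t resolve_aw_bits(uint8_t aw_bits, bool scalable_modern)
{
    if (aw_bits == VTD_HOST_AW_AUTO) {
        return scalable_modern ? VTD_HOST_AW_48BIT : VTD_HOST_AW_39BIT;
    }
    return aw_bits;  /* user-specified value is validated separately */
}
```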
* [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for scalable modern mode
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (13 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-04 4:25 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 16/17] intel_iommu: Introduce a property to control FS1GP cap bit setting Zhenzhong Duan
` (2 subsequent siblings)
17 siblings, 1 reply; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Yi Sun, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
Intel VT-d 3.0 introduces scalable mode, which has a bunch of capabilities
related to scalable mode translation, so there are multiple possible
combinations. This vIOMMU implementation simplifies things with a new
property "x-fls".
When enabled in scalable mode, first-stage translation, also known as
scalable modern mode, is supported. When enabled in legacy mode, an error
is reported.
With scalable modern mode exposed to the user, also make the PASID entry
check in vtd_pe_type_check() accurate.
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/i386/intel_iommu_internal.h | 2 ++
hw/i386/intel_iommu.c | 28 +++++++++++++++++++---------
2 files changed, 21 insertions(+), 9 deletions(-)
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 2702edd27f..f13576d334 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -195,6 +195,7 @@
#define VTD_ECAP_PASID (1ULL << 40)
#define VTD_ECAP_SMTS (1ULL << 43)
#define VTD_ECAP_SLTS (1ULL << 46)
+#define VTD_ECAP_FLTS (1ULL << 47)
/* CAP_REG */
/* (offset >> 4) << 24 */
@@ -211,6 +212,7 @@
#define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
#define VTD_CAP_DRAIN_WRITE (1ULL << 54)
#define VTD_CAP_DRAIN_READ (1ULL << 55)
+#define VTD_CAP_FS1GP (1ULL << 56)
#define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ | VTD_CAP_DRAIN_WRITE)
#define VTD_CAP_CM (1ULL << 7)
#define VTD_PASID_ID_SHIFT 20
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 068a08f522..14578655e1 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -803,16 +803,18 @@ static inline bool vtd_is_fl_level_supported(IntelIOMMUState *s, uint32_t level)
}
/* Return true if check passed, otherwise false */
-static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
- VTDPASIDEntry *pe)
+static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
{
switch (VTD_PE_GET_TYPE(pe)) {
- case VTD_SM_PASID_ENTRY_SLT:
- return true;
- case VTD_SM_PASID_ENTRY_PT:
- return x86_iommu->pt_supported;
case VTD_SM_PASID_ENTRY_FLT:
+ return !!(s->ecap & VTD_ECAP_FLTS);
+ case VTD_SM_PASID_ENTRY_SLT:
+ return !!(s->ecap & VTD_ECAP_SLTS);
case VTD_SM_PASID_ENTRY_NESTED:
+ /* Not support NESTED page table type yet */
+ return false;
+ case VTD_SM_PASID_ENTRY_PT:
+ return !!(s->ecap & VTD_ECAP_PT);
default:
/* Unknown type */
return false;
@@ -861,7 +863,6 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
uint8_t pgtt;
uint32_t index;
dma_addr_t entry_size;
- X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
index = VTD_PASID_TABLE_INDEX(pasid);
entry_size = VTD_PASID_ENTRY_SIZE;
@@ -875,7 +876,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
}
/* Do translation type check */
- if (!vtd_pe_type_check(x86_iommu, pe)) {
+ if (!vtd_pe_type_check(s, pe)) {
return -VTD_FR_PASID_TABLE_ENTRY_INV;
}
@@ -3779,6 +3780,7 @@ static Property vtd_properties[] = {
VTD_HOST_AW_AUTO),
DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode, FALSE),
DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode, FALSE),
+ DEFINE_PROP_BOOL("x-fls", IntelIOMMUState, scalable_modern, FALSE),
DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control, false),
DEFINE_PROP_BOOL("x-pasid-mode", IntelIOMMUState, pasid, false),
DEFINE_PROP_BOOL("dma-drain", IntelIOMMUState, dma_drain, true),
@@ -4509,7 +4511,10 @@ static void vtd_cap_init(IntelIOMMUState *s)
}
/* TODO: read cap/ecap from host to decide which cap to be exposed. */
- if (s->scalable_mode) {
+ if (s->scalable_modern) {
+ s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
+ s->cap |= VTD_CAP_FS1GP;
+ } else if (s->scalable_mode) {
s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
}
@@ -4683,6 +4688,11 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
}
}
+ if (!s->scalable_mode && s->scalable_modern) {
+ error_setg(errp, "Legacy mode: not support x-fls=on");
+ return false;
+ }
+
if (s->aw_bits == VTD_HOST_AW_AUTO) {
if (s->scalable_modern) {
s->aw_bits = VTD_HOST_AW_48BIT;
--
2.34.1
* [PATCH v4 16/17] intel_iommu: Introduce a property to control FS1GP cap bit setting
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (14 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for " Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-11-04 7:00 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 17/17] tests/qtest: Add intel-iommu test Zhenzhong Duan
2024-10-25 6:32 ` [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Duan, Zhenzhong
17 siblings, 1 reply; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Marcel Apfelbaum,
Paolo Bonzini, Richard Henderson, Eduardo Habkost
This gives the user flexibility to turn off FS1GP for debugging purposes.
It is also useful for the future nesting feature: when the host IOMMU
doesn't support FS1GP but the vIOMMU does, the nested page table on the
host side works after turning FS1GP off in the vIOMMU.
This property has no effect when the vIOMMU isn't in scalable modern
mode.
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
---
include/hw/i386/intel_iommu.h | 1 +
hw/i386/intel_iommu.c | 5 ++++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index 48134bda11..4d6acb2314 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -307,6 +307,7 @@ struct IntelIOMMUState {
bool dma_drain; /* Whether DMA r/w draining enabled */
bool dma_translation; /* Whether DMA translation supported */
bool pasid; /* Whether to support PASID */
+ bool fs1gp; /* First Stage 1-GByte Page Support */
/*
* Protects IOMMU states in general. Currently it protects the
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 14578655e1..f8f196aeed 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3785,6 +3785,7 @@ static Property vtd_properties[] = {
DEFINE_PROP_BOOL("x-pasid-mode", IntelIOMMUState, pasid, false),
DEFINE_PROP_BOOL("dma-drain", IntelIOMMUState, dma_drain, true),
DEFINE_PROP_BOOL("dma-translation", IntelIOMMUState, dma_translation, true),
+ DEFINE_PROP_BOOL("fs1gp", IntelIOMMUState, fs1gp, true),
DEFINE_PROP_END_OF_LIST(),
};
@@ -4513,7 +4514,9 @@ static void vtd_cap_init(IntelIOMMUState *s)
/* TODO: read cap/ecap from host to decide which cap to be exposed. */
if (s->scalable_modern) {
s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
- s->cap |= VTD_CAP_FS1GP;
+ if (s->fs1gp) {
+ s->cap |= VTD_CAP_FS1GP;
+ }
} else if (s->scalable_mode) {
s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
}
--
2.34.1
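[Editor's note] Patches 15 and 16 together reduce the modern-mode capability setup to a small bit composition: SMTS+FLTS in ecap, plus FS1GP in cap only when the property is left enabled. A sketch of that vtd_cap_init() branch, with the bit values taken from the patches and the legacy/SRS path omitted:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VTD_ECAP_SMTS (1ULL << 43)
#define VTD_ECAP_SLTS (1ULL << 46)
#define VTD_ECAP_FLTS (1ULL << 47)
#define VTD_CAP_FS1GP (1ULL << 56)

/* ecap bits advertised in scalable modern mode: SMTS plus first-stage. */
static uint64_t modern_ecap(void)
{
    return VTD_ECAP_SMTS | VTD_ECAP_FLTS;
}

/* cap contribution: FS1GP only when the "fs1gp" property is true. */
static uint64_t modern_cap_extra(bool fs1gp)
{
    return fs1gp ? VTD_CAP_FS1GP : 0;
}
```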
* [PATCH v4 17/17] tests/qtest: Add intel-iommu test
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (15 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 16/17] intel_iommu: Introduce a property to control FS1GP cap bit setting Zhenzhong Duan
@ 2024-09-30 9:26 ` Zhenzhong Duan
2024-09-30 9:52 ` Duan, Zhenzhong
2024-10-25 6:32 ` [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Duan, Zhenzhong
17 siblings, 1 reply; 67+ messages in thread
From: Zhenzhong Duan @ 2024-09-30 9:26 UTC (permalink / raw)
To: qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Zhenzhong Duan, Thomas Huth,
Marcel Apfelbaum, Laurent Vivier, Paolo Bonzini
Add the framework to test the intel-iommu device.
Currently it only tests cap/ecap bits correctness in scalable
modern mode, and also tests cap/ecap bits consistency before
and after system reset.
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
MAINTAINERS | 1 +
include/hw/i386/intel_iommu.h | 1 +
tests/qtest/intel-iommu-test.c | 65 ++++++++++++++++++++++++++++++++++
tests/qtest/meson.build | 1 +
4 files changed, 68 insertions(+)
create mode 100644 tests/qtest/intel-iommu-test.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 62f5255f40..331b7c7a13 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3679,6 +3679,7 @@ S: Supported
F: hw/i386/intel_iommu.c
F: hw/i386/intel_iommu_internal.h
F: include/hw/i386/intel_iommu.h
+F: tests/qtest/intel-iommu-test.c
AMD-Vi Emulation
S: Orphan
diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
index 4d6acb2314..a1858898f1 100644
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -47,6 +47,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState, INTEL_IOMMU_DEVICE)
#define VTD_HOST_AW_48BIT 48
#define VTD_HOST_AW_AUTO 0xff
#define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
+#define VTD_MGAW_FROM_CAP(cap) ((cap >> 16) & 0x3fULL)
#define DMAR_REPORT_F_INTR (1)
diff --git a/tests/qtest/intel-iommu-test.c b/tests/qtest/intel-iommu-test.c
new file mode 100644
index 0000000000..6131e20117
--- /dev/null
+++ b/tests/qtest/intel-iommu-test.c
@@ -0,0 +1,65 @@
+/*
+ * QTest testcase for intel-iommu
+ *
+ * Copyright (c) 2024 Intel, Inc.
+ *
+ * Author: Zhenzhong Duan <zhenzhong.duan@intel.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "libqtest.h"
+#include "hw/i386/intel_iommu_internal.h"
+
+#define CAP_MODERN_FIXED1 (VTD_CAP_FRO | VTD_CAP_NFR | VTD_CAP_ND | \
+ VTD_CAP_MAMV | VTD_CAP_PSI | VTD_CAP_SLLPS)
+#define ECAP_MODERN_FIXED1 (VTD_ECAP_QI | VTD_ECAP_IR | VTD_ECAP_IRO | \
+ VTD_ECAP_MHMV | VTD_ECAP_SMTS | VTD_ECAP_FLTS)
+
+static inline uint64_t vtd_reg_readq(QTestState *s, uint64_t offset)
+{
+ return qtest_readq(s, Q35_HOST_BRIDGE_IOMMU_ADDR + offset);
+}
+
+static void test_intel_iommu_modern(void)
+{
+ uint8_t init_csr[DMAR_REG_SIZE]; /* register values */
+ uint8_t post_reset_csr[DMAR_REG_SIZE]; /* register values */
+ uint64_t cap, ecap, tmp;
+ QTestState *s;
+
+ s = qtest_init("-M q35 -device intel-iommu,x-scalable-mode=modern");
+
+ cap = vtd_reg_readq(s, DMAR_CAP_REG);
+ g_assert((cap & CAP_MODERN_FIXED1) == CAP_MODERN_FIXED1);
+
+ tmp = cap & VTD_CAP_SAGAW_MASK;
+ g_assert(tmp == (VTD_CAP_SAGAW_39bit | VTD_CAP_SAGAW_48bit));
+
+ tmp = VTD_MGAW_FROM_CAP(cap);
+ g_assert(tmp == VTD_HOST_AW_48BIT - 1);
+
+ ecap = vtd_reg_readq(s, DMAR_ECAP_REG);
+ g_assert((ecap & ECAP_MODERN_FIXED1) == ECAP_MODERN_FIXED1);
+
+ qtest_memread(s, Q35_HOST_BRIDGE_IOMMU_ADDR, init_csr, DMAR_REG_SIZE);
+
+ qobject_unref(qtest_qmp(s, "{ 'execute': 'system_reset' }"));
+ qtest_qmp_eventwait(s, "RESET");
+
+ qtest_memread(s, Q35_HOST_BRIDGE_IOMMU_ADDR, post_reset_csr, DMAR_REG_SIZE);
+ /* Ensure registers are consistent after hard reset */
+ g_assert(!memcmp(init_csr, post_reset_csr, DMAR_REG_SIZE));
+
+ qtest_quit(s);
+}
+
+int main(int argc, char **argv)
+{
+ g_test_init(&argc, &argv, NULL);
+ qtest_add_func("/q35/intel-iommu/modern", test_intel_iommu_modern);
+
+ return g_test_run();
+}
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 310865e49c..8a928caf70 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -90,6 +90,7 @@ qtests_i386 = \
(config_all_devices.has_key('CONFIG_SB16') ? ['fuzz-sb16-test'] : []) + \
(config_all_devices.has_key('CONFIG_SDHCI_PCI') ? ['fuzz-sdcard-test'] : []) + \
(config_all_devices.has_key('CONFIG_ESP_PCI') ? ['am53c974-test'] : []) + \
+ (config_all_devices.has_key('CONFIG_VTD') ? ['intel-iommu-test'] : []) + \
(host_os != 'windows' and \
config_all_devices.has_key('CONFIG_ACPI_ERST') ? ['erst-test'] : []) + \
(config_all_devices.has_key('CONFIG_PCIE_PORT') and \
--
2.34.1
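[Editor's note] The qtest's MGAW assertion relies on the encoding visible in the new VTD_MGAW_FROM_CAP() macro: bits 21:16 of CAP_REG hold the maximum guest address width minus one, so a 48-bit vIOMMU reports 47. A round-trip sketch (the encoder is illustrative, not QEMU API):

```c
#include <assert.h>
#include <stdint.h>

/* MGAW field of CAP_REG, as in the patch: bits 21:16 = (address width - 1). */
#define VTD_MGAW_FROM_CAP(cap) (((cap) >> 16) & 0x3fULL)

/* Illustrative encoder: place an address width into the MGAW field. */
static uint64_t cap_encode_mgaw(unsigned aw_bits)
{
    return ((uint64_t)(aw_bits - 1) & 0x3f) << 16;
}
```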
* RE: [PATCH v4 17/17] tests/qtest: Add intel-iommu test
2024-09-30 9:26 ` [PATCH v4 17/17] tests/qtest: Add intel-iommu test Zhenzhong Duan
@ 2024-09-30 9:52 ` Duan, Zhenzhong
0 siblings, 0 replies; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-09-30 9:52 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Liu, Yi L,
Peng, Chao P, Thomas Huth, Marcel Apfelbaum, Laurent Vivier,
Paolo Bonzini
Sorry, I forgot to update to the new parameters "x-scalable-mode=on,x-fls=on"; will resend this patch only.
Thanks
Zhenzhong
>-----Original Message-----
>From: Duan, Zhenzhong <zhenzhong.duan@intel.com>
>Subject: [PATCH v4 17/17] tests/qtest: Add intel-iommu test
>
>Add the framework to test the intel-iommu device.
>
>Currently only tested cap/ecap bits correctness in scalable
>modern mode. Also tested cap/ecap bits consistency before
>and after system reset.
>
>Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>Acked-by: Thomas Huth <thuth@redhat.com>
>Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>Acked-by: Jason Wang <jasowang@redhat.com>
>---
> MAINTAINERS | 1 +
> include/hw/i386/intel_iommu.h | 1 +
> tests/qtest/intel-iommu-test.c | 65
>++++++++++++++++++++++++++++++++++
> tests/qtest/meson.build | 1 +
> 4 files changed, 68 insertions(+)
> create mode 100644 tests/qtest/intel-iommu-test.c
>
>diff --git a/MAINTAINERS b/MAINTAINERS
>index 62f5255f40..331b7c7a13 100644
>--- a/MAINTAINERS
>+++ b/MAINTAINERS
>@@ -3679,6 +3679,7 @@ S: Supported
> F: hw/i386/intel_iommu.c
> F: hw/i386/intel_iommu_internal.h
> F: include/hw/i386/intel_iommu.h
>+F: tests/qtest/intel-iommu-test.c
>
> AMD-Vi Emulation
> S: Orphan
>diff --git a/include/hw/i386/intel_iommu.h
>b/include/hw/i386/intel_iommu.h
>index 4d6acb2314..a1858898f1 100644
>--- a/include/hw/i386/intel_iommu.h
>+++ b/include/hw/i386/intel_iommu.h
>@@ -47,6 +47,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState,
>INTEL_IOMMU_DEVICE)
> #define VTD_HOST_AW_48BIT 48
> #define VTD_HOST_AW_AUTO 0xff
> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
>+#define VTD_MGAW_FROM_CAP(cap) ((cap >> 16) & 0x3fULL)
>
> #define DMAR_REPORT_F_INTR (1)
>
>diff --git a/tests/qtest/intel-iommu-test.c b/tests/qtest/intel-iommu-test.c
>new file mode 100644
>index 0000000000..6131e20117
>--- /dev/null
>+++ b/tests/qtest/intel-iommu-test.c
>@@ -0,0 +1,65 @@
>+/*
>+ * QTest testcase for intel-iommu
>+ *
>+ * Copyright (c) 2024 Intel, Inc.
>+ *
>+ * Author: Zhenzhong Duan <zhenzhong.duan@intel.com>
>+ *
>+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
>+ * See the COPYING file in the top-level directory.
>+ */
>+
>+#include "qemu/osdep.h"
>+#include "libqtest.h"
>+#include "hw/i386/intel_iommu_internal.h"
>+
>+#define CAP_MODERN_FIXED1 (VTD_CAP_FRO | VTD_CAP_NFR | VTD_CAP_ND | \
>+                           VTD_CAP_MAMV | VTD_CAP_PSI | VTD_CAP_SLLPS)
>+#define ECAP_MODERN_FIXED1 (VTD_ECAP_QI | VTD_ECAP_IR | VTD_ECAP_IRO | \
>+                            VTD_ECAP_MHMV | VTD_ECAP_SMTS | VTD_ECAP_FLTS)
>+
>+static inline uint64_t vtd_reg_readq(QTestState *s, uint64_t offset)
>+{
>+ return qtest_readq(s, Q35_HOST_BRIDGE_IOMMU_ADDR + offset);
>+}
>+
>+static void test_intel_iommu_modern(void)
>+{
>+ uint8_t init_csr[DMAR_REG_SIZE]; /* register values */
>+ uint8_t post_reset_csr[DMAR_REG_SIZE]; /* register values */
>+ uint64_t cap, ecap, tmp;
>+ QTestState *s;
>+
>+ s = qtest_init("-M q35 -device intel-iommu,x-scalable-mode=modern");
>+
>+ cap = vtd_reg_readq(s, DMAR_CAP_REG);
>+ g_assert((cap & CAP_MODERN_FIXED1) == CAP_MODERN_FIXED1);
>+
>+ tmp = cap & VTD_CAP_SAGAW_MASK;
>+ g_assert(tmp == (VTD_CAP_SAGAW_39bit | VTD_CAP_SAGAW_48bit));
>+
>+ tmp = VTD_MGAW_FROM_CAP(cap);
>+ g_assert(tmp == VTD_HOST_AW_48BIT - 1);
>+
>+ ecap = vtd_reg_readq(s, DMAR_ECAP_REG);
>+ g_assert((ecap & ECAP_MODERN_FIXED1) == ECAP_MODERN_FIXED1);
>+
>+ qtest_memread(s, Q35_HOST_BRIDGE_IOMMU_ADDR, init_csr, DMAR_REG_SIZE);
>+
>+ qobject_unref(qtest_qmp(s, "{ 'execute': 'system_reset' }"));
>+ qtest_qmp_eventwait(s, "RESET");
>+
>+ qtest_memread(s, Q35_HOST_BRIDGE_IOMMU_ADDR, post_reset_csr, DMAR_REG_SIZE);
>+ /* Ensure registers are consistent after hard reset */
>+ g_assert(!memcmp(init_csr, post_reset_csr, DMAR_REG_SIZE));
>+
>+ qtest_quit(s);
>+}
>+
>+int main(int argc, char **argv)
>+{
>+ g_test_init(&argc, &argv, NULL);
>+ qtest_add_func("/q35/intel-iommu/modern", test_intel_iommu_modern);
>+
>+ return g_test_run();
>+}
>diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
>index 310865e49c..8a928caf70 100644
>--- a/tests/qtest/meson.build
>+++ b/tests/qtest/meson.build
>@@ -90,6 +90,7 @@ qtests_i386 = \
> (config_all_devices.has_key('CONFIG_SB16') ? ['fuzz-sb16-test'] : []) + \
> (config_all_devices.has_key('CONFIG_SDHCI_PCI') ? ['fuzz-sdcard-test'] : []) + \
> (config_all_devices.has_key('CONFIG_ESP_PCI') ? ['am53c974-test'] : []) + \
>+ (config_all_devices.has_key('CONFIG_VTD') ? ['intel-iommu-test'] : []) + \
> (host_os != 'windows' and \
> config_all_devices.has_key('CONFIG_ACPI_ERST') ? ['erst-test'] : []) + \
> (config_all_devices.has_key('CONFIG_PCIE_PORT') and \
>--
>2.34.1
^ permalink raw reply [flat|nested] 67+ messages in thread
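The two assertions at the heart of the qtest above reduce to small bit manipulations: "every required capability bit is set" is a mask-subset check, and MGAW (CAP_REG bits 21:16, per the VTD_MGAW_FROM_CAP macro in the patch) encodes the maximum guest address width minus one. A minimal standalone sketch — the helper names `has_all_bits` and `mgaw_from_cap` are illustrative, not QEMU APIs:

```c
#include <stdbool.h>
#include <stdint.h>

/* True if every bit set in 'required' is also set in 'reg'. */
static bool has_all_bits(uint64_t reg, uint64_t required)
{
    return (reg & required) == required;
}

/* MGAW lives in CAP_REG bits 21:16 and encodes (max guest address width - 1),
 * which is why the test expects VTD_HOST_AW_48BIT - 1. */
static uint32_t mgaw_from_cap(uint64_t cap)
{
    return (cap >> 16) & 0x3f;
}
```

This is the same pattern the test applies to CAP_MODERN_FIXED1 and ECAP_MODERN_FIXED1: a device regression that clears any single fixed bit makes the subset check fail.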
* Re: [PATCH v4 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-09-30 9:26 ` [PATCH v4 03/17] intel_iommu: Add a placeholder variable for scalable modern mode Zhenzhong Duan
@ 2024-10-04 5:22 ` CLEMENT MATHIEU--DRIF
2024-11-03 14:21 ` Yi Liu
1 sibling, 0 replies; 67+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-10-04 5:22 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, yi.l.liu@intel.com, chao.p.peng@intel.com,
Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
LGTM, thanks for the update
Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>
> Add a new element, scalable_modern, in IntelIOMMUState to mark scalable
> modern mode; this element will eventually be exposed as an intel_iommu
> property.
>
> For now, it's only a placeholder, used for the address width
> compatibility check and to block host device passthrough until nesting
> is supported.
>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> include/hw/i386/intel_iommu.h | 1 +
> hw/i386/intel_iommu.c | 23 ++++++++++++++++++-----
> 2 files changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index 1eb05c29fc..788ed42477 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>
> bool caching_mode; /* RO - is cap CM enabled? */
> bool scalable_mode; /* RO - is Scalable Mode supported? */
> + bool scalable_modern; /* RO - is modern SM supported? */
> bool snoop_control; /* RO - is SNP filed supported? */
>
> dma_addr_t root; /* Current root table pointer */
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index be7c8a670b..9e6ef0cb99 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -3872,7 +3872,13 @@ static bool vtd_check_hiod(IntelIOMMUState *s, HostIOMMUDevice *hiod,
> return false;
> }
>
> - return true;
> + if (!s->scalable_modern) {
> + /* All checks requested by VTD non-modern mode pass */
> + return true;
> + }
> +
> + error_setg(errp, "host device is unsupported in scalable modern mode yet");
> + return false;
> }
>
> static bool vtd_dev_set_iommu_device(PCIBus *bus, void *opaque, int devfn,
> @@ -4257,14 +4263,21 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
> }
> }
>
> - /* Currently only address widths supported are 39 and 48 bits */
> - if ((s->aw_bits != VTD_HOST_AW_39BIT) &&
> - (s->aw_bits != VTD_HOST_AW_48BIT)) {
> - error_setg(errp, "Supported values for aw-bits are: %d, %d",
> + if (!s->scalable_modern && s->aw_bits != VTD_HOST_AW_39BIT &&
> + s->aw_bits != VTD_HOST_AW_48BIT) {
> + error_setg(errp, "%s mode: supported values for aw-bits are: %d, %d",
> + s->scalable_mode ? "Scalable" : "Legacy",
> VTD_HOST_AW_39BIT, VTD_HOST_AW_48BIT);
> return false;
> }
>
> + if (s->scalable_modern && s->aw_bits != VTD_HOST_AW_48BIT) {
> + error_setg(errp,
> + "Scalable modern mode: supported values for aw-bits is: %d",
> + VTD_HOST_AW_48BIT);
> + return false;
> + }
> +
> if (s->scalable_mode && !s->dma_drain) {
> error_setg(errp, "Need to set dma_drain for scalable mode");
> return false;
> --
> 2.34.1
>
>
>
^ permalink raw reply [flat|nested] 67+ messages in thread
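The aw-bits validation split in the hunk above (legacy/scalable accept 39 or 48, scalable modern accepts only 48) can be condensed into a single predicate. A sketch under the assumption that only the mode flag and the width matter — `vtd_aw_bits_ok` is a hypothetical name, not a function in intel_iommu.c:

```c
#include <stdbool.h>

#define VTD_HOST_AW_39BIT 39
#define VTD_HOST_AW_48BIT 48

/* Mirrors the two checks added to vtd_decide_config(): non-modern modes
 * accept a 39- or 48-bit address width, scalable modern only 48-bit. */
static bool vtd_aw_bits_ok(bool scalable_modern, int aw_bits)
{
    if (!scalable_modern) {
        return aw_bits == VTD_HOST_AW_39BIT || aw_bits == VTD_HOST_AW_48BIT;
    }
    return aw_bits == VTD_HOST_AW_48BIT;
}
```

The patch keeps the checks separate so each failure path can emit a mode-specific error message, which is why the condensed form only approximates the control flow.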
* RE: [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
` (16 preceding siblings ...)
2024-09-30 9:26 ` [PATCH v4 17/17] tests/qtest: Add intel-iommu test Zhenzhong Duan
@ 2024-10-25 6:32 ` Duan, Zhenzhong
17 siblings, 0 replies; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-10-25 6:32 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Liu, Yi L,
Peng, Chao P
Hi All,
Kindly ping, any more comments?
Thanks
Zhenzhong
>-----Original Message-----
>From: Duan, Zhenzhong <zhenzhong.duan@intel.com>
>Sent: Monday, September 30, 2024 5:26 PM
>Subject: [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated
>device
>
>Hi,
>
>Per Jason Wang's suggestion, iommufd nesting series[1] is split into
>"Enable stage-1 translation for emulated device" series and
>"Enable stage-1 translation for passthrough device" series.
>
>This series enables stage-1 translation support for emulated devices
>in intel-iommu, which we call "modern" mode.
>
>PATCH1-5: Some preparatory work before supporting stage-1 translation
>PATCH6-8: Implement stage-1 translation for emulated device
>PATCH9-13: Emulate iotlb invalidation of stage-1 mapping
>PATCH14: Set default aw_bits to 48 in scalable modern mode
>PATCH15-16: Expose scalable modern mode "x-fls" and "fs1gp" to cmdline
>PATCH17: Add qtest
>
>Note that spec revision 3.4 renames "First-level" to "First-stage" and
>"Second-level" to "Second-stage", but scalable mode was added before
>that change. So we keep the old flavor First-level/fl/Second-level/sl
>in code but use stage-1/stage-2 in commit logs.
>Keep in mind that First-level/fl/stage-1 all have the same meaning,
>and likewise Second-level/sl/stage-2.
>
>Qemu code can be found at [2]
>The whole nesting series can be found at [3]
>
>[1] https://lists.gnu.org/archive/html/qemu-devel/2024-01/msg02740.html
>[2]
>https://github.com/yiliu1765/qemu/tree/zhenzhong/iommufd_stage1_emu_v4
>[3] https://github.com/yiliu1765/qemu/tree/zhenzhong/iommufd_nesting_rfcv2
>
>Thanks
>Zhenzhong
>
>Changelog:
>v4:
>- s/Scalable legacy/Scalable in logging (Clement)
>- test the mode first to make the intention clearer (Clement)
>- s/x-cap-fs1gp/fs1gp and s/VTD_FL_RW_MASK/VTD_FL_RW (Jason)
>- introduce x-fls instead of updating x-scalable-mode (Jason)
>- Refine comment log in patch4 (jason)
>- s/tansltion/translation/ and s/VTD_SPTE_RSVD_LEN/VTD_FPTE_RSVD_LEN/
>(Liuyi)
>- Update the order and naming of VTD_FPTE_PAGE_* (Liuyi)
>
>v3:
>- drop unnecessary !(s->ecap & VTD_ECAP_SMTS) (Clement)
>- simplify calculation of return value for vtd_iova_fl_check_canonical() (Liuyi)
>- make A/D bit setting atomic (Liuyi)
>- refine error msg (Clement, Liuyi)
>
>v2:
>- check ecap/cap bits instead of s->scalable_modern in vtd_pe_type_check()
>(Clement)
>- declare VTD_ECAP_FLTS/FS1GP after the feature is implemented (Clement)
>- define VTD_INV_DESC_PIOTLB_G (Clement)
>- make error msg consistent in vtd_process_piotlb_desc() (Clement)
>- refine commit log in patch16 (Clement)
>- add VTD_ECAP_IR to ECAP_MODERN_FIXED1 (Clement)
>- add a knob x-cap-fs1gp to control stage-1 1G paging capability
>- collect Clement's R-B
>
>v1:
>- define VTD_HOST_AW_AUTO (Clement)
>- passing pgtt as a parameter to vtd_update_iotlb (Clement)
>- prefix sl_/fl_ to second/first level specific functions (Clement)
>- pick reserved bit check from Clement, add his Co-developed-by
>- Update test without using libqtest-single.h (Thomas)
>
>rfcv2:
>- split from nesting series (Jason)
>- merged some commits from Clement
>- add qtest (jason)
>
>
>Clément Mathieu--Drif (4):
> intel_iommu: Check if the input address is canonical
> intel_iommu: Set accessed and dirty bits during first stage
> translation
> intel_iommu: Add an internal API to find an address space with PASID
> intel_iommu: Add support for PASID-based device IOTLB invalidation
>
>Yi Liu (2):
> intel_iommu: Rename slpte to pte
> intel_iommu: Implement stage-1 translation
>
>Yu Zhang (1):
> intel_iommu: Use the latest fault reasons defined by spec
>
>Zhenzhong Duan (10):
> intel_iommu: Make pasid entry type check accurate
> intel_iommu: Add a placeholder variable for scalable modern mode
> intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb
> invalidation
> intel_iommu: Flush stage-1 cache in iotlb invalidation
> intel_iommu: Process PASID-based iotlb invalidation
> intel_iommu: piotlb invalidation should notify unmap
> intel_iommu: Set default aw_bits to 48 in scalable modern mode
> intel_iommu: Introduce a property x-fls for scalable modern mode
> intel_iommu: Introduce a property to control FS1GP cap bit setting
> tests/qtest: Add intel-iommu test
>
> MAINTAINERS | 1 +
> hw/i386/intel_iommu_internal.h | 92 ++++-
> include/hw/i386/intel_iommu.h | 8 +-
> hw/i386/intel_iommu.c | 681 +++++++++++++++++++++++++++------
> tests/qtest/intel-iommu-test.c | 65 ++++
> tests/qtest/meson.build | 1 +
> 6 files changed, 716 insertions(+), 132 deletions(-)
> create mode 100644 tests/qtest/intel-iommu-test.c
>
>--
>2.34.1
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 03/17] intel_iommu: Add a placeholder variable for scalable modern mode
2024-09-30 9:26 ` [PATCH v4 03/17] intel_iommu: Add a placeholder variable for scalable modern mode Zhenzhong Duan
2024-10-04 5:22 ` CLEMENT MATHIEU--DRIF
@ 2024-11-03 14:21 ` Yi Liu
1 sibling, 0 replies; 67+ messages in thread
From: Yi Liu @ 2024-11-03 14:21 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> Add a new element, scalable_modern, in IntelIOMMUState to mark scalable
> modern mode; this element will eventually be exposed as an intel_iommu
> property.
>
> For now, it's only a placeholder, used for the address width
> compatibility check and to block host device passthrough until nesting
> is supported.
>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> include/hw/i386/intel_iommu.h | 1 +
> hw/i386/intel_iommu.c | 23 ++++++++++++++++++-----
> 2 files changed, 19 insertions(+), 5 deletions(-)
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index 1eb05c29fc..788ed42477 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -262,6 +262,7 @@ struct IntelIOMMUState {
>
> bool caching_mode; /* RO - is cap CM enabled? */
> bool scalable_mode; /* RO - is Scalable Mode supported? */
> + bool scalable_modern; /* RO - is modern SM supported? */
> bool snoop_control; /* RO - is SNP filed supported? */
>
> dma_addr_t root; /* Current root table pointer */
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index be7c8a670b..9e6ef0cb99 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -3872,7 +3872,13 @@ static bool vtd_check_hiod(IntelIOMMUState *s, HostIOMMUDevice *hiod,
> return false;
> }
>
> - return true;
> + if (!s->scalable_modern) {
> + /* All checks requested by VTD non-modern mode pass */
> + return true;
> + }
> +
> + error_setg(errp, "host device is unsupported in scalable modern mode yet");
> + return false;
> }
>
> static bool vtd_dev_set_iommu_device(PCIBus *bus, void *opaque, int devfn,
> @@ -4257,14 +4263,21 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
> }
> }
>
> - /* Currently only address widths supported are 39 and 48 bits */
> - if ((s->aw_bits != VTD_HOST_AW_39BIT) &&
> - (s->aw_bits != VTD_HOST_AW_48BIT)) {
> - error_setg(errp, "Supported values for aw-bits are: %d, %d",
> + if (!s->scalable_modern && s->aw_bits != VTD_HOST_AW_39BIT &&
> + s->aw_bits != VTD_HOST_AW_48BIT) {
> + error_setg(errp, "%s mode: supported values for aw-bits are: %d, %d",
> + s->scalable_mode ? "Scalable" : "Legacy",
> VTD_HOST_AW_39BIT, VTD_HOST_AW_48BIT);
> return false;
> }
>
> + if (s->scalable_modern && s->aw_bits != VTD_HOST_AW_48BIT) {
> + error_setg(errp,
> + "Scalable modern mode: supported values for aw-bits is: %d",
> + VTD_HOST_AW_48BIT);
> + return false;
> + }
> +
> if (s->scalable_mode && !s->dma_drain) {
> error_setg(errp, "Need to set dma_drain for scalable mode");
> return false;
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 06/17] intel_iommu: Implement stage-1 translation
2024-09-30 9:26 ` [PATCH v4 06/17] intel_iommu: Implement stage-1 translation Zhenzhong Duan
@ 2024-11-03 14:21 ` Yi Liu
2024-11-04 3:05 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-03 14:21 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Yi Sun, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> From: Yi Liu <yi.l.liu@intel.com>
>
> This adds stage-1 page table walking to support stage-1 only
> translation in scalable modern mode.
>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> Co-developed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
> Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> hw/i386/intel_iommu_internal.h | 24 ++++++
> hw/i386/intel_iommu.c | 143 ++++++++++++++++++++++++++++++++-
> 2 files changed, 163 insertions(+), 4 deletions(-)
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index 20fcc73938..38bf0c7a06 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -428,6 +428,22 @@ typedef union VTDInvDesc VTDInvDesc;
> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>
> +/* Rsvd field masks for fpte */
> +#define VTD_FS_UPPER_IGNORED 0xfff0000000000000ULL
> +#define VTD_FPTE_PAGE_L1_RSVD_MASK(aw) \
> + (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
> +#define VTD_FPTE_PAGE_L2_RSVD_MASK(aw) \
> + (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
> +#define VTD_FPTE_PAGE_L3_RSVD_MASK(aw) \
> + (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
> +#define VTD_FPTE_PAGE_L4_RSVD_MASK(aw) \
> + (0x80ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
> +
> +#define VTD_FPTE_LPAGE_L2_RSVD_MASK(aw) \
> + (0x1fe000ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
> +#define VTD_FPTE_LPAGE_L3_RSVD_MASK(aw) \
> + (0x3fffe000ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
> +
> /* Masks for PIOTLB Invalidate Descriptor */
> #define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
> #define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
> @@ -520,6 +536,14 @@ typedef struct VTDRootEntry VTDRootEntry;
> #define VTD_SM_PASID_ENTRY_AW 7ULL /* Adjusted guest-address-width */
> #define VTD_SM_PASID_ENTRY_DID(val) ((val) & VTD_DOMAIN_ID_MASK)
>
> +#define VTD_SM_PASID_ENTRY_FLPM 3ULL
> +#define VTD_SM_PASID_ENTRY_FLPTPTR (~0xfffULL)
> +
> +/* First Level Paging Structure */
> +/* Masks for First Level Paging Entry */
> +#define VTD_FL_P 1ULL
> +#define VTD_FL_RW (1ULL << 1)
> +
> /* Second Level Page Translation Pointer*/
> #define VTD_SM_PASID_ENTRY_SLPTPTR (~0xfffULL)
>
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 6f2414898c..56d5933e93 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -48,6 +48,8 @@
>
> /* pe operations */
> #define VTD_PE_GET_TYPE(pe) ((pe)->val[0] & VTD_SM_PASID_ENTRY_PGTT)
> +#define VTD_PE_GET_FL_LEVEL(pe) \
> + (4 + (((pe)->val[2] >> 2) & VTD_SM_PASID_ENTRY_FLPM))
> #define VTD_PE_GET_SL_LEVEL(pe) \
> (2 + (((pe)->val[0] >> 2) & VTD_SM_PASID_ENTRY_AW))
>
> @@ -755,6 +757,11 @@ static inline bool vtd_is_sl_level_supported(IntelIOMMUState *s, uint32_t level)
> (1ULL << (level - 2 + VTD_CAP_SAGAW_SHIFT));
> }
>
> +static inline bool vtd_is_fl_level_supported(IntelIOMMUState *s, uint32_t level)
> +{
> + return level == VTD_PML4_LEVEL;
> +}
> +
> /* Return true if check passed, otherwise false */
> static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
> VTDPASIDEntry *pe)
> @@ -838,6 +845,11 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> return -VTD_FR_PASID_TABLE_ENTRY_INV;
> }
>
> + if (pgtt == VTD_SM_PASID_ENTRY_FLT &&
> + !vtd_is_fl_level_supported(s, VTD_PE_GET_FL_LEVEL(pe))) {
> + return -VTD_FR_PASID_TABLE_ENTRY_INV;
> + }
> +
> return 0;
> }
>
> @@ -973,7 +985,11 @@ static uint32_t vtd_get_iova_level(IntelIOMMUState *s,
>
> if (s->root_scalable) {
> vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
> - return VTD_PE_GET_SL_LEVEL(&pe);
> + if (s->scalable_modern) {
> + return VTD_PE_GET_FL_LEVEL(&pe);
> + } else {
> + return VTD_PE_GET_SL_LEVEL(&pe);
> + }
> }
>
> return vtd_ce_get_level(ce);
> @@ -1060,7 +1076,11 @@ static dma_addr_t vtd_get_iova_pgtbl_base(IntelIOMMUState *s,
>
> if (s->root_scalable) {
> vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
> - return pe.val[0] & VTD_SM_PASID_ENTRY_SLPTPTR;
> + if (s->scalable_modern) {
> + return pe.val[2] & VTD_SM_PASID_ENTRY_FLPTPTR;
> + } else {
> + return pe.val[0] & VTD_SM_PASID_ENTRY_SLPTPTR;
> + }
> }
>
> return vtd_ce_get_slpt_base(ce);
> @@ -1862,6 +1882,104 @@ out:
> trace_vtd_pt_enable_fast_path(source_id, success);
> }
>
> +/*
> + * Rsvd field masks for fpte:
> + * vtd_fpte_rsvd 4k pages
> + * vtd_fpte_rsvd_large large pages
> + *
> + * We support only 4-level page tables.
> + */
> +#define VTD_FPTE_RSVD_LEN 5
> +static uint64_t vtd_fpte_rsvd[VTD_FPTE_RSVD_LEN];
> +static uint64_t vtd_fpte_rsvd_large[VTD_FPTE_RSVD_LEN];
> +
> +static bool vtd_flpte_nonzero_rsvd(uint64_t flpte, uint32_t level)
> +{
> + uint64_t rsvd_mask;
> +
> + /*
> + * We should have caught a guest-mis-programmed level earlier,
> + * via vtd_is_fl_level_supported.
> + */
> + assert(level < VTD_FPTE_RSVD_LEN);
> + /*
> + * Zero level doesn't exist. The smallest level is VTD_PT_LEVEL=1 and
> + * checked by vtd_is_last_pte().
> + */
> + assert(level);
> +
> + if ((level == VTD_PD_LEVEL || level == VTD_PDP_LEVEL) &&
> + (flpte & VTD_PT_PAGE_SIZE_MASK)) {
> + /* large page */
> + rsvd_mask = vtd_fpte_rsvd_large[level];
> + } else {
> + rsvd_mask = vtd_fpte_rsvd[level];
> + }
> +
> + return flpte & rsvd_mask;
> +}
> +
> +static inline bool vtd_flpte_present(uint64_t flpte)
> +{
> + return !!(flpte & VTD_FL_P);
> +}
> +
> +/*
> + * Given the @iova, get relevant @flptep. @flpte_level will be the last level
> + * of the translation, can be used for deciding the size of large page.
> + */
> +static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
> + uint64_t iova, bool is_write,
> + uint64_t *flptep, uint32_t *flpte_level,
> + bool *reads, bool *writes, uint8_t aw_bits,
> + uint32_t pasid)
> +{
> + dma_addr_t addr = vtd_get_iova_pgtbl_base(s, ce, pasid);
> + uint32_t level = vtd_get_iova_level(s, ce, pasid);
> + uint32_t offset;
> + uint64_t flpte;
> +
> + while (true) {
> + offset = vtd_iova_level_offset(iova, level);
> + flpte = vtd_get_pte(addr, offset);
> +
> + if (flpte == (uint64_t)-1) {
> + if (level == vtd_get_iova_level(s, ce, pasid)) {
> + /* Invalid programming of context-entry */
> + return -VTD_FR_CONTEXT_ENTRY_INV;
> + } else {
> + return -VTD_FR_PAGING_ENTRY_INV;
> + }
> + }
> + if (!vtd_flpte_present(flpte)) {
> + *reads = false;
> + *writes = false;
> + return -VTD_FR_PAGING_ENTRY_INV;
> + }
> + *reads = true;
> + *writes = (*writes) && (flpte & VTD_FL_RW);
> + if (is_write && !(flpte & VTD_FL_RW)) {
> + return -VTD_FR_WRITE;
> + }
> + if (vtd_flpte_nonzero_rsvd(flpte, level)) {
> + error_report_once("%s: detected flpte reserved non-zero "
> + "iova=0x%" PRIx64 ", level=0x%" PRIx32
> + "flpte=0x%" PRIx64 ", pasid=0x%" PRIX32 ")",
> + __func__, iova, level, flpte, pasid);
> + return -VTD_FR_PAGING_ENTRY_RSVD;
> + }
> +
> + if (vtd_is_last_pte(flpte, level)) {
> + *flptep = flpte;
> + *flpte_level = level;
> + return 0;
> + }
> +
> + addr = vtd_get_pte_addr(flpte, aw_bits);
> + level--;
> + }
As I replied to the last version, this should also check the interrupt
range for the translation result. I saw your reply, but that only covers
the input address; my comment is about the output address.
[1]
https://lore.kernel.org/qemu-devel/SJ0PR11MB6744D2B572D278DAF8BF267692762@SJ0PR11MB6744.namprd11.prod.outlook.com/
> +}
> +
> static void vtd_report_fault(IntelIOMMUState *s,
> int err, bool is_fpd_set,
> uint16_t source_id,
> @@ -2010,8 +2128,13 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
> }
> }
>
> - ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
> - &reads, &writes, s->aw_bits, pasid);
> + if (s->scalable_modern && s->root_scalable) {
> + ret_fr = vtd_iova_to_flpte(s, &ce, addr, is_write, &pte, &level,
> + &reads, &writes, s->aw_bits, pasid);
> + } else {
> + ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
> + &reads, &writes, s->aw_bits, pasid);
> + }
> if (ret_fr) {
> vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
> addr, is_write, pasid != PCI_NO_PASID, pasid);
> @@ -4239,6 +4362,18 @@ static void vtd_init(IntelIOMMUState *s)
> vtd_spte_rsvd_large[2] = VTD_SPTE_LPAGE_L2_RSVD_MASK(s->aw_bits);
> vtd_spte_rsvd_large[3] = VTD_SPTE_LPAGE_L3_RSVD_MASK(s->aw_bits);
>
> + /*
> + * Rsvd field masks for fpte
> + */
> + vtd_fpte_rsvd[0] = ~0ULL;
> + vtd_fpte_rsvd[1] = VTD_FPTE_PAGE_L1_RSVD_MASK(s->aw_bits);
> + vtd_fpte_rsvd[2] = VTD_FPTE_PAGE_L2_RSVD_MASK(s->aw_bits);
> + vtd_fpte_rsvd[3] = VTD_FPTE_PAGE_L3_RSVD_MASK(s->aw_bits);
> + vtd_fpte_rsvd[4] = VTD_FPTE_PAGE_L4_RSVD_MASK(s->aw_bits);
> +
> + vtd_fpte_rsvd_large[2] = VTD_FPTE_LPAGE_L2_RSVD_MASK(s->aw_bits);
> + vtd_fpte_rsvd_large[3] = VTD_FPTE_LPAGE_L3_RSVD_MASK(s->aw_bits);
> +
> if (s->scalable_mode || s->snoop_control) {
> vtd_spte_rsvd[1] &= ~VTD_SPTE_SNP;
> vtd_spte_rsvd_large[2] &= ~VTD_SPTE_SNP;
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
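The loop in vtd_iova_to_flpte() above consumes the IOVA nine bits at a time above the 4 KiB page offset, exactly like a CPU 4-level page-table walk: level 4 (PML4) uses bits 47:39, down to level 1 using bits 20:12. A sketch of the index computation the walk relies on — `iova_level_offset` is an illustrative name for what vtd_iova_level_offset() computes in intel_iommu.c:

```c
#include <stdint.h>

/* 4 KiB pages, 9 index bits per paging level (512 entries per table). */
#define VTD_PAGE_SHIFT 12
#define VTD_LEVEL_BITS 9

/* Index into the page table at 'level' (1 = leaf PT, 4 = PML4) for 'iova'. */
static uint32_t iova_level_offset(uint64_t iova, uint32_t level)
{
    return (iova >> (VTD_PAGE_SHIFT + (level - 1) * VTD_LEVEL_BITS)) & 0x1ff;
}
```

Each iteration extracts this 9-bit index, reads one 64-bit entry, and either descends (level--) or stops when vtd_is_last_pte() sees a leaf; the per-level vtd_fpte_rsvd[] masks are what catch guest-programmed reserved bits along the way.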
* Re: [PATCH v4 07/17] intel_iommu: Check if the input address is canonical
2024-09-30 9:26 ` [PATCH v4 07/17] intel_iommu: Check if the input address is canonical Zhenzhong Duan
@ 2024-11-03 14:22 ` Yi Liu
0 siblings, 0 replies; 67+ messages in thread
From: Yi Liu @ 2024-11-03 14:22 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>
> First stage translation must fail if the address to translate is
> not canonical.
>
> Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> hw/i386/intel_iommu_internal.h | 2 ++
> hw/i386/intel_iommu.c | 23 +++++++++++++++++++++++
> 2 files changed, 25 insertions(+)
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index 38bf0c7a06..57c50648ce 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -320,6 +320,8 @@ typedef enum VTDFaultReason {
> VTD_FR_PASID_ENTRY_P = 0x59,
> VTD_FR_PASID_TABLE_ENTRY_INV = 0x5b, /*Invalid PASID table entry */
>
> + VTD_FR_FS_NON_CANONICAL = 0x80, /* SNG.1 : Address for FS not canonical.*/
> +
> /* Output address in the interrupt address range for scalable mode */
> VTD_FR_SM_INTERRUPT_ADDR = 0x87,
> VTD_FR_MAX, /* Guard */
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 56d5933e93..ec0596c2b2 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -1821,6 +1821,7 @@ static const bool vtd_qualified_faults[] = {
> [VTD_FR_PASID_ENTRY_P] = true,
> [VTD_FR_PASID_TABLE_ENTRY_INV] = true,
> [VTD_FR_SM_INTERRUPT_ADDR] = true,
> + [VTD_FR_FS_NON_CANONICAL] = true,
> [VTD_FR_MAX] = false,
> };
>
> @@ -1924,6 +1925,22 @@ static inline bool vtd_flpte_present(uint64_t flpte)
> return !!(flpte & VTD_FL_P);
> }
>
> +/* Return true if IOVA is canonical, otherwise false. */
> +static bool vtd_iova_fl_check_canonical(IntelIOMMUState *s, uint64_t iova,
> + VTDContextEntry *ce, uint32_t pasid)
> +{
> + uint64_t iova_limit = vtd_iova_limit(s, ce, s->aw_bits, pasid);
> + uint64_t upper_bits_mask = ~(iova_limit - 1);
> + uint64_t upper_bits = iova & upper_bits_mask;
> + bool msb = ((iova & (iova_limit >> 1)) != 0);
> +
> + if (msb) {
> + return upper_bits == upper_bits_mask;
> + } else {
> + return !upper_bits;
> + }
> +}
> +
> /*
> * Given the @iova, get relevant @flptep. @flpte_level will be the last level
> * of the translation, can be used for deciding the size of large page.
> @@ -1939,6 +1956,12 @@ static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
> uint32_t offset;
> uint64_t flpte;
>
> + if (!vtd_iova_fl_check_canonical(s, iova, ce, pasid)) {
> + error_report_once("%s: detected non canonical IOVA (iova=0x%" PRIx64 ","
> + "pasid=0x%" PRIx32 ")", __func__, iova, pasid);
> + return -VTD_FR_FS_NON_CANONICAL;
> + }
> +
> while (true) {
> offset = vtd_iova_level_offset(iova, level);
> flpte = vtd_get_pte(addr, offset);
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
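The check reviewed above implements the usual sign-extension ("canonical") rule: all bits above the address width must replicate the top in-range bit. Assuming the IOVA limit is simply 1 << aw_bits (vtd_iova_limit() may derive it differently per pasid entry), an equivalent standalone version — `iova_is_canonical` is an illustrative name:

```c
#include <stdbool.h>
#include <stdint.h>

/* Equivalent of vtd_iova_fl_check_canonical(): for an aw_bits-wide space,
 * bits [63:aw_bits] must all equal bit (aw_bits - 1). */
static bool iova_is_canonical(uint64_t iova, unsigned aw_bits)
{
    uint64_t limit = 1ULL << aw_bits;     /* e.g. 1 << 48 */
    uint64_t upper_mask = ~(limit - 1);   /* bits above the address width */
    uint64_t upper = iova & upper_mask;
    bool msb = (iova & (limit >> 1)) != 0; /* bit aw_bits - 1 */

    return msb ? (upper == upper_mask) : (upper == 0);
}
```

With aw_bits = 48 this accepts 0x0000_7fff_ffff_ffff and 0xffff_8000_0000_0000 but rejects 0x0000_8000_0000_0000, matching the SNG.1 (VTD_FR_FS_NON_CANONICAL) fault condition the patch reports.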
* Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation
2024-09-30 9:26 ` [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation Zhenzhong Duan
@ 2024-11-04 2:49 ` Yi Liu
2024-11-04 7:37 ` CLEMENT MATHIEU--DRIF
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-04 2:49 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> Per spec section 6.5.2.4, PASID-selective PASID-based iotlb invalidation
> will flush stage-2 iotlb entries with matching domain id and pasid.
Also, call out that this is per Table 21 ("PASID-based-IOTLB Invalidation")
of VT-d spec 4.1.
> With scalable modern mode introduced, guest could send PASID-selective
> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
>
> By this chance, remove old IOTLB related definitions which were unused.
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> hw/i386/intel_iommu_internal.h | 14 ++++--
> hw/i386/intel_iommu.c | 88 +++++++++++++++++++++++++++++++++-
> 2 files changed, 96 insertions(+), 6 deletions(-)
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index d0f9d4589d..eec8090190 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -403,11 +403,6 @@ typedef union VTDInvDesc VTDInvDesc;
> #define VTD_INV_DESC_IOTLB_AM(val) ((val) & 0x3fULL)
> #define VTD_INV_DESC_IOTLB_RSVD_LO 0xffffffff0000f100ULL
> #define VTD_INV_DESC_IOTLB_RSVD_HI 0xf80ULL
> -#define VTD_INV_DESC_IOTLB_PASID_PASID (2ULL << 4)
> -#define VTD_INV_DESC_IOTLB_PASID_PAGE (3ULL << 4)
> -#define VTD_INV_DESC_IOTLB_PASID(val) (((val) >> 32) & VTD_PASID_ID_MASK)
> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_LO 0xfff00000000001c0ULL
> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_HI 0xf80ULL
>
> /* Mask for Device IOTLB Invalidate Descriptor */
> #define VTD_INV_DESC_DEVICE_IOTLB_ADDR(val) ((val) & 0xfffffffffffff000ULL)
> @@ -433,6 +428,15 @@ typedef union VTDInvDesc VTDInvDesc;
> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>
> +/* Masks for PIOTLB Invalidate Descriptor */
> +#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & VTD_DOMAIN_ID_MASK)
> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
> +
> /* Information about page-selective IOTLB invalidate */
> struct VTDIOTLBPageInvInfo {
> uint16_t domain_id;
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 9e6ef0cb99..72c9c91d4f 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -2656,6 +2656,86 @@ static bool vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
> return true;
> }
>
> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
> + gpointer user_data)
> +{
> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
> +
> + return ((entry->domain_id == info->domain_id) &&
> + (entry->pasid == info->pasid));
> +}
> +
> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
> + uint16_t domain_id, uint32_t pasid)
> +{
> + VTDIOTLBPageInvInfo info;
> + VTDAddressSpace *vtd_as;
> + VTDContextEntry ce;
> +
> + info.domain_id = domain_id;
> + info.pasid = pasid;
> +
> + vtd_iommu_lock(s);
> + g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
> + &info);
> + vtd_iommu_unlock(s);
> +
> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
> + vtd_as->devfn, &ce) &&
> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
> +
> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
> + vtd_as->pasid != pasid) {
> + continue;
> + }
> +
> + if (!s->scalable_modern) {
> + vtd_address_space_sync(vtd_as);
> + }
> + }
> + }
> +}
> +
> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
> + VTDInvDesc *inv_desc)
> +{
> + uint16_t domain_id;
> + uint32_t pasid;
> +
> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
> + inv_desc->val[2] || inv_desc->val[3]) {
> + error_report_once("%s: invalid piotlb inv desc val[3]=0x%"PRIx64
> + " val[2]=0x%"PRIx64" val[1]=0x%"PRIx64
> + " val[0]=0x%"PRIx64" (reserved bits unzero)",
> + __func__, inv_desc->val[3], inv_desc->val[2],
> + inv_desc->val[1], inv_desc->val[0]);
> + return false;
> + }
Need to consider the below behaviour as well:
"
This descriptor is a 256-bit descriptor and will result in an invalid
descriptor error if submitted in an IQ that is setup to provide hardware
with 128-bit descriptors (IQA_REG.DW=0)
"
There are also descriptions for the older invalidation descriptor types
(e.g. iotlb_inv_desc) that can be either 128 bits or 256 bits:
"
If a 128-bit version of this descriptor is submitted into an IQ that is
setup to provide hardware with 256-bit descriptors or vice-versa it will
result in an invalid descriptor error.
"
If DW==1, vIOMMU fetches 32 bytes per descriptor. In that case, if the
guest submits 128-bit descriptors, the high 128 bits would be non-zero
whenever there is more than one descriptor in the queue. If there is only
one descriptor in the queue, the high 128 bits would be zero as well, but
that case can be caught by the tail register update: bit 4 is reserved
when DW==1, and the guest would use bit 4 when it submits only one
128-bit descriptor.
If DW==0, vIOMMU fetches 16 bytes per descriptor. If the guest submits a
256-bit descriptor, it would appear as two descriptors from the vIOMMU's
point of view. The first 128 bits could be identified as valid, except
for the types that do not require 256 bits; the higher 128 bits would be
subject to the descriptor sanity check as well.
Based on the above, I think you need to add two more checks: if DW==0,
vIOMMU should fail the invalidation types that require 256 bits; if
DW==1, you should check inv_desc->val[2] and inv_desc->val[3], which you
have already done in this patch.
Thoughts are welcomed here.
> +
> + domain_id = VTD_INV_DESC_PIOTLB_DID(inv_desc->val[0]);
> + pasid = VTD_INV_DESC_PIOTLB_PASID(inv_desc->val[0]);
> + switch (inv_desc->val[0] & VTD_INV_DESC_PIOTLB_G) {
> + case VTD_INV_DESC_PIOTLB_ALL_IN_PASID:
> + vtd_piotlb_pasid_invalidate(s, domain_id, pasid);
> + break;
> +
> + case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
> + break;
> +
> + default:
> + error_report_once("%s: invalid piotlb inv desc: hi=0x%"PRIx64
> + ", lo=0x%"PRIx64" (type mismatch: 0x%llx)",
> + __func__, inv_desc->val[1], inv_desc->val[0],
> + inv_desc->val[0] & VTD_INV_DESC_IOTLB_G);
> + return false;
> + }
> + return true;
> +}
> +
> static bool vtd_process_inv_iec_desc(IntelIOMMUState *s,
> VTDInvDesc *inv_desc)
> {
> @@ -2766,6 +2846,13 @@ static bool vtd_process_inv_desc(IntelIOMMUState *s)
> }
> break;
>
> + case VTD_INV_DESC_PIOTLB:
> + trace_vtd_inv_desc("p-iotlb", inv_desc.val[1], inv_desc.val[0]);
> + if (!vtd_process_piotlb_desc(s, &inv_desc)) {
> + return false;
> + }
> + break;
> +
> case VTD_INV_DESC_WAIT:
> trace_vtd_inv_desc("wait", inv_desc.hi, inv_desc.lo);
> if (!vtd_process_wait_desc(s, &inv_desc)) {
> @@ -2793,7 +2880,6 @@ static bool vtd_process_inv_desc(IntelIOMMUState *s)
> * iommu driver) work, just return true is enough so far.
> */
> case VTD_INV_DESC_PC:
> - case VTD_INV_DESC_PIOTLB:
> if (s->scalable_mode) {
> break;
> }
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 08/17] intel_iommu: Set accessed and dirty bits during first stage translation
2024-09-30 9:26 ` [PATCH v4 08/17] intel_iommu: Set accessed and dirty bits during first stage translation Zhenzhong Duan
@ 2024-11-04 2:49 ` Yi Liu
2024-11-08 3:15 ` Jason Wang
1 sibling, 0 replies; 67+ messages in thread
From: Yi Liu @ 2024-11-04 2:49 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>
> Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/i386/intel_iommu_internal.h | 3 +++
> hw/i386/intel_iommu.c | 25 ++++++++++++++++++++++++-
> 2 files changed, 27 insertions(+), 1 deletion(-)
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index 57c50648ce..4c3e75e593 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -324,6 +324,7 @@ typedef enum VTDFaultReason {
>
> /* Output address in the interrupt address range for scalable mode */
> VTD_FR_SM_INTERRUPT_ADDR = 0x87,
> + VTD_FR_FS_BIT_UPDATE_FAILED = 0x91, /* SFS.10 */
> VTD_FR_MAX, /* Guard */
> } VTDFaultReason;
>
> @@ -545,6 +546,8 @@ typedef struct VTDRootEntry VTDRootEntry;
> /* Masks for First Level Paging Entry */
> #define VTD_FL_P 1ULL
> #define VTD_FL_RW (1ULL << 1)
> +#define VTD_FL_A (1ULL << 5)
> +#define VTD_FL_D (1ULL << 6)
>
> /* Second Level Page Translation Pointer*/
> #define VTD_SM_PASID_ENTRY_SLPTPTR (~0xfffULL)
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index ec0596c2b2..99bb3f42ea 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -1822,6 +1822,7 @@ static const bool vtd_qualified_faults[] = {
> [VTD_FR_PASID_TABLE_ENTRY_INV] = true,
> [VTD_FR_SM_INTERRUPT_ADDR] = true,
> [VTD_FR_FS_NON_CANONICAL] = true,
> + [VTD_FR_FS_BIT_UPDATE_FAILED] = true,
> [VTD_FR_MAX] = false,
> };
>
> @@ -1941,6 +1942,20 @@ static bool vtd_iova_fl_check_canonical(IntelIOMMUState *s, uint64_t iova,
> }
> }
>
> +static MemTxResult vtd_set_flag_in_pte(dma_addr_t base_addr, uint32_t index,
> + uint64_t pte, uint64_t flag)
> +{
> + if (pte & flag) {
> + return MEMTX_OK;
> + }
> + pte |= flag;
> + pte = cpu_to_le64(pte);
> + return dma_memory_write(&address_space_memory,
> + base_addr + index * sizeof(pte),
> + &pte, sizeof(pte),
> + MEMTXATTRS_UNSPECIFIED);
> +}
> +
> /*
> * Given the @iova, get relevant @flptep. @flpte_level will be the last level
> * of the translation, can be used for deciding the size of large page.
> @@ -1954,7 +1969,7 @@ static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
> dma_addr_t addr = vtd_get_iova_pgtbl_base(s, ce, pasid);
> uint32_t level = vtd_get_iova_level(s, ce, pasid);
> uint32_t offset;
> - uint64_t flpte;
> + uint64_t flpte, flag_ad = VTD_FL_A;
>
> if (!vtd_iova_fl_check_canonical(s, iova, ce, pasid)) {
> error_report_once("%s: detected non canonical IOVA (iova=0x%" PRIx64 ","
> @@ -1992,6 +2007,14 @@ static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
> return -VTD_FR_PAGING_ENTRY_RSVD;
> }
>
> + if (vtd_is_last_pte(flpte, level) && is_write) {
> + flag_ad |= VTD_FL_D;
> + }
> +
> + if (vtd_set_flag_in_pte(addr, offset, flpte, flag_ad) != MEMTX_OK) {
> + return -VTD_FR_FS_BIT_UPDATE_FAILED;
> + }
> +
> if (vtd_is_last_pte(flpte, level)) {
> *flptep = flpte;
> *flpte_level = level;
--
Regards,
Yi Liu
* Re: [PATCH v4 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation
2024-09-30 9:26 ` [PATCH v4 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation Zhenzhong Duan
@ 2024-11-04 2:50 ` Yi Liu
2024-11-04 3:38 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-04 2:50 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> According to spec, Page-Selective-within-Domain Invalidation (11b):
>
> 1. IOTLB entries caching second-stage mappings (PGTT=010b) or pass-through
> (PGTT=100b) mappings associated with the specified domain-id and the
> input-address range are invalidated.
> 2. IOTLB entries caching first-stage (PGTT=001b) or nested (PGTT=011b)
> mapping associated with specified domain-id are invalidated.
>
> So per spec definition the Page-Selective-within-Domain Invalidation
> needs to flush first-stage and nested cached IOTLB entries as well.
>
> We don't support nested yet and pass-through mappings are never cached,
> so the iotlb cache holds only first-stage and second-stage mappings.
A side question: what about caching of the paging structures?
> Add a tag pgtt in VTDIOTLBEntry to mark PGTT type of the mapping and
> invalidate entries based on PGTT type.
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> include/hw/i386/intel_iommu.h | 1 +
> hw/i386/intel_iommu.c | 27 +++++++++++++++++++++------
> 2 files changed, 22 insertions(+), 6 deletions(-)
anyhow, this patch looks good to me.
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index fe9057c50d..b843d069cc 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -155,6 +155,7 @@ struct VTDIOTLBEntry {
> uint64_t pte;
> uint64_t mask;
> uint8_t access_flags;
> + uint8_t pgtt;
> };
>
> /* VT-d Source-ID Qualifier types */
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 99bb3f42ea..46bde1ad40 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -305,9 +305,21 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
> VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
> uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
> uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
> - return (entry->domain_id == info->domain_id) &&
> - (((entry->gfn & info->mask) == gfn) ||
> - (entry->gfn == gfn_tlb));
> +
> + if (entry->domain_id != info->domain_id) {
> + return false;
> + }
> +
> + /*
> + * According to spec, IOTLB entries caching first-stage (PGTT=001b) or
> + * nested (PGTT=011b) mapping associated with specified domain-id are
> + * invalidated. Nested isn't supported yet, so only need to check 001b.
> + */
> + if (entry->pgtt == VTD_SM_PASID_ENTRY_FLT) {
> + return true;
> + }
> +
> + return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
> }
>
> /* Reset all the gen of VTDAddressSpace to zero and set the gen of
> @@ -382,7 +394,7 @@ out:
> static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
> uint16_t domain_id, hwaddr addr, uint64_t pte,
> uint8_t access_flags, uint32_t level,
> - uint32_t pasid)
> + uint32_t pasid, uint8_t pgtt)
> {
> VTDIOTLBEntry *entry = g_malloc(sizeof(*entry));
> struct vtd_iotlb_key *key = g_malloc(sizeof(*key));
> @@ -400,6 +412,7 @@ static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
> entry->access_flags = access_flags;
> entry->mask = vtd_pt_level_page_mask(level);
> entry->pasid = pasid;
> + entry->pgtt = pgtt;
>
> key->gfn = gfn;
> key->sid = source_id;
> @@ -2069,7 +2082,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
> bool is_fpd_set = false;
> bool reads = true;
> bool writes = true;
> - uint8_t access_flags;
> + uint8_t access_flags, pgtt;
> bool rid2pasid = (pasid == PCI_NO_PASID) && s->root_scalable;
> VTDIOTLBEntry *iotlb_entry;
>
> @@ -2177,9 +2190,11 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
> if (s->scalable_modern && s->root_scalable) {
> ret_fr = vtd_iova_to_flpte(s, &ce, addr, is_write, &pte, &level,
> &reads, &writes, s->aw_bits, pasid);
> + pgtt = VTD_SM_PASID_ENTRY_FLT;
> } else {
> ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
> &reads, &writes, s->aw_bits, pasid);
> + pgtt = VTD_SM_PASID_ENTRY_SLT;
> }
> if (ret_fr) {
> vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
> @@ -2190,7 +2205,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
> page_mask = vtd_pt_level_page_mask(level);
> access_flags = IOMMU_ACCESS_FLAG(reads, writes);
> vtd_update_iotlb(s, source_id, vtd_get_domain_id(s, &ce, pasid),
> - addr, pte, access_flags, level, pasid);
> + addr, pte, access_flags, level, pasid, pgtt);
> out:
> vtd_iommu_unlock(s);
> entry->iova = addr & page_mask;
--
Regards,
Yi Liu
* Re: [PATCH v4 10/17] intel_iommu: Process PASID-based iotlb invalidation
2024-09-30 9:26 ` [PATCH v4 10/17] intel_iommu: Process PASID-based " Zhenzhong Duan
@ 2024-11-04 2:50 ` Yi Liu
2024-11-04 5:40 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-04 2:50 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> PASID-based IOTLB (piotlb) entries are used when walking the Intel
> VT-d stage-1 page table.
>
> This emulates the stage-1 page table iotlb invalidation requested
> by a PASID-based IOTLB Invalidate Descriptor (P_IOTLB).
>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> hw/i386/intel_iommu_internal.h | 3 +++
> hw/i386/intel_iommu.c | 45 ++++++++++++++++++++++++++++++++++
> 2 files changed, 48 insertions(+)
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index 4c3e75e593..20d922d600 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -453,6 +453,9 @@ typedef union VTDInvDesc VTDInvDesc;
> #define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
> #define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) & VTD_DOMAIN_ID_MASK)
> #define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
> +#define VTD_INV_DESC_PIOTLB_AM(val) ((val) & 0x3fULL)
> +#define VTD_INV_DESC_PIOTLB_IH(val) (((val) >> 6) & 0x1)
> +#define VTD_INV_DESC_PIOTLB_ADDR(val) ((val) & ~0xfffULL)
> #define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
> #define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 46bde1ad40..289278ce30 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -322,6 +322,28 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
> return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
> }
>
> +static gboolean vtd_hash_remove_by_page_piotlb(gpointer key, gpointer value,
> + gpointer user_data)
> +{
> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
> + uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
> + uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
> +
> + /*
> + * According to spec, PASID-based-IOTLB Invalidation in page granularity
> + * doesn't invalidate IOTLB entries caching second-stage (PGTT=010b)
> + * or pass-through (PGTT=100b) mappings. Nested isn't supported yet,
> + * so only need to check first-stage (PGTT=001b) mappings.
> + */
> + if (entry->pgtt != VTD_SM_PASID_ENTRY_FLT) {
> + return false;
> + }
> +
> + return entry->domain_id == info->domain_id && entry->pasid == info->pasid &&
> + ((entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb);
> +}
> +
> /* Reset all the gen of VTDAddressSpace to zero and set the gen of
> * IntelIOMMUState to 1. Must be called with IOMMU lock held.
> */
> @@ -2884,11 +2906,30 @@ static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
> }
> }
>
> +static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
> + uint32_t pasid, hwaddr addr, uint8_t am,
> + bool ih)
@ih is not used, perhaps you can drop it. It seems we don't cache paging
structures, so ih can be ignored for now. Besides this, the patch looks
good to me.
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
> +{
> + VTDIOTLBPageInvInfo info;
> +
> + info.domain_id = domain_id;
> + info.pasid = pasid;
> + info.addr = addr;
> + info.mask = ~((1 << am) - 1);
> +
> + vtd_iommu_lock(s);
> + g_hash_table_foreach_remove(s->iotlb,
> + vtd_hash_remove_by_page_piotlb, &info);
> + vtd_iommu_unlock(s);
> +}
> +
> static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
> VTDInvDesc *inv_desc)
> {
> uint16_t domain_id;
> uint32_t pasid;
> + uint8_t am;
> + hwaddr addr;
>
> if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
> (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
> @@ -2909,6 +2950,10 @@ static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
> break;
>
> case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
> + am = VTD_INV_DESC_PIOTLB_AM(inv_desc->val[1]);
> + addr = (hwaddr) VTD_INV_DESC_PIOTLB_ADDR(inv_desc->val[1]);
> + vtd_piotlb_page_invalidate(s, domain_id, pasid, addr, am,
> + VTD_INV_DESC_PIOTLB_IH(inv_desc->val[1]));
> break;
>
> default:
--
Regards,
Yi Liu
* Re: [PATCH v4 11/17] intel_iommu: Add an internal API to find an address space with PASID
2024-09-30 9:26 ` [PATCH v4 11/17] intel_iommu: Add an internal API to find an address space with PASID Zhenzhong Duan
@ 2024-11-04 2:50 ` Yi Liu
2024-11-04 5:47 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-04 2:50 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>
> This will be used to implement the device IOTLB invalidation
>
> Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> hw/i386/intel_iommu.c | 39 ++++++++++++++++++++++++---------------
> 1 file changed, 24 insertions(+), 15 deletions(-)
>
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 289278ce30..a1596ba47d 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -70,6 +70,11 @@ struct vtd_hiod_key {
> uint8_t devfn;
> };
>
> +struct vtd_as_raw_key {
> + uint16_t sid;
> + uint32_t pasid;
> +};
> +
> struct vtd_iotlb_key {
> uint64_t gfn;
> uint32_t pasid;
> @@ -1875,29 +1880,33 @@ static inline bool vtd_is_interrupt_addr(hwaddr addr)
> return VTD_INTERRUPT_ADDR_FIRST <= addr && addr <= VTD_INTERRUPT_ADDR_LAST;
> }
>
> -static gboolean vtd_find_as_by_sid(gpointer key, gpointer value,
> - gpointer user_data)
> +static gboolean vtd_find_as_by_sid_and_pasid(gpointer key, gpointer value,
> + gpointer user_data)
> {
> struct vtd_as_key *as_key = (struct vtd_as_key *)key;
> - uint16_t target_sid = *(uint16_t *)user_data;
> + struct vtd_as_raw_key target = *(struct vtd_as_raw_key *)user_data;
why not just define target as a pointer?
> uint16_t sid = PCI_BUILD_BDF(pci_bus_num(as_key->bus), as_key->devfn);
> - return sid == target_sid;
> +
> + return (as_key->pasid == target.pasid) &&
> + (sid == target.sid);
hence using target->pasid and target->sid here. Otherwise, looks good to me.
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
> }
>
> -static VTDAddressSpace *vtd_get_as_by_sid(IntelIOMMUState *s, uint16_t sid)
> +static VTDAddressSpace *vtd_get_as_by_sid_and_pasid(IntelIOMMUState *s,
> + uint16_t sid,
> + uint32_t pasid)
> {
> - uint8_t bus_num = PCI_BUS_NUM(sid);
> - VTDAddressSpace *vtd_as = s->vtd_as_cache[bus_num];
> -
> - if (vtd_as &&
> - (sid == PCI_BUILD_BDF(pci_bus_num(vtd_as->bus), vtd_as->devfn))) {
> - return vtd_as;
> - }
> + struct vtd_as_raw_key key = {
> + .sid = sid,
> + .pasid = pasid
> + };
>
> - vtd_as = g_hash_table_find(s->vtd_address_spaces, vtd_find_as_by_sid, &sid);
> - s->vtd_as_cache[bus_num] = vtd_as;
> + return g_hash_table_find(s->vtd_address_spaces,
> + vtd_find_as_by_sid_and_pasid, &key);
> +}
>
> - return vtd_as;
> +static VTDAddressSpace *vtd_get_as_by_sid(IntelIOMMUState *s, uint16_t sid)
> +{
> + return vtd_get_as_by_sid_and_pasid(s, sid, PCI_NO_PASID);
> }
>
> static void vtd_pt_enable_fast_path(IntelIOMMUState *s, uint16_t source_id)
--
Regards,
Yi Liu
* Re: [PATCH v4 12/17] intel_iommu: Add support for PASID-based device IOTLB invalidation
2024-09-30 9:26 ` [PATCH v4 12/17] intel_iommu: Add support for PASID-based device IOTLB invalidation Zhenzhong Duan
@ 2024-11-04 2:51 ` Yi Liu
0 siblings, 0 replies; 67+ messages in thread
From: Yi Liu @ 2024-11-04 2:51 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>
> Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/i386/intel_iommu_internal.h | 11 ++++++++
> hw/i386/intel_iommu.c | 50 ++++++++++++++++++++++++++++++++++
> 2 files changed, 61 insertions(+)
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index 20d922d600..2702edd27f 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -376,6 +376,7 @@ typedef union VTDInvDesc VTDInvDesc;
> #define VTD_INV_DESC_WAIT 0x5 /* Invalidation Wait Descriptor */
> #define VTD_INV_DESC_PIOTLB 0x6 /* PASID-IOTLB Invalidate Desc */
> #define VTD_INV_DESC_PC 0x7 /* PASID-cache Invalidate Desc */
> +#define VTD_INV_DESC_DEV_PIOTLB 0x8 /* PASID-based-DIOTLB inv_desc*/
> #define VTD_INV_DESC_NONE 0 /* Not an Invalidate Descriptor */
>
> /* Masks for Invalidation Wait Descriptor*/
> @@ -414,6 +415,16 @@ typedef union VTDInvDesc VTDInvDesc;
> #define VTD_INV_DESC_DEVICE_IOTLB_RSVD_HI 0xffeULL
> #define VTD_INV_DESC_DEVICE_IOTLB_RSVD_LO 0xffff0000ffe0f1f0
>
> +/* Mask for PASID Device IOTLB Invalidate Descriptor */
nit: s/Mask/Masks
> +#define VTD_INV_DESC_PASID_DEVICE_IOTLB_ADDR(val) ((val) & \
> + 0xfffffffffffff000ULL)
> +#define VTD_INV_DESC_PASID_DEVICE_IOTLB_SIZE(val) ((val >> 11) & 0x1)
> +#define VTD_INV_DESC_PASID_DEVICE_IOTLB_GLOBAL(val) ((val) & 0x1)
> +#define VTD_INV_DESC_PASID_DEVICE_IOTLB_SID(val) (((val) >> 16) & 0xffffULL)
> +#define VTD_INV_DESC_PASID_DEVICE_IOTLB_PASID(val) ((val >> 32) & 0xfffffULL)
> +#define VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_HI 0x7feULL
> +#define VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_LO 0xfff000000000f000ULL
> +
> /* Rsvd field masks for spte */
> #define VTD_SPTE_SNP 0x800ULL
>
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index a1596ba47d..5ea59167b3 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -3020,6 +3020,49 @@ static void do_invalidate_device_tlb(VTDAddressSpace *vtd_dev_as,
> memory_region_notify_iommu(&vtd_dev_as->iommu, 0, event);
> }
>
> +static bool vtd_process_device_piotlb_desc(IntelIOMMUState *s,
> + VTDInvDesc *inv_desc)
> +{
> + uint16_t sid;
> + VTDAddressSpace *vtd_dev_as;
> + bool size;
> + bool global;
> + hwaddr addr;
> + uint32_t pasid;
> +
> + if ((inv_desc->hi & VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_HI) ||
> + (inv_desc->lo & VTD_INV_DESC_PASID_DEVICE_IOTLB_RSVD_LO)) {
> + error_report_once("%s: invalid pasid-based dev iotlb inv desc:"
> + "hi=%"PRIx64 "(reserved nonzero)",
> + __func__, inv_desc->hi);
> + return false;
> + }
Echoing the prior comment about checking the higher 128 bits here as well.
> +
> + global = VTD_INV_DESC_PASID_DEVICE_IOTLB_GLOBAL(inv_desc->hi);
> + size = VTD_INV_DESC_PASID_DEVICE_IOTLB_SIZE(inv_desc->hi);
> + addr = VTD_INV_DESC_PASID_DEVICE_IOTLB_ADDR(inv_desc->hi);
> + sid = VTD_INV_DESC_PASID_DEVICE_IOTLB_SID(inv_desc->lo);
> + if (global) {
> + QLIST_FOREACH(vtd_dev_as, &s->vtd_as_with_notifiers, next) {
> + if ((vtd_dev_as->pasid != PCI_NO_PASID) &&
> + (PCI_BUILD_BDF(pci_bus_num(vtd_dev_as->bus),
> + vtd_dev_as->devfn) == sid)) {
> + do_invalidate_device_tlb(vtd_dev_as, size, addr);
> + }
> + }
> + } else {
> + pasid = VTD_INV_DESC_PASID_DEVICE_IOTLB_PASID(inv_desc->lo);
> + vtd_dev_as = vtd_get_as_by_sid_and_pasid(s, sid, pasid);
> + if (!vtd_dev_as) {
> + return true;
> + }
> +
> + do_invalidate_device_tlb(vtd_dev_as, size, addr);
> + }
> +
> + return true;
> +}
> +
> static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
> VTDInvDesc *inv_desc)
> {
> @@ -3106,6 +3149,13 @@ static bool vtd_process_inv_desc(IntelIOMMUState *s)
> }
> break;
>
> + case VTD_INV_DESC_DEV_PIOTLB:
> + trace_vtd_inv_desc("device-piotlb", inv_desc.hi, inv_desc.lo);
> + if (!vtd_process_device_piotlb_desc(s, &inv_desc)) {
> + return false;
> + }
> + break;
> +
> case VTD_INV_DESC_DEVICE:
> trace_vtd_inv_desc("device", inv_desc.hi, inv_desc.lo);
> if (!vtd_process_device_iotlb_desc(s, &inv_desc)) {
--
Regards,
Yi Liu
* RE: [PATCH v4 06/17] intel_iommu: Implement stage-1 translation
2024-11-03 14:21 ` Yi Liu
@ 2024-11-04 3:05 ` Duan, Zhenzhong
2024-11-04 7:02 ` Yi Liu
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-04 3:05 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Sunday, November 3, 2024 10:22 PM
>Subject: Re: [PATCH v4 06/17] intel_iommu: Implement stage-1 translation
>
>On 2024/9/30 17:26, Zhenzhong Duan wrote:
>> From: Yi Liu <yi.l.liu@intel.com>
>>
>> This adds stage-1 page table walking to support stage-1 only
>> translation in scalable modern mode.
>>
>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>> Co-developed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>> Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> Acked-by: Jason Wang <jasowang@redhat.com>
>> ---
>> hw/i386/intel_iommu_internal.h | 24 ++++++
>> hw/i386/intel_iommu.c | 143 ++++++++++++++++++++++++++++++++-
>> 2 files changed, 163 insertions(+), 4 deletions(-)
>>
>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
>> index 20fcc73938..38bf0c7a06 100644
>> --- a/hw/i386/intel_iommu_internal.h
>> +++ b/hw/i386/intel_iommu_internal.h
>> @@ -428,6 +428,22 @@ typedef union VTDInvDesc VTDInvDesc;
>> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>>
>> +/* Rsvd field masks for fpte */
>> +#define VTD_FS_UPPER_IGNORED 0xfff0000000000000ULL
>> +#define VTD_FPTE_PAGE_L1_RSVD_MASK(aw) \
>> + (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>> +#define VTD_FPTE_PAGE_L2_RSVD_MASK(aw) \
>> + (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>> +#define VTD_FPTE_PAGE_L3_RSVD_MASK(aw) \
>> + (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>> +#define VTD_FPTE_PAGE_L4_RSVD_MASK(aw) \
>> + (0x80ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>> +
>> +#define VTD_FPTE_LPAGE_L2_RSVD_MASK(aw) \
>> + (0x1fe000ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>> +#define VTD_FPTE_LPAGE_L3_RSVD_MASK(aw) \
>> + (0x3fffe000ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>> +
>> /* Masks for PIOTLB Invalidate Descriptor */
>> #define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
>> #define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>> @@ -520,6 +536,14 @@ typedef struct VTDRootEntry VTDRootEntry;
>> #define VTD_SM_PASID_ENTRY_AW 7ULL /* Adjusted guest-address-
>width */
>> #define VTD_SM_PASID_ENTRY_DID(val) ((val) & VTD_DOMAIN_ID_MASK)
>>
>> +#define VTD_SM_PASID_ENTRY_FLPM 3ULL
>> +#define VTD_SM_PASID_ENTRY_FLPTPTR (~0xfffULL)
>> +
>> +/* First Level Paging Structure */
>> +/* Masks for First Level Paging Entry */
>> +#define VTD_FL_P 1ULL
>> +#define VTD_FL_RW (1ULL << 1)
>> +
>> /* Second Level Page Translation Pointer*/
>> #define VTD_SM_PASID_ENTRY_SLPTPTR (~0xfffULL)
>>
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 6f2414898c..56d5933e93 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -48,6 +48,8 @@
>>
>> /* pe operations */
>> #define VTD_PE_GET_TYPE(pe) ((pe)->val[0] & VTD_SM_PASID_ENTRY_PGTT)
>> +#define VTD_PE_GET_FL_LEVEL(pe) \
>> + (4 + (((pe)->val[2] >> 2) & VTD_SM_PASID_ENTRY_FLPM))
>> #define VTD_PE_GET_SL_LEVEL(pe) \
>> (2 + (((pe)->val[0] >> 2) & VTD_SM_PASID_ENTRY_AW))
>>
>> @@ -755,6 +757,11 @@ static inline bool
>vtd_is_sl_level_supported(IntelIOMMUState *s, uint32_t level)
>> (1ULL << (level - 2 + VTD_CAP_SAGAW_SHIFT));
>> }
>>
>> +static inline bool vtd_is_fl_level_supported(IntelIOMMUState *s, uint32_t
>level)
>> +{
>> + return level == VTD_PML4_LEVEL;
>> +}
>> +
>> /* Return true if check passed, otherwise false */
>> static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>> VTDPASIDEntry *pe)
>> @@ -838,6 +845,11 @@ static int
>vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>> return -VTD_FR_PASID_TABLE_ENTRY_INV;
>> }
>>
>> + if (pgtt == VTD_SM_PASID_ENTRY_FLT &&
>> + !vtd_is_fl_level_supported(s, VTD_PE_GET_FL_LEVEL(pe))) {
>> + return -VTD_FR_PASID_TABLE_ENTRY_INV;
>> + }
>> +
>> return 0;
>> }
>>
>> @@ -973,7 +985,11 @@ static uint32_t vtd_get_iova_level(IntelIOMMUState
>*s,
>>
>> if (s->root_scalable) {
>> vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
>> - return VTD_PE_GET_SL_LEVEL(&pe);
>> + if (s->scalable_modern) {
>> + return VTD_PE_GET_FL_LEVEL(&pe);
>> + } else {
>> + return VTD_PE_GET_SL_LEVEL(&pe);
>> + }
>> }
>>
>> return vtd_ce_get_level(ce);
>> @@ -1060,7 +1076,11 @@ static dma_addr_t
>vtd_get_iova_pgtbl_base(IntelIOMMUState *s,
>>
>> if (s->root_scalable) {
>> vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
>> - return pe.val[0] & VTD_SM_PASID_ENTRY_SLPTPTR;
>> + if (s->scalable_modern) {
>> + return pe.val[2] & VTD_SM_PASID_ENTRY_FLPTPTR;
>> + } else {
>> + return pe.val[0] & VTD_SM_PASID_ENTRY_SLPTPTR;
>> + }
>> }
>>
>> return vtd_ce_get_slpt_base(ce);
>> @@ -1862,6 +1882,104 @@ out:
>> trace_vtd_pt_enable_fast_path(source_id, success);
>> }
>>
>> +/*
>> + * Rsvd field masks for fpte:
>> + * vtd_fpte_rsvd 4k pages
>> + * vtd_fpte_rsvd_large large pages
>> + *
>> + * We support only 4-level page tables.
>> + */
>> +#define VTD_FPTE_RSVD_LEN 5
>> +static uint64_t vtd_fpte_rsvd[VTD_FPTE_RSVD_LEN];
>> +static uint64_t vtd_fpte_rsvd_large[VTD_FPTE_RSVD_LEN];
>> +
>> +static bool vtd_flpte_nonzero_rsvd(uint64_t flpte, uint32_t level)
>> +{
>> + uint64_t rsvd_mask;
>> +
>> + /*
>> + * We should have caught a guest-mis-programmed level earlier,
>> + * via vtd_is_fl_level_supported.
>> + */
>> + assert(level < VTD_FPTE_RSVD_LEN);
>> + /*
>> + * Zero level doesn't exist. The smallest level is VTD_PT_LEVEL=1 and
>> + * checked by vtd_is_last_pte().
>> + */
>> + assert(level);
>> +
>> + if ((level == VTD_PD_LEVEL || level == VTD_PDP_LEVEL) &&
>> + (flpte & VTD_PT_PAGE_SIZE_MASK)) {
>> + /* large page */
>> + rsvd_mask = vtd_fpte_rsvd_large[level];
>> + } else {
>> + rsvd_mask = vtd_fpte_rsvd[level];
>> + }
>> +
>> + return flpte & rsvd_mask;
>> +}
>> +
>> +static inline bool vtd_flpte_present(uint64_t flpte)
>> +{
>> + return !!(flpte & VTD_FL_P);
>> +}
>> +
>> +/*
>> + * Given the @iova, get the relevant @flptep. @flpte_level will be the last
>> + * level of the translation, which can be used to decide the size of a large
>> + * page.
>> + */
>> +static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
>> + uint64_t iova, bool is_write,
>> + uint64_t *flptep, uint32_t *flpte_level,
>> + bool *reads, bool *writes, uint8_t aw_bits,
>> + uint32_t pasid)
>> +{
>> + dma_addr_t addr = vtd_get_iova_pgtbl_base(s, ce, pasid);
>> + uint32_t level = vtd_get_iova_level(s, ce, pasid);
>> + uint32_t offset;
>> + uint64_t flpte;
>> +
>> + while (true) {
>> + offset = vtd_iova_level_offset(iova, level);
>> + flpte = vtd_get_pte(addr, offset);
>> +
>> + if (flpte == (uint64_t)-1) {
>> + if (level == vtd_get_iova_level(s, ce, pasid)) {
>> + /* Invalid programming of context-entry */
>> + return -VTD_FR_CONTEXT_ENTRY_INV;
>> + } else {
>> + return -VTD_FR_PAGING_ENTRY_INV;
>> + }
>> + }
>> + if (!vtd_flpte_present(flpte)) {
>> + *reads = false;
>> + *writes = false;
>> + return -VTD_FR_PAGING_ENTRY_INV;
>> + }
>> + *reads = true;
>> + *writes = (*writes) && (flpte & VTD_FL_RW);
>> + if (is_write && !(flpte & VTD_FL_RW)) {
>> + return -VTD_FR_WRITE;
>> + }
>> + if (vtd_flpte_nonzero_rsvd(flpte, level)) {
>> +            error_report_once("%s: detected flpte reserved non-zero "
>> +                              "iova=0x%" PRIx64 ", level=0x%" PRIx32 ", "
>> +                              "flpte=0x%" PRIx64 ", pasid=0x%" PRIx32,
>> +                              __func__, iova, level, flpte, pasid);
>> + return -VTD_FR_PAGING_ENTRY_RSVD;
>> + }
>> +
>> + if (vtd_is_last_pte(flpte, level)) {
>> + *flptep = flpte;
>> + *flpte_level = level;
>> + return 0;
>> + }
>> +
>> + addr = vtd_get_pte_addr(flpte, aw_bits);
>> + level--;
>> + }
>
>As I replied in last version, it should check the ir range for the
>translation result. I saw your reply, but that only covers the input
>address, my comment is about the output addr.
>
>[1]
>https://lore.kernel.org/qemu-
>devel/SJ0PR11MB6744D2B572D278DAF8BF267692762@SJ0PR11MB6744.nampr
>d11.prod.outlook.com/
Oh, I see, you are right! As the check for the IR range is common to both
stage-2 and stage-1, I plan to move it to a common place in a separate patch
like the one below; let me know if you prefer separate checks for stage-2
and stage-1.
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -1235,7 +1235,6 @@ static int vtd_iova_to_slpte(IntelIOMMUState *s, VTDContextEntry *ce,
uint32_t offset;
uint64_t slpte;
uint64_t access_right_check;
- uint64_t xlat, size;
if (!vtd_iova_sl_range_check(s, iova, ce, aw_bits, pasid)) {
error_report_once("%s: detected IOVA overflow (iova=0x%" PRIx64 ","
@@ -1288,28 +1287,7 @@ static int vtd_iova_to_slpte(IntelIOMMUState *s, VTDContextEntry *ce,
level--;
}
- xlat = vtd_get_pte_addr(*slptep, aw_bits);
- size = ~vtd_pt_level_page_mask(level) + 1;
-
- /*
- * From VT-d spec 3.14: Untranslated requests and translation
- * requests that result in an address in the interrupt range will be
- * blocked with condition code LGN.4 or SGN.8.
- */
- if ((xlat > VTD_INTERRUPT_ADDR_LAST ||
- xlat + size - 1 < VTD_INTERRUPT_ADDR_FIRST)) {
- return 0;
- } else {
- error_report_once("%s: xlat address is in interrupt range "
- "(iova=0x%" PRIx64 ", level=0x%" PRIx32 ", "
- "slpte=0x%" PRIx64 ", write=%d, "
- "xlat=0x%" PRIx64 ", size=0x%" PRIx64 ", "
- "pasid=0x%" PRIx32 ")",
- __func__, iova, level, slpte, is_write,
- xlat, size, pasid);
- return s->scalable_mode ? -VTD_FR_SM_INTERRUPT_ADDR :
- -VTD_FR_INTERRUPT_ADDR;
- }
+ return 0;
}
typedef int (*vtd_page_walk_hook)(const IOMMUTLBEvent *event, void *private);
@@ -2201,6 +2179,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
uint8_t access_flags, pgtt;
bool rid2pasid = (pasid == PCI_NO_PASID) && s->root_scalable;
VTDIOTLBEntry *iotlb_entry;
+ uint64_t xlat, size;
/*
* We have standalone memory region for interrupt addresses, we
@@ -2312,6 +2291,29 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
&reads, &writes, s->aw_bits, pasid);
pgtt = VTD_SM_PASID_ENTRY_SLT;
}
+ if (!ret_fr) {
+ xlat = vtd_get_pte_addr(pte, s->aw_bits);
+ size = ~vtd_pt_level_page_mask(level) + 1;
+
+ /*
+ * From VT-d spec 3.14: Untranslated requests and translation
+ * requests that result in an address in the interrupt range will be
+ * blocked with condition code LGN.4 or SGN.8.
+ */
+ if ((xlat <= VTD_INTERRUPT_ADDR_LAST &&
+ xlat + size - 1 >= VTD_INTERRUPT_ADDR_FIRST)) {
+ error_report_once("%s: xlat address is in interrupt range "
+ "(iova=0x%" PRIx64 ", level=0x%" PRIx32 ", "
+ "pte=0x%" PRIx64 ", write=%d, "
+ "xlat=0x%" PRIx64 ", size=0x%" PRIx64 ", "
+ "pasid=0x%" PRIx32 ")",
+ __func__, addr, level, pte, is_write,
+ xlat, size, pasid);
+ ret_fr = s->scalable_mode ? -VTD_FR_SM_INTERRUPT_ADDR :
+ -VTD_FR_INTERRUPT_ADDR;
+ }
+ }
+
if (ret_fr) {
vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
addr, is_write, pasid != PCI_NO_PASID, pasid);
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap
2024-09-30 9:26 ` [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap Zhenzhong Duan
@ 2024-11-04 3:05 ` Yi Liu
2024-11-04 8:15 ` Duan, Zhenzhong
2024-11-08 4:39 ` Jason Wang
1 sibling, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-04 3:05 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Yi Sun, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> This is used by some emulated devices which cache address
> translation results. When a piotlb invalidation is issued in the
> guest, those caches should be refreshed.
>
> For a device that does not implement the ATS capability, or disables
> it, but still caches the translation result, it is better to
> implement the ATS cap or enable it if there is a need to cache the
> translation result.
Is there a list of such devices? I don't object to this patch, but it
may be helpful to list such devices. One day we may remove this code
when the list becomes empty.
> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> ---
> hw/i386/intel_iommu.c | 35 ++++++++++++++++++++++++++++++++++-
> 1 file changed, 34 insertions(+), 1 deletion(-)
>
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 5ea59167b3..91d7b1abfa 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -2908,7 +2908,7 @@ static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
> continue;
> }
>
> - if (!s->scalable_modern) {
> + if (!s->scalable_modern || !vtd_as_has_map_notifier(vtd_as)) {
> vtd_address_space_sync(vtd_as);
> }
> }
> @@ -2920,6 +2920,9 @@ static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
> bool ih)
> {
> VTDIOTLBPageInvInfo info;
> + VTDAddressSpace *vtd_as;
> + VTDContextEntry ce;
> + hwaddr size = (1 << am) * VTD_PAGE_SIZE;
>
> info.domain_id = domain_id;
> info.pasid = pasid;
> @@ -2930,6 +2933,36 @@ static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
> g_hash_table_foreach_remove(s->iotlb,
> vtd_hash_remove_by_page_piotlb, &info);
> vtd_iommu_unlock(s);
> +
> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
> + vtd_as->devfn, &ce) &&
> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
> + IOMMUTLBEvent event;
> +
> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
> + vtd_as->pasid != pasid) {
> + continue;
I don't quite get the logic here. Patch 4 has similar logic.
> + }
> +
> + /*
> + * Page-Selective-within-PASID PASID-based-IOTLB Invalidation
> + * does not flush stage-2 entries. See spec section 6.5.2.4
> + */
> + if (!s->scalable_modern) {
> + continue;
> + }
> +
> + event.type = IOMMU_NOTIFIER_UNMAP;
> + event.entry.target_as = &address_space_memory;
> + event.entry.iova = addr;
> + event.entry.perm = IOMMU_NONE;
> + event.entry.addr_mask = size - 1;
> + event.entry.translated_addr = 0;
> + memory_region_notify_iommu(&vtd_as->iommu, 0, event);
> + }
> + }
> }
>
> static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-09-30 9:26 ` [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode Zhenzhong Duan
@ 2024-11-04 3:16 ` Yi Liu
2024-11-04 3:19 ` Duan, Zhenzhong
2024-11-08 4:41 ` Jason Wang
1 sibling, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-04 3:16 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> According to VTD spec, stage-1 page table could support 4-level and
> 5-level paging.
>
> However, 5-level paging translation emulation is not supported yet.
> That means the only supported value for aw_bits is 48.
>
> So default aw_bits to 48 in scalable modern mode. In other cases,
> it still defaults to 39 for backward compatibility.
>
> Add a check to ensure the user-specified value is 48 in modern mode
> for now.
This is not a simple check; I think your patch makes an auto-selection
of aw_bits.
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> ---
> include/hw/i386/intel_iommu.h | 2 +-
> hw/i386/intel_iommu.c | 10 +++++++++-
> 2 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index b843d069cc..48134bda11 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState, INTEL_IOMMU_DEVICE)
> #define DMAR_REG_SIZE 0x230
> #define VTD_HOST_AW_39BIT 39
> #define VTD_HOST_AW_48BIT 48
> -#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
> +#define VTD_HOST_AW_AUTO 0xff
> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
>
> #define DMAR_REPORT_F_INTR (1)
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 91d7b1abfa..068a08f522 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -3776,7 +3776,7 @@ static Property vtd_properties[] = {
> ON_OFF_AUTO_AUTO),
> DEFINE_PROP_BOOL("x-buggy-eim", IntelIOMMUState, buggy_eim, false),
> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
> - VTD_HOST_ADDRESS_WIDTH),
> + VTD_HOST_AW_AUTO),
> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode, FALSE),
> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode, FALSE),
> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control, false),
> @@ -4683,6 +4683,14 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
> }
> }
>
> + if (s->aw_bits == VTD_HOST_AW_AUTO) {
> + if (s->scalable_modern) {
> + s->aw_bits = VTD_HOST_AW_48BIT;
> + } else {
> + s->aw_bits = VTD_HOST_AW_39BIT;
> + }
> + }
If the default value of s->aw_bits is still 39, you don't know if it's
set by the admin or the orchestration stack. This is why you need
to change it, right?
> +
> if (!s->scalable_modern && s->aw_bits != VTD_HOST_AW_39BIT &&
> s->aw_bits != VTD_HOST_AW_48BIT) {
> error_setg(errp, "%s mode: supported values for aw-bits are: %d, %d",
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-11-04 3:16 ` Yi Liu
@ 2024-11-04 3:19 ` Duan, Zhenzhong
2024-11-04 7:25 ` Yi Liu
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-04 3:19 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Monday, November 4, 2024 11:16 AM
>Subject: Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable
>modern mode
>
>On 2024/9/30 17:26, Zhenzhong Duan wrote:
>> According to VTD spec, stage-1 page table could support 4-level and
>> 5-level paging.
>>
>> However, 5-level paging translation emulation is not supported yet.
>> That means the only supported value for aw_bits is 48.
>>
>> So default aw_bits to 48 in scalable modern mode. In other cases,
>> it still defaults to 39 for backward compatibility.
>>
>> Add a check to ensure the user-specified value is 48 in modern mode
>> for now.
>
>This is not a simple check; I think your patch makes an auto-selection
>of aw_bits.
Yes, if the user doesn't specify it, we auto-select a default.
>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>> ---
>> include/hw/i386/intel_iommu.h | 2 +-
>> hw/i386/intel_iommu.c | 10 +++++++++-
>> 2 files changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
>> index b843d069cc..48134bda11 100644
>> --- a/include/hw/i386/intel_iommu.h
>> +++ b/include/hw/i386/intel_iommu.h
>> @@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState,
>INTEL_IOMMU_DEVICE)
>> #define DMAR_REG_SIZE 0x230
>> #define VTD_HOST_AW_39BIT 39
>> #define VTD_HOST_AW_48BIT 48
>> -#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
>> +#define VTD_HOST_AW_AUTO 0xff
>> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
>>
>> #define DMAR_REPORT_F_INTR (1)
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 91d7b1abfa..068a08f522 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -3776,7 +3776,7 @@ static Property vtd_properties[] = {
>> ON_OFF_AUTO_AUTO),
>> DEFINE_PROP_BOOL("x-buggy-eim", IntelIOMMUState, buggy_eim, false),
>> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
>> - VTD_HOST_ADDRESS_WIDTH),
>> + VTD_HOST_AW_AUTO),
>> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode,
>FALSE),
>> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode,
>FALSE),
>> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control,
>false),
>> @@ -4683,6 +4683,14 @@ static bool vtd_decide_config(IntelIOMMUState *s,
>Error **errp)
>> }
>> }
>>
>> + if (s->aw_bits == VTD_HOST_AW_AUTO) {
>> + if (s->scalable_modern) {
>> + s->aw_bits = VTD_HOST_AW_48BIT;
>> + } else {
>> + s->aw_bits = VTD_HOST_AW_39BIT;
>> + }
>> + }
>
>If the default value of s->aw_bits is still 39, you don't know if it's
>set by the admin or the orchestration stack. This is why you need
>to change it, right?
Exactly, that's the reason for introducing VTD_HOST_AW_AUTO (0xff).
Thanks
Zhenzhong
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation
2024-11-04 2:50 ` Yi Liu
@ 2024-11-04 3:38 ` Duan, Zhenzhong
2024-11-04 7:36 ` Yi Liu
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-04 3:38 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Monday, November 4, 2024 10:51 AM
>Subject: Re: [PATCH v4 09/17] intel_iommu: Flush stage-1 cache in iotlb
>invalidation
>
>On 2024/9/30 17:26, Zhenzhong Duan wrote:
>> According to spec, Page-Selective-within-Domain Invalidation (11b):
>>
>> 1. IOTLB entries caching second-stage mappings (PGTT=010b) or pass-through
>> (PGTT=100b) mappings associated with the specified domain-id and the
>> input-address range are invalidated.
>> 2. IOTLB entries caching first-stage (PGTT=001b) or nested (PGTT=011b)
>> mappings associated with the specified domain-id are invalidated.
>>
>> So per spec definition the Page-Selective-within-Domain Invalidation
>> needs to flush first-stage and nested cached IOTLB entries as well.
>>
>> We don't support nested yet and pass-through mappings are never cached,
>> so the iotlb cache holds only first-stage and second-stage mappings.
>
>A side question: how about caching paging structures?
We don't cache paging structures in the current vIOMMU emulation code.
I think the reason is that it's cheap for the vIOMMU to fetch a paging
structure compared to real IOMMU hardware. Even if we cached paging
structures, we would still need to compare the address tag and read
memory to get the result, so there seems to be little benefit.
Thanks
Zhenzhong
>
>> Add a tag pgtt in VTDIOTLBEntry to mark PGTT type of the mapping and
>> invalidate entries based on PGTT type.
>>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>> Acked-by: Jason Wang <jasowang@redhat.com>
>> ---
>> include/hw/i386/intel_iommu.h | 1 +
>> hw/i386/intel_iommu.c | 27 +++++++++++++++++++++------
>> 2 files changed, 22 insertions(+), 6 deletions(-)
>
>anyhow, this patch looks good to me.
>
>Reviewed-by: Yi Liu <yi.l.liu@intel.com>
>
>> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
>> index fe9057c50d..b843d069cc 100644
>> --- a/include/hw/i386/intel_iommu.h
>> +++ b/include/hw/i386/intel_iommu.h
>> @@ -155,6 +155,7 @@ struct VTDIOTLBEntry {
>> uint64_t pte;
>> uint64_t mask;
>> uint8_t access_flags;
>> + uint8_t pgtt;
>> };
>>
>> /* VT-d Source-ID Qualifier types */
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 99bb3f42ea..46bde1ad40 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -305,9 +305,21 @@ static gboolean vtd_hash_remove_by_page(gpointer
>key, gpointer value,
>> VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>> uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
>> uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
>> - return (entry->domain_id == info->domain_id) &&
>> - (((entry->gfn & info->mask) == gfn) ||
>> - (entry->gfn == gfn_tlb));
>> +
>> + if (entry->domain_id != info->domain_id) {
>> + return false;
>> + }
>> +
>> + /*
>> + * According to spec, IOTLB entries caching first-stage (PGTT=001b) or
>> + * nested (PGTT=011b) mapping associated with specified domain-id are
>> + * invalidated. Nested isn't supported yet, so only need to check 001b.
>> + */
>> + if (entry->pgtt == VTD_SM_PASID_ENTRY_FLT) {
>> + return true;
>> + }
>> +
>> + return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
>> }
>>
>> /* Reset all the gen of VTDAddressSpace to zero and set the gen of
>> @@ -382,7 +394,7 @@ out:
>> static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
>> uint16_t domain_id, hwaddr addr, uint64_t pte,
>> uint8_t access_flags, uint32_t level,
>> - uint32_t pasid)
>> + uint32_t pasid, uint8_t pgtt)
>> {
>> VTDIOTLBEntry *entry = g_malloc(sizeof(*entry));
>> struct vtd_iotlb_key *key = g_malloc(sizeof(*key));
>> @@ -400,6 +412,7 @@ static void vtd_update_iotlb(IntelIOMMUState *s,
>uint16_t source_id,
>> entry->access_flags = access_flags;
>> entry->mask = vtd_pt_level_page_mask(level);
>> entry->pasid = pasid;
>> + entry->pgtt = pgtt;
>>
>> key->gfn = gfn;
>> key->sid = source_id;
>> @@ -2069,7 +2082,7 @@ static bool
>vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
>> bool is_fpd_set = false;
>> bool reads = true;
>> bool writes = true;
>> - uint8_t access_flags;
>> + uint8_t access_flags, pgtt;
>> bool rid2pasid = (pasid == PCI_NO_PASID) && s->root_scalable;
>> VTDIOTLBEntry *iotlb_entry;
>>
>> @@ -2177,9 +2190,11 @@ static bool
>vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
>> if (s->scalable_modern && s->root_scalable) {
>> ret_fr = vtd_iova_to_flpte(s, &ce, addr, is_write, &pte, &level,
>> &reads, &writes, s->aw_bits, pasid);
>> + pgtt = VTD_SM_PASID_ENTRY_FLT;
>> } else {
>> ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
>> &reads, &writes, s->aw_bits, pasid);
>> + pgtt = VTD_SM_PASID_ENTRY_SLT;
>> }
>> if (ret_fr) {
>> vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
>> @@ -2190,7 +2205,7 @@ static bool
>vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
>> page_mask = vtd_pt_level_page_mask(level);
>> access_flags = IOMMU_ACCESS_FLAG(reads, writes);
>> vtd_update_iotlb(s, source_id, vtd_get_domain_id(s, &ce, pasid),
>> - addr, pte, access_flags, level, pasid);
>> + addr, pte, access_flags, level, pasid, pgtt);
>> out:
>> vtd_iommu_unlock(s);
>> entry->iova = addr & page_mask;
>
>--
>Regards,
>Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for scalable modern mode
2024-09-30 9:26 ` [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for " Zhenzhong Duan
@ 2024-11-04 4:25 ` Yi Liu
2024-11-04 6:25 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-04 4:25 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Yi Sun, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> Intel VT-d 3.0 introduces scalable mode, which has a bunch of capabilities
> related to scalable mode translation, thus there are multiple combinations.
>
> This vIOMMU implementation wants to simplify it with a new property "x-fls".
> When enabled in scalable mode, first-stage translation, also known as
> scalable modern mode, is supported. When enabled in legacy mode, an error
> is reported.
>
> With scalable modern mode exposed to the user, also make the pasid entry
> check in vtd_pe_type_check() more accurate.
>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Maybe a Suggested-by tag can help to understand where this idea comes from. :)
> ---
> hw/i386/intel_iommu_internal.h | 2 ++
> hw/i386/intel_iommu.c | 28 +++++++++++++++++++---------
> 2 files changed, 21 insertions(+), 9 deletions(-)
>
> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
> index 2702edd27f..f13576d334 100644
> --- a/hw/i386/intel_iommu_internal.h
> +++ b/hw/i386/intel_iommu_internal.h
> @@ -195,6 +195,7 @@
> #define VTD_ECAP_PASID (1ULL << 40)
> #define VTD_ECAP_SMTS (1ULL << 43)
> #define VTD_ECAP_SLTS (1ULL << 46)
> +#define VTD_ECAP_FLTS (1ULL << 47)
>
> /* CAP_REG */
> /* (offset >> 4) << 24 */
> @@ -211,6 +212,7 @@
> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
> #define VTD_CAP_DRAIN_READ (1ULL << 55)
> +#define VTD_CAP_FS1GP (1ULL << 56)
> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ | VTD_CAP_DRAIN_WRITE)
> #define VTD_CAP_CM (1ULL << 7)
> #define VTD_PASID_ID_SHIFT 20
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 068a08f522..14578655e1 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -803,16 +803,18 @@ static inline bool vtd_is_fl_level_supported(IntelIOMMUState *s, uint32_t level)
> }
>
> /* Return true if check passed, otherwise false */
> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
> - VTDPASIDEntry *pe)
> +static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
> {
> switch (VTD_PE_GET_TYPE(pe)) {
> - case VTD_SM_PASID_ENTRY_SLT:
> - return true;
> - case VTD_SM_PASID_ENTRY_PT:
> - return x86_iommu->pt_supported;
> case VTD_SM_PASID_ENTRY_FLT:
> + return !!(s->ecap & VTD_ECAP_FLTS);
> + case VTD_SM_PASID_ENTRY_SLT:
> + return !!(s->ecap & VTD_ECAP_SLTS);
> case VTD_SM_PASID_ENTRY_NESTED:
> + /* Not support NESTED page table type yet */
> + return false;
> + case VTD_SM_PASID_ENTRY_PT:
> + return !!(s->ecap & VTD_ECAP_PT);
> default:
> /* Unknown type */
> return false;
> @@ -861,7 +863,6 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> uint8_t pgtt;
> uint32_t index;
> dma_addr_t entry_size;
> - X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>
> index = VTD_PASID_TABLE_INDEX(pasid);
> entry_size = VTD_PASID_ENTRY_SIZE;
> @@ -875,7 +876,7 @@ static int vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
> }
>
> /* Do translation type check */
> - if (!vtd_pe_type_check(x86_iommu, pe)) {
> + if (!vtd_pe_type_check(s, pe)) {
> return -VTD_FR_PASID_TABLE_ENTRY_INV;
> }
>
> @@ -3779,6 +3780,7 @@ static Property vtd_properties[] = {
> VTD_HOST_AW_AUTO),
> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode, FALSE),
> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode, FALSE),
> + DEFINE_PROP_BOOL("x-fls", IntelIOMMUState, scalable_modern, FALSE),
> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control, false),
A question: is there any requirement on the layout of this array? Should
new fields be added at the end?
> DEFINE_PROP_BOOL("x-pasid-mode", IntelIOMMUState, pasid, false),
> DEFINE_PROP_BOOL("dma-drain", IntelIOMMUState, dma_drain, true),
> @@ -4509,7 +4511,10 @@ static void vtd_cap_init(IntelIOMMUState *s)
> }
>
> /* TODO: read cap/ecap from host to decide which cap to be exposed. */
> - if (s->scalable_mode) {
> + if (s->scalable_modern) {
> + s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
> + s->cap |= VTD_CAP_FS1GP;
> + } else if (s->scalable_mode) {
> s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
> }
>
> @@ -4683,6 +4688,11 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
> }
> }
>
> + if (!s->scalable_mode && s->scalable_modern) {
> +        error_setg(errp, "Legacy mode: x-fls=on is not supported");
> + return false;
> + }
> +
> if (s->aw_bits == VTD_HOST_AW_AUTO) {
> if (s->scalable_modern) {
> s->aw_bits = VTD_HOST_AW_48BIT;
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 10/17] intel_iommu: Process PASID-based iotlb invalidation
2024-11-04 2:50 ` Yi Liu
@ 2024-11-04 5:40 ` Duan, Zhenzhong
2024-11-04 7:05 ` Yi Liu
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-04 5:40 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Monday, November 4, 2024 10:51 AM
>Subject: Re: [PATCH v4 10/17] intel_iommu: Process PASID-based iotlb
>invalidation
>
>On 2024/9/30 17:26, Zhenzhong Duan wrote:
>> PASID-based iotlb (piotlb) is used when walking the Intel
>> VT-d stage-1 page table.
>>
>> This emulates the stage-1 page table iotlb invalidation requested
>> by a PASID-based IOTLB Invalidate Descriptor (P_IOTLB).
>>
>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>> Acked-by: Jason Wang <jasowang@redhat.com>
>> ---
>> hw/i386/intel_iommu_internal.h | 3 +++
>> hw/i386/intel_iommu.c | 45 ++++++++++++++++++++++++++++++++++
>> 2 files changed, 48 insertions(+)
>>
>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
>> index 4c3e75e593..20d922d600 100644
>> --- a/hw/i386/intel_iommu_internal.h
>> +++ b/hw/i386/intel_iommu_internal.h
>> @@ -453,6 +453,9 @@ typedef union VTDInvDesc VTDInvDesc;
>> #define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>> #define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) &
>VTD_DOMAIN_ID_MASK)
>> #define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
>> +#define VTD_INV_DESC_PIOTLB_AM(val) ((val) & 0x3fULL)
>> +#define VTD_INV_DESC_PIOTLB_IH(val) (((val) >> 6) & 0x1)
>> +#define VTD_INV_DESC_PIOTLB_ADDR(val) ((val) & ~0xfffULL)
>> #define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
>> #define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>>
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 46bde1ad40..289278ce30 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -322,6 +322,28 @@ static gboolean vtd_hash_remove_by_page(gpointer
>key, gpointer value,
>> return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
>> }
>>
>> +static gboolean vtd_hash_remove_by_page_piotlb(gpointer key, gpointer
>value,
>> + gpointer user_data)
>> +{
>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>> + uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
>> + uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
>> +
>> + /*
>> + * According to spec, PASID-based-IOTLB Invalidation in page granularity
>> + * doesn't invalidate IOTLB entries caching second-stage (PGTT=010b)
>> + * or pass-through (PGTT=100b) mappings. Nested isn't supported yet,
>> + * so only need to check first-stage (PGTT=001b) mappings.
>> + */
>> + if (entry->pgtt != VTD_SM_PASID_ENTRY_FLT) {
>> + return false;
>> + }
>> +
>> + return entry->domain_id == info->domain_id && entry->pasid == info->pasid
>&&
>> + ((entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb);
>> +}
>> +
>> /* Reset all the gen of VTDAddressSpace to zero and set the gen of
>> * IntelIOMMUState to 1. Must be called with IOMMU lock held.
>> */
>> @@ -2884,11 +2906,30 @@ static void
>vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>> }
>> }
>>
>> +static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t
>domain_id,
>> + uint32_t pasid, hwaddr addr, uint8_t am,
>> + bool ih)
>
>@ih is not used, perhaps you can drop it. It seems we don't cache paging
>structures, hence ih can be ignored so far. Besides this, the patch looks
>good to me.
OK, will drop it. But the nesting series needs it, see below.
I'll drop it in this series and add it back in the nesting series.
/**
* enum iommu_hwpt_vtd_s1_invalidate_flags - Flags for Intel VT-d
* stage-1 cache invalidation
* @IOMMU_VTD_INV_FLAGS_LEAF: Indicates whether the invalidation applies
* to all-levels page structure cache or just
* the leaf PTE cache.
*/
enum iommu_hwpt_vtd_s1_invalidate_flags {
IOMMU_VTD_INV_FLAGS_LEAF = 1 << 0,
};
Thanks
Zhenzhong
>
>Reviewed-by: Yi Liu <yi.l.liu@intel.com>
>
>> +{
>> + VTDIOTLBPageInvInfo info;
>> +
>> + info.domain_id = domain_id;
>> + info.pasid = pasid;
>> + info.addr = addr;
>> + info.mask = ~((1 << am) - 1);
>> +
>> + vtd_iommu_lock(s);
>> + g_hash_table_foreach_remove(s->iotlb,
>> + vtd_hash_remove_by_page_piotlb, &info);
>> + vtd_iommu_unlock(s);
>> +}
>> +
>> static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>> VTDInvDesc *inv_desc)
>> {
>> uint16_t domain_id;
>> uint32_t pasid;
>> + uint8_t am;
>> + hwaddr addr;
>>
>> if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
>> (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
>> @@ -2909,6 +2950,10 @@ static bool
>vtd_process_piotlb_desc(IntelIOMMUState *s,
>> break;
>>
>> case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
>> + am = VTD_INV_DESC_PIOTLB_AM(inv_desc->val[1]);
>> + addr = (hwaddr) VTD_INV_DESC_PIOTLB_ADDR(inv_desc->val[1]);
>> + vtd_piotlb_page_invalidate(s, domain_id, pasid, addr, am,
>> + VTD_INV_DESC_PIOTLB_IH(inv_desc->val[1]));
>> break;
>>
>> default:
>
>--
>Regards,
>Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
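As a side note on the page-selective invalidation above: the AM (address mask) field encodes a power-of-two range of 2^AM 4K pages, and `info.mask = ~((1 << am) - 1)` clears the low AM bits of the GFN so that all pages in the range compare equal. A small standalone sketch of that arithmetic (the helper name `vtd_psi_mask` is invented here; the patch computes the mask inline, with an int constant rather than 1ULL):

```c
#include <assert.h>
#include <stdint.h>

#define VTD_PAGE_SHIFT_4K 12

/* GFN mask for a page-selective invalidation covering 2^am 4K pages:
 * clearing the low am bits makes every GFN inside the aligned range
 * collapse to the same value. */
static uint64_t vtd_psi_mask(uint8_t am)
{
    return ~((1ULL << am) - 1);
}
```

With am=2, GFNs 0x1000..0x1003 all mask to 0x1000 and therefore match a single invalidation request, while GFN 0x1004 falls outside the range.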
* RE: [PATCH v4 11/17] intel_iommu: Add an internal API to find an address space with PASID
2024-11-04 2:50 ` Yi Liu
@ 2024-11-04 5:47 ` Duan, Zhenzhong
0 siblings, 0 replies; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-04 5:47 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Monday, November 4, 2024 10:51 AM
>Subject: Re: [PATCH v4 11/17] intel_iommu: Add an internal API to find an address
>space with PASID
>
>On 2024/9/30 17:26, Zhenzhong Duan wrote:
>> From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>>
>> This will be used to implement the device IOTLB invalidation
>>
>> Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> Acked-by: Jason Wang <jasowang@redhat.com>
>> ---
>> hw/i386/intel_iommu.c | 39 ++++++++++++++++++++++++---------------
>> 1 file changed, 24 insertions(+), 15 deletions(-)
>>
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 289278ce30..a1596ba47d 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -70,6 +70,11 @@ struct vtd_hiod_key {
>> uint8_t devfn;
>> };
>>
>> +struct vtd_as_raw_key {
>> + uint16_t sid;
>> + uint32_t pasid;
>> +};
>> +
>> struct vtd_iotlb_key {
>> uint64_t gfn;
>> uint32_t pasid;
>> @@ -1875,29 +1880,33 @@ static inline bool vtd_is_interrupt_addr(hwaddr
>addr)
>> return VTD_INTERRUPT_ADDR_FIRST <= addr && addr <=
>VTD_INTERRUPT_ADDR_LAST;
>> }
>>
>> -static gboolean vtd_find_as_by_sid(gpointer key, gpointer value,
>> - gpointer user_data)
>> +static gboolean vtd_find_as_by_sid_and_pasid(gpointer key, gpointer value,
>> + gpointer user_data)
>> {
>> struct vtd_as_key *as_key = (struct vtd_as_key *)key;
>> - uint16_t target_sid = *(uint16_t *)user_data;
>> + struct vtd_as_raw_key target = *(struct vtd_as_raw_key *)user_data;
>
>why not just define target as a pointer?
>
>> uint16_t sid = PCI_BUILD_BDF(pci_bus_num(as_key->bus), as_key->devfn);
>> - return sid == target_sid;
>> +
>> + return (as_key->pasid == target.pasid) &&
>> + (sid == target.sid);
>
>hence using target->pasid and target->sid here.
Sure, will do.
Thanks
Zhenzhong
> Otherwise, looks good to me.
>
>Reviewed-by: Yi Liu <yi.l.liu@intel.com>
>
>> }
>>
>> -static VTDAddressSpace *vtd_get_as_by_sid(IntelIOMMUState *s, uint16_t sid)
>> +static VTDAddressSpace *vtd_get_as_by_sid_and_pasid(IntelIOMMUState *s,
>> + uint16_t sid,
>> + uint32_t pasid)
>> {
>> - uint8_t bus_num = PCI_BUS_NUM(sid);
>> - VTDAddressSpace *vtd_as = s->vtd_as_cache[bus_num];
>> -
>> - if (vtd_as &&
>> - (sid == PCI_BUILD_BDF(pci_bus_num(vtd_as->bus), vtd_as->devfn))) {
>> - return vtd_as;
>> - }
>> + struct vtd_as_raw_key key = {
>> + .sid = sid,
>> + .pasid = pasid
>> + };
>>
>> - vtd_as = g_hash_table_find(s->vtd_address_spaces, vtd_find_as_by_sid,
>&sid);
>> - s->vtd_as_cache[bus_num] = vtd_as;
>> + return g_hash_table_find(s->vtd_address_spaces,
>> + vtd_find_as_by_sid_and_pasid, &key);
>> +}
>>
>> - return vtd_as;
>> +static VTDAddressSpace *vtd_get_as_by_sid(IntelIOMMUState *s, uint16_t sid)
>> +{
>> + return vtd_get_as_by_sid_and_pasid(s, sid, PCI_NO_PASID);
>> }
>>
>> static void vtd_pt_enable_fast_path(IntelIOMMUState *s, uint16_t source_id)
>
>--
>Regards,
>Yi Liu
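To make the (SID, PASID) matching discussed above concrete, here is a simplified standalone model of the lookup the patch adds. The struct layouts and helper follow the quoted diff in spirit only; this is a sketch, not the QEMU code (QEMU keys on a `PCIBus` pointer and `devfn`, and searches a GHashTable):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified keys: QEMU's vtd_as_key holds (bus, devfn, pasid); the
 * raw key holds the already-built source-id plus the pasid. */
struct vtd_as_key     { uint8_t bus_num; uint8_t devfn; uint32_t pasid; };
struct vtd_as_raw_key { uint16_t sid; uint32_t pasid; };

#define PCI_BUILD_BDF(bus, devfn) (((bus) << 8) | (devfn))

/* Match predicate: rebuild the SID from bus/devfn and require both
 * SID and PASID to match. Per the review comment, the target is
 * accessed through a pointer rather than copied by value. */
static bool vtd_find_as_by_sid_and_pasid(const struct vtd_as_key *as_key,
                                         const struct vtd_as_raw_key *target)
{
    uint16_t sid = PCI_BUILD_BDF(as_key->bus_num, as_key->devfn);

    return as_key->pasid == target->pasid && sid == target->sid;
}
```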
* RE: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for scalable modern mode
2024-11-04 4:25 ` Yi Liu
@ 2024-11-04 6:25 ` Duan, Zhenzhong
2024-11-04 7:23 ` Yi Liu
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-04 6:25 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Monday, November 4, 2024 12:25 PM
>Subject: Re: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for
>scalable modern mode
>
>On 2024/9/30 17:26, Zhenzhong Duan wrote:
>> Intel VT-d 3.0 introduces scalable mode, and it has a bunch of capabilities
>> related to scalable mode translation, thus there are multiple combinations.
>>
>> This vIOMMU implementation wants to simplify it with a new property "x-fls".
>> When enabled in scalable mode, first stage translation also known as scalable
>> modern mode is supported. When enabled in legacy mode, an error is reported.
>>
>> With scalable modern mode exposed to the user, also tighten the pasid entry
>> check in vtd_pe_type_check().
>>
>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>
>Maybe a Suggested-by tag can help to understand where this idea comes from. :)
Will add:
Suggested-by: Jason Wang <jasowang@redhat.com>
>
>> ---
>> hw/i386/intel_iommu_internal.h | 2 ++
>> hw/i386/intel_iommu.c | 28 +++++++++++++++++++---------
>> 2 files changed, 21 insertions(+), 9 deletions(-)
>>
>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
>> index 2702edd27f..f13576d334 100644
>> --- a/hw/i386/intel_iommu_internal.h
>> +++ b/hw/i386/intel_iommu_internal.h
>> @@ -195,6 +195,7 @@
>> #define VTD_ECAP_PASID (1ULL << 40)
>> #define VTD_ECAP_SMTS (1ULL << 43)
>> #define VTD_ECAP_SLTS (1ULL << 46)
>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>
>> /* CAP_REG */
>> /* (offset >> 4) << 24 */
>> @@ -211,6 +212,7 @@
>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>> +#define VTD_CAP_FS1GP (1ULL << 56)
>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>VTD_CAP_DRAIN_WRITE)
>> #define VTD_CAP_CM (1ULL << 7)
>> #define VTD_PASID_ID_SHIFT 20
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 068a08f522..14578655e1 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -803,16 +803,18 @@ static inline bool
>vtd_is_fl_level_supported(IntelIOMMUState *s, uint32_t level)
>> }
>>
>> /* Return true if check passed, otherwise false */
>> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>> - VTDPASIDEntry *pe)
>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
>> {
>> switch (VTD_PE_GET_TYPE(pe)) {
>> - case VTD_SM_PASID_ENTRY_SLT:
>> - return true;
>> - case VTD_SM_PASID_ENTRY_PT:
>> - return x86_iommu->pt_supported;
>> case VTD_SM_PASID_ENTRY_FLT:
>> + return !!(s->ecap & VTD_ECAP_FLTS);
>> + case VTD_SM_PASID_ENTRY_SLT:
>> + return !!(s->ecap & VTD_ECAP_SLTS);
>> case VTD_SM_PASID_ENTRY_NESTED:
>> + /* Not support NESTED page table type yet */
>> + return false;
>> + case VTD_SM_PASID_ENTRY_PT:
>> + return !!(s->ecap & VTD_ECAP_PT);
>> default:
>> /* Unknown type */
>> return false;
>> @@ -861,7 +863,6 @@ static int
>vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>> uint8_t pgtt;
>> uint32_t index;
>> dma_addr_t entry_size;
>> - X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>>
>> index = VTD_PASID_TABLE_INDEX(pasid);
>> entry_size = VTD_PASID_ENTRY_SIZE;
>> @@ -875,7 +876,7 @@ static int
>vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>> }
>>
>> /* Do translation type check */
>> - if (!vtd_pe_type_check(x86_iommu, pe)) {
>> + if (!vtd_pe_type_check(s, pe)) {
>> return -VTD_FR_PASID_TABLE_ENTRY_INV;
>> }
>>
>> @@ -3779,6 +3780,7 @@ static Property vtd_properties[] = {
>> VTD_HOST_AW_AUTO),
>> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode,
>FALSE),
>> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode,
>FALSE),
>> + DEFINE_PROP_BOOL("x-fls", IntelIOMMUState, scalable_modern, FALSE),
>> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control,
>false),
>
>a question: is there any requirement on the layout of this array? Should
>new fields be added at the end?
Looked over the history; it seems we don't have an explicit ordering rule in vtd_properties.
I put "x-fls" right under "x-scalable-mode" since stage-1 is a sub-feature of scalable mode.
Let me know if you'd prefer it added at the end.
Thanks
Zhenzhong
>
>> DEFINE_PROP_BOOL("x-pasid-mode", IntelIOMMUState, pasid, false),
>> DEFINE_PROP_BOOL("dma-drain", IntelIOMMUState, dma_drain, true),
>> @@ -4509,7 +4511,10 @@ static void vtd_cap_init(IntelIOMMUState *s)
>> }
>>
>> /* TODO: read cap/ecap from host to decide which cap to be exposed. */
>> - if (s->scalable_mode) {
>> + if (s->scalable_modern) {
>> + s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
>> + s->cap |= VTD_CAP_FS1GP;
>> + } else if (s->scalable_mode) {
>> s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
>> }
>>
>> @@ -4683,6 +4688,11 @@ static bool vtd_decide_config(IntelIOMMUState *s,
>Error **errp)
>> }
>> }
>>
>> + if (!s->scalable_mode && s->scalable_modern) {
>> + error_setg(errp, "Legacy mode: not support x-fls=on");
>> + return false;
>> + }
>> +
>> if (s->aw_bits == VTD_HOST_AW_AUTO) {
>> if (s->scalable_modern) {
>> s->aw_bits = VTD_HOST_AW_48BIT;
>
>--
>Regards,
>Yi Liu
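The ecap-driven type check discussed above can be modeled in a few lines. This is a standalone sketch, not the QEMU function: the PGTT encodings (first-stage 001b, second-stage 010b, nested 011b, pass-through 100b) come from the VT-d spec, FLTS/SLTS bit positions come from the quoted patch, and the PT ecap bit position (bit 6) is an assumption here since it is not shown in this thread:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VTD_ECAP_PT   (1ULL << 6)   /* assumed bit position */
#define VTD_ECAP_SLTS (1ULL << 46)  /* from the quoted patch */
#define VTD_ECAP_FLTS (1ULL << 47)  /* from the quoted patch */

/* PGTT field values from the VT-d spec. */
enum { PGTT_FLT = 1, PGTT_SLT = 2, PGTT_NESTED = 3, PGTT_PT = 4 };

/* A pasid entry's translation type is valid only if the matching
 * capability is advertised; nested is rejected unconditionally
 * because this series does not support it. */
static bool vtd_pe_type_check(uint64_t ecap, int pgtt)
{
    switch (pgtt) {
    case PGTT_FLT:    return !!(ecap & VTD_ECAP_FLTS);
    case PGTT_SLT:    return !!(ecap & VTD_ECAP_SLTS);
    case PGTT_NESTED: return false;
    case PGTT_PT:     return !!(ecap & VTD_ECAP_PT);
    default:          return false;
    }
}
```

With x-fls=on the model advertises FLTS but not SLTS, so first-stage entries pass and second-stage entries are rejected, matching the vtd_cap_init() change in the patch.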
* Re: [PATCH v4 16/17] intel_iommu: Introduce a property to control FS1GP cap bit setting
2024-09-30 9:26 ` [PATCH v4 16/17] intel_iommu: Introduce a property to control FS1GP cap bit setting Zhenzhong Duan
@ 2024-11-04 7:00 ` Yi Liu
2024-11-08 4:45 ` Jason Wang
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-04 7:00 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex.williamson, clg, eric.auger, mst, peterx, jasowang, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
chao.p.peng, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 2024/9/30 17:26, Zhenzhong Duan wrote:
> This gives user flexibility to turn off FS1GP for debug purpose.
>
> It is also useful for future nesting feature. When host IOMMU doesn't
> support FS1GP but vIOMMU does, nested page table on host side works
> after turn FS1GP off in vIOMMU.
s/turn/turning
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
> This property has no effect when vIOMMU isn't in scalable modern
> mode.
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> ---
> include/hw/i386/intel_iommu.h | 1 +
> hw/i386/intel_iommu.c | 5 ++++-
> 2 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index 48134bda11..4d6acb2314 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -307,6 +307,7 @@ struct IntelIOMMUState {
> bool dma_drain; /* Whether DMA r/w draining enabled */
> bool dma_translation; /* Whether DMA translation supported */
> bool pasid; /* Whether to support PASID */
> + bool fs1gp; /* First Stage 1-GByte Page Support */
>
> /*
> * Protects IOMMU states in general. Currently it protects the
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 14578655e1..f8f196aeed 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -3785,6 +3785,7 @@ static Property vtd_properties[] = {
> DEFINE_PROP_BOOL("x-pasid-mode", IntelIOMMUState, pasid, false),
> DEFINE_PROP_BOOL("dma-drain", IntelIOMMUState, dma_drain, true),
> DEFINE_PROP_BOOL("dma-translation", IntelIOMMUState, dma_translation, true),
> + DEFINE_PROP_BOOL("fs1gp", IntelIOMMUState, fs1gp, true),
> DEFINE_PROP_END_OF_LIST(),
> };
>
> @@ -4513,7 +4514,9 @@ static void vtd_cap_init(IntelIOMMUState *s)
> /* TODO: read cap/ecap from host to decide which cap to be exposed. */
> if (s->scalable_modern) {
> s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
> - s->cap |= VTD_CAP_FS1GP;
> + if (s->fs1gp) {
> + s->cap |= VTD_CAP_FS1GP;
> + }
> } else if (s->scalable_mode) {
> s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
> }
--
Regards,
Yi Liu
* Re: [PATCH v4 06/17] intel_iommu: Implement stage-1 translation
2024-11-04 3:05 ` Duan, Zhenzhong
@ 2024-11-04 7:02 ` Yi Liu
0 siblings, 0 replies; 67+ messages in thread
From: Yi Liu @ 2024-11-04 7:02 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 2024/11/4 11:05, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: Liu, Yi L <yi.l.liu@intel.com>
>> Sent: Sunday, November 3, 2024 10:22 PM
>> Subject: Re: [PATCH v4 06/17] intel_iommu: Implement stage-1 translation
>>
>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>>> From: Yi Liu <yi.l.liu@intel.com>
>>>
>>> This adds stage-1 page table walking to support stage-1 only
>>> translation in scalable modern mode.
>>>
>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>> Co-developed-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>>> Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> Acked-by: Jason Wang <jasowang@redhat.com>
>>> ---
>>> hw/i386/intel_iommu_internal.h | 24 ++++++
>>> hw/i386/intel_iommu.c | 143 ++++++++++++++++++++++++++++++++-
>>> 2 files changed, 163 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
>>> index 20fcc73938..38bf0c7a06 100644
>>> --- a/hw/i386/intel_iommu_internal.h
>>> +++ b/hw/i386/intel_iommu_internal.h
>>> @@ -428,6 +428,22 @@ typedef union VTDInvDesc VTDInvDesc;
>>> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>>>
>>> +/* Rsvd field masks for fpte */
>>> +#define VTD_FS_UPPER_IGNORED 0xfff0000000000000ULL
>>> +#define VTD_FPTE_PAGE_L1_RSVD_MASK(aw) \
>>> + (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>>> +#define VTD_FPTE_PAGE_L2_RSVD_MASK(aw) \
>>> + (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>>> +#define VTD_FPTE_PAGE_L3_RSVD_MASK(aw) \
>>> + (~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>>> +#define VTD_FPTE_PAGE_L4_RSVD_MASK(aw) \
>>> + (0x80ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>>> +
>>> +#define VTD_FPTE_LPAGE_L2_RSVD_MASK(aw) \
>>> + (0x1fe000ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>>> +#define VTD_FPTE_LPAGE_L3_RSVD_MASK(aw) \
>>> + (0x3fffe000ULL | ~(VTD_HAW_MASK(aw) | VTD_FS_UPPER_IGNORED))
>>> +
>>> /* Masks for PIOTLB Invalidate Descriptor */
>>> #define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
>>> #define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>>> @@ -520,6 +536,14 @@ typedef struct VTDRootEntry VTDRootEntry;
>>> #define VTD_SM_PASID_ENTRY_AW 7ULL /* Adjusted guest-address-
>> width */
>>> #define VTD_SM_PASID_ENTRY_DID(val) ((val) & VTD_DOMAIN_ID_MASK)
>>>
>>> +#define VTD_SM_PASID_ENTRY_FLPM 3ULL
>>> +#define VTD_SM_PASID_ENTRY_FLPTPTR (~0xfffULL)
>>> +
>>> +/* First Level Paging Structure */
>>> +/* Masks for First Level Paging Entry */
>>> +#define VTD_FL_P 1ULL
>>> +#define VTD_FL_RW (1ULL << 1)
>>> +
>>> /* Second Level Page Translation Pointer*/
>>> #define VTD_SM_PASID_ENTRY_SLPTPTR (~0xfffULL)
>>>
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 6f2414898c..56d5933e93 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -48,6 +48,8 @@
>>>
>>> /* pe operations */
>>> #define VTD_PE_GET_TYPE(pe) ((pe)->val[0] & VTD_SM_PASID_ENTRY_PGTT)
>>> +#define VTD_PE_GET_FL_LEVEL(pe) \
>>> + (4 + (((pe)->val[2] >> 2) & VTD_SM_PASID_ENTRY_FLPM))
>>> #define VTD_PE_GET_SL_LEVEL(pe) \
>>> (2 + (((pe)->val[0] >> 2) & VTD_SM_PASID_ENTRY_AW))
>>>
>>> @@ -755,6 +757,11 @@ static inline bool
>> vtd_is_sl_level_supported(IntelIOMMUState *s, uint32_t level)
>>> (1ULL << (level - 2 + VTD_CAP_SAGAW_SHIFT));
>>> }
>>>
>>> +static inline bool vtd_is_fl_level_supported(IntelIOMMUState *s, uint32_t
>> level)
>>> +{
>>> + return level == VTD_PML4_LEVEL;
>>> +}
>>> +
>>> /* Return true if check passed, otherwise false */
>>> static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>>> VTDPASIDEntry *pe)
>>> @@ -838,6 +845,11 @@ static int
>> vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>> return -VTD_FR_PASID_TABLE_ENTRY_INV;
>>> }
>>>
>>> + if (pgtt == VTD_SM_PASID_ENTRY_FLT &&
>>> + !vtd_is_fl_level_supported(s, VTD_PE_GET_FL_LEVEL(pe))) {
>>> + return -VTD_FR_PASID_TABLE_ENTRY_INV;
>>> + }
>>> +
>>> return 0;
>>> }
>>>
>>> @@ -973,7 +985,11 @@ static uint32_t vtd_get_iova_level(IntelIOMMUState
>> *s,
>>>
>>> if (s->root_scalable) {
>>> vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
>>> - return VTD_PE_GET_SL_LEVEL(&pe);
>>> + if (s->scalable_modern) {
>>> + return VTD_PE_GET_FL_LEVEL(&pe);
>>> + } else {
>>> + return VTD_PE_GET_SL_LEVEL(&pe);
>>> + }
>>> }
>>>
>>> return vtd_ce_get_level(ce);
>>> @@ -1060,7 +1076,11 @@ static dma_addr_t
>> vtd_get_iova_pgtbl_base(IntelIOMMUState *s,
>>>
>>> if (s->root_scalable) {
>>> vtd_ce_get_rid2pasid_entry(s, ce, &pe, pasid);
>>> - return pe.val[0] & VTD_SM_PASID_ENTRY_SLPTPTR;
>>> + if (s->scalable_modern) {
>>> + return pe.val[2] & VTD_SM_PASID_ENTRY_FLPTPTR;
>>> + } else {
>>> + return pe.val[0] & VTD_SM_PASID_ENTRY_SLPTPTR;
>>> + }
>>> }
>>>
>>> return vtd_ce_get_slpt_base(ce);
>>> @@ -1862,6 +1882,104 @@ out:
>>> trace_vtd_pt_enable_fast_path(source_id, success);
>>> }
>>>
>>> +/*
>>> + * Rsvd field masks for fpte:
>>> + * vtd_fpte_rsvd 4k pages
>>> + * vtd_fpte_rsvd_large large pages
>>> + *
>>> + * We support only 4-level page tables.
>>> + */
>>> +#define VTD_FPTE_RSVD_LEN 5
>>> +static uint64_t vtd_fpte_rsvd[VTD_FPTE_RSVD_LEN];
>>> +static uint64_t vtd_fpte_rsvd_large[VTD_FPTE_RSVD_LEN];
>>> +
>>> +static bool vtd_flpte_nonzero_rsvd(uint64_t flpte, uint32_t level)
>>> +{
>>> + uint64_t rsvd_mask;
>>> +
>>> + /*
>>> + * We should have caught a guest-mis-programmed level earlier,
>>> + * via vtd_is_fl_level_supported.
>>> + */
>>> + assert(level < VTD_FPTE_RSVD_LEN);
>>> + /*
>>> + * Zero level doesn't exist. The smallest level is VTD_PT_LEVEL=1 and
>>> + * checked by vtd_is_last_pte().
>>> + */
>>> + assert(level);
>>> +
>>> + if ((level == VTD_PD_LEVEL || level == VTD_PDP_LEVEL) &&
>>> + (flpte & VTD_PT_PAGE_SIZE_MASK)) {
>>> + /* large page */
>>> + rsvd_mask = vtd_fpte_rsvd_large[level];
>>> + } else {
>>> + rsvd_mask = vtd_fpte_rsvd[level];
>>> + }
>>> +
>>> + return flpte & rsvd_mask;
>>> +}
>>> +
>>> +static inline bool vtd_flpte_present(uint64_t flpte)
>>> +{
>>> + return !!(flpte & VTD_FL_P);
>>> +}
>>> +
>>> +/*
>>> + * Given the @iova, get relevant @flptep. @flpte_level will be the last level
>>> + * of the translation, can be used for deciding the size of large page.
>>> + */
>>> +static int vtd_iova_to_flpte(IntelIOMMUState *s, VTDContextEntry *ce,
>>> + uint64_t iova, bool is_write,
>>> + uint64_t *flptep, uint32_t *flpte_level,
>>> + bool *reads, bool *writes, uint8_t aw_bits,
>>> + uint32_t pasid)
>>> +{
>>> + dma_addr_t addr = vtd_get_iova_pgtbl_base(s, ce, pasid);
>>> + uint32_t level = vtd_get_iova_level(s, ce, pasid);
>>> + uint32_t offset;
>>> + uint64_t flpte;
>>> +
>>> + while (true) {
>>> + offset = vtd_iova_level_offset(iova, level);
>>> + flpte = vtd_get_pte(addr, offset);
>>> +
>>> + if (flpte == (uint64_t)-1) {
>>> + if (level == vtd_get_iova_level(s, ce, pasid)) {
>>> + /* Invalid programming of context-entry */
>>> + return -VTD_FR_CONTEXT_ENTRY_INV;
>>> + } else {
>>> + return -VTD_FR_PAGING_ENTRY_INV;
>>> + }
>>> + }
>>> + if (!vtd_flpte_present(flpte)) {
>>> + *reads = false;
>>> + *writes = false;
>>> + return -VTD_FR_PAGING_ENTRY_INV;
>>> + }
>>> + *reads = true;
>>> + *writes = (*writes) && (flpte & VTD_FL_RW);
>>> + if (is_write && !(flpte & VTD_FL_RW)) {
>>> + return -VTD_FR_WRITE;
>>> + }
>>> + if (vtd_flpte_nonzero_rsvd(flpte, level)) {
>>> + error_report_once("%s: detected flpte reserved non-zero "
>>> + "iova=0x%" PRIx64 ", level=0x%" PRIx32
>>> + "flpte=0x%" PRIx64 ", pasid=0x%" PRIX32 ")",
>>> + __func__, iova, level, flpte, pasid);
>>> + return -VTD_FR_PAGING_ENTRY_RSVD;
>>> + }
>>> +
>>> + if (vtd_is_last_pte(flpte, level)) {
>>> + *flptep = flpte;
>>> + *flpte_level = level;
>>> + return 0;
>>> + }
>>> +
>>> + addr = vtd_get_pte_addr(flpte, aw_bits);
>>> + level--;
>>> + }
>>
>> As I replied in last version, it should check the ir range for the
>> translation result. I saw your reply, but that only covers the input
>> address, my comment is about the output addr.
>>
>> [1]
>> https://lore.kernel.org/qemu-devel/SJ0PR11MB6744D2B572D278DAF8BF267692762@SJ0PR11MB6744.namprd11.prod.outlook.com/
>
> Oh, I see, you are right! As the interrupt-range check is common to both stage-2 and
> stage-1, I plan to move it to a common place in a separate patch like below. Let me
> know if you prefer separate checks for stage-2 and stage-1.
checking it in the common place is ok.
>
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -1235,7 +1235,6 @@ static int vtd_iova_to_slpte(IntelIOMMUState *s, VTDContextEntry *ce,
> uint32_t offset;
> uint64_t slpte;
> uint64_t access_right_check;
> - uint64_t xlat, size;
>
> if (!vtd_iova_sl_range_check(s, iova, ce, aw_bits, pasid)) {
> error_report_once("%s: detected IOVA overflow (iova=0x%" PRIx64 ","
> @@ -1288,28 +1287,7 @@ static int vtd_iova_to_slpte(IntelIOMMUState *s, VTDContextEntry *ce,
> level--;
> }
>
> - xlat = vtd_get_pte_addr(*slptep, aw_bits);
> - size = ~vtd_pt_level_page_mask(level) + 1;
> -
> - /*
> - * From VT-d spec 3.14: Untranslated requests and translation
> - * requests that result in an address in the interrupt range will be
> - * blocked with condition code LGN.4 or SGN.8.
> - */
> - if ((xlat > VTD_INTERRUPT_ADDR_LAST ||
> - xlat + size - 1 < VTD_INTERRUPT_ADDR_FIRST)) {
> - return 0;
> - } else {
> - error_report_once("%s: xlat address is in interrupt range "
> - "(iova=0x%" PRIx64 ", level=0x%" PRIx32 ", "
> - "slpte=0x%" PRIx64 ", write=%d, "
> - "xlat=0x%" PRIx64 ", size=0x%" PRIx64 ", "
> - "pasid=0x%" PRIx32 ")",
> - __func__, iova, level, slpte, is_write,
> - xlat, size, pasid);
> - return s->scalable_mode ? -VTD_FR_SM_INTERRUPT_ADDR :
> - -VTD_FR_INTERRUPT_ADDR;
> - }
> + return 0;
> }
>
> typedef int (*vtd_page_walk_hook)(const IOMMUTLBEvent *event, void *private);
> @@ -2201,6 +2179,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
> uint8_t access_flags, pgtt;
> bool rid2pasid = (pasid == PCI_NO_PASID) && s->root_scalable;
> VTDIOTLBEntry *iotlb_entry;
> + uint64_t xlat, size;
>
> /*
> * We have standalone memory region for interrupt addresses, we
> @@ -2312,6 +2291,29 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
> &reads, &writes, s->aw_bits, pasid);
> pgtt = VTD_SM_PASID_ENTRY_SLT;
> }
> + if (!ret_fr) {
> + xlat = vtd_get_pte_addr(pte, s->aw_bits);
> + size = ~vtd_pt_level_page_mask(level) + 1;
> +
> + /*
> + * From VT-d spec 3.14: Untranslated requests and translation
> + * requests that result in an address in the interrupt range will be
> + * blocked with condition code LGN.4 or SGN.8.
> + */
> + if ((xlat <= VTD_INTERRUPT_ADDR_LAST &&
> + xlat + size - 1 >= VTD_INTERRUPT_ADDR_FIRST)) {
> + error_report_once("%s: xlat address is in interrupt range "
> + "(iova=0x%" PRIx64 ", level=0x%" PRIx32 ", "
> + "pte=0x%" PRIx64 ", write=%d, "
> + "xlat=0x%" PRIx64 ", size=0x%" PRIx64 ", "
> + "pasid=0x%" PRIx32 ")",
> + __func__, addr, level, pte, is_write,
> + xlat, size, pasid);
> + ret_fr = s->scalable_mode ? -VTD_FR_SM_INTERRUPT_ADDR :
> + -VTD_FR_INTERRUPT_ADDR;
> + }
> + }
> +
> if (ret_fr) {
> vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
> addr, is_write, pasid != PCI_NO_PASID, pasid);
--
Regards,
Yi Liu
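The overlap test being moved to common code above is easy to get wrong, so it is worth spelling out. A standalone sketch of the check (the interrupt window constants match the x86 0xFEEx_xxxx range; `vtd_xlat_hits_interrupt_range` is an invented name, the patch open-codes the condition):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The x86 interrupt address window blocked by VT-d spec 3.14. */
#define VTD_INTERRUPT_ADDR_FIRST 0xfee00000ULL
#define VTD_INTERRUPT_ADDR_LAST  0xfeefffffULL

/* A translation result covering [xlat, xlat + size - 1] must be
 * rejected if it intersects the interrupt range; this applies to
 * both stage-1 and stage-2 walks, hence the move to common code. */
static bool vtd_xlat_hits_interrupt_range(uint64_t xlat, uint64_t size)
{
    return xlat <= VTD_INTERRUPT_ADDR_LAST &&
           xlat + size - 1 >= VTD_INTERRUPT_ADDR_FIRST;
}
```

Note the interval-intersection form: a large page that merely straddles the window is blocked too, not just translations that start inside it.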
* Re: [PATCH v4 10/17] intel_iommu: Process PASID-based iotlb invalidation
2024-11-04 5:40 ` Duan, Zhenzhong
@ 2024-11-04 7:05 ` Yi Liu
0 siblings, 0 replies; 67+ messages in thread
From: Yi Liu @ 2024-11-04 7:05 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Marcel Apfelbaum, Paolo Bonzini, Richard Henderson,
Eduardo Habkost
On 2024/11/4 13:40, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: Liu, Yi L <yi.l.liu@intel.com>
>> Sent: Monday, November 4, 2024 10:51 AM
>> Subject: Re: [PATCH v4 10/17] intel_iommu: Process PASID-based iotlb
>> invalidation
>>
>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>>> PASID-based iotlb (piotlb) is used during walking Intel
>>> VT-d stage-1 page table.
>>>
>>> This emulates the stage-1 page table iotlb invalidation requested
>>> by a PASID-based IOTLB Invalidate Descriptor (P_IOTLB).
>>>
>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>>> Acked-by: Jason Wang <jasowang@redhat.com>
>>> ---
>>> hw/i386/intel_iommu_internal.h | 3 +++
>>> hw/i386/intel_iommu.c | 45 ++++++++++++++++++++++++++++++++++
>>> 2 files changed, 48 insertions(+)
>>>
>>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
>>> index 4c3e75e593..20d922d600 100644
>>> --- a/hw/i386/intel_iommu_internal.h
>>> +++ b/hw/i386/intel_iommu_internal.h
>>> @@ -453,6 +453,9 @@ typedef union VTDInvDesc VTDInvDesc;
>>> #define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>>> #define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) &
>> VTD_DOMAIN_ID_MASK)
>>> #define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
>>> +#define VTD_INV_DESC_PIOTLB_AM(val) ((val) & 0x3fULL)
>>> +#define VTD_INV_DESC_PIOTLB_IH(val) (((val) >> 6) & 0x1)
>>> +#define VTD_INV_DESC_PIOTLB_ADDR(val) ((val) & ~0xfffULL)
>>> #define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
>>> #define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>>>
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 46bde1ad40..289278ce30 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -322,6 +322,28 @@ static gboolean vtd_hash_remove_by_page(gpointer
>> key, gpointer value,
>>> return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
>>> }
>>>
>>> +static gboolean vtd_hash_remove_by_page_piotlb(gpointer key, gpointer
>> value,
>>> + gpointer user_data)
>>> +{
>>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
>>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>>> + uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
>>> + uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
>>> +
>>> + /*
>>> + * According to spec, PASID-based-IOTLB Invalidation in page granularity
>>> + * doesn't invalidate IOTLB entries caching second-stage (PGTT=010b)
>>> + * or pass-through (PGTT=100b) mappings. Nested isn't supported yet,
>>> + * so only need to check first-stage (PGTT=001b) mappings.
>>> + */
>>> + if (entry->pgtt != VTD_SM_PASID_ENTRY_FLT) {
>>> + return false;
>>> + }
>>> +
>>> +    return entry->domain_id == info->domain_id && entry->pasid == info->pasid &&
>>> +           ((entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb);
>>> +}
>>> +
>>> /* Reset all the gen of VTDAddressSpace to zero and set the gen of
>>> * IntelIOMMUState to 1. Must be called with IOMMU lock held.
>>> */
>>> @@ -2884,11 +2906,30 @@ static void
>> vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>>> }
>>> }
>>>
>>> +static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t
>> domain_id,
>>> + uint32_t pasid, hwaddr addr, uint8_t am,
>>> + bool ih)
>>
>> @ih is not used, perhaps you can drop it. Seems like we don't cache paging
>> structure, hence ih can be ignored so far. Besides this, the patch looks
>> good to me.
>
> OK, will drop it. But nesting series needs it, see below.
> I'll drop it in this series and add back in nesting series.
yep, you can add it back when it's going to be used. :)
--
Regards,
Yi Liu
* Re: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for scalable modern mode
2024-11-04 6:25 ` Duan, Zhenzhong
@ 2024-11-04 7:23 ` Yi Liu
2024-11-05 3:11 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-04 7:23 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 2024/11/4 14:25, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: Liu, Yi L <yi.l.liu@intel.com>
>> Sent: Monday, November 4, 2024 12:25 PM
>> Subject: Re: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for
>> scalable modern mode
>>
>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>>> Intel VT-d 3.0 introduces scalable mode, and it has a bunch of capabilities
>>> related to scalable mode translation, thus there are multiple combinations.
>>>
>>> This vIOMMU implementation wants to simplify it with a new property "x-fls".
>>> When enabled in scalable mode, first stage translation also known as scalable
>>> modern mode is supported. When enabled in legacy mode, an error is reported.
>>>
>>> With scalable modern mode exposed to the user, also tighten the pasid entry
>>> check in vtd_pe_type_check().
>>>
>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>
>> Maybe a Suggested-by tag can help to understand where this idea comes from. :)
>
> Will add:
> Suggested-by: Jason Wang <jasowang@redhat.com>
>
>>
>>> ---
>>> hw/i386/intel_iommu_internal.h | 2 ++
>>> hw/i386/intel_iommu.c | 28 +++++++++++++++++++---------
>>> 2 files changed, 21 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
>>> index 2702edd27f..f13576d334 100644
>>> --- a/hw/i386/intel_iommu_internal.h
>>> +++ b/hw/i386/intel_iommu_internal.h
>>> @@ -195,6 +195,7 @@
>>> #define VTD_ECAP_PASID (1ULL << 40)
>>> #define VTD_ECAP_SMTS (1ULL << 43)
>>> #define VTD_ECAP_SLTS (1ULL << 46)
>>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>>
>>> /* CAP_REG */
>>> /* (offset >> 4) << 24 */
>>> @@ -211,6 +212,7 @@
>>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>>> +#define VTD_CAP_FS1GP (1ULL << 56)
>>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>> VTD_CAP_DRAIN_WRITE)
>>> #define VTD_CAP_CM (1ULL << 7)
>>> #define VTD_PASID_ID_SHIFT 20
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 068a08f522..14578655e1 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -803,16 +803,18 @@ static inline bool
>> vtd_is_fl_level_supported(IntelIOMMUState *s, uint32_t level)
>>> }
>>>
>>> /* Return true if check passed, otherwise false */
>>> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>>> - VTDPASIDEntry *pe)
>>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry *pe)
>>> {
>>> switch (VTD_PE_GET_TYPE(pe)) {
>>> - case VTD_SM_PASID_ENTRY_SLT:
>>> - return true;
>>> - case VTD_SM_PASID_ENTRY_PT:
>>> - return x86_iommu->pt_supported;
>>> case VTD_SM_PASID_ENTRY_FLT:
>>> + return !!(s->ecap & VTD_ECAP_FLTS);
>>> + case VTD_SM_PASID_ENTRY_SLT:
>>> + return !!(s->ecap & VTD_ECAP_SLTS);
>>> case VTD_SM_PASID_ENTRY_NESTED:
>>> + /* Not support NESTED page table type yet */
>>> + return false;
>>> + case VTD_SM_PASID_ENTRY_PT:
>>> + return !!(s->ecap & VTD_ECAP_PT);
>>> default:
>>> /* Unknown type */
>>> return false;
>>> @@ -861,7 +863,6 @@ static int
>> vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>> uint8_t pgtt;
>>> uint32_t index;
>>> dma_addr_t entry_size;
>>> - X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>>>
>>> index = VTD_PASID_TABLE_INDEX(pasid);
>>> entry_size = VTD_PASID_ENTRY_SIZE;
>>> @@ -875,7 +876,7 @@ static int
>> vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>> }
>>>
>>> /* Do translation type check */
>>> - if (!vtd_pe_type_check(x86_iommu, pe)) {
>>> + if (!vtd_pe_type_check(s, pe)) {
>>> return -VTD_FR_PASID_TABLE_ENTRY_INV;
>>> }
>>>
>>> @@ -3779,6 +3780,7 @@ static Property vtd_properties[] = {
>>> VTD_HOST_AW_AUTO),
>>> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode,
>> FALSE),
>>> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode,
>> FALSE),
>>> + DEFINE_PROP_BOOL("x-fls", IntelIOMMUState, scalable_modern, FALSE),
>>> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control,
>> false),
>>
>> A question: is there any requirement on the layout of this array? Should
>> new fields be added at the end?
>
> I looked over the history; there doesn't seem to be an explicit rule in vtd_properties.
> I put "x-fls" just under "x-scalable-mode" since stage-1 is a sub-feature of scalable mode.
> Let me know if you would prefer to add it at the end.
I don't have a preference for now as long as it does not break any
functionality. BTW, would x-flt or x-flts be better?
>
>>
>>> DEFINE_PROP_BOOL("x-pasid-mode", IntelIOMMUState, pasid, false),
>>> DEFINE_PROP_BOOL("dma-drain", IntelIOMMUState, dma_drain, true),
>>> @@ -4509,7 +4511,10 @@ static void vtd_cap_init(IntelIOMMUState *s)
>>> }
>>>
>>> /* TODO: read cap/ecap from host to decide which cap to be exposed. */
>>> - if (s->scalable_mode) {
>>> + if (s->scalable_modern) {
>>> + s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_FLTS;
>>> + s->cap |= VTD_CAP_FS1GP;
>>> + } else if (s->scalable_mode) {
>>> s->ecap |= VTD_ECAP_SMTS | VTD_ECAP_SRS | VTD_ECAP_SLTS;
>>> }
>>>
>>> @@ -4683,6 +4688,11 @@ static bool vtd_decide_config(IntelIOMMUState *s,
>> Error **errp)
>>> }
>>> }
>>>
>>> + if (!s->scalable_mode && s->scalable_modern) {
>>> + error_setg(errp, "Legacy mode: not support x-fls=on");
>>> + return false;
>>> + }
>>> +
>>> if (s->aw_bits == VTD_HOST_AW_AUTO) {
>>> if (s->scalable_modern) {
>>> s->aw_bits = VTD_HOST_AW_48BIT;
>>
>> --
>> Regards,
>> Yi Liu
--
Regards,
Yi Liu
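[Editor's note] The capability-gated PASID entry type check discussed in this patch can be sketched in isolation as below. This is a hedged, self-contained sketch, not the actual QEMU code: the macro names mirror the patch, the ECAP bit positions follow the VT-d spec (FLTS=47, SLTS=46, PT=6), and the PGTT enum uses the spec's 3-bit encodings (001b first-stage, 010b second-stage, 011b nested, 100b pass-through).

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative ECAP bits; positions per the VT-d spec, names assumed. */
#define ECAP_PT   (1ULL << 6)
#define ECAP_SLTS (1ULL << 46)
#define ECAP_FLTS (1ULL << 47)

/* PGTT encodings from the scalable-mode PASID table entry. */
enum pgtt { PGTT_FLT = 1, PGTT_SLT = 2, PGTT_NESTED = 3, PGTT_PT = 4 };

/* Mirror of the patch's idea: a PASID entry type is only valid when
 * the corresponding capability is exposed in the emulated ECAP. */
static bool pe_type_allowed(uint64_t ecap, enum pgtt type)
{
    switch (type) {
    case PGTT_FLT:    return !!(ecap & ECAP_FLTS);
    case PGTT_SLT:    return !!(ecap & ECAP_SLTS);
    case PGTT_NESTED: return false;          /* nested not emulated yet */
    case PGTT_PT:     return !!(ecap & ECAP_PT);
    default:          return false;          /* unknown type */
    }
}
```

This is why the patch drops the unconditional `return true` for SLT: once FLTS can be set without SLTS (modern mode), each type must be checked against the actual ECAP.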
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-11-04 3:19 ` Duan, Zhenzhong
@ 2024-11-04 7:25 ` Yi Liu
0 siblings, 0 replies; 67+ messages in thread
From: Yi Liu @ 2024-11-04 7:25 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 2024/11/4 11:19, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: Liu, Yi L <yi.l.liu@intel.com>
>> Sent: Monday, November 4, 2024 11:16 AM
>> Subject: Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable
>> modern mode
>>
>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>>> According to VTD spec, stage-1 page table could support 4-level and
>>> 5-level paging.
>>>
>>> However, 5-level paging translation emulation is not supported yet.
>>> That means the only supported value for aw_bits is 48.
>>>
>>> So default aw_bits to 48 in scalable modern mode. In other cases,
>>> it still defaults to 39 for backward compatibility.
>>>
>>> Add a check to ensure user specified value is 48 in modern mode
>>> for now.
>>
>> This is not a simple check. I think your patch makes an auto-selection
>> of aw_bits.
>
> Yes, if the user doesn't specify it, we auto-select a default.
>
>>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>>> ---
>>> include/hw/i386/intel_iommu.h | 2 +-
>>> hw/i386/intel_iommu.c | 10 +++++++++-
>>> 2 files changed, 10 insertions(+), 2 deletions(-)
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
--
Regards,
Yi Liu
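[Editor's note] The auto-selection behavior discussed above can be sketched as follows. The sentinel value and function name are illustrative (QEMU uses `VTD_HOST_AW_AUTO`); the constants 48 and 39 come from the commit message: stage-1 emulation only supports 4-level paging, hence 48-bit width in modern mode, while other modes keep 39 for compatibility.

```c
#include <stdbool.h>
#include <stdint.h>

#define AW_AUTO 0xff  /* illustrative "not specified by user" sentinel */

/* Sketch: pick the default address width when the user gave none;
 * scalable modern mode defaults to 48 bits, everything else to 39. */
static uint8_t pick_aw_bits(uint8_t user_aw, bool scalable_modern)
{
    if (user_aw == AW_AUTO) {
        return scalable_modern ? 48 : 39;
    }
    return user_aw;  /* explicit value is validated elsewhere */
}
```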
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation
2024-11-04 3:38 ` Duan, Zhenzhong
@ 2024-11-04 7:36 ` Yi Liu
0 siblings, 0 replies; 67+ messages in thread
From: Yi Liu @ 2024-11-04 7:36 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 2024/11/4 11:38, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: Liu, Yi L <yi.l.liu@intel.com>
>> Sent: Monday, November 4, 2024 10:51 AM
>> Subject: Re: [PATCH v4 09/17] intel_iommu: Flush stage-1 cache in iotlb
>> invalidation
>>
>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>>> According to spec, Page-Selective-within-Domain Invalidation (11b):
>>>
>>> 1. IOTLB entries caching second-stage mappings (PGTT=010b) or pass-through
>>> (PGTT=100b) mappings associated with the specified domain-id and the
>>> input-address range are invalidated.
>>> 2. IOTLB entries caching first-stage (PGTT=001b) or nested (PGTT=011b)
>>> mapping associated with specified domain-id are invalidated.
>>>
>>> So per spec definition the Page-Selective-within-Domain Invalidation
>>> needs to flush first-stage and nested cached IOTLB entries as well.
>>>
>>> We don't support nested yet and pass-through mapping is never cached,
>>> so the iotlb cache holds only first-stage and second-stage mappings.
>>
>> a side question, how about cache paging structure?
>
> We don't cache paging structures in the current vIOMMU emulation code.
> I think the reason is that it's cheap for the vIOMMU to walk the paging
> structures compared to real IOMMU hw. Even if we cached them, we would
> still need to compare the address tag and read memory to get the result,
> so there seems to be little benefit.
never mind.
>
>>
>>> Add a tag pgtt in VTDIOTLBEntry to mark PGTT type of the mapping and
>>> invalidate entries based on PGTT type.
>>>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>>> Acked-by: Jason Wang <jasowang@redhat.com>
>>> ---
>>> include/hw/i386/intel_iommu.h | 1 +
>>> hw/i386/intel_iommu.c | 27 +++++++++++++++++++++------
>>> 2 files changed, 22 insertions(+), 6 deletions(-)
>>
>> anyhow, this patch looks good to me.
>>
>> Reviewed-by: Yi Liu <yi.l.liu@intel.com>
>>
>>> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
>>> index fe9057c50d..b843d069cc 100644
>>> --- a/include/hw/i386/intel_iommu.h
>>> +++ b/include/hw/i386/intel_iommu.h
>>> @@ -155,6 +155,7 @@ struct VTDIOTLBEntry {
>>> uint64_t pte;
>>> uint64_t mask;
>>> uint8_t access_flags;
>>> + uint8_t pgtt;
>>> };
>>>
>>> /* VT-d Source-ID Qualifier types */
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 99bb3f42ea..46bde1ad40 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -305,9 +305,21 @@ static gboolean vtd_hash_remove_by_page(gpointer
>> key, gpointer value,
>>> VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>>> uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
>>> uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
>>> - return (entry->domain_id == info->domain_id) &&
>>> - (((entry->gfn & info->mask) == gfn) ||
>>> - (entry->gfn == gfn_tlb));
>>> +
>>> + if (entry->domain_id != info->domain_id) {
>>> + return false;
>>> + }
>>> +
>>> + /*
>>> + * According to spec, IOTLB entries caching first-stage (PGTT=001b) or
>>> + * nested (PGTT=011b) mapping associated with specified domain-id are
>>> + * invalidated. Nested isn't supported yet, so only need to check 001b.
>>> + */
>>> + if (entry->pgtt == VTD_SM_PASID_ENTRY_FLT) {
>>> + return true;
>>> + }
>>> +
>>> + return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
>>> }
>>>
>>> /* Reset all the gen of VTDAddressSpace to zero and set the gen of
>>> @@ -382,7 +394,7 @@ out:
>>> static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
>>> uint16_t domain_id, hwaddr addr, uint64_t pte,
>>> uint8_t access_flags, uint32_t level,
>>> - uint32_t pasid)
>>> + uint32_t pasid, uint8_t pgtt)
>>> {
>>> VTDIOTLBEntry *entry = g_malloc(sizeof(*entry));
>>> struct vtd_iotlb_key *key = g_malloc(sizeof(*key));
>>> @@ -400,6 +412,7 @@ static void vtd_update_iotlb(IntelIOMMUState *s,
>> uint16_t source_id,
>>> entry->access_flags = access_flags;
>>> entry->mask = vtd_pt_level_page_mask(level);
>>> entry->pasid = pasid;
>>> + entry->pgtt = pgtt;
>>>
>>> key->gfn = gfn;
>>> key->sid = source_id;
>>> @@ -2069,7 +2082,7 @@ static bool
>> vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
>>> bool is_fpd_set = false;
>>> bool reads = true;
>>> bool writes = true;
>>> - uint8_t access_flags;
>>> + uint8_t access_flags, pgtt;
>>> bool rid2pasid = (pasid == PCI_NO_PASID) && s->root_scalable;
>>> VTDIOTLBEntry *iotlb_entry;
>>>
>>> @@ -2177,9 +2190,11 @@ static bool
>> vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
>>> if (s->scalable_modern && s->root_scalable) {
>>> ret_fr = vtd_iova_to_flpte(s, &ce, addr, is_write, &pte, &level,
>>> &reads, &writes, s->aw_bits, pasid);
>>> + pgtt = VTD_SM_PASID_ENTRY_FLT;
>>> } else {
>>> ret_fr = vtd_iova_to_slpte(s, &ce, addr, is_write, &pte, &level,
>>> &reads, &writes, s->aw_bits, pasid);
>>> + pgtt = VTD_SM_PASID_ENTRY_SLT;
>>> }
>>> if (ret_fr) {
>>> vtd_report_fault(s, -ret_fr, is_fpd_set, source_id,
>>> @@ -2190,7 +2205,7 @@ static bool
>> vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
>>> page_mask = vtd_pt_level_page_mask(level);
>>> access_flags = IOMMU_ACCESS_FLAG(reads, writes);
>>> vtd_update_iotlb(s, source_id, vtd_get_domain_id(s, &ce, pasid),
>>> - addr, pte, access_flags, level, pasid);
>>> + addr, pte, access_flags, level, pasid, pgtt);
>>> out:
>>> vtd_iommu_unlock(s);
>>> entry->iova = addr & page_mask;
>>
>> --
>> Regards,
>> Yi Liu
--
Regards,
Yi Liu
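[Editor's note] The PGTT-aware invalidation predicate from this patch can be sketched in isolation as below. This is a simplified, assumed model (struct layout, field names, and the use of a full-page mask are illustrative, not QEMU's exact definitions): page-selective-within-domain invalidation always drops first-stage entries of the domain, while second-stage entries must additionally match the address range.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT_4K 12

enum pgtt { PGTT_FLT = 1, PGTT_SLT = 2 };  /* spec PGTT encodings */

struct iotlb_entry {
    uint16_t domain_id;
    uint64_t gfn;       /* guest frame number of the cached translation */
    uint64_t mask;      /* page mask for the entry's level */
    uint8_t  pgtt;      /* which stage produced this entry */
};

struct page_inv {
    uint16_t domain_id;
    uint64_t addr;      /* invalidation address */
    uint64_t mask;      /* invalidation range mask (gfn granularity) */
};

/* Sketch of vtd_hash_remove_by_page's logic after the patch. */
static bool should_remove(const struct iotlb_entry *e, const struct page_inv *i)
{
    uint64_t gfn = (i->addr >> PAGE_SHIFT_4K) & i->mask;
    uint64_t gfn_tlb = (i->addr & e->mask) >> PAGE_SHIFT_4K;

    if (e->domain_id != i->domain_id) {
        return false;
    }
    /* First-stage entries are invalidated regardless of address range. */
    if (e->pgtt == PGTT_FLT) {
        return true;
    }
    /* Second-stage entries must match the invalidated range. */
    return (e->gfn & i->mask) == gfn || e->gfn == gfn_tlb;
}
```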
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation
2024-11-04 2:49 ` Yi Liu
@ 2024-11-04 7:37 ` CLEMENT MATHIEU--DRIF
2024-11-04 8:45 ` Yi Liu
0 siblings, 1 reply; 67+ messages in thread
From: CLEMENT MATHIEU--DRIF @ 2024-11-04 7:37 UTC (permalink / raw)
To: Yi Liu, Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, chao.p.peng@intel.com, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
On 04/11/2024 03:49, Yi Liu wrote:
> Caution: External email. Do not open attachments or click links, unless
> this email comes from a known sender and you know the content is safe.
>
>
> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>> Per spec 6.5.2.4, PASID-selective PASID-based iotlb invalidation will
>> flush stage-2 iotlb entries with matching domain id and pasid.
>
> Also, call out that it's per Table 21 (PASID-based-IOTLB Invalidation) of
> VT-d spec 4.1.
>
>> With scalable modern mode introduced, guest could send PASID-selective
>> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
>>
>> While at it, remove old unused IOTLB-related definitions.
>
>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>> Acked-by: Jason Wang <jasowang@redhat.com>
>> ---
>> hw/i386/intel_iommu_internal.h | 14 ++++--
>> hw/i386/intel_iommu.c | 88 +++++++++++++++++++++++++++++++++-
>> 2 files changed, 96 insertions(+), 6 deletions(-)
>>
>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/
>> intel_iommu_internal.h
>> index d0f9d4589d..eec8090190 100644
>> --- a/hw/i386/intel_iommu_internal.h
>> +++ b/hw/i386/intel_iommu_internal.h
>> @@ -403,11 +403,6 @@ typedef union VTDInvDesc VTDInvDesc;
>> #define VTD_INV_DESC_IOTLB_AM(val) ((val) & 0x3fULL)
>> #define VTD_INV_DESC_IOTLB_RSVD_LO 0xffffffff0000f100ULL
>> #define VTD_INV_DESC_IOTLB_RSVD_HI 0xf80ULL
>> -#define VTD_INV_DESC_IOTLB_PASID_PASID (2ULL << 4)
>> -#define VTD_INV_DESC_IOTLB_PASID_PAGE (3ULL << 4)
>> -#define VTD_INV_DESC_IOTLB_PASID(val) (((val) >> 32) &
>> VTD_PASID_ID_MASK)
>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_LO 0xfff00000000001c0ULL
>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_HI 0xf80ULL
>>
>> /* Mask for Device IOTLB Invalidate Descriptor */
>> #define VTD_INV_DESC_DEVICE_IOTLB_ADDR(val) ((val) &
>> 0xfffffffffffff000ULL)
>> @@ -433,6 +428,15 @@ typedef union VTDInvDesc VTDInvDesc;
>> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>>
>> +/* Masks for PIOTLB Invalidate Descriptor */
>> +#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
>> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) &
>> VTD_DOMAIN_ID_MASK)
>> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>> +
>> /* Information about page-selective IOTLB invalidate */
>> struct VTDIOTLBPageInvInfo {
>> uint16_t domain_id;
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 9e6ef0cb99..72c9c91d4f 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -2656,6 +2656,86 @@ static bool
>> vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
>> return true;
>> }
>>
>> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
>> + gpointer user_data)
>> +{
>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>> +
>> + return ((entry->domain_id == info->domain_id) &&
>> + (entry->pasid == info->pasid));
>> +}
>> +
>> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>> + uint16_t domain_id, uint32_t
>> pasid)
>> +{
>> + VTDIOTLBPageInvInfo info;
>> + VTDAddressSpace *vtd_as;
>> + VTDContextEntry ce;
>> +
>> + info.domain_id = domain_id;
>> + info.pasid = pasid;
>> +
>> + vtd_iommu_lock(s);
>> + g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
>> + &info);
>> + vtd_iommu_unlock(s);
>> +
>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>> + vtd_as->devfn, &ce) &&
>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>> +
>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>> + vtd_as->pasid != pasid) {
>> + continue;
>> + }
>> +
>> + if (!s->scalable_modern) {
>> + vtd_address_space_sync(vtd_as);
>> + }
>> + }
>> + }
>> +}
>> +
>> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>> + VTDInvDesc *inv_desc)
>> +{
>> + uint16_t domain_id;
>> + uint32_t pasid;
>> +
>> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
>> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
>> + inv_desc->val[2] || inv_desc->val[3]) {
>> + error_report_once("%s: invalid piotlb inv desc val[3]=0x%"PRIx64
>> + " val[2]=0x%"PRIx64" val[1]=0x%"PRIx64
>> + " val[0]=0x%"PRIx64" (reserved bits unzero)",
>> + __func__, inv_desc->val[3], inv_desc->val[2],
>> + inv_desc->val[1], inv_desc->val[0]);
>> + return false;
>> + }
>
> Need to consider the below behaviour as well.
>
> "
> This
> descriptor is a 256-bit descriptor and will result in an invalid descriptor
> error if submitted in an IQ that
> is setup to provide hardware with 128-bit descriptors (IQA_REG.DW=0)
> "
>
> Also there are descriptions about the old inv desc types (e.g.
> iotlb_inv_desc) that can be either 128bits or 256bits.
>
> "If a 128-bit
> version of this descriptor is submitted into an IQ that is setup to provide
> hardware with 256-bit
> descriptors or vice-versa it will result in an invalid descriptor error.
> "
>
> If DW==1, the vIOMMU fetches 32 bytes per desc. In that case, if the guest
> submits a 128-bit desc, the high 128 bits would be non-zero if there is
> more than one desc. But if there is only one desc in the queue, the
> high 128 bits would be zero as well. However, that may be caught by the
> tail register update: bit 4 is reserved when DW==1, and the guest would
> use bit 4 when it submits only one desc.
>
> If DW==0, the vIOMMU fetches 16 bytes per desc. If the guest submits a
> 256-bit desc, it would appear to be two descs from the vIOMMU's p.o.v.
> The first 128 bits can be identified as valid, except for the types that
> do not require 256 bits. The higher 128 bits would be subject to the desc
> sanity check as well.
>
> Based on the above, I think you may need to add two more checks. If DW==0,
> the vIOMMU should fail the inv types that require 256 bits; if DW==1, you
> should check the inv_desc->val[2] and inv_desc->val[3]. You've already
> done it in this patch.
>
> Thoughts are welcomed here.
Good catch,
I think we should write the check in vtd_process_inv_desc
rather than updating the handlers.
What are your thoughts?
>
>> +
>> + domain_id = VTD_INV_DESC_PIOTLB_DID(inv_desc->val[0]);
>> + pasid = VTD_INV_DESC_PIOTLB_PASID(inv_desc->val[0]);
>> + switch (inv_desc->val[0] & VTD_INV_DESC_PIOTLB_G) {
>> + case VTD_INV_DESC_PIOTLB_ALL_IN_PASID:
>> + vtd_piotlb_pasid_invalidate(s, domain_id, pasid);
>> + break;
>> +
>> + case VTD_INV_DESC_PIOTLB_PSI_IN_PASID:
>> + break;
>> +
>> + default:
>> + error_report_once("%s: invalid piotlb inv desc: hi=0x%"PRIx64
>> + ", lo=0x%"PRIx64" (type mismatch: 0x%llx)",
>> + __func__, inv_desc->val[1], inv_desc->val[0],
>> + inv_desc->val[0] & VTD_INV_DESC_IOTLB_G);
>> + return false;
>> + }
>> + return true;
>> +}
>> +
>> static bool vtd_process_inv_iec_desc(IntelIOMMUState *s,
>> VTDInvDesc *inv_desc)
>> {
>> @@ -2766,6 +2846,13 @@ static bool
>> vtd_process_inv_desc(IntelIOMMUState *s)
>> }
>> break;
>>
>> + case VTD_INV_DESC_PIOTLB:
>> + trace_vtd_inv_desc("p-iotlb", inv_desc.val[1], inv_desc.val[0]);
>> + if (!vtd_process_piotlb_desc(s, &inv_desc)) {
>> + return false;
>> + }
>> + break;
>> +
>> case VTD_INV_DESC_WAIT:
>> trace_vtd_inv_desc("wait", inv_desc.hi, inv_desc.lo);
>> if (!vtd_process_wait_desc(s, &inv_desc)) {
>> @@ -2793,7 +2880,6 @@ static bool vtd_process_inv_desc(IntelIOMMUState
>> *s)
>> * iommu driver) work, just return true is enough so far.
>> */
>> case VTD_INV_DESC_PC:
>> - case VTD_INV_DESC_PIOTLB:
>> if (s->scalable_mode) {
>> break;
>> }
>
> --
> Regards,
> Yi Liu
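[Editor's note] The descriptor-width rules Yi describes above can be sketched as a standalone predicate. This is an assumed, simplified model (the desc type enum and function names are made up; in QEMU the reserved-bit checks live in the per-type handlers): with 128-bit queue entries (DW==0), 256-bit-only descriptor types are invalid; with 256-bit entries (DW==1), the high 128 bits must be zero for types that only define the low 128 bits.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative descriptor types; per the discussion above, only the
 * PASID-based-IOTLB invalidation desc strictly requires 256 bits here. */
enum desc_type { DESC_IOTLB, DESC_PIOTLB, DESC_WAIT };

static bool desc_requires_256bit(enum desc_type t)
{
    return t == DESC_PIOTLB;
}

/* Sketch of a centralized width check, e.g. in vtd_process_inv_desc().
 * dw: IQA_REG.DW (true = 256-bit descriptors).
 * hi128: the descriptor's val[2]/val[3] words when dw is set. */
static bool desc_width_ok(bool dw, enum desc_type t, const uint64_t hi128[2])
{
    if (!dw) {
        /* 128-bit queue: reject types that only exist as 256-bit descs. */
        return !desc_requires_256bit(t);
    }
    if (!desc_requires_256bit(t)) {
        /* 256-bit queue, 128-bit type: high half must be zero. */
        return hi128[0] == 0 && hi128[1] == 0;
    }
    return true;  /* 256-bit type: reserved bits checked by its handler */
}
```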
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap
2024-11-04 3:05 ` Yi Liu
@ 2024-11-04 8:15 ` Duan, Zhenzhong
2024-11-05 6:29 ` Yi Liu
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-04 8:15 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Monday, November 4, 2024 11:05 AM
>Subject: Re: [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify
>unmap
>
>On 2024/9/30 17:26, Zhenzhong Duan wrote:
>> This is used by some emulated devices which cache the address
>> translation result. When a piotlb invalidation is issued in the
>> guest, those caches should be refreshed.
>>
>> For a device that does not implement the ATS capability, or disables
>> it, but still caches the translation result, it is better to
>> implement the ATS cap or enable it if there is a need to cache the
>> translation result.
>
>Is there a list of such devices? Though I don't object to this patch,
>it may be helpful to list such devices. One day we may remove
>this when the list becomes empty.
Currently only virtio pci devices support ATS, and only the vhost variant
caches the translation result even with ATS disabled.
>
>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>> ---
>> hw/i386/intel_iommu.c | 35 ++++++++++++++++++++++++++++++++++-
>> 1 file changed, 34 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 5ea59167b3..91d7b1abfa 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -2908,7 +2908,7 @@ static void
>vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>> continue;
>> }
>>
>> - if (!s->scalable_modern) {
>> + if (!s->scalable_modern || !vtd_as_has_map_notifier(vtd_as)) {
>> vtd_address_space_sync(vtd_as);
>> }
>> }
>> @@ -2920,6 +2920,9 @@ static void
>vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
>> bool ih)
>> {
>> VTDIOTLBPageInvInfo info;
>> + VTDAddressSpace *vtd_as;
>> + VTDContextEntry ce;
>> + hwaddr size = (1 << am) * VTD_PAGE_SIZE;
>>
>> info.domain_id = domain_id;
>> info.pasid = pasid;
>> @@ -2930,6 +2933,36 @@ static void
>vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
>> g_hash_table_foreach_remove(s->iotlb,
>> vtd_hash_remove_by_page_piotlb, &info);
>> vtd_iommu_unlock(s);
>> +
>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>> + vtd_as->devfn, &ce) &&
>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>> + IOMMUTLBEvent event;
>> +
>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>> + vtd_as->pasid != pasid) {
>> + continue;
>
>I don't quite get the logic here. Patch 4 has similar logic.
This code means we need to invalidate the device tlb either when the pasid matches the address space's pasid, or, if the address space has no pasid, when the pasid matches rid2pasid.
Yes, patch4 only deals with stage-2, while this patch deals with stage-1.
Thanks
Zhenzhong
>
>> + }
>> +
>> + /*
>> + * Page-Selective-within-PASID PASID-based-IOTLB Invalidation
>> + * does not flush stage-2 entries. See spec section 6.5.2.4
>> + */
>> + if (!s->scalable_modern) {
>> + continue;
>> + }
>> +
>> + event.type = IOMMU_NOTIFIER_UNMAP;
>> + event.entry.target_as = &address_space_memory;
>> + event.entry.iova = addr;
>> + event.entry.perm = IOMMU_NONE;
>> + event.entry.addr_mask = size - 1;
>> + event.entry.translated_addr = 0;
>> + memory_region_notify_iommu(&vtd_as->iommu, 0, event);
>> + }
>> + }
>> }
>>
>> static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>
>--
>Regards,
>Yi Liu
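[Editor's note] The pasid-match rule Zhenzhong explains above (the inverted `continue` condition in the patch) can be sketched positively as below. This is a hedged sketch: the `PCI_NO_PASID` sentinel value and the function name are illustrative, not QEMU's actual definitions.

```c
#include <stdbool.h>
#include <stdint.h>

#define PCI_NO_PASID UINT32_MAX  /* illustrative "no pasid" sentinel */

/* An address space is affected by a PASID-based invalidation when its
 * own pasid equals the invalidated pasid, or when it carries no explicit
 * pasid and the invalidation targets the context entry's RID2PASID. */
static bool as_affected(uint32_t as_pasid, uint32_t inv_pasid,
                        uint32_t rid2pasid)
{
    if (as_pasid == PCI_NO_PASID) {
        return inv_pasid == rid2pasid;
    }
    return as_pasid == inv_pasid;
}
```

The patch's `if (... ) continue;` form is simply the negation of this predicate inside the loop over address spaces.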
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation
2024-11-04 7:37 ` CLEMENT MATHIEU--DRIF
@ 2024-11-04 8:45 ` Yi Liu
2024-11-04 11:46 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-04 8:45 UTC (permalink / raw)
To: CLEMENT MATHIEU--DRIF, Zhenzhong Duan, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
kevin.tian@intel.com, chao.p.peng@intel.com, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Marcel Apfelbaum
On 2024/11/4 15:37, CLEMENT MATHIEU--DRIF wrote:
>
>
> On 04/11/2024 03:49, Yi Liu wrote:
>> Caution: External email. Do not open attachments or click links, unless
>> this email comes from a known sender and you know the content is safe.
>>
>>
>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>>> Per spec 6.5.2.4, PASID-selective PASID-based iotlb invalidation will
>>> flush stage-2 iotlb entries with matching domain id and pasid.
>>
>> Also, call out it's per table Table 21. PASID-based-IOTLB Invalidation of
>> VT-d spec 4.1.
>>
>>> With scalable modern mode introduced, guest could send PASID-selective
>>> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
>>>
>>> While at it, remove old unused IOTLB-related definitions.
>>
>>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>>> Acked-by: Jason Wang <jasowang@redhat.com>
>>> ---
>>> hw/i386/intel_iommu_internal.h | 14 ++++--
>>> hw/i386/intel_iommu.c | 88 +++++++++++++++++++++++++++++++++-
>>> 2 files changed, 96 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/
>>> intel_iommu_internal.h
>>> index d0f9d4589d..eec8090190 100644
>>> --- a/hw/i386/intel_iommu_internal.h
>>> +++ b/hw/i386/intel_iommu_internal.h
>>> @@ -403,11 +403,6 @@ typedef union VTDInvDesc VTDInvDesc;
>>> #define VTD_INV_DESC_IOTLB_AM(val) ((val) & 0x3fULL)
>>> #define VTD_INV_DESC_IOTLB_RSVD_LO 0xffffffff0000f100ULL
>>> #define VTD_INV_DESC_IOTLB_RSVD_HI 0xf80ULL
>>> -#define VTD_INV_DESC_IOTLB_PASID_PASID (2ULL << 4)
>>> -#define VTD_INV_DESC_IOTLB_PASID_PAGE (3ULL << 4)
>>> -#define VTD_INV_DESC_IOTLB_PASID(val) (((val) >> 32) &
>>> VTD_PASID_ID_MASK)
>>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_LO 0xfff00000000001c0ULL
>>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_HI 0xf80ULL
>>>
>>> /* Mask for Device IOTLB Invalidate Descriptor */
>>> #define VTD_INV_DESC_DEVICE_IOTLB_ADDR(val) ((val) &
>>> 0xfffffffffffff000ULL)
>>> @@ -433,6 +428,15 @@ typedef union VTDInvDesc VTDInvDesc;
>>> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>>>
>>> +/* Masks for PIOTLB Invalidate Descriptor */
>>> +#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
>>> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>>> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>>> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) &
>>> VTD_DOMAIN_ID_MASK)
>>> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>>> +
>>> /* Information about page-selective IOTLB invalidate */
>>> struct VTDIOTLBPageInvInfo {
>>> uint16_t domain_id;
>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>> index 9e6ef0cb99..72c9c91d4f 100644
>>> --- a/hw/i386/intel_iommu.c
>>> +++ b/hw/i386/intel_iommu.c
>>> @@ -2656,6 +2656,86 @@ static bool
>>> vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
>>> return true;
>>> }
>>>
>>> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
>>> + gpointer user_data)
>>> +{
>>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
>>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>>> +
>>> + return ((entry->domain_id == info->domain_id) &&
>>> + (entry->pasid == info->pasid));
>>> +}
>>> +
>>> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>>> + uint16_t domain_id, uint32_t
>>> pasid)
>>> +{
>>> + VTDIOTLBPageInvInfo info;
>>> + VTDAddressSpace *vtd_as;
>>> + VTDContextEntry ce;
>>> +
>>> + info.domain_id = domain_id;
>>> + info.pasid = pasid;
>>> +
>>> + vtd_iommu_lock(s);
>>> + g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
>>> + &info);
>>> + vtd_iommu_unlock(s);
>>> +
>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>>> + vtd_as->devfn, &ce) &&
>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>>> +
>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>>> + vtd_as->pasid != pasid) {
>>> + continue;
>>> + }
>>> +
>>> + if (!s->scalable_modern) {
>>> + vtd_address_space_sync(vtd_as);
>>> + }
>>> + }
>>> + }
>>> +}
>>> +
>>> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>>> + VTDInvDesc *inv_desc)
>>> +{
>>> + uint16_t domain_id;
>>> + uint32_t pasid;
>>> +
>>> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
>>> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
>>> + inv_desc->val[2] || inv_desc->val[3]) {
>>> + error_report_once("%s: invalid piotlb inv desc val[3]=0x%"PRIx64
>>> + " val[2]=0x%"PRIx64" val[1]=0x%"PRIx64
>>> + " val[0]=0x%"PRIx64" (reserved bits unzero)",
>>> + __func__, inv_desc->val[3], inv_desc->val[2],
>>> + inv_desc->val[1], inv_desc->val[0]);
>>> + return false;
>>> + }
>>
>> Need to consider the below behaviour as well.
>>
>> "
>> This
>> descriptor is a 256-bit descriptor and will result in an invalid descriptor
>> error if submitted in an IQ that
>> is setup to provide hardware with 128-bit descriptors (IQA_REG.DW=0)
>> "
>>
>> Also there are descriptions about the old inv desc types (e.g.
>> iotlb_inv_desc) that can be either 128bits or 256bits.
>>
>> "If a 128-bit
>> version of this descriptor is submitted into an IQ that is setup to provide
>> hardware with 256-bit
>> descriptors or vice-versa it will result in an invalid descriptor error.
>> "
>>
>> If DW==1, the vIOMMU fetches 32 bytes per desc. In that case, if the guest
>> submits a 128-bit desc, the high 128 bits would be non-zero if there is
>> more than one desc. But if there is only one desc in the queue, the
>> high 128 bits would be zero as well. However, that may be caught by the
>> tail register update: bit 4 is reserved when DW==1, and the guest would
>> use bit 4 when it submits only one desc.
>>
>> If DW==0, the vIOMMU fetches 16 bytes per desc. If the guest submits a
>> 256-bit desc, it would appear to be two descs from the vIOMMU's p.o.v.
>> The first 128 bits can be identified as valid, except for the types that
>> do not require 256 bits. The higher 128 bits would be subject to the desc
>> sanity check as well.
>>
>> Based on the above, I think you may need to add two more checks. If DW==0,
>> the vIOMMU should fail the inv types that require 256 bits; if DW==1, you
>> should check the inv_desc->val[2] and inv_desc->val[3]. You've already
>> done it in this patch.
>>
>> Thoughts are welcomed here.
>
> Good catch,
> I think we should write the check in vtd_process_inv_desc
> rather than updating the handlers.
>
> What are your thoughts?
the first check can be done in vtd_process_inv_desc(). The second may
be better in the handlers, as the handlers have the reserved bits check.
But given that none of the inv types use the high 128 bits, it is also
acceptable to do it in vtd_process_inv_desc(). Do add a proper comment.
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation
2024-11-04 8:45 ` Yi Liu
@ 2024-11-04 11:46 ` Duan, Zhenzhong
2024-11-04 11:50 ` Michael S. Tsirkin
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-04 11:46 UTC (permalink / raw)
To: Liu, Yi L, CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
Tian, Kevin, Peng, Chao P, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Monday, November 4, 2024 4:45 PM
>Subject: Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-
>selective PASID-based iotlb invalidation
>
>On 2024/11/4 15:37, CLEMENT MATHIEU--DRIF wrote:
>>
>>
>> On 04/11/2024 03:49, Yi Liu wrote:
>>>
>>>
>>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>>>> Per spec 6.5.2.4, PASID-selective PASID-based iotlb invalidation will
>>>> flush stage-2 iotlb entries with matching domain id and pasid.
>>>
>>> Also, call out it's per table Table 21. PASID-based-IOTLB Invalidation of
>>> VT-d spec 4.1.
>>>
>>>> With scalable modern mode introduced, guest could send PASID-selective
>>>> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
>>>>
>>>> While at it, remove the old IOTLB-related definitions, which were unused.
>>>
>>>
>>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>>>> Acked-by: Jason Wang <jasowang@redhat.com>
>>>> ---
>>>> hw/i386/intel_iommu_internal.h | 14 ++++--
>>>> hw/i386/intel_iommu.c | 88 +++++++++++++++++++++++++++++++++-
>>>> 2 files changed, 96 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/
>>>> intel_iommu_internal.h
>>>> index d0f9d4589d..eec8090190 100644
>>>> --- a/hw/i386/intel_iommu_internal.h
>>>> +++ b/hw/i386/intel_iommu_internal.h
>>>> @@ -403,11 +403,6 @@ typedef union VTDInvDesc VTDInvDesc;
>>>> #define VTD_INV_DESC_IOTLB_AM(val) ((val) & 0x3fULL)
>>>> #define VTD_INV_DESC_IOTLB_RSVD_LO 0xffffffff0000f100ULL
>>>> #define VTD_INV_DESC_IOTLB_RSVD_HI 0xf80ULL
>>>> -#define VTD_INV_DESC_IOTLB_PASID_PASID (2ULL << 4)
>>>> -#define VTD_INV_DESC_IOTLB_PASID_PAGE (3ULL << 4)
>>>> -#define VTD_INV_DESC_IOTLB_PASID(val) (((val) >> 32) &
>>>> VTD_PASID_ID_MASK)
>>>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_LO 0xfff00000000001c0ULL
>>>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_HI 0xf80ULL
>>>>
>>>> /* Mask for Device IOTLB Invalidate Descriptor */
>>>> #define VTD_INV_DESC_DEVICE_IOTLB_ADDR(val) ((val) &
>>>> 0xfffffffffffff000ULL)
>>>> @@ -433,6 +428,15 @@ typedef union VTDInvDesc VTDInvDesc;
>>>> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
>>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>>>>
>>>> +/* Masks for PIOTLB Invalidate Descriptor */
>>>> +#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
>>>> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>>>> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>>>> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) &
>>>> VTD_DOMAIN_ID_MASK)
>>>> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
>>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
>>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>>>> +
>>>> /* Information about page-selective IOTLB invalidate */
>>>> struct VTDIOTLBPageInvInfo {
>>>> uint16_t domain_id;
>>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>>> index 9e6ef0cb99..72c9c91d4f 100644
>>>> --- a/hw/i386/intel_iommu.c
>>>> +++ b/hw/i386/intel_iommu.c
>>>> @@ -2656,6 +2656,86 @@ static bool
>>>> vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
>>>> return true;
>>>> }
>>>>
>>>> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
>>>> + gpointer user_data)
>>>> +{
>>>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
>>>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>>>> +
>>>> + return ((entry->domain_id == info->domain_id) &&
>>>> + (entry->pasid == info->pasid));
>>>> +}
>>>> +
>>>> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>>>> + uint16_t domain_id, uint32_t
>>>> pasid)
>>>> +{
>>>> + VTDIOTLBPageInvInfo info;
>>>> + VTDAddressSpace *vtd_as;
>>>> + VTDContextEntry ce;
>>>> +
>>>> + info.domain_id = domain_id;
>>>> + info.pasid = pasid;
>>>> +
>>>> + vtd_iommu_lock(s);
>>>> + g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
>>>> + &info);
>>>> + vtd_iommu_unlock(s);
>>>> +
>>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>>>> + vtd_as->devfn, &ce) &&
>>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>>>> +
>>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>>>> + vtd_as->pasid != pasid) {
>>>> + continue;
>>>> + }
>>>> +
>>>> + if (!s->scalable_modern) {
>>>> + vtd_address_space_sync(vtd_as);
>>>> + }
>>>> + }
>>>> + }
>>>> +}
>>>> +
>>>> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>>>> + VTDInvDesc *inv_desc)
>>>> +{
>>>> + uint16_t domain_id;
>>>> + uint32_t pasid;
>>>> +
>>>> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
>>>> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
>>>> + inv_desc->val[2] || inv_desc->val[3]) {
>>>> + error_report_once("%s: invalid piotlb inv desc val[3]=0x%"PRIx64
>>>> + " val[2]=0x%"PRIx64" val[1]=0x%"PRIx64
>>>> + " val[0]=0x%"PRIx64" (reserved bits unzero)",
>>>> + __func__, inv_desc->val[3], inv_desc->val[2],
>>>> + inv_desc->val[1], inv_desc->val[0]);
>>>> + return false;
>>>> + }
>>>
>>> Need to consider the below behaviour as well.
>>>
>>> "
>>> This
>>> descriptor is a 256-bit descriptor and will result in an invalid descriptor
>>> error if submitted in an IQ that
>>> is setup to provide hardware with 128-bit descriptors (IQA_REG.DW=0)
>>> "
>>>
>>> Also there are descriptions about the old inv desc types (e.g.
>>> iotlb_inv_desc) that can be either 128bits or 256bits.
>>>
>>> "If a 128-bit
>>> version of this descriptor is submitted into an IQ that is setup to provide
>>> hardware with 256-bit
>>> descriptors or vice-versa it will result in an invalid descriptor error.
>>> "
>>>
>>> If DW==1, vIOMMU fetches 32 bytes per desc. In that case, if the guest
>>> submits a 128-bit desc, the high 128 bits would be non-zero when there is
>>> more than one desc in the queue. But if there is only one desc, the
>>> high 128 bits would be zero as well. Still, that case may be caught by the
>>> tail register update: bit 4 is reserved when DW==1, and the guest would use
>>> bit 4 when it submits only one desc.
>>>
>>> If DW==0, vIOMMU fetches 16 bytes per desc. If the guest submits a 256-bit
>>> desc, it would appear to be two descs from the vIOMMU's p.o.v. The first
>>> 128 bits can be identified as valid except for the types that do not
>>> require 256 bits. The higher 128 bits would be subjected to the desc
>>> sanity check as well.
>>>
>>> Based on the above, I think you may need to add two more checks. If DW==0,
>>> vIOMMU should fail the inv types that requires 256bits; If DW==1, you
>>> should check the inv_desc->val[2] and inv_desc->val[3]. You've already
>>> done it in this patch.
>>>
>>> Thoughts are welcomed here.
>>
>> Good catch,
>> I think we should write the check in vtd_process_inv_desc
>> rather than updating the handlers.
>>
>> What are your thoughts?
>
>the first check can be done in vtd_process_inv_desc(). The second may
>be better in the handlers, as the handlers have the reserved bits check.
>But given that none of the inv types use the high 128 bits, it is also
>acceptable to do it in vtd_process_inv_desc(). Do add a proper comment.
Thanks for Yi's and Clement's suggestions; I'll send a small series to fix
that for upstream.
BRs.
Zhenzhong
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation
2024-11-04 11:46 ` Duan, Zhenzhong
@ 2024-11-04 11:50 ` Michael S. Tsirkin
2024-11-04 11:55 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Michael S. Tsirkin @ 2024-11-04 11:50 UTC (permalink / raw)
To: Duan, Zhenzhong
Cc: Liu, Yi L, CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org,
alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
peterx@redhat.com, jasowang@redhat.com, jgg@nvidia.com,
nicolinc@nvidia.com, joao.m.martins@oracle.com, Tian, Kevin,
Peng, Chao P, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On Mon, Nov 04, 2024 at 11:46:00AM +0000, Duan, Zhenzhong wrote:
>
>
> >-----Original Message-----
> >From: Liu, Yi L <yi.l.liu@intel.com>
> >Sent: Monday, November 4, 2024 4:45 PM
> >Subject: Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-
> >selective PASID-based iotlb invalidation
> >
> >On 2024/11/4 15:37, CLEMENT MATHIEU--DRIF wrote:
> >>
> >>
> >> On 04/11/2024 03:49, Yi Liu wrote:
> >>>
> >>>
> >>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
> >>>> Per spec 6.5.2.4, PASID-selective PASID-based iotlb invalidation will
> >>>> flush stage-2 iotlb entries with matching domain id and pasid.
> >>>
> >>> Also, call out it's per table Table 21. PASID-based-IOTLB Invalidation of
> >>> VT-d spec 4.1.
> >>>
> >>>> With scalable modern mode introduced, guest could send PASID-selective
> >>>> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
> >>>>
> >>>> While at it, remove the old IOTLB-related definitions, which were unused.
> >>>
> >>>
> >>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> >>>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> >>>> Acked-by: Jason Wang <jasowang@redhat.com>
> >>>> ---
> >>>> hw/i386/intel_iommu_internal.h | 14 ++++--
> >>>> hw/i386/intel_iommu.c | 88 +++++++++++++++++++++++++++++++++-
> >>>> 2 files changed, 96 insertions(+), 6 deletions(-)
> >>>>
> >>>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/
> >>>> intel_iommu_internal.h
> >>>> index d0f9d4589d..eec8090190 100644
> >>>> --- a/hw/i386/intel_iommu_internal.h
> >>>> +++ b/hw/i386/intel_iommu_internal.h
> >>>> @@ -403,11 +403,6 @@ typedef union VTDInvDesc VTDInvDesc;
> >>>> #define VTD_INV_DESC_IOTLB_AM(val) ((val) & 0x3fULL)
> >>>> #define VTD_INV_DESC_IOTLB_RSVD_LO 0xffffffff0000f100ULL
> >>>> #define VTD_INV_DESC_IOTLB_RSVD_HI 0xf80ULL
> >>>> -#define VTD_INV_DESC_IOTLB_PASID_PASID (2ULL << 4)
> >>>> -#define VTD_INV_DESC_IOTLB_PASID_PAGE (3ULL << 4)
> >>>> -#define VTD_INV_DESC_IOTLB_PASID(val) (((val) >> 32) &
> >>>> VTD_PASID_ID_MASK)
> >>>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_LO 0xfff00000000001c0ULL
> >>>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_HI 0xf80ULL
> >>>>
> >>>> /* Mask for Device IOTLB Invalidate Descriptor */
> >>>> #define VTD_INV_DESC_DEVICE_IOTLB_ADDR(val) ((val) &
> >>>> 0xfffffffffffff000ULL)
> >>>> @@ -433,6 +428,15 @@ typedef union VTDInvDesc VTDInvDesc;
> >>>> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
> >>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
> >>>>
> >>>> +/* Masks for PIOTLB Invalidate Descriptor */
> >>>> +#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
> >>>> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
> >>>> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
> >>>> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) &
> >>>> VTD_DOMAIN_ID_MASK)
> >>>> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
> >>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
> >>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
> >>>> +
> >>>> /* Information about page-selective IOTLB invalidate */
> >>>> struct VTDIOTLBPageInvInfo {
> >>>> uint16_t domain_id;
> >>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> >>>> index 9e6ef0cb99..72c9c91d4f 100644
> >>>> --- a/hw/i386/intel_iommu.c
> >>>> +++ b/hw/i386/intel_iommu.c
> >>>> @@ -2656,6 +2656,86 @@ static bool
> >>>> vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
> >>>> return true;
> >>>> }
> >>>>
> >>>> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
> >>>> + gpointer user_data)
> >>>> +{
> >>>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
> >>>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
> >>>> +
> >>>> + return ((entry->domain_id == info->domain_id) &&
> >>>> + (entry->pasid == info->pasid));
> >>>> +}
> >>>> +
> >>>> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
> >>>> + uint16_t domain_id, uint32_t
> >>>> pasid)
> >>>> +{
> >>>> + VTDIOTLBPageInvInfo info;
> >>>> + VTDAddressSpace *vtd_as;
> >>>> + VTDContextEntry ce;
> >>>> +
> >>>> + info.domain_id = domain_id;
> >>>> + info.pasid = pasid;
> >>>> +
> >>>> + vtd_iommu_lock(s);
> >>>> + g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
> >>>> + &info);
> >>>> + vtd_iommu_unlock(s);
> >>>> +
> >>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
> >>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
> >>>> + vtd_as->devfn, &ce) &&
> >>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
> >>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
> >>>> +
> >>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
> >>>> + vtd_as->pasid != pasid) {
> >>>> + continue;
> >>>> + }
> >>>> +
> >>>> + if (!s->scalable_modern) {
> >>>> + vtd_address_space_sync(vtd_as);
> >>>> + }
> >>>> + }
> >>>> + }
> >>>> +}
> >>>> +
> >>>> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
> >>>> + VTDInvDesc *inv_desc)
> >>>> +{
> >>>> + uint16_t domain_id;
> >>>> + uint32_t pasid;
> >>>> +
> >>>> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
> >>>> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
> >>>> + inv_desc->val[2] || inv_desc->val[3]) {
> >>>> + error_report_once("%s: invalid piotlb inv desc val[3]=0x%"PRIx64
> >>>> + " val[2]=0x%"PRIx64" val[1]=0x%"PRIx64
> >>>> + " val[0]=0x%"PRIx64" (reserved bits unzero)",
> >>>> + __func__, inv_desc->val[3], inv_desc->val[2],
> >>>> + inv_desc->val[1], inv_desc->val[0]);
> >>>> + return false;
> >>>> + }
> >>>
> >>> Need to consider the below behaviour as well.
> >>>
> >>> "
> >>> This
> >>> descriptor is a 256-bit descriptor and will result in an invalid descriptor
> >>> error if submitted in an IQ that
> >>> is setup to provide hardware with 128-bit descriptors (IQA_REG.DW=0)
> >>> "
> >>>
> >>> Also there are descriptions about the old inv desc types (e.g.
> >>> iotlb_inv_desc) that can be either 128bits or 256bits.
> >>>
> >>> "If a 128-bit
> >>> version of this descriptor is submitted into an IQ that is setup to provide
> >>> hardware with 256-bit
> >>> descriptors or vice-versa it will result in an invalid descriptor error.
> >>> "
> >>>
> >>> If DW==1, vIOMMU fetches 32 bytes per desc. In that case, if the guest
> >>> submits a 128-bit desc, the high 128 bits would be non-zero when there is
> >>> more than one desc in the queue. But if there is only one desc, the
> >>> high 128 bits would be zero as well. Still, that case may be caught by the
> >>> tail register update: bit 4 is reserved when DW==1, and the guest would use
> >>> bit 4 when it submits only one desc.
> >>>
> >>> If DW==0, vIOMMU fetches 16 bytes per desc. If the guest submits a 256-bit
> >>> desc, it would appear to be two descs from the vIOMMU's p.o.v. The first
> >>> 128 bits can be identified as valid except for the types that do not
> >>> require 256 bits. The higher 128 bits would be subjected to the desc
> >>> sanity check as well.
> >>>
> >>> Based on the above, I think you may need to add two more checks. If DW==0,
> >>> vIOMMU should fail the inv types that requires 256bits; If DW==1, you
> >>> should check the inv_desc->val[2] and inv_desc->val[3]. You've already
> >>> done it in this patch.
> >>>
> >>> Thoughts are welcomed here.
> >>
> >> Good catch,
> >> I think we should write the check in vtd_process_inv_desc
> >> rather than updating the handlers.
> >>
> >> What are your thoughts?
> >
> >the first check can be done in vtd_process_inv_desc(). The second may
> >be better in the handlers, as the handlers have the reserved bits check.
> >But given that none of the inv types use the high 128 bits, it is also
> >acceptable to do it in vtd_process_inv_desc(). Do add a proper comment.
>
> Thanks for Yi's and Clement's suggestions; I'll send a small series to fix
> that for upstream.
>
> BRs.
> Zhenzhong
Ok so you will send v5?
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation
2024-11-04 11:50 ` Michael S. Tsirkin
@ 2024-11-04 11:55 ` Duan, Zhenzhong
2024-11-04 12:01 ` Michael S. Tsirkin
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-04 11:55 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Liu, Yi L, CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org,
alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
peterx@redhat.com, jasowang@redhat.com, jgg@nvidia.com,
nicolinc@nvidia.com, joao.m.martins@oracle.com, Tian, Kevin,
Peng, Chao P, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Michael S. Tsirkin <mst@redhat.com>
>Sent: Monday, November 4, 2024 7:51 PM
>Subject: Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-
>selective PASID-based iotlb invalidation
>
>On Mon, Nov 04, 2024 at 11:46:00AM +0000, Duan, Zhenzhong wrote:
>>
>>
>> >-----Original Message-----
>> >From: Liu, Yi L <yi.l.liu@intel.com>
>> >Sent: Monday, November 4, 2024 4:45 PM
>> >Subject: Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-
>> >selective PASID-based iotlb invalidation
>> >
>> >On 2024/11/4 15:37, CLEMENT MATHIEU--DRIF wrote:
>> >>
>> >>
>> >> On 04/11/2024 03:49, Yi Liu wrote:
>> >>>
>> >>>
>> >>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>> >>>> Per spec 6.5.2.4, PASID-selective PASID-based iotlb invalidation will
>> >>>> flush stage-2 iotlb entries with matching domain id and pasid.
>> >>>
>> >>> Also, call out it's per table Table 21. PASID-based-IOTLB Invalidation of
>> >>> VT-d spec 4.1.
>> >>>
>> >>>> With scalable modern mode introduced, guest could send PASID-selective
>> >>>> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
>> >>>>
>> >>>> While at it, remove the old IOTLB-related definitions, which were unused.
>> >>>
>> >>>
>> >>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> >>>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>> >>>> Acked-by: Jason Wang <jasowang@redhat.com>
>> >>>> ---
>> >>>> hw/i386/intel_iommu_internal.h | 14 ++++--
>> >>>> hw/i386/intel_iommu.c | 88
>+++++++++++++++++++++++++++++++++-
>> >>>> 2 files changed, 96 insertions(+), 6 deletions(-)
>> >>>>
>> >>>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/
>> >>>> intel_iommu_internal.h
>> >>>> index d0f9d4589d..eec8090190 100644
>> >>>> --- a/hw/i386/intel_iommu_internal.h
>> >>>> +++ b/hw/i386/intel_iommu_internal.h
>> >>>> @@ -403,11 +403,6 @@ typedef union VTDInvDesc VTDInvDesc;
>> >>>> #define VTD_INV_DESC_IOTLB_AM(val) ((val) & 0x3fULL)
>> >>>> #define VTD_INV_DESC_IOTLB_RSVD_LO 0xffffffff0000f100ULL
>> >>>> #define VTD_INV_DESC_IOTLB_RSVD_HI 0xf80ULL
>> >>>> -#define VTD_INV_DESC_IOTLB_PASID_PASID (2ULL << 4)
>> >>>> -#define VTD_INV_DESC_IOTLB_PASID_PAGE (3ULL << 4)
>> >>>> -#define VTD_INV_DESC_IOTLB_PASID(val) (((val) >> 32) &
>> >>>> VTD_PASID_ID_MASK)
>> >>>> -#define
>VTD_INV_DESC_IOTLB_PASID_RSVD_LO 0xfff00000000001c0ULL
>> >>>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_HI 0xf80ULL
>> >>>>
>> >>>> /* Mask for Device IOTLB Invalidate Descriptor */
>> >>>> #define VTD_INV_DESC_DEVICE_IOTLB_ADDR(val) ((val) &
>> >>>> 0xfffffffffffff000ULL)
>> >>>> @@ -433,6 +428,15 @@ typedef union VTDInvDesc VTDInvDesc;
>> >>>> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
>> >>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>> >>>>
>> >>>> +/* Masks for PIOTLB Invalidate Descriptor */
>> >>>> +#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
>> >>>> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>> >>>> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>> >>>> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) &
>> >>>> VTD_DOMAIN_ID_MASK)
>> >>>> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
>> >>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
>> >>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>> >>>> +
>> >>>> /* Information about page-selective IOTLB invalidate */
>> >>>> struct VTDIOTLBPageInvInfo {
>> >>>> uint16_t domain_id;
>> >>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> >>>> index 9e6ef0cb99..72c9c91d4f 100644
>> >>>> --- a/hw/i386/intel_iommu.c
>> >>>> +++ b/hw/i386/intel_iommu.c
>> >>>> @@ -2656,6 +2656,86 @@ static bool
>> >>>> vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
>> >>>> return true;
>> >>>> }
>> >>>>
>> >>>> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
>> >>>> + gpointer user_data)
>> >>>> +{
>> >>>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
>> >>>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>> >>>> +
>> >>>> + return ((entry->domain_id == info->domain_id) &&
>> >>>> + (entry->pasid == info->pasid));
>> >>>> +}
>> >>>> +
>> >>>> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>> >>>> + uint16_t domain_id, uint32_t
>> >>>> pasid)
>> >>>> +{
>> >>>> + VTDIOTLBPageInvInfo info;
>> >>>> + VTDAddressSpace *vtd_as;
>> >>>> + VTDContextEntry ce;
>> >>>> +
>> >>>> + info.domain_id = domain_id;
>> >>>> + info.pasid = pasid;
>> >>>> +
>> >>>> + vtd_iommu_lock(s);
>> >>>> + g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
>> >>>> + &info);
>> >>>> + vtd_iommu_unlock(s);
>> >>>> +
>> >>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>> >>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>> >>>> + vtd_as->devfn, &ce) &&
>> >>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>> >>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>> >>>> +
>> >>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>> >>>> + vtd_as->pasid != pasid) {
>> >>>> + continue;
>> >>>> + }
>> >>>> +
>> >>>> + if (!s->scalable_modern) {
>> >>>> + vtd_address_space_sync(vtd_as);
>> >>>> + }
>> >>>> + }
>> >>>> + }
>> >>>> +}
>> >>>> +
>> >>>> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>> >>>> + VTDInvDesc *inv_desc)
>> >>>> +{
>> >>>> + uint16_t domain_id;
>> >>>> + uint32_t pasid;
>> >>>> +
>> >>>> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
>> >>>> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
>> >>>> + inv_desc->val[2] || inv_desc->val[3]) {
>> >>>> + error_report_once("%s: invalid piotlb inv desc val[3]=0x%"PRIx64
>> >>>> + " val[2]=0x%"PRIx64" val[1]=0x%"PRIx64
>> >>>> + " val[0]=0x%"PRIx64" (reserved bits unzero)",
>> >>>> + __func__, inv_desc->val[3], inv_desc->val[2],
>> >>>> + inv_desc->val[1], inv_desc->val[0]);
>> >>>> + return false;
>> >>>> + }
>> >>>
>> >>> Need to consider the below behaviour as well.
>> >>>
>> >>> "
>> >>> This
>> >>> descriptor is a 256-bit descriptor and will result in an invalid descriptor
>> >>> error if submitted in an IQ that
>> >>> is setup to provide hardware with 128-bit descriptors (IQA_REG.DW=0)
>> >>> "
>> >>>
>> >>> Also there are descriptions about the old inv desc types (e.g.
>> >>> iotlb_inv_desc) that can be either 128bits or 256bits.
>> >>>
>> >>> "If a 128-bit
>> >>> version of this descriptor is submitted into an IQ that is setup to provide
>> >>> hardware with 256-bit
>> >>> descriptors or vice-versa it will result in an invalid descriptor error.
>> >>> "
>> >>>
> >> >>> If DW==1, vIOMMU fetches 32 bytes per desc. In that case, if the guest
> >> >>> submits a 128-bit desc, the high 128 bits would be non-zero when there is
> >> >>> more than one desc in the queue. But if there is only one desc, the
> >> >>> high 128 bits would be zero as well. Still, that case may be caught by the
> >> >>> tail register update: bit 4 is reserved when DW==1, and the guest would use
> >> >>> bit 4 when it submits only one desc.
> >> >>>
> >> >>> If DW==0, vIOMMU fetches 16 bytes per desc. If the guest submits a 256-bit
> >> >>> desc, it would appear to be two descs from the vIOMMU's p.o.v. The first
> >> >>> 128 bits can be identified as valid except for the types that do not
> >> >>> require 256 bits. The higher 128 bits would be subjected to the desc
> >> >>> sanity check as well.
>> >>>
>> >>> Based on the above, I think you may need to add two more checks. If
>DW==0,
>> >>> vIOMMU should fail the inv types that requires 256bits; If DW==1, you
>> >>> should check the inv_desc->val[2] and inv_desc->val[3]. You've already
>> >>> done it in this patch.
>> >>>
>> >>> Thoughts are welcomed here.
>> >>
>> >> Good catch,
>> >> I think we should write the check in vtd_process_inv_desc
>> >> rather than updating the handlers.
>> >>
>> >> What are your thoughts?
>> >
>> >the first check can be done in vtd_process_inv_desc(). The second may
>> >be better in the handlers, as the handlers have the reserved bits check.
>> >But given that none of the inv types use the high 128 bits, it is also
>> >acceptable to do it in vtd_process_inv_desc(). Do add a proper comment.
>>
>> Thanks for Yi's and Clement's suggestions; I'll send a small series to fix
>> that for upstream.
>>
>> BRs.
>> Zhenzhong
>
>Ok so you will send v5?
No, what Yi pointed out is an existing upstream issue; I'll send a small
series (3 patches) to fix it upstream.
Thanks
Zhenzhong
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation
2024-11-04 11:55 ` Duan, Zhenzhong
@ 2024-11-04 12:01 ` Michael S. Tsirkin
2024-11-04 12:03 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Michael S. Tsirkin @ 2024-11-04 12:01 UTC (permalink / raw)
To: Duan, Zhenzhong
Cc: Liu, Yi L, CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org,
alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
peterx@redhat.com, jasowang@redhat.com, jgg@nvidia.com,
nicolinc@nvidia.com, joao.m.martins@oracle.com, Tian, Kevin,
Peng, Chao P, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On Mon, Nov 04, 2024 at 11:55:39AM +0000, Duan, Zhenzhong wrote:
>
>
> >-----Original Message-----
> >From: Michael S. Tsirkin <mst@redhat.com>
> >Sent: Monday, November 4, 2024 7:51 PM
> >Subject: Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-
> >selective PASID-based iotlb invalidation
> >
> >On Mon, Nov 04, 2024 at 11:46:00AM +0000, Duan, Zhenzhong wrote:
> >>
> >>
> >> >-----Original Message-----
> >> >From: Liu, Yi L <yi.l.liu@intel.com>
> >> >Sent: Monday, November 4, 2024 4:45 PM
> >> >Subject: Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-
> >> >selective PASID-based iotlb invalidation
> >> >
> >> >On 2024/11/4 15:37, CLEMENT MATHIEU--DRIF wrote:
> >> >>
> >> >>
> >> >> On 04/11/2024 03:49, Yi Liu wrote:
> >> >>>
> >> >>>
> >> >>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
> >> >>>> Per spec 6.5.2.4, PASID-selective PASID-based iotlb invalidation will
> >> >>>> flush stage-2 iotlb entries with matching domain id and pasid.
> >> >>>
> >> >>> Also, call out it's per table Table 21. PASID-based-IOTLB Invalidation of
> >> >>> VT-d spec 4.1.
> >> >>>
> >> >>>> With scalable modern mode introduced, guest could send PASID-selective
> >> >>>> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
> >> >>>>
> >> >>>> While at it, remove the old IOTLB-related definitions, which were unused.
> >> >>>
> >> >>>
> >> >>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> >> >>>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> >> >>>> Acked-by: Jason Wang <jasowang@redhat.com>
> >> >>>> ---
> >> >>>> hw/i386/intel_iommu_internal.h | 14 ++++--
> >> >>>> hw/i386/intel_iommu.c | 88
> >+++++++++++++++++++++++++++++++++-
> >> >>>> 2 files changed, 96 insertions(+), 6 deletions(-)
> >> >>>>
> >> >>>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/
> >> >>>> intel_iommu_internal.h
> >> >>>> index d0f9d4589d..eec8090190 100644
> >> >>>> --- a/hw/i386/intel_iommu_internal.h
> >> >>>> +++ b/hw/i386/intel_iommu_internal.h
> >> >>>> @@ -403,11 +403,6 @@ typedef union VTDInvDesc VTDInvDesc;
> >> >>>> #define VTD_INV_DESC_IOTLB_AM(val) ((val) & 0x3fULL)
> >> >>>> #define VTD_INV_DESC_IOTLB_RSVD_LO 0xffffffff0000f100ULL
> >> >>>> #define VTD_INV_DESC_IOTLB_RSVD_HI 0xf80ULL
> >> >>>> -#define VTD_INV_DESC_IOTLB_PASID_PASID (2ULL << 4)
> >> >>>> -#define VTD_INV_DESC_IOTLB_PASID_PAGE (3ULL << 4)
> >> >>>> -#define VTD_INV_DESC_IOTLB_PASID(val) (((val) >> 32) &
> >> >>>> VTD_PASID_ID_MASK)
> >> >>>> -#define
> >VTD_INV_DESC_IOTLB_PASID_RSVD_LO 0xfff00000000001c0ULL
> >> >>>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_HI 0xf80ULL
> >> >>>>
> >> >>>> /* Mask for Device IOTLB Invalidate Descriptor */
> >> >>>> #define VTD_INV_DESC_DEVICE_IOTLB_ADDR(val) ((val) &
> >> >>>> 0xfffffffffffff000ULL)
> >> >>>> @@ -433,6 +428,15 @@ typedef union VTDInvDesc VTDInvDesc;
> >> >>>> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
> >> >>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
> >> >>>>
> >> >>>> +/* Masks for PIOTLB Invalidate Descriptor */
> >> >>>> +#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
> >> >>>> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
> >> >>>> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
> >> >>>> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) &
> >> >>>> VTD_DOMAIN_ID_MASK)
> >> >>>> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) & 0xfffffULL)
> >> >>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
> >> >>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
> >> >>>> +
> >> >>>> /* Information about page-selective IOTLB invalidate */
> >> >>>> struct VTDIOTLBPageInvInfo {
> >> >>>> uint16_t domain_id;
> >> >>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> >> >>>> index 9e6ef0cb99..72c9c91d4f 100644
> >> >>>> --- a/hw/i386/intel_iommu.c
> >> >>>> +++ b/hw/i386/intel_iommu.c
> >> >>>> @@ -2656,6 +2656,86 @@ static bool
> >> >>>> vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
> >> >>>> return true;
> >> >>>> }
> >> >>>>
> >> >>>> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer value,
> >> >>>> + gpointer user_data)
> >> >>>> +{
> >> >>>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
> >> >>>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
> >> >>>> +
> >> >>>> + return ((entry->domain_id == info->domain_id) &&
> >> >>>> + (entry->pasid == info->pasid));
> >> >>>> +}
> >> >>>> +
> >> >>>> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
> >> >>>> + uint16_t domain_id, uint32_t
> >> >>>> pasid)
> >> >>>> +{
> >> >>>> + VTDIOTLBPageInvInfo info;
> >> >>>> + VTDAddressSpace *vtd_as;
> >> >>>> + VTDContextEntry ce;
> >> >>>> +
> >> >>>> + info.domain_id = domain_id;
> >> >>>> + info.pasid = pasid;
> >> >>>> +
> >> >>>> + vtd_iommu_lock(s);
> >> >>>> + g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_pasid,
> >> >>>> + &info);
> >> >>>> + vtd_iommu_unlock(s);
> >> >>>> +
> >> >>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
> >> >>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
> >> >>>> + vtd_as->devfn, &ce) &&
> >> >>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
> >> >>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
> >> >>>> +
> >> >>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
> >> >>>> + vtd_as->pasid != pasid) {
> >> >>>> + continue;
> >> >>>> + }
> >> >>>> +
> >> >>>> + if (!s->scalable_modern) {
> >> >>>> + vtd_address_space_sync(vtd_as);
> >> >>>> + }
> >> >>>> + }
> >> >>>> + }
> >> >>>> +}
> >> >>>> +
> >> >>>> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
> >> >>>> + VTDInvDesc *inv_desc)
> >> >>>> +{
> >> >>>> + uint16_t domain_id;
> >> >>>> + uint32_t pasid;
> >> >>>> +
> >> >>>> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
> >> >>>> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
> >> >>>> + inv_desc->val[2] || inv_desc->val[3]) {
> >> >>>> + error_report_once("%s: invalid piotlb inv desc val[3]=0x%"PRIx64
> >> >>>> + " val[2]=0x%"PRIx64" val[1]=0x%"PRIx64
> >> >>>> + " val[0]=0x%"PRIx64" (reserved bits unzero)",
> >> >>>> + __func__, inv_desc->val[3], inv_desc->val[2],
> >> >>>> + inv_desc->val[1], inv_desc->val[0]);
> >> >>>> + return false;
> >> >>>> + }
> >> >>>
> >> >>> Need to consider the below behaviour as well.
> >> >>>
> >> >>> "
> >> >>> This
> >> >>> descriptor is a 256-bit descriptor and will result in an invalid descriptor
> >> >>> error if submitted in an IQ that
> >> >>> is setup to provide hardware with 128-bit descriptors (IQA_REG.DW=0)
> >> >>> "
> >> >>>
> >> >>> Also there are descriptions about the old inv desc types (e.g.
> >> >>> iotlb_inv_desc) that can be either 128bits or 256bits.
> >> >>>
> >> >>> "If a 128-bit
> >> >>> version of this descriptor is submitted into an IQ that is setup to provide
> >> >>> hardware with 256-bit
> >> >>> descriptors or vice-versa it will result in an invalid descriptor error.
> >> >>> "
> >> >>>
> >> >>> If DW==1, vIOMMU fetches 32 bytes per desc. In such case, if the guest
> >> >>> submits 128bits desc, then the high 128bits would be non-zero if there is
> >> >>> more than one desc. But if there is only one desc in the queue, then the
> >> >>> high 128bits would be zero as well. While, it may be captured by the
> >> >>> tail register update. Bit4 is reserved when DW==1, and guest would use
> >> >>> bit4 when it only submits one desc.
> >> >>>
> >> >>> If DW==0, vIOMMU fetches 16 bytes per desc. If guest submits a 256-bit desc,
> >> >>> it would appear to be two descs from the vIOMMU's p.o.v. The first 128bits
> >> >>> can be identified as valid except for the types that do not require
> >> >>> 256bits. The higher 128bits would be subjected to the desc sanity check
> >> >>> as well.
> >> >>>
> >> >>> Based on the above, I think you may need to add two more checks. If
> >DW==0,
> >> >>> vIOMMU should fail the inv types that requires 256bits; If DW==1, you
> >> >>> should check the inv_desc->val[2] and inv_desc->val[3]. You've already
> >> >>> done it in this patch.
> >> >>>
> >> >>> Thoughts are welcomed here.
> >> >>
> >> >> Good catch,
> >> >> I think we should write the check in vtd_process_inv_desc
> >> >> rather than updating the handlers.
> >> >>
> >> >> What are your thoughts?
> >> >
> >> >the first check can be done in vtd_process_inv_desc(). The second may
> >> >be better in the handlers as the handlers have the reserved bits check.
> >> >But given that none of the inv types use the high 128bits, it is also
> >> >acceptable to do it in vtd_process_inv_desc(). Do add proper comment.
> >>
> >> Thanks Yi and Clement's suggestion, I'll send a small series to fix that
> >> for upstream.
> >>
> >> BRs.
> >> Zhenzhong
> >
> >Ok so you will send v5?
>
> No, what Yi pointed out is an upstream issue, I'll send a small series(3 patches)
> to fix that issue for upstream.
>
> Thanks
> Zhenzhong
Also ok. There's still gonna be v5 because of other comments, right?
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation
2024-11-04 12:01 ` Michael S. Tsirkin
@ 2024-11-04 12:03 ` Duan, Zhenzhong
0 siblings, 0 replies; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-04 12:03 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Liu, Yi L, CLEMENT MATHIEU--DRIF, qemu-devel@nongnu.org,
alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
peterx@redhat.com, jasowang@redhat.com, jgg@nvidia.com,
nicolinc@nvidia.com, joao.m.martins@oracle.com, Tian, Kevin,
Peng, Chao P, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Michael S. Tsirkin <mst@redhat.com>
>Sent: Monday, November 4, 2024 8:01 PM
>Subject: Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-
>selective PASID-based iotlb invalidation
>
>On Mon, Nov 04, 2024 at 11:55:39AM +0000, Duan, Zhenzhong wrote:
>>
>>
>> >-----Original Message-----
>> >From: Michael S. Tsirkin <mst@redhat.com>
>> >Sent: Monday, November 4, 2024 7:51 PM
>> >Subject: Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-
>> >selective PASID-based iotlb invalidation
>> >
>> >On Mon, Nov 04, 2024 at 11:46:00AM +0000, Duan, Zhenzhong wrote:
>> >>
>> >>
>> >> >-----Original Message-----
>> >> >From: Liu, Yi L <yi.l.liu@intel.com>
>> >> >Sent: Monday, November 4, 2024 4:45 PM
>> >> >Subject: Re: [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-
>> >> >selective PASID-based iotlb invalidation
>> >> >
>> >> >On 2024/11/4 15:37, CLEMENT MATHIEU--DRIF wrote:
>> >> >>
>> >> >>
>> >> >> On 04/11/2024 03:49, Yi Liu wrote:
>> >> >>>
>> >> >>>
>> >> >>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>> >> >>>> Per spec 6.5.2.4, PASID-selective PASID-based iotlb invalidation will
>> >> >>>> flush stage-2 iotlb entries with matching domain id and pasid.
>> >> >>>
>> >> >>> Also, call out it's per table Table 21. PASID-based-IOTLB Invalidation of
>> >> >>> VT-d spec 4.1.
>> >> >>>
>> >> >>>> With scalable modern mode introduced, guest could send PASID-
>selective
>> >> >>>> PASID-based iotlb invalidation to flush both stage-1 and stage-2 entries.
>> >> >>>>
>> >> >>>> By this chance, remove old IOTLB related definitions which were
>unused.
>> >> >>>
>> >> >>>
>> >> >>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> >> >>>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--
>drif@eviden.com>
>> >> >>>> Acked-by: Jason Wang <jasowang@redhat.com>
>> >> >>>> ---
>> >> >>>> hw/i386/intel_iommu_internal.h | 14 ++++--
>> >> >>>> hw/i386/intel_iommu.c | 88
>> >+++++++++++++++++++++++++++++++++-
>> >> >>>> 2 files changed, 96 insertions(+), 6 deletions(-)
>> >> >>>>
>> >> >>>> diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/
>> >> >>>> intel_iommu_internal.h
>> >> >>>> index d0f9d4589d..eec8090190 100644
>> >> >>>> --- a/hw/i386/intel_iommu_internal.h
>> >> >>>> +++ b/hw/i386/intel_iommu_internal.h
>> >> >>>> @@ -403,11 +403,6 @@ typedef union VTDInvDesc VTDInvDesc;
>> >> >>>> #define VTD_INV_DESC_IOTLB_AM(val) ((val) & 0x3fULL)
>> >> >>>> #define VTD_INV_DESC_IOTLB_RSVD_LO 0xffffffff0000f100ULL
>> >> >>>> #define VTD_INV_DESC_IOTLB_RSVD_HI 0xf80ULL
>> >> >>>> -#define VTD_INV_DESC_IOTLB_PASID_PASID (2ULL << 4)
>> >> >>>> -#define VTD_INV_DESC_IOTLB_PASID_PAGE (3ULL << 4)
>> >> >>>> -#define VTD_INV_DESC_IOTLB_PASID(val) (((val) >> 32) &
>> >> >>>> VTD_PASID_ID_MASK)
>> >> >>>> -#define
>> >VTD_INV_DESC_IOTLB_PASID_RSVD_LO 0xfff00000000001c0ULL
>> >> >>>> -#define VTD_INV_DESC_IOTLB_PASID_RSVD_HI 0xf80ULL
>> >> >>>>
>> >> >>>> /* Mask for Device IOTLB Invalidate Descriptor */
>> >> >>>> #define VTD_INV_DESC_DEVICE_IOTLB_ADDR(val) ((val) &
>> >> >>>> 0xfffffffffffff000ULL)
>> >> >>>> @@ -433,6 +428,15 @@ typedef union VTDInvDesc VTDInvDesc;
>> >> >>>> #define VTD_SPTE_LPAGE_L3_RSVD_MASK(aw) \
>> >> >>>> (0x3ffff800ULL | ~(VTD_HAW_MASK(aw) | VTD_SL_IGN_COM))
>> >> >>>>
>> >> >>>> +/* Masks for PIOTLB Invalidate Descriptor */
>> >> >>>> +#define VTD_INV_DESC_PIOTLB_G (3ULL << 4)
>> >> >>>> +#define VTD_INV_DESC_PIOTLB_ALL_IN_PASID (2ULL << 4)
>> >> >>>> +#define VTD_INV_DESC_PIOTLB_PSI_IN_PASID (3ULL << 4)
>> >> >>>> +#define VTD_INV_DESC_PIOTLB_DID(val) (((val) >> 16) &
>> >> >>>> VTD_DOMAIN_ID_MASK)
>> >> >>>> +#define VTD_INV_DESC_PIOTLB_PASID(val) (((val) >> 32) &
>0xfffffULL)
>> >> >>>> +#define
>VTD_INV_DESC_PIOTLB_RSVD_VAL0 0xfff000000000f1c0ULL
>> >> >>>> +#define VTD_INV_DESC_PIOTLB_RSVD_VAL1 0xf80ULL
>> >> >>>> +
>> >> >>>> /* Information about page-selective IOTLB invalidate */
>> >> >>>> struct VTDIOTLBPageInvInfo {
>> >> >>>> uint16_t domain_id;
>> >> >>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> >> >>>> index 9e6ef0cb99..72c9c91d4f 100644
>> >> >>>> --- a/hw/i386/intel_iommu.c
>> >> >>>> +++ b/hw/i386/intel_iommu.c
>> >> >>>> @@ -2656,6 +2656,86 @@ static bool
>> >> >>>> vtd_process_iotlb_desc(IntelIOMMUState *s, VTDInvDesc *inv_desc)
>> >> >>>> return true;
>> >> >>>> }
>> >> >>>>
>> >> >>>> +static gboolean vtd_hash_remove_by_pasid(gpointer key, gpointer
>value,
>> >> >>>> + gpointer user_data)
>> >> >>>> +{
>> >> >>>> + VTDIOTLBEntry *entry = (VTDIOTLBEntry *)value;
>> >> >>>> + VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>> >> >>>> +
>> >> >>>> + return ((entry->domain_id == info->domain_id) &&
>> >> >>>> + (entry->pasid == info->pasid));
>> >> >>>> +}
>> >> >>>> +
>> >> >>>> +static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
>> >> >>>> + uint16_t domain_id, uint32_t
>> >> >>>> pasid)
>> >> >>>> +{
>> >> >>>> + VTDIOTLBPageInvInfo info;
>> >> >>>> + VTDAddressSpace *vtd_as;
>> >> >>>> + VTDContextEntry ce;
>> >> >>>> +
>> >> >>>> + info.domain_id = domain_id;
>> >> >>>> + info.pasid = pasid;
>> >> >>>> +
>> >> >>>> + vtd_iommu_lock(s);
>> >> >>>> + g_hash_table_foreach_remove(s->iotlb,
>vtd_hash_remove_by_pasid,
>> >> >>>> + &info);
>> >> >>>> + vtd_iommu_unlock(s);
>> >> >>>> +
>> >> >>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>> >> >>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>> >> >>>> + vtd_as->devfn, &ce) &&
>> >> >>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>> >> >>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>> >> >>>> +
>> >> >>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>> >> >>>> + vtd_as->pasid != pasid) {
>> >> >>>> + continue;
>> >> >>>> + }
>> >> >>>> +
>> >> >>>> + if (!s->scalable_modern) {
>> >> >>>> + vtd_address_space_sync(vtd_as);
>> >> >>>> + }
>> >> >>>> + }
>> >> >>>> + }
>> >> >>>> +}
>> >> >>>> +
>> >> >>>> +static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
>> >> >>>> + VTDInvDesc *inv_desc)
>> >> >>>> +{
>> >> >>>> + uint16_t domain_id;
>> >> >>>> + uint32_t pasid;
>> >> >>>> +
>> >> >>>> + if ((inv_desc->val[0] & VTD_INV_DESC_PIOTLB_RSVD_VAL0) ||
>> >> >>>> + (inv_desc->val[1] & VTD_INV_DESC_PIOTLB_RSVD_VAL1) ||
>> >> >>>> + inv_desc->val[2] || inv_desc->val[3]) {
>> >> >>>> + error_report_once("%s: invalid piotlb inv desc val[3]=0x%"PRIx64
>> >> >>>> + " val[2]=0x%"PRIx64" val[1]=0x%"PRIx64
>> >> >>>> + " val[0]=0x%"PRIx64" (reserved bits unzero)",
>> >> >>>> + __func__, inv_desc->val[3], inv_desc->val[2],
>> >> >>>> + inv_desc->val[1], inv_desc->val[0]);
>> >> >>>> + return false;
>> >> >>>> + }
>> >> >>>
>> >> >>> Need to consider the below behaviour as well.
>> >> >>>
>> >> >>> "
>> >> >>> This
>> >> >>> descriptor is a 256-bit descriptor and will result in an invalid descriptor
>> >> >>> error if submitted in an IQ that
>> >> >>> is setup to provide hardware with 128-bit descriptors (IQA_REG.DW=0)
>> >> >>> "
>> >> >>>
>> >> >>> Also there are descriptions about the old inv desc types (e.g.
>> >> >>> iotlb_inv_desc) that can be either 128bits or 256bits.
>> >> >>>
>> >> >>> "If a 128-bit
>> >> >>> version of this descriptor is submitted into an IQ that is setup to provide
>> >> >>> hardware with 256-bit
>> >> >>> descriptors or vice-versa it will result in an invalid descriptor error.
>> >> >>> "
>> >> >>>
>> >> >>> If DW==1, vIOMMU fetches 32 bytes per desc. In such case, if the guest
>> >> >>> submits 128bits desc, then the high 128bits would be non-zero if there is
>> >> >>> more than one desc. But if there is only one desc in the queue, then the
>> >> >>> high 128bits would be zero as well. While, it may be captured by the
>> >> >>> tail register update. Bit4 is reserved when DW==1, and guest would use
>> >> >>> bit4 when it only submits one desc.
>> >> >>>
>> >> >>> If DW==0, vIOMMU fetches 16 bytes per desc. If guest submits a 256-bit
>desc,
>> >> >>> it would appear to be two descs from the vIOMMU's p.o.v. The first 128bits
>> >> >>> can be identified as valid except for the types that do not require
>> >> >>> 256bits. The higher 128bits would be subjected to the desc sanity check
>> >> >>> as well.
>> >> >>>
>> >> >>> Based on the above, I think you may need to add two more checks. If
>> >DW==0,
>> >> >>> vIOMMU should fail the inv types that requires 256bits; If DW==1, you
>> >> >>> should check the inv_desc->val[2] and inv_desc->val[3]. You've already
>> >> >>> done it in this patch.
>> >> >>>
>> >> >>> Thoughts are welcomed here.
>> >> >>
>> >> >> Good catch,
>> >> >> I think we should write the check in vtd_process_inv_desc
>> >> >> rather than updating the handlers.
>> >> >>
>> >> >> What are your thoughts?
>> >> >
>> >> >the first check can be done in vtd_process_inv_desc(). The second may
>> >> >be better in the handlers as the handlers have the reserved bits check.
>> >> >But given that none of the inv types use the high 128bits, it is also
>> >> >acceptable to do it in vtd_process_inv_desc(). Do add proper comment.
>> >>
>> >> Thanks Yi and Clement's suggestion, I'll send a small series to fix that
>> >> for upstream.
>> >>
>> >> BRs.
>> >> Zhenzhong
>> >
>> >Ok so you will send v5?
>>
>> No, what Yi pointed out is an upstream issue, I'll send a small series(3 patches)
>> to fix that issue for upstream.
>>
>> Thanks
>> Zhenzhong
>
>Also ok. There's still gonna be v5 because of other comments, right?
Right.
Thanks
Zhenzhong
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for scalable modern mode
2024-11-04 7:23 ` Yi Liu
@ 2024-11-05 3:11 ` Duan, Zhenzhong
2024-11-05 5:56 ` Yi Liu
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-05 3:11 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Monday, November 4, 2024 3:23 PM
>Subject: Re: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for
>scalable modern mode
>
>On 2024/11/4 14:25, Duan, Zhenzhong wrote:
>>
>>
>>> -----Original Message-----
>>> From: Liu, Yi L <yi.l.liu@intel.com>
>>> Sent: Monday, November 4, 2024 12:25 PM
>>> Subject: Re: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for
>>> scalable modern mode
>>>
>>> On 2024/9/30 17:26, Zhenzhong Duan wrote:
>>>> Intel VT-d 3.0 introduces scalable mode, and it has a bunch of capabilities
>>>> related to scalable mode translation, thus there are multiple combinations.
>>>>
>>>> This vIOMMU implementation wants to simplify it with a new property "x-fls".
>>>> When enabled in scalable mode, first stage translation also known as
>scalable
>>>> modern mode is supported. When enabled in legacy mode, an error is reported.
>>>>
>>>> With scalable modern mode exposed to user, also tighten the pasid entry
>>>> check in vtd_pe_type_check().
>>>>
>>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>>> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
>>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>>
>>> Maybe a Suggested-by tag can help to understand where this idea comes. :)
>>
>> Will add:
>> Suggested-by: Jason Wang <jasowang@redhat.com>
>>
>>>
>>>> ---
>>>> hw/i386/intel_iommu_internal.h | 2 ++
>>>> hw/i386/intel_iommu.c | 28 +++++++++++++++++++---------
>>>> 2 files changed, 21 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/hw/i386/intel_iommu_internal.h
>b/hw/i386/intel_iommu_internal.h
>>>> index 2702edd27f..f13576d334 100644
>>>> --- a/hw/i386/intel_iommu_internal.h
>>>> +++ b/hw/i386/intel_iommu_internal.h
>>>> @@ -195,6 +195,7 @@
>>>> #define VTD_ECAP_PASID (1ULL << 40)
>>>> #define VTD_ECAP_SMTS (1ULL << 43)
>>>> #define VTD_ECAP_SLTS (1ULL << 46)
>>>> +#define VTD_ECAP_FLTS (1ULL << 47)
>>>>
>>>> /* CAP_REG */
>>>> /* (offset >> 4) << 24 */
>>>> @@ -211,6 +212,7 @@
>>>> #define VTD_CAP_SLLPS ((1ULL << 34) | (1ULL << 35))
>>>> #define VTD_CAP_DRAIN_WRITE (1ULL << 54)
>>>> #define VTD_CAP_DRAIN_READ (1ULL << 55)
>>>> +#define VTD_CAP_FS1GP (1ULL << 56)
>>>> #define VTD_CAP_DRAIN (VTD_CAP_DRAIN_READ |
>>> VTD_CAP_DRAIN_WRITE)
>>>> #define VTD_CAP_CM (1ULL << 7)
>>>> #define VTD_PASID_ID_SHIFT 20
>>>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>>>> index 068a08f522..14578655e1 100644
>>>> --- a/hw/i386/intel_iommu.c
>>>> +++ b/hw/i386/intel_iommu.c
>>>> @@ -803,16 +803,18 @@ static inline bool
>>> vtd_is_fl_level_supported(IntelIOMMUState *s, uint32_t level)
>>>> }
>>>>
>>>> /* Return true if check passed, otherwise false */
>>>> -static inline bool vtd_pe_type_check(X86IOMMUState *x86_iommu,
>>>> - VTDPASIDEntry *pe)
>>>> +static inline bool vtd_pe_type_check(IntelIOMMUState *s, VTDPASIDEntry
>*pe)
>>>> {
>>>> switch (VTD_PE_GET_TYPE(pe)) {
>>>> - case VTD_SM_PASID_ENTRY_SLT:
>>>> - return true;
>>>> - case VTD_SM_PASID_ENTRY_PT:
>>>> - return x86_iommu->pt_supported;
>>>> case VTD_SM_PASID_ENTRY_FLT:
>>>> + return !!(s->ecap & VTD_ECAP_FLTS);
>>>> + case VTD_SM_PASID_ENTRY_SLT:
>>>> + return !!(s->ecap & VTD_ECAP_SLTS);
>>>> case VTD_SM_PASID_ENTRY_NESTED:
>>>> + /* Not support NESTED page table type yet */
>>>> + return false;
>>>> + case VTD_SM_PASID_ENTRY_PT:
>>>> + return !!(s->ecap & VTD_ECAP_PT);
>>>> default:
>>>> /* Unknown type */
>>>> return false;
>>>> @@ -861,7 +863,6 @@ static int
>>> vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>>> uint8_t pgtt;
>>>> uint32_t index;
>>>> dma_addr_t entry_size;
>>>> - X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(s);
>>>>
>>>> index = VTD_PASID_TABLE_INDEX(pasid);
>>>> entry_size = VTD_PASID_ENTRY_SIZE;
>>>> @@ -875,7 +876,7 @@ static int
>>> vtd_get_pe_in_pasid_leaf_table(IntelIOMMUState *s,
>>>> }
>>>>
>>>> /* Do translation type check */
>>>> - if (!vtd_pe_type_check(x86_iommu, pe)) {
>>>> + if (!vtd_pe_type_check(s, pe)) {
>>>> return -VTD_FR_PASID_TABLE_ENTRY_INV;
>>>> }
>>>>
>>>> @@ -3779,6 +3780,7 @@ static Property vtd_properties[] = {
>>>> VTD_HOST_AW_AUTO),
>>>> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode,
>>> FALSE),
>>>> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState,
>scalable_mode,
>>> FALSE),
>>>> + DEFINE_PROP_BOOL("x-fls", IntelIOMMUState, scalable_modern, FALSE),
>>>> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control,
>>> false),
>>>
>>> a question: is there any requirement on the layout of this array? Should
>>> new fields added in the end?
>>
>> Looked over the history, seems we didn't have an explicit rule in vtd_properties.
>> I put "x-fls" just under "x-scalable-mode" as stage-1 is a sub-feature of scalable
>mode.
>> Let me know if you have preference to add in the end.
>
>I don't have a preference for now as long as it does not break any
>functionality. BTW. Will x-flt or x-flts better?
So it's first level support (fls) vs. first level translation (flt) or first level translation support (flts);
they look the same to me, but I can change to x-flt or x-flts if you prefer.
Thanks
Zhenzhong
^ permalink raw reply [flat|nested] 67+ messages in thread
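For context, the property under discussion would be toggled on the intel-iommu device at the command line. This is a hypothetical config fragment assuming the "x-fls" name from this version of the series (the rest of the thread settles on "x-flts" instead), together with the series' other scalable-modern knobs:

```shell
# Enable scalable mode with stage-1 (first-stage) translation; property names
# "x-fls" (later "x-flts") and "fs1gp" follow this patch series, not released QEMU.
qemu-system-x86_64 -M q35,accel=kvm,kernel-irqchip=split \
    -device intel-iommu,x-scalable-mode=on,x-fls=on,fs1gp=on,aw-bits=48
```

Per the patch, enabling x-fls without x-scalable-mode (i.e. in legacy mode) is rejected.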
* Re: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for scalable modern mode
2024-11-05 3:11 ` Duan, Zhenzhong
@ 2024-11-05 5:56 ` Yi Liu
2024-11-05 6:03 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-05 5:56 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 2024/11/5 11:11, Duan, Zhenzhong wrote:
>>>>> + DEFINE_PROP_BOOL("x-fls", IntelIOMMUState, scalable_modern, FALSE),
>>>>> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control,
>>>> false),
>>>>
>>>> a question: is there any requirement on the layout of this array? Should
>>>> new fields added in the end?
>>>
>>> Looked over the history, seems we didn't have an explicit rule in vtd_properties.
>>> I put "x-fls" just under "x-scalable-mode" as stage-1 is a sub-feature of scalable
>> mode.
>>> Let me know if you have preference to add in the end.
>>
>> I don't have a preference for now as long as it does not break any
>> functionality. BTW. Will x-flt or x-flts better?
>
> So first level support(fls) vs. first level translation(flt) or first level translation support(flts),
> looks same for me, but I can change to x-flt or x-flts if you prefer.
x-flts looks better as it matches how the spec names it (FSTS in the ecap
register). :)
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for scalable modern mode
2024-11-05 5:56 ` Yi Liu
@ 2024-11-05 6:03 ` Duan, Zhenzhong
2024-11-05 6:26 ` Yi Liu
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-05 6:03 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Tuesday, November 5, 2024 1:56 PM
>Subject: Re: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for
>scalable modern mode
>
>On 2024/11/5 11:11, Duan, Zhenzhong wrote:
>
>>>>>> + DEFINE_PROP_BOOL("x-fls", IntelIOMMUState, scalable_modern,
>FALSE),
>>>>>> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState,
>snoop_control,
>>>>> false),
>>>>>
>>>>> a question: is there any requirement on the layout of this array? Should
>>>>> new fields added in the end?
>>>>
>>>> Looked over the history, seems we didn't have an explicit rule in
>vtd_properties.
>>>> I put "x-fls" just under "x-scalable-mode" as stage-1 is a sub-feature of
>scalable
>>> mode.
>>>> Let me know if you have preference to add in the end.
>>>
>>> I don't have a preference for now as long as it does not break any
>>> functionality. BTW. Will x-flt or x-flts better?
>>
>> So first level support(fls) vs. first level translation(flt) or first level translation
>support(flts),
>> looks same for me, but I can change to x-flt or x-flts if you prefer.
>
>x-flts looks better as it matches how the spec names it (FSTS in the ecap
>register). :)
Got it, just to double-confirm: you prefer x-flts, not x-fsts?
Thanks
Zhenzhong
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for scalable modern mode
2024-11-05 6:03 ` Duan, Zhenzhong
@ 2024-11-05 6:26 ` Yi Liu
0 siblings, 0 replies; 67+ messages in thread
From: Yi Liu @ 2024-11-05 6:26 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 2024/11/5 14:03, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: Liu, Yi L <yi.l.liu@intel.com>
>> Sent: Tuesday, November 5, 2024 1:56 PM
>> Subject: Re: [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for
>> scalable modern mode
>>
>> On 2024/11/5 11:11, Duan, Zhenzhong wrote:
>>
>>>>>>> + DEFINE_PROP_BOOL("x-fls", IntelIOMMUState, scalable_modern,
>> FALSE),
>>>>>>> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState,
>> snoop_control,
>>>>>> false),
>>>>>>
>>>>>> a question: is there any requirement on the layout of this array? Should
>>>>>> new fields added in the end?
>>>>>
>>>>> Looked over the history, seems we didn't have an explicit rule in
>> vtd_properties.
>>>>> I put "x-fls" just under "x-scalable-mode" as stage-1 is a sub-feature of
>> scalable
>>>> mode.
>>>>> Let me know if you have preference to add in the end.
>>>>
>>>> I don't have a preference for now as long as it does not break any
>>>> functionality. BTW. Will x-flt or x-flts better?
>>>
>>> So first level support(fls) vs. first level translation(flt) or first level translation
>> support(flts),
>>> looks same for me, but I can change to x-flt or x-flts if you prefer.
>>
>> x-flts looks better as it suits more how spec tells it (FSTS in the eap
>> register). :)
>
> Got it, just double confirm you prefer x-flts, not x-fsts?
x-flts as most of the code use flt instead of fst.
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap
2024-11-04 8:15 ` Duan, Zhenzhong
@ 2024-11-05 6:29 ` Yi Liu
2024-11-05 7:25 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Yi Liu @ 2024-11-05 6:29 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On 2024/11/4 16:15, Duan, Zhenzhong wrote:
>
>> vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
>>> g_hash_table_foreach_remove(s->iotlb,
>>> vtd_hash_remove_by_page_piotlb, &info);
>>> vtd_iommu_unlock(s);
>>> +
>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>>> + vtd_as->devfn, &ce) &&
>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>>> + IOMMUTLBEvent event;
>>> +
>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>>> + vtd_as->pasid != pasid) {
>>> + continue;
>>
>> not quite get the logic here. patch 4 has a similar logic.
>
> This code means we need to invalidate the device tlb either when the pasid matches the address space's pasid, or when the pasid matches rid2pasid and this address space has no pasid.
>
> Yes, patch4 only deal with stage-2, while this patch deal with stage-1.
vtd_iotlb_page_invalidate_notify() has a similar check as well. But
it checks PCI_NO_PASID against the pasid instead of vtd_pas->pasid.
So it looks confusing to me.
--
Regards,
Yi Liu
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap
2024-11-05 6:29 ` Yi Liu
@ 2024-11-05 7:25 ` Duan, Zhenzhong
0 siblings, 0 replies; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-05 7:25 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, eric.auger@redhat.com,
mst@redhat.com, peterx@redhat.com, jasowang@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Peng, Chao P,
Yi Sun, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Sent: Tuesday, November 5, 2024 2:30 PM
>Subject: Re: [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify
>unmap
>
>On 2024/11/4 16:15, Duan, Zhenzhong wrote:
>>
>>> vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
>>>> g_hash_table_foreach_remove(s->iotlb,
>>>> vtd_hash_remove_by_page_piotlb, &info);
>>>> vtd_iommu_unlock(s);
>>>> +
>>>> + QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
>>>> + if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
>>>> + vtd_as->devfn, &ce) &&
>>>> + domain_id == vtd_get_domain_id(s, &ce, vtd_as->pasid)) {
>>>> + uint32_t rid2pasid = VTD_CE_GET_RID2PASID(&ce);
>>>> + IOMMUTLBEvent event;
>>>> +
>>>> + if ((vtd_as->pasid != PCI_NO_PASID || pasid != rid2pasid) &&
>>>> + vtd_as->pasid != pasid) {
>>>> + continue;
>>>
>>> not quite get the logic here. patch 4 has a similar logic.
>>
>> This code means we need to invalidate device tlb either when pasid matches
>address space's pasid or when pasid matches rid2pasid if this address space has
>no pasid.
>>
>> Yes, patch4 only deal with stage-2, while this patch deal with stage-1.
>
>vtd_iotlb_page_invalidate_notify() has a similar check as well. But
>it checks PCI_NO_PASID against the pasid instead of vtd_pas->pasid.
>So it looks confusing to me.
Yes, it's about the difference between piotlb invalidation and iotlb invalidation.
vtd_piotlb_page_invalidate() has a pasid parameter which comes from the piotlb
invalidation descriptor and should never be PCI_NO_PASID(-1).
For vtd_iotlb_page_invalidate_notify(), the pasid param is always PCI_NO_PASID
for iotlb invalidation, the reason being that iotlb invalidation doesn't support pasid.
I had thought about extending and reusing vtd_iotlb_page_invalidate_notify() for
piotlb invalidation, but it's cleaner to have separate code for piotlb invalidation.
Thanks
Zhenzhong
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 08/17] intel_iommu: Set accessed and dirty bits during first stage translation
2024-09-30 9:26 ` [PATCH v4 08/17] intel_iommu: Set accessed and dirty bits during first stage translation Zhenzhong Duan
2024-11-04 2:49 ` Yi Liu
@ 2024-11-08 3:15 ` Jason Wang
1 sibling, 0 replies; 67+ messages in thread
From: Jason Wang @ 2024-11-08 3:15 UTC (permalink / raw)
To: Zhenzhong Duan
Cc: qemu-devel, alex.williamson, clg, eric.auger, mst, peterx, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost
On Mon, Sep 30, 2024 at 5:30 PM Zhenzhong Duan <zhenzhong.duan@intel.com> wrote:
>
> From: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
>
> Signed-off-by: Clément Mathieu--Drif <clement.mathieu--drif@eviden.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
Acked-by: Jason Wang <jasowang@redhat.com>
Thanks
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap
2024-09-30 9:26 ` [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap Zhenzhong Duan
2024-11-04 3:05 ` Yi Liu
@ 2024-11-08 4:39 ` Jason Wang
1 sibling, 0 replies; 67+ messages in thread
From: Jason Wang @ 2024-11-08 4:39 UTC (permalink / raw)
To: Zhenzhong Duan
Cc: qemu-devel, alex.williamson, clg, eric.auger, mst, peterx, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Yi Sun, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
On Mon, Sep 30, 2024 at 5:30 PM Zhenzhong Duan <zhenzhong.duan@intel.com> wrote:
>
> This is used by some emulated devices which cache address
> translation results. When a piotlb invalidation is issued in the
> guest, those caches should be refreshed.
>
> For a device that does not implement the ATS capability, or disables
> it, but still caches translation results, it is better to implement
> the ATS cap or enable it if there is a need to cache the
> translation results.
>
> Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> ---
Acked-by: Jason Wang <jasowang@redhat.com>
Thanks
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-09-30 9:26 ` [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode Zhenzhong Duan
2024-11-04 3:16 ` Yi Liu
@ 2024-11-08 4:41 ` Jason Wang
2024-11-08 5:30 ` Duan, Zhenzhong
1 sibling, 1 reply; 67+ messages in thread
From: Jason Wang @ 2024-11-08 4:41 UTC (permalink / raw)
To: Zhenzhong Duan
Cc: qemu-devel, alex.williamson, clg, eric.auger, mst, peterx, jgg,
nicolinc, joao.m.martins, clement.mathieu--drif, kevin.tian,
yi.l.liu, chao.p.peng, Paolo Bonzini, Richard Henderson,
Eduardo Habkost, Marcel Apfelbaum
On Mon, Sep 30, 2024 at 5:30 PM Zhenzhong Duan <zhenzhong.duan@intel.com> wrote:
>
> According to the VT-d spec, the stage-1 page table can support
> 4-level and 5-level paging.
>
> However, 5-level paging translation emulation is not supported yet.
> That means the only supported value for aw_bits is 48.
>
> So default aw_bits to 48 in scalable modern mode. In other cases,
> it still defaults to 39 for backward compatibility.
>
> Add a check to ensure the user-specified value is 48 in modern mode
> for now.
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> ---
> include/hw/i386/intel_iommu.h | 2 +-
> hw/i386/intel_iommu.c | 10 +++++++++-
> 2 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> index b843d069cc..48134bda11 100644
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState, INTEL_IOMMU_DEVICE)
> #define DMAR_REG_SIZE 0x230
> #define VTD_HOST_AW_39BIT 39
> #define VTD_HOST_AW_48BIT 48
> -#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
> +#define VTD_HOST_AW_AUTO 0xff
> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
>
> #define DMAR_REPORT_F_INTR (1)
> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> index 91d7b1abfa..068a08f522 100644
> --- a/hw/i386/intel_iommu.c
> +++ b/hw/i386/intel_iommu.c
> @@ -3776,7 +3776,7 @@ static Property vtd_properties[] = {
> ON_OFF_AUTO_AUTO),
> DEFINE_PROP_BOOL("x-buggy-eim", IntelIOMMUState, buggy_eim, false),
> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
> - VTD_HOST_ADDRESS_WIDTH),
> + VTD_HOST_AW_AUTO),
> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode, FALSE),
> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode, FALSE),
> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control, false),
> @@ -4683,6 +4683,14 @@ static bool vtd_decide_config(IntelIOMMUState *s, Error **errp)
> }
> }
>
> + if (s->aw_bits == VTD_HOST_AW_AUTO) {
> + if (s->scalable_modern) {
> + s->aw_bits = VTD_HOST_AW_48BIT;
> + } else {
> + s->aw_bits = VTD_HOST_AW_39BIT;
> + }
I don't see how we maintain migration compatibility here.
Thanks
> + }
> +
> if (!s->scalable_modern && s->aw_bits != VTD_HOST_AW_39BIT &&
> s->aw_bits != VTD_HOST_AW_48BIT) {
> error_setg(errp, "%s mode: supported values for aw-bits are: %d, %d",
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 16/17] intel_iommu: Introduce a property to control FS1GP cap bit setting
2024-11-04 7:00 ` Yi Liu
@ 2024-11-08 4:45 ` Jason Wang
0 siblings, 0 replies; 67+ messages in thread
From: Jason Wang @ 2024-11-08 4:45 UTC (permalink / raw)
To: Yi Liu
Cc: Zhenzhong Duan, qemu-devel, alex.williamson, clg, eric.auger, mst,
peterx, jgg, nicolinc, joao.m.martins, clement.mathieu--drif,
kevin.tian, chao.p.peng, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost
On Mon, Nov 4, 2024 at 2:56 PM Yi Liu <yi.l.liu@intel.com> wrote:
>
> On 2024/9/30 17:26, Zhenzhong Duan wrote:
> > This gives user flexibility to turn off FS1GP for debug purpose.
> >
> > It is also useful for future nesting feature. When host IOMMU doesn't
> > support FS1GP but vIOMMU does, nested page table on host side works
> > after turn FS1GP off in vIOMMU.
>
> s/turn/turning
>
> Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Thanks
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-11-08 4:41 ` Jason Wang
@ 2024-11-08 5:30 ` Duan, Zhenzhong
2024-11-11 1:24 ` Jason Wang
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-08 5:30 UTC (permalink / raw)
To: Jason Wang
Cc: qemu-devel@nongnu.org, alex.williamson@redhat.com, clg@redhat.com,
eric.auger@redhat.com, mst@redhat.com, peterx@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Liu, Yi L,
Peng, Chao P, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Jason Wang <jasowang@redhat.com>
>Sent: Friday, November 8, 2024 12:42 PM
>Subject: Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable
>modern mode
>
>On Mon, Sep 30, 2024 at 5:30 PM Zhenzhong Duan <zhenzhong.duan@intel.com>
>wrote:
>>
>> According to VTD spec, stage-1 page table could support 4-level and
>> 5-level paging.
>>
>> However, 5-level paging translation emulation is unsupported yet.
>> That means the only supported value for aw_bits is 48.
>>
>> So default aw_bits to 48 in scalable modern mode. In other cases,
>> it is still default to 39 for backward compatibility.
>>
>> Add a check to ensure user specified value is 48 in modern mode
>> for now.
>>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>> ---
>> include/hw/i386/intel_iommu.h | 2 +-
>> hw/i386/intel_iommu.c | 10 +++++++++-
>> 2 files changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
>> index b843d069cc..48134bda11 100644
>> --- a/include/hw/i386/intel_iommu.h
>> +++ b/include/hw/i386/intel_iommu.h
>> @@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState,
>INTEL_IOMMU_DEVICE)
>> #define DMAR_REG_SIZE 0x230
>> #define VTD_HOST_AW_39BIT 39
>> #define VTD_HOST_AW_48BIT 48
>> -#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
>> +#define VTD_HOST_AW_AUTO 0xff
>> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
>>
>> #define DMAR_REPORT_F_INTR (1)
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 91d7b1abfa..068a08f522 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -3776,7 +3776,7 @@ static Property vtd_properties[] = {
>> ON_OFF_AUTO_AUTO),
>> DEFINE_PROP_BOOL("x-buggy-eim", IntelIOMMUState, buggy_eim, false),
>> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
>> - VTD_HOST_ADDRESS_WIDTH),
>> + VTD_HOST_AW_AUTO),
>> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode,
>FALSE),
>> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode,
>FALSE),
>> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control,
>false),
>> @@ -4683,6 +4683,14 @@ static bool vtd_decide_config(IntelIOMMUState *s,
>Error **errp)
>> }
>> }
>>
>> + if (s->aw_bits == VTD_HOST_AW_AUTO) {
>> + if (s->scalable_modern) {
>> + s->aw_bits = VTD_HOST_AW_48BIT;
>> + } else {
>> + s->aw_bits = VTD_HOST_AW_39BIT;
>> + }
>
>I don't see how we maintain migration compatibility here.
Imagine this cmdline: "-device intel-iommu,x-scalable-mode=on", which selects
scalable legacy mode (a.k.a. stage-2 page table mode).
Without this patch, the initial s->aw_bits value is VTD_HOST_ADDRESS_WIDTH (39).
After this patch, the initial s->aw_bits value is VTD_HOST_AW_AUTO (0xff), and
vtd_decide_config(), called by vtd_realize(), sets s->aw_bits to VTD_HOST_AW_39BIT (39).
So as long as the QEMU cmdline is the same, s->aw_bits is the same with or without this patch.
Zhenzhong
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-11-08 5:30 ` Duan, Zhenzhong
@ 2024-11-11 1:24 ` Jason Wang
2024-11-11 2:58 ` Duan, Zhenzhong
0 siblings, 1 reply; 67+ messages in thread
From: Jason Wang @ 2024-11-11 1:24 UTC (permalink / raw)
To: Duan, Zhenzhong
Cc: qemu-devel@nongnu.org, alex.williamson@redhat.com, clg@redhat.com,
eric.auger@redhat.com, mst@redhat.com, peterx@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Liu, Yi L,
Peng, Chao P, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On Fri, Nov 8, 2024 at 1:30 PM Duan, Zhenzhong <zhenzhong.duan@intel.com> wrote:
>
>
>
> >-----Original Message-----
> >From: Jason Wang <jasowang@redhat.com>
> >Sent: Friday, November 8, 2024 12:42 PM
> >Subject: Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable
> >modern mode
> >
> >On Mon, Sep 30, 2024 at 5:30 PM Zhenzhong Duan <zhenzhong.duan@intel.com>
> >wrote:
> >>
> >> According to VTD spec, stage-1 page table could support 4-level and
> >> 5-level paging.
> >>
> >> However, 5-level paging translation emulation is unsupported yet.
> >> That means the only supported value for aw_bits is 48.
> >>
> >> So default aw_bits to 48 in scalable modern mode. In other cases,
> >> it is still default to 39 for backward compatibility.
> >>
> >> Add a check to ensure user specified value is 48 in modern mode
> >> for now.
> >>
> >> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> >> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> >> ---
> >> include/hw/i386/intel_iommu.h | 2 +-
> >> hw/i386/intel_iommu.c | 10 +++++++++-
> >> 2 files changed, 10 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> >> index b843d069cc..48134bda11 100644
> >> --- a/include/hw/i386/intel_iommu.h
> >> +++ b/include/hw/i386/intel_iommu.h
> >> @@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState,
> >INTEL_IOMMU_DEVICE)
> >> #define DMAR_REG_SIZE 0x230
> >> #define VTD_HOST_AW_39BIT 39
> >> #define VTD_HOST_AW_48BIT 48
> >> -#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
> >> +#define VTD_HOST_AW_AUTO 0xff
> >> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
> >>
> >> #define DMAR_REPORT_F_INTR (1)
> >> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> >> index 91d7b1abfa..068a08f522 100644
> >> --- a/hw/i386/intel_iommu.c
> >> +++ b/hw/i386/intel_iommu.c
> >> @@ -3776,7 +3776,7 @@ static Property vtd_properties[] = {
> >> ON_OFF_AUTO_AUTO),
> >> DEFINE_PROP_BOOL("x-buggy-eim", IntelIOMMUState, buggy_eim, false),
> >> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
> >> - VTD_HOST_ADDRESS_WIDTH),
> >> + VTD_HOST_AW_AUTO),
> >> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode,
> >FALSE),
> >> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState, scalable_mode,
> >FALSE),
> >> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control,
> >false),
> >> @@ -4683,6 +4683,14 @@ static bool vtd_decide_config(IntelIOMMUState *s,
> >Error **errp)
> >> }
> >> }
> >>
> >> + if (s->aw_bits == VTD_HOST_AW_AUTO) {
> >> + if (s->scalable_modern) {
> >> + s->aw_bits = VTD_HOST_AW_48BIT;
> >> + } else {
> >> + s->aw_bits = VTD_HOST_AW_39BIT;
> >> + }
> >
> >I don't see how we maintain migration compatibility here.
>
> Imagine this cmdline: "-device intel-iommu,x-scalable-mode=on" which hints
> scalable legacy mode(a.k.a, stage-2 page table mode),
>
> without this patch, initial s->aw_bits value is VTD_HOST_ADDRESS_WIDTH(39).
>
> after this patch, initial s->aw_bit value is VTD_HOST_AW_AUTO(0xff),
> vtd_decide_config() is called by vtd_realize() to set s->aw_bit to VTD_HOST_AW_39BIT(39).
>
> So as long as the QEMU cmdline is same, s->aw_bit is same with or without this patch.
Ok, I guess the point is that the scalable modern mode is introduced in
this series, so we won't bother.
But I see this:
+ if (s->scalable_modern && s->aw_bits != VTD_HOST_AW_48BIT) {
in previous patches. So instead of mandating that management set AUTO,
which seems like a burden, how about just increasing the default AW to
48 bits and doing the compatibility work here?
Thanks
>
> Thanks
> Zhenzhong
^ permalink raw reply [flat|nested] 67+ messages in thread
* RE: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-11-11 1:24 ` Jason Wang
@ 2024-11-11 2:58 ` Duan, Zhenzhong
2024-11-11 3:03 ` Jason Wang
0 siblings, 1 reply; 67+ messages in thread
From: Duan, Zhenzhong @ 2024-11-11 2:58 UTC (permalink / raw)
To: Jason Wang
Cc: qemu-devel@nongnu.org, alex.williamson@redhat.com, clg@redhat.com,
eric.auger@redhat.com, mst@redhat.com, peterx@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Liu, Yi L,
Peng, Chao P, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
>-----Original Message-----
>From: Jason Wang <jasowang@redhat.com>
>Sent: Monday, November 11, 2024 9:24 AM
>Subject: Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable
>modern mode
>
>On Fri, Nov 8, 2024 at 1:30 PM Duan, Zhenzhong <zhenzhong.duan@intel.com>
>wrote:
>>
>>
>>
>> >-----Original Message-----
>> >From: Jason Wang <jasowang@redhat.com>
>> >Sent: Friday, November 8, 2024 12:42 PM
>> >Subject: Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in
>scalable
>> >modern mode
>> >
>> >On Mon, Sep 30, 2024 at 5:30 PM Zhenzhong Duan
><zhenzhong.duan@intel.com>
>> >wrote:
>> >>
>> >> According to VTD spec, stage-1 page table could support 4-level and
>> >> 5-level paging.
>> >>
>> >> However, 5-level paging translation emulation is unsupported yet.
>> >> That means the only supported value for aw_bits is 48.
>> >>
>> >> So default aw_bits to 48 in scalable modern mode. In other cases,
>> >> it is still default to 39 for backward compatibility.
>> >>
>> >> Add a check to ensure user specified value is 48 in modern mode
>> >> for now.
>> >>
>> >> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> >> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
>> >> ---
>> >> include/hw/i386/intel_iommu.h | 2 +-
>> >> hw/i386/intel_iommu.c | 10 +++++++++-
>> >> 2 files changed, 10 insertions(+), 2 deletions(-)
>> >>
>> >> diff --git a/include/hw/i386/intel_iommu.h
>b/include/hw/i386/intel_iommu.h
>> >> index b843d069cc..48134bda11 100644
>> >> --- a/include/hw/i386/intel_iommu.h
>> >> +++ b/include/hw/i386/intel_iommu.h
>> >> @@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState,
>> >INTEL_IOMMU_DEVICE)
>> >> #define DMAR_REG_SIZE 0x230
>> >> #define VTD_HOST_AW_39BIT 39
>> >> #define VTD_HOST_AW_48BIT 48
>> >> -#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
>> >> +#define VTD_HOST_AW_AUTO 0xff
>> >> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
>> >>
>> >> #define DMAR_REPORT_F_INTR (1)
>> >> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> >> index 91d7b1abfa..068a08f522 100644
>> >> --- a/hw/i386/intel_iommu.c
>> >> +++ b/hw/i386/intel_iommu.c
>> >> @@ -3776,7 +3776,7 @@ static Property vtd_properties[] = {
>> >> ON_OFF_AUTO_AUTO),
>> >> DEFINE_PROP_BOOL("x-buggy-eim", IntelIOMMUState, buggy_eim, false),
>> >> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
>> >> - VTD_HOST_ADDRESS_WIDTH),
>> >> + VTD_HOST_AW_AUTO),
>> >> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode,
>> >FALSE),
>> >> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState,
>scalable_mode,
>> >FALSE),
>> >> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control,
>> >false),
>> >> @@ -4683,6 +4683,14 @@ static bool vtd_decide_config(IntelIOMMUState
>*s,
>> >Error **errp)
>> >> }
>> >> }
>> >>
>> >> + if (s->aw_bits == VTD_HOST_AW_AUTO) {
>> >> + if (s->scalable_modern) {
>> >> + s->aw_bits = VTD_HOST_AW_48BIT;
>> >> + } else {
>> >> + s->aw_bits = VTD_HOST_AW_39BIT;
>> >> + }
>> >
>> >I don't see how we maintain migration compatibility here.
>>
>> Imagine this cmdline: "-device intel-iommu,x-scalable-mode=on" which hints
>> scalable legacy mode(a.k.a, stage-2 page table mode),
>>
>> without this patch, initial s->aw_bits value is VTD_HOST_ADDRESS_WIDTH(39).
>>
>> after this patch, initial s->aw_bit value is VTD_HOST_AW_AUTO(0xff),
>> vtd_decide_config() is called by vtd_realize() to set s->aw_bit to
>VTD_HOST_AW_39BIT(39).
>>
>> So as long as the QEMU cmdline is same, s->aw_bit is same with or without this
>patch.
>
>Ok, I guess the point is that the scalabe-modern mode is introduced in
>this series so we won't bother.
>
>But I see this:
>
>+ if (s->scalable_modern && s->aw_bits != VTD_HOST_AW_48BIT) {
>
>In previous patches. So I wonder instead of mandating management to
>set AUTO which seems like a burden. How about just increase the
>default AW to 48bit and do the compatibility work here?
Good idea! Then we don't need VTD_HOST_AW_AUTO(0xff).
The default is 48 starting from QEMU 9.2, for both modern and legacy mode,
and still 39 for QEMU before 9.2. It will look like below; let me know if
I've misunderstood anything.
--- a/include/hw/i386/intel_iommu.h
+++ b/include/hw/i386/intel_iommu.h
@@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState, INTEL_IOMMU_DEVICE)
#define DMAR_REG_SIZE 0x230
#define VTD_HOST_AW_39BIT 39
#define VTD_HOST_AW_48BIT 48
-#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
+#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_48BIT
#define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
#define DMAR_REPORT_F_INTR (1)
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 830614d930..bdb67f1fd4 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -83,6 +83,7 @@ GlobalProperty pc_compat_9_1[] = {
{ "ICH9-LPC", "x-smi-swsmi-timer", "off" },
{ "ICH9-LPC", "x-smi-periodic-timer", "off" },
{ TYPE_INTEL_IOMMU_DEVICE, "stale-tm", "on" },
+ { TYPE_INTEL_IOMMU_DEVICE, "aw-bits", "39" },
};
const size_t pc_compat_9_1_len = G_N_ELEMENTS(pc_compat_9_1);
Thanks
Zhenzhong
^ permalink raw reply related [flat|nested] 67+ messages in thread
* Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode
2024-11-11 2:58 ` Duan, Zhenzhong
@ 2024-11-11 3:03 ` Jason Wang
0 siblings, 0 replies; 67+ messages in thread
From: Jason Wang @ 2024-11-11 3:03 UTC (permalink / raw)
To: Duan, Zhenzhong
Cc: qemu-devel@nongnu.org, alex.williamson@redhat.com, clg@redhat.com,
eric.auger@redhat.com, mst@redhat.com, peterx@redhat.com,
jgg@nvidia.com, nicolinc@nvidia.com, joao.m.martins@oracle.com,
clement.mathieu--drif@eviden.com, Tian, Kevin, Liu, Yi L,
Peng, Chao P, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Marcel Apfelbaum
On Mon, Nov 11, 2024 at 10:58 AM Duan, Zhenzhong
<zhenzhong.duan@intel.com> wrote:
>
>
>
> >-----Original Message-----
> >From: Jason Wang <jasowang@redhat.com>
> >Sent: Monday, November 11, 2024 9:24 AM
> >Subject: Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable
> >modern mode
> >
> >On Fri, Nov 8, 2024 at 1:30 PM Duan, Zhenzhong <zhenzhong.duan@intel.com>
> >wrote:
> >>
> >>
> >>
> >> >-----Original Message-----
> >> >From: Jason Wang <jasowang@redhat.com>
> >> >Sent: Friday, November 8, 2024 12:42 PM
> >> >Subject: Re: [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in
> >scalable
> >> >modern mode
> >> >
> >> >On Mon, Sep 30, 2024 at 5:30 PM Zhenzhong Duan
> ><zhenzhong.duan@intel.com>
> >> >wrote:
> >> >>
> >> >> According to VTD spec, stage-1 page table could support 4-level and
> >> >> 5-level paging.
> >> >>
> >> >> However, 5-level paging translation emulation is unsupported yet.
> >> >> That means the only supported value for aw_bits is 48.
> >> >>
> >> >> So default aw_bits to 48 in scalable modern mode. In other cases,
> >> >> it is still default to 39 for backward compatibility.
> >> >>
> >> >> Add a check to ensure user specified value is 48 in modern mode
> >> >> for now.
> >> >>
> >> >> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> >> >> Reviewed-by: Clément Mathieu--Drif<clement.mathieu--drif@eviden.com>
> >> >> ---
> >> >> include/hw/i386/intel_iommu.h | 2 +-
> >> >> hw/i386/intel_iommu.c | 10 +++++++++-
> >> >> 2 files changed, 10 insertions(+), 2 deletions(-)
> >> >>
> >> >> diff --git a/include/hw/i386/intel_iommu.h
> >b/include/hw/i386/intel_iommu.h
> >> >> index b843d069cc..48134bda11 100644
> >> >> --- a/include/hw/i386/intel_iommu.h
> >> >> +++ b/include/hw/i386/intel_iommu.h
> >> >> @@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState,
> >> >INTEL_IOMMU_DEVICE)
> >> >> #define DMAR_REG_SIZE 0x230
> >> >> #define VTD_HOST_AW_39BIT 39
> >> >> #define VTD_HOST_AW_48BIT 48
> >> >> -#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
> >> >> +#define VTD_HOST_AW_AUTO 0xff
> >> >> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
> >> >>
> >> >> #define DMAR_REPORT_F_INTR (1)
> >> >> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
> >> >> index 91d7b1abfa..068a08f522 100644
> >> >> --- a/hw/i386/intel_iommu.c
> >> >> +++ b/hw/i386/intel_iommu.c
> >> >> @@ -3776,7 +3776,7 @@ static Property vtd_properties[] = {
> >> >> ON_OFF_AUTO_AUTO),
> >> >> DEFINE_PROP_BOOL("x-buggy-eim", IntelIOMMUState, buggy_eim, false),
> >> >> DEFINE_PROP_UINT8("aw-bits", IntelIOMMUState, aw_bits,
> >> >> - VTD_HOST_ADDRESS_WIDTH),
> >> >> + VTD_HOST_AW_AUTO),
> >> >> DEFINE_PROP_BOOL("caching-mode", IntelIOMMUState, caching_mode,
> >> >FALSE),
> >> >> DEFINE_PROP_BOOL("x-scalable-mode", IntelIOMMUState,
> >scalable_mode,
> >> >FALSE),
> >> >> DEFINE_PROP_BOOL("snoop-control", IntelIOMMUState, snoop_control,
> >> >false),
> >> >> @@ -4683,6 +4683,14 @@ static bool vtd_decide_config(IntelIOMMUState
> >*s,
> >> >Error **errp)
> >> >> }
> >> >> }
> >> >>
> >> >> + if (s->aw_bits == VTD_HOST_AW_AUTO) {
> >> >> + if (s->scalable_modern) {
> >> >> + s->aw_bits = VTD_HOST_AW_48BIT;
> >> >> + } else {
> >> >> + s->aw_bits = VTD_HOST_AW_39BIT;
> >> >> + }
> >> >
> >> >I don't see how we maintain migration compatibility here.
> >>
> >> Imagine this cmdline: "-device intel-iommu,x-scalable-mode=on" which hints
> >> scalable legacy mode(a.k.a, stage-2 page table mode),
> >>
> >> without this patch, initial s->aw_bits value is VTD_HOST_ADDRESS_WIDTH(39).
> >>
> >> after this patch, initial s->aw_bit value is VTD_HOST_AW_AUTO(0xff),
> >> vtd_decide_config() is called by vtd_realize() to set s->aw_bit to
> >VTD_HOST_AW_39BIT(39).
> >>
> >> So as long as the QEMU cmdline is same, s->aw_bit is same with or without this
> >patch.
> >
> >Ok, I guess the point is that the scalabe-modern mode is introduced in
> >this series so we won't bother.
> >
> >But I see this:
> >
> >+ if (s->scalable_modern && s->aw_bits != VTD_HOST_AW_48BIT) {
> >
> >In previous patches. So I wonder instead of mandating management to
> >set AUTO which seems like a burden. How about just increase the
> >default AW to 48bit and do the compatibility work here?
>
> Good idea! Then we don't need VTD_HOST_AW_AUTO(0xff).
> Default is 48 starting from qemu 9.2 both for modern and legacy mode,
> Default is still 39 for qemu before 9.2. Will be like below, let me know if
> any misunderstandings.
>
> --- a/include/hw/i386/intel_iommu.h
> +++ b/include/hw/i386/intel_iommu.h
> @@ -45,7 +45,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(IntelIOMMUState, INTEL_IOMMU_DEVICE)
> #define DMAR_REG_SIZE 0x230
> #define VTD_HOST_AW_39BIT 39
> #define VTD_HOST_AW_48BIT 48
> -#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_39BIT
> +#define VTD_HOST_ADDRESS_WIDTH VTD_HOST_AW_48BIT
> #define VTD_HAW_MASK(aw) ((1ULL << (aw)) - 1)
>
> #define DMAR_REPORT_F_INTR (1)
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 830614d930..bdb67f1fd4 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -83,6 +83,7 @@ GlobalProperty pc_compat_9_1[] = {
> { "ICH9-LPC", "x-smi-swsmi-timer", "off" },
> { "ICH9-LPC", "x-smi-periodic-timer", "off" },
> { TYPE_INTEL_IOMMU_DEVICE, "stale-tm", "on" },
> + { TYPE_INTEL_IOMMU_DEVICE, "aw-bits", "39" },
> };
> const size_t pc_compat_9_1_len = G_N_ELEMENTS(pc_compat_9_1);
Ack.
Thanks
>
>
>
> Thanks
> Zhenzhong
^ permalink raw reply [flat|nested] 67+ messages in thread
end of thread, other threads:[~2024-11-11 3:04 UTC | newest]
Thread overview: 67+ messages
-- links below jump to the message on this page --
2024-09-30 9:26 [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 01/17] intel_iommu: Use the latest fault reasons defined by spec Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 02/17] intel_iommu: Make pasid entry type check accurate Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 03/17] intel_iommu: Add a placeholder variable for scalable modern mode Zhenzhong Duan
2024-10-04 5:22 ` CLEMENT MATHIEU--DRIF
2024-11-03 14:21 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 04/17] intel_iommu: Flush stage-2 cache in PASID-selective PASID-based iotlb invalidation Zhenzhong Duan
2024-11-04 2:49 ` Yi Liu
2024-11-04 7:37 ` CLEMENT MATHIEU--DRIF
2024-11-04 8:45 ` Yi Liu
2024-11-04 11:46 ` Duan, Zhenzhong
2024-11-04 11:50 ` Michael S. Tsirkin
2024-11-04 11:55 ` Duan, Zhenzhong
2024-11-04 12:01 ` Michael S. Tsirkin
2024-11-04 12:03 ` Duan, Zhenzhong
2024-09-30 9:26 ` [PATCH v4 05/17] intel_iommu: Rename slpte to pte Zhenzhong Duan
2024-09-30 9:26 ` [PATCH v4 06/17] intel_iommu: Implement stage-1 translation Zhenzhong Duan
2024-11-03 14:21 ` Yi Liu
2024-11-04 3:05 ` Duan, Zhenzhong
2024-11-04 7:02 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 07/17] intel_iommu: Check if the input address is canonical Zhenzhong Duan
2024-11-03 14:22 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 08/17] intel_iommu: Set accessed and dirty bits during first stage translation Zhenzhong Duan
2024-11-04 2:49 ` Yi Liu
2024-11-08 3:15 ` Jason Wang
2024-09-30 9:26 ` [PATCH v4 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation Zhenzhong Duan
2024-11-04 2:50 ` Yi Liu
2024-11-04 3:38 ` Duan, Zhenzhong
2024-11-04 7:36 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 10/17] intel_iommu: Process PASID-based " Zhenzhong Duan
2024-11-04 2:50 ` Yi Liu
2024-11-04 5:40 ` Duan, Zhenzhong
2024-11-04 7:05 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 11/17] intel_iommu: Add an internal API to find an address space with PASID Zhenzhong Duan
2024-11-04 2:50 ` Yi Liu
2024-11-04 5:47 ` Duan, Zhenzhong
2024-09-30 9:26 ` [PATCH v4 12/17] intel_iommu: Add support for PASID-based device IOTLB invalidation Zhenzhong Duan
2024-11-04 2:51 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 13/17] intel_iommu: piotlb invalidation should notify unmap Zhenzhong Duan
2024-11-04 3:05 ` Yi Liu
2024-11-04 8:15 ` Duan, Zhenzhong
2024-11-05 6:29 ` Yi Liu
2024-11-05 7:25 ` Duan, Zhenzhong
2024-11-08 4:39 ` Jason Wang
2024-09-30 9:26 ` [PATCH v4 14/17] intel_iommu: Set default aw_bits to 48 in scalable modern mode Zhenzhong Duan
2024-11-04 3:16 ` Yi Liu
2024-11-04 3:19 ` Duan, Zhenzhong
2024-11-04 7:25 ` Yi Liu
2024-11-08 4:41 ` Jason Wang
2024-11-08 5:30 ` Duan, Zhenzhong
2024-11-11 1:24 ` Jason Wang
2024-11-11 2:58 ` Duan, Zhenzhong
2024-11-11 3:03 ` Jason Wang
2024-09-30 9:26 ` [PATCH v4 15/17] intel_iommu: Introduce a property x-fls for " Zhenzhong Duan
2024-11-04 4:25 ` Yi Liu
2024-11-04 6:25 ` Duan, Zhenzhong
2024-11-04 7:23 ` Yi Liu
2024-11-05 3:11 ` Duan, Zhenzhong
2024-11-05 5:56 ` Yi Liu
2024-11-05 6:03 ` Duan, Zhenzhong
2024-11-05 6:26 ` Yi Liu
2024-09-30 9:26 ` [PATCH v4 16/17] intel_iommu: Introduce a property to control FS1GP cap bit setting Zhenzhong Duan
2024-11-04 7:00 ` Yi Liu
2024-11-08 4:45 ` Jason Wang
2024-09-30 9:26 ` [PATCH v4 17/17] tests/qtest: Add intel-iommu test Zhenzhong Duan
2024-09-30 9:52 ` Duan, Zhenzhong
2024-10-25 6:32 ` [PATCH v4 00/17] intel_iommu: Enable stage-1 translation for emulated device Duan, Zhenzhong