iommu.lists.linux-foundation.org archive mirror
* [PATCH rc v2 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases
@ 2025-12-07 20:49 Nicolin Chen
  2025-12-07 20:49 ` [PATCH rc v2 1/4] iommu/arm-smmu-v3: Add ignored bits to fix STE update sequence Nicolin Chen
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Nicolin Chen @ 2025-12-07 20:49 UTC (permalink / raw)
  To: jgg, will, robin.murphy
  Cc: joro, linux-arm-kernel, iommu, linux-kernel, skolothumtho, praan,
	xueshuai

Occasional C_BAD_STE errors were observed in nesting setups where a device
attached to a nested bypass/identity domain enabled PASID.

This occurred when the physical STE was updated from S2-only mode to S1+S2
nesting mode, but the update failed to use the hitless routine it was
supposed to. Instead, it cleared the STE.V bit to load the CD table while
the default substream was still actively performing DMA.

It was later found that the diff algorithm in arm_smmu_entry_qword_diff()
treated an extra qword as critical, due to mismatched MEV and EATS fields
between the S2-only and S1+S2 modes.

Both fields are either well-managed elsewhere or non-critical, so mask them
via an "ignored" list to relax the qword diff algorithm.

Additionally, add KUnit test coverage for these nesting STE cases.

This is on Github:
https://github.com/nicolinc/iommufd/commits/smmuv3_ste_fixes/

A host kernel must apply this series to fix the bug.

Changelog
v2:
 * Fix kunit tests
 * Update commit message
 * Keep MEV/EATS in used list by masking them away using ignored_bits
v1:
 https://lore.kernel.org/all/cover.1764982046.git.nicolinc@nvidia.com/

Jason Gunthorpe (3):
  iommu/arm-smmu-v3: Add ignored bits to fix STE update sequence
  iommu/arm-smmu-v3: Ignore STE MEV when computing the update sequence
  iommu/arm-smmu-v3: Ignore STE EATS when computing the update sequence

Nicolin Chen (1):
  iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage

 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  2 +
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c  | 80 ++++++++++++++++++-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 43 ++++++++--
 3 files changed, 117 insertions(+), 8 deletions(-)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH rc v2 1/4] iommu/arm-smmu-v3: Add ignored bits to fix STE update sequence
  2025-12-07 20:49 [PATCH rc v2 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases Nicolin Chen
@ 2025-12-07 20:49 ` Nicolin Chen
  2025-12-08  2:33   ` Shuai Xue
  2025-12-07 20:49 ` [PATCH rc v2 2/4] iommu/arm-smmu-v3: Ignore STE MEV when computing the " Nicolin Chen
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Nicolin Chen @ 2025-12-07 20:49 UTC (permalink / raw)
  To: jgg, will, robin.murphy
  Cc: joro, linux-arm-kernel, iommu, linux-kernel, skolothumtho, praan,
	xueshuai

From: Jason Gunthorpe <jgg@nvidia.com>

C_BAD_STE was observed when updating a nested STE from S1-bypass mode to
S1DSS-bypass mode. Since both modes enable S2, their used bits differ
slightly from those of the normal S1-bypass and S1DSS-bypass modes. As a
result, fields like MEV and EATS in the S2 used list marked word 1 as a
critical word that required STE.V=0, breaking the hitless update.

However, neither MEV nor EATS is critical to the STE update: one controls
event merging, and the other controls ATS, which the driver manages at the
same time via pci_enable_ats().

Add arm_smmu_get_ste_ignored() to let the STE update algorithm ignore
those fields, avoiding the broken update sequence.

Note that this change is required for both the MEV and EATS fields, which
were introduced in different kernel versions, so add the empty
get_ignored() hook first. MEV and EATS will be added to
arm_smmu_get_ste_ignored() in separate patches.

Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
Cc: stable@vger.kernel.org
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  2 ++
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c  | 19 ++++++++++++---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 24 +++++++++++++++----
 3 files changed, 37 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index ae23aacc3840..d5f0e5407b9f 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -900,6 +900,7 @@ struct arm_smmu_entry_writer {
 
 struct arm_smmu_entry_writer_ops {
 	void (*get_used)(const __le64 *entry, __le64 *used);
+	void (*get_ignored)(__le64 *ignored_bits);
 	void (*sync)(struct arm_smmu_entry_writer *writer);
 };
 
@@ -911,6 +912,7 @@ void arm_smmu_make_s2_domain_ste(struct arm_smmu_ste *target,
 
 #if IS_ENABLED(CONFIG_KUNIT)
 void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits);
+void arm_smmu_get_ste_ignored(__le64 *ignored_bits);
 void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, __le64 *cur,
 			  const __le64 *target);
 void arm_smmu_get_cd_used(const __le64 *ent, __le64 *used_bits);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
index d2671bfd3798..3556e65cf9ac 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
@@ -38,13 +38,16 @@ enum arm_smmu_test_master_feat {
 static bool arm_smmu_entry_differs_in_used_bits(const __le64 *entry,
 						const __le64 *used_bits,
 						const __le64 *target,
+						const __le64 *ignored,
 						unsigned int length)
 {
 	bool differs = false;
 	unsigned int i;
 
 	for (i = 0; i < length; i++) {
-		if ((entry[i] & used_bits[i]) != target[i])
+		__le64 used = used_bits[i] & ~ignored[i];
+
+		if ((entry[i] & used) != (target[i] & used))
 			differs = true;
 	}
 	return differs;
@@ -56,12 +59,18 @@ arm_smmu_test_writer_record_syncs(struct arm_smmu_entry_writer *writer)
 	struct arm_smmu_test_writer *test_writer =
 		container_of(writer, struct arm_smmu_test_writer, writer);
 	__le64 *entry_used_bits;
+	__le64 *ignored;
 
 	entry_used_bits = kunit_kzalloc(
 		test_writer->test, sizeof(*entry_used_bits) * NUM_ENTRY_QWORDS,
 		GFP_KERNEL);
 	KUNIT_ASSERT_NOT_NULL(test_writer->test, entry_used_bits);
 
+	ignored = kunit_kzalloc(test_writer->test,
+				sizeof(*ignored) * NUM_ENTRY_QWORDS,
+				GFP_KERNEL);
+	KUNIT_ASSERT_NOT_NULL(test_writer->test, ignored);
+
 	pr_debug("STE value is now set to: ");
 	print_hex_dump_debug("    ", DUMP_PREFIX_NONE, 16, 8,
 			     test_writer->entry,
@@ -79,14 +88,17 @@ arm_smmu_test_writer_record_syncs(struct arm_smmu_entry_writer *writer)
 		 * configuration.
 		 */
 		writer->ops->get_used(test_writer->entry, entry_used_bits);
+		if (writer->ops->get_ignored)
+			writer->ops->get_ignored(ignored);
 		KUNIT_EXPECT_FALSE(
 			test_writer->test,
 			arm_smmu_entry_differs_in_used_bits(
 				test_writer->entry, entry_used_bits,
-				test_writer->init_entry, NUM_ENTRY_QWORDS) &&
+				test_writer->init_entry, ignored,
+				NUM_ENTRY_QWORDS) &&
 				arm_smmu_entry_differs_in_used_bits(
 					test_writer->entry, entry_used_bits,
-					test_writer->target_entry,
+					test_writer->target_entry, ignored,
 					NUM_ENTRY_QWORDS));
 	}
 }
@@ -106,6 +118,7 @@ arm_smmu_v3_test_debug_print_used_bits(struct arm_smmu_entry_writer *writer,
 static const struct arm_smmu_entry_writer_ops test_ste_ops = {
 	.sync = arm_smmu_test_writer_record_syncs,
 	.get_used = arm_smmu_get_ste_used,
+	.get_ignored = arm_smmu_get_ste_ignored,
 };
 
 static const struct arm_smmu_entry_writer_ops test_cd_ops = {
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index d16d35c78c06..e22c0890041b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1082,6 +1082,12 @@ void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
 }
 EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_used);
 
+VISIBLE_IF_KUNIT
+void arm_smmu_get_ste_ignored(__le64 *ignored_bits)
+{
+}
+EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_ignored);
+
 /*
  * Figure out if we can do a hitless update of entry to become target. Returns a
  * bit mask where 1 indicates that qword needs to be set disruptively.
@@ -1094,13 +1100,22 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
 {
 	__le64 target_used[NUM_ENTRY_QWORDS] = {};
 	__le64 cur_used[NUM_ENTRY_QWORDS] = {};
+	__le64 ignored[NUM_ENTRY_QWORDS] = {};
 	u8 used_qword_diff = 0;
 	unsigned int i;
 
 	writer->ops->get_used(entry, cur_used);
 	writer->ops->get_used(target, target_used);
+	if (writer->ops->get_ignored)
+		writer->ops->get_ignored(ignored);
 
 	for (i = 0; i != NUM_ENTRY_QWORDS; i++) {
+		/*
+		 * Ignored is only used for bits that are used by both entries,
+		 * otherwise it is sequenced according to the unused entry.
+		 */
+		ignored[i] &= target_used[i] & cur_used[i];
+
 		/*
 		 * Check that masks are up to date, the make functions are not
 		 * allowed to set a bit to 1 if the used function doesn't say it
@@ -1109,6 +1124,7 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
 		WARN_ON_ONCE(target[i] & ~target_used[i]);
 
 		/* Bits can change because they are not currently being used */
+		cur_used[i] &= ~ignored[i];
 		unused_update[i] = (entry[i] & cur_used[i]) |
 				   (target[i] & ~cur_used[i]);
 		/*
@@ -1207,12 +1223,9 @@ void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, __le64 *entry,
 		entry_set(writer, entry, target, 0, 1);
 	} else {
 		/*
-		 * No inuse bit changed. Sanity check that all unused bits are 0
-		 * in the entry. The target was already sanity checked by
-		 * compute_qword_diff().
+		 * No inuse bit changed, though ignored bits may have changed.
 		 */
-		WARN_ON_ONCE(
-			entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS));
+		entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS);
 	}
 }
 EXPORT_SYMBOL_IF_KUNIT(arm_smmu_write_entry);
@@ -1543,6 +1556,7 @@ static void arm_smmu_ste_writer_sync_entry(struct arm_smmu_entry_writer *writer)
 static const struct arm_smmu_entry_writer_ops arm_smmu_ste_writer_ops = {
 	.sync = arm_smmu_ste_writer_sync_entry,
 	.get_used = arm_smmu_get_ste_used,
+	.get_ignored = arm_smmu_get_ste_ignored,
 };
 
 static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
-- 
2.43.0



* [PATCH rc v2 2/4] iommu/arm-smmu-v3: Ignore STE MEV when computing the update sequence
  2025-12-07 20:49 [PATCH rc v2 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases Nicolin Chen
  2025-12-07 20:49 ` [PATCH rc v2 1/4] iommu/arm-smmu-v3: Add ignored bits to fix STE update sequence Nicolin Chen
@ 2025-12-07 20:49 ` Nicolin Chen
  2025-12-08  2:33   ` Shuai Xue
  2025-12-07 20:49 ` [PATCH rc v2 3/4] iommu/arm-smmu-v3: Ignore STE EATS " Nicolin Chen
  2025-12-07 20:49 ` [PATCH rc v2 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage Nicolin Chen
  3 siblings, 1 reply; 11+ messages in thread
From: Nicolin Chen @ 2025-12-07 20:49 UTC (permalink / raw)
  To: jgg, will, robin.murphy
  Cc: joro, linux-arm-kernel, iommu, linux-kernel, skolothumtho, praan,
	xueshuai

From: Jason Gunthorpe <jgg@nvidia.com>

Nested CD tables set the MEV bit to reduce multi-fault spamming of the
hypervisor. Since MEV is in STE word 1, this forces a breaking update
sequence that is not required and impacts real workloads.

For the purposes of STE updates, the value of MEV does not matter: whether
it is set or cleared early or late, the result is only a change in the
fault reports, which the kernel must support anyway. The spec says:

 Note: Software must expect, and be able to deal with, coalesced fault
 records even when MEV == 0.

So ignore MEV when computing the update sequence to avoid creating a
breaking update.

Fixes: da0c56520e88 ("iommu/arm-smmu-v3: Set MEV bit in nested STE for DoS mitigations")
Cc: stable@vger.kernel.org
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index e22c0890041b..3e161d8298d9 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1085,6 +1085,16 @@ EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_used);
 VISIBLE_IF_KUNIT
 void arm_smmu_get_ste_ignored(__le64 *ignored_bits)
 {
+	/*
+	 * MEV does not meaningfully impact the operation of the HW, it only
+	 * changes how many fault events are generated, thus we can ignore it
+	 * when computing the ordering. The spec notes the device can act like
+	 * MEV=1 anyhow:
+	 *
+	 *  Note: Software must expect, and be able to deal with, coalesced
+	 *  fault records even when MEV == 0.
+	 */
+	ignored_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
 }
 EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_ignored);
 
-- 
2.43.0



* [PATCH rc v2 3/4] iommu/arm-smmu-v3: Ignore STE EATS when computing the update sequence
  2025-12-07 20:49 [PATCH rc v2 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases Nicolin Chen
  2025-12-07 20:49 ` [PATCH rc v2 1/4] iommu/arm-smmu-v3: Add ignored bits to fix STE update sequence Nicolin Chen
  2025-12-07 20:49 ` [PATCH rc v2 2/4] iommu/arm-smmu-v3: Ignore STE MEV when computing the " Nicolin Chen
@ 2025-12-07 20:49 ` Nicolin Chen
  2025-12-08  2:33   ` Shuai Xue
  2025-12-07 20:49 ` [PATCH rc v2 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage Nicolin Chen
  3 siblings, 1 reply; 11+ messages in thread
From: Nicolin Chen @ 2025-12-07 20:49 UTC (permalink / raw)
  To: jgg, will, robin.murphy
  Cc: joro, linux-arm-kernel, iommu, linux-kernel, skolothumtho, praan,
	xueshuai

From: Jason Gunthorpe <jgg@nvidia.com>

If a VM wants to toggle EATS off at the same time as changing the CFG, the
hypervisor will see EATS change to 0 and insert a V=0 breaking update into
the STE, even though the VM did not ask for that.

On bare metal, EATS is ignored when CFG=ABORT/BYPASS, which is why this
did not cause a problem until nesting, where CFG is always a variation of
S2 translation that does use EATS.

Relax the rules for EATS sequencing; it does not need to be exact because
the enclosing code always disables ATS at the PCI device when changing
EATS. This ensures no ATS transactions can race with an EATS change, so
these bits do not need careful sequencing.

Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
Cc: stable@vger.kernel.org
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 3e161d8298d9..72ba41591fdb 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1095,6 +1095,15 @@ void arm_smmu_get_ste_ignored(__le64 *ignored_bits)
 	 *  fault records even when MEV == 0.
 	 */
 	ignored_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
+
+	/*
+	 * EATS is used to reject and control the ATS behavior of the device. If
+	 * we are changing it away from 0 then we already trust the device to
+	 * use ATS properly and we have sequenced the device's ATS enable in PCI
+	 * config space to prevent it from issuing ATS while we are changing
+	 * EATS.
+	 */
+	ignored_bits[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
 }
 EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_ignored);
 
-- 
2.43.0



* [PATCH rc v2 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage
  2025-12-07 20:49 [PATCH rc v2 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases Nicolin Chen
                   ` (2 preceding siblings ...)
  2025-12-07 20:49 ` [PATCH rc v2 3/4] iommu/arm-smmu-v3: Ignore STE EATS " Nicolin Chen
@ 2025-12-07 20:49 ` Nicolin Chen
  2025-12-08  3:43   ` Shuai Xue
  3 siblings, 1 reply; 11+ messages in thread
From: Nicolin Chen @ 2025-12-07 20:49 UTC (permalink / raw)
  To: jgg, will, robin.murphy
  Cc: joro, linux-arm-kernel, iommu, linux-kernel, skolothumtho, praan,
	xueshuai

An STE in the nested case requires both S1 and S2 fields, which makes it
different from the existing test cases.

Add coverage for the previously failing transitions between S2-only and
S1+S2 STEs.

Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c  | 61 +++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
index 3556e65cf9ac..1672e75ebffc 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
@@ -555,6 +555,65 @@ static void arm_smmu_v3_write_ste_test_s2_to_s1_stall(struct kunit *test)
 						       NUM_EXPECTED_SYNCS(3));
 }
 
+static void arm_smmu_test_make_nested_cdtable_ste(
+	struct arm_smmu_ste *ste, unsigned int s1dss, const dma_addr_t dma_addr,
+	enum arm_smmu_test_master_feat feat)
+{
+	bool stall_enabled = feat & ARM_SMMU_MASTER_TEST_STALL;
+	bool ats_enabled = feat & ARM_SMMU_MASTER_TEST_ATS;
+	struct arm_smmu_ste s1ste;
+
+	struct arm_smmu_master master = {
+		.ats_enabled = ats_enabled,
+		.cd_table.cdtab_dma = dma_addr,
+		.cd_table.s1cdmax = 0xFF,
+		.cd_table.s1fmt = STRTAB_STE_0_S1FMT_64K_L2,
+		.smmu = &smmu,
+		.stall_enabled = stall_enabled,
+	};
+
+	arm_smmu_test_make_s2_ste(ste, ARM_SMMU_MASTER_TEST_ATS);
+	arm_smmu_make_cdtable_ste(&s1ste, &master, ats_enabled, s1dss);
+
+	ste->data[0] = cpu_to_le64(
+		STRTAB_STE_0_V |
+		FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_NESTED));
+	ste->data[0] |= s1ste.data[0] & ~cpu_to_le64(STRTAB_STE_0_CFG);
+	ste->data[1] |= s1ste.data[1];
+	/* Merge events for DoS mitigations on eventq */
+	ste->data[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
+}
+
+static void
+arm_smmu_v3_write_ste_test_nested_s1dssbypass_to_s1bypass(struct kunit *test)
+{
+	struct arm_smmu_ste s1_ste;
+	struct arm_smmu_ste s2_ste;
+
+	arm_smmu_test_make_nested_cdtable_ste(&s1_ste,
+					      STRTAB_STE_1_S1DSS_BYPASS,
+					      fake_cdtab_dma_addr,
+					      ARM_SMMU_MASTER_TEST_ATS);
+	arm_smmu_test_make_s2_ste(&s2_ste, 0);
+	arm_smmu_v3_test_ste_expect_hitless_transition(test, &s1_ste, &s2_ste,
+						       NUM_EXPECTED_SYNCS(3));
+}
+
+static void
+arm_smmu_v3_write_ste_test_nested_s1bypass_to_s1dssbypass(struct kunit *test)
+{
+	struct arm_smmu_ste s1_ste;
+	struct arm_smmu_ste s2_ste;
+
+	arm_smmu_test_make_nested_cdtable_ste(&s1_ste,
+					      STRTAB_STE_1_S1DSS_BYPASS,
+					      fake_cdtab_dma_addr,
+					      ARM_SMMU_MASTER_TEST_ATS);
+	arm_smmu_test_make_s2_ste(&s2_ste, 0);
+	arm_smmu_v3_test_ste_expect_hitless_transition(test, &s2_ste, &s1_ste,
+						       NUM_EXPECTED_SYNCS(2));
+}
+
 static void arm_smmu_v3_write_cd_test_sva_clear(struct kunit *test)
 {
 	struct arm_smmu_cd cd = {};
@@ -601,6 +660,8 @@ static struct kunit_case arm_smmu_v3_test_cases[] = {
 	KUNIT_CASE(arm_smmu_v3_write_cd_test_s1_change_asid),
 	KUNIT_CASE(arm_smmu_v3_write_ste_test_s1_to_s2_stall),
 	KUNIT_CASE(arm_smmu_v3_write_ste_test_s2_to_s1_stall),
+	KUNIT_CASE(arm_smmu_v3_write_ste_test_nested_s1dssbypass_to_s1bypass),
+	KUNIT_CASE(arm_smmu_v3_write_ste_test_nested_s1bypass_to_s1dssbypass),
 	KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_clear),
 	KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_release),
 	{},
-- 
2.43.0



* Re: [PATCH rc v2 1/4] iommu/arm-smmu-v3: Add ignored bits to fix STE update sequence
  2025-12-07 20:49 ` [PATCH rc v2 1/4] iommu/arm-smmu-v3: Add ignored bits to fix STE update sequence Nicolin Chen
@ 2025-12-08  2:33   ` Shuai Xue
  0 siblings, 0 replies; 11+ messages in thread
From: Shuai Xue @ 2025-12-08  2:33 UTC (permalink / raw)
  To: Nicolin Chen, jgg, will, robin.murphy
  Cc: joro, linux-arm-kernel, iommu, linux-kernel, skolothumtho, praan



On 2025/12/8 04:49, Nicolin Chen wrote:
> From: Jason Gunthorpe <jgg@nvidia.com>
> 
> C_BAD_STE was observed when updating nested STE from an S1-bypass mode to
> an S1DSS-bypass mode. As both modes enabled S2, the used bit is slightly
> different than the normal S1-bypass and S1DSS-bypass modes. As a result,
> fields like MEV and EATS in S2's used list marked the word1 as a critical
> word that requested a STE.V=0. This breaks a hitless update.
> 
> However, both MEV and EATS aren't critical in terms of STE update. One
> controls the merge of the events and the other controls the ATS that is
> managed by the driver at the same time via pci_enable_ats().
> 
> Add an arm_smmu_get_ste_ignored() to allow STE update algorithm to ignore
> those fields, avoiding the STE update breakages.
> 
> Note that this change is required by both MEV and EATS fields, which were
> introduced in different kernel versions. So add this get_ignored() first.
> The MEV and EATS will be added in arm_smmu_get_ste_ignored() separately.
> 
> Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
> Cc: stable@vger.kernel.org
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---

Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>

Thanks.
Shuai



* Re: [PATCH rc v2 2/4] iommu/arm-smmu-v3: Ignore STE MEV when computing the update sequence
  2025-12-07 20:49 ` [PATCH rc v2 2/4] iommu/arm-smmu-v3: Ignore STE MEV when computing the " Nicolin Chen
@ 2025-12-08  2:33   ` Shuai Xue
  0 siblings, 0 replies; 11+ messages in thread
From: Shuai Xue @ 2025-12-08  2:33 UTC (permalink / raw)
  To: Nicolin Chen, jgg, will, robin.murphy
  Cc: joro, linux-arm-kernel, iommu, linux-kernel, skolothumtho, praan



On 2025/12/8 04:49, Nicolin Chen wrote:
> From: Jason Gunthorpe <jgg@nvidia.com>
> 
> Nested CD tables set the MEV bit to try to reduce multi-fault spamming on
> the hypervisor. Since MEV is in STE word 1 this causes a breaking update
> sequence that is not required and impacts real workloads.
> 
> For the purposes of STE updates the value of MEV doesn't matter, if it is
> set/cleared early or late it just results in a change to the fault reports
> that must be supported by the kernel anyhow. The spec says:
> 
>   Note: Software must expect, and be able to deal with, coalesced fault
>   records even when MEV == 0.
> 
> So ignore MEV when computing the update sequence to avoid creating a
> breaking update.
> 
> Fixes: da0c56520e88 ("iommu/arm-smmu-v3: Set MEV bit in nested STE for DoS mitigations")
> Cc: stable@vger.kernel.org
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
>   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 10 ++++++++++
>   1 file changed, 10 insertions(+)
> 
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index e22c0890041b..3e161d8298d9 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -1085,6 +1085,16 @@ EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_used);
>   VISIBLE_IF_KUNIT
>   void arm_smmu_get_ste_ignored(__le64 *ignored_bits)
>   {
> +	/*
> +	 * MEV does not meaningfully impact the operation of the HW, it only
> +	 * changes how many fault events are generated, thus we can ignore it
> +	 * when computing the ordering. The spec notes the device can act like
> +	 * MEV=1 anyhow:
> +	 *
> +	 *  Note: Software must expect, and be able to deal with, coalesced
> +	 *  fault records even when MEV == 0.
> +	 */
> +	ignored_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
>   }
>   EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_ignored);
>   


Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>

Thanks.
Shuai


* Re: [PATCH rc v2 3/4] iommu/arm-smmu-v3: Ignore STE EATS when computing the update sequence
  2025-12-07 20:49 ` [PATCH rc v2 3/4] iommu/arm-smmu-v3: Ignore STE EATS " Nicolin Chen
@ 2025-12-08  2:33   ` Shuai Xue
  0 siblings, 0 replies; 11+ messages in thread
From: Shuai Xue @ 2025-12-08  2:33 UTC (permalink / raw)
  To: Nicolin Chen, jgg, will, robin.murphy
  Cc: joro, linux-arm-kernel, iommu, linux-kernel, skolothumtho, praan



On 2025/12/8 04:49, Nicolin Chen wrote:
> From: Jason Gunthorpe <jgg@nvidia.com>
> 
> If a VM wants to toggle EATS off at the same time as changing the CFG, the
> hypervisor will see EATS change to 0 and insert a V=0 breaking update into
> the STE even though the VM did not ask for that.
> 
> In bare metal, EATS is ignored by CFG=ABORT/BYPASS, which is why this does
> not cause a problem until we have nested where CFG is always a variation of
> S2 trans that does use EATS.
> 
> Relax the rules for EATS sequencing, we don't need it to be exact because
> the enclosing code will always disable ATS at the PCI device if we are
> changing EATS. This ensures there are no ATS transactions that can race
> with an EATS change so we don't need to carefully sequence these bits.
> 
> Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
> Cc: stable@vger.kernel.org
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
>   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 9 +++++++++
>   1 file changed, 9 insertions(+)
> 
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index 3e161d8298d9..72ba41591fdb 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -1095,6 +1095,15 @@ void arm_smmu_get_ste_ignored(__le64 *ignored_bits)
>   	 *  fault records even when MEV == 0.
>   	 */
>   	ignored_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
> +
> +	/*
> +	 * EATS is used to reject and control the ATS behavior of the device. If
> +	 * we are changing it away from 0 then we already trust the device to
> +	 * use ATS properly and we have sequenced the device's ATS enable in PCI
> +	 * config space to prevent it from issuing ATS while we are changing
> +	 * EATS.
> +	 */
> +	ignored_bits[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
>   }
>   EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_ignored);
>   


Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>

Thanks.
Shuai


* Re: [PATCH rc v2 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage
  2025-12-07 20:49 ` [PATCH rc v2 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage Nicolin Chen
@ 2025-12-08  3:43   ` Shuai Xue
  2025-12-09 21:04     ` Nicolin Chen
  0 siblings, 1 reply; 11+ messages in thread
From: Shuai Xue @ 2025-12-08  3:43 UTC (permalink / raw)
  To: Nicolin Chen, jgg, will, robin.murphy
  Cc: joro, linux-arm-kernel, iommu, linux-kernel, skolothumtho, praan



On 2025/12/8 04:49, Nicolin Chen wrote:
> STE in a nested case requires both S1 and S2 fields. And this makes the use
> case different from the existing one.
> 
> Add coverage for previously failed cases shifting between S2-only and S1+S2
> STEs.
> 
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
>   .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c  | 61 +++++++++++++++++++
>   1 file changed, 61 insertions(+)
> 
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> index 3556e65cf9ac..1672e75ebffc 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> @@ -555,6 +555,65 @@ static void arm_smmu_v3_write_ste_test_s2_to_s1_stall(struct kunit *test)
>   						       NUM_EXPECTED_SYNCS(3));
>   }
>   
> +static void arm_smmu_test_make_nested_cdtable_ste(
> +	struct arm_smmu_ste *ste, unsigned int s1dss, const dma_addr_t dma_addr,
> +	enum arm_smmu_test_master_feat feat)
> +{
> +	bool stall_enabled = feat & ARM_SMMU_MASTER_TEST_STALL;
> +	bool ats_enabled = feat & ARM_SMMU_MASTER_TEST_ATS;
> +	struct arm_smmu_ste s1ste;
> +
> +	struct arm_smmu_master master = {
> +		.ats_enabled = ats_enabled,
> +		.cd_table.cdtab_dma = dma_addr,
> +		.cd_table.s1cdmax = 0xFF,
> +		.cd_table.s1fmt = STRTAB_STE_0_S1FMT_64K_L2,
> +		.smmu = &smmu,
> +		.stall_enabled = stall_enabled,
> +	};
> +
> +	arm_smmu_test_make_s2_ste(ste, ARM_SMMU_MASTER_TEST_ATS);
> +	arm_smmu_make_cdtable_ste(&s1ste, &master, ats_enabled, s1dss);

Hi, Nicolin,

Nit. Instead of duplicating this code, we can leverage the existing
arm_smmu_test_make_cdtable_ste() helper here.

> +
> +	ste->data[0] = cpu_to_le64(
> +		STRTAB_STE_0_V |
> +		FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_NESTED));
> +	ste->data[0] |= s1ste.data[0] & ~cpu_to_le64(STRTAB_STE_0_CFG);
> +	ste->data[1] |= s1ste.data[1];
> +	/* Merge events for DoS mitigations on eventq */
> +	ste->data[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
> +}
> +
> +static void
> +arm_smmu_v3_write_ste_test_nested_s1dssbypass_to_s1bypass(struct kunit *test)
> +{
> +	struct arm_smmu_ste s1_ste;
> +	struct arm_smmu_ste s2_ste;
> +
> +	arm_smmu_test_make_nested_cdtable_ste(&s1_ste,
> +					      STRTAB_STE_1_S1DSS_BYPASS,
> +					      fake_cdtab_dma_addr,
> +					      ARM_SMMU_MASTER_TEST_ATS);
> +	arm_smmu_test_make_s2_ste(&s2_ste, 0);
> +	arm_smmu_v3_test_ste_expect_hitless_transition(test, &s1_ste, &s2_ste,
> +						       NUM_EXPECTED_SYNCS(3));
> +}
> +
> +static void
> +arm_smmu_v3_write_ste_test_nested_s1bypass_to_s1dssbypass(struct kunit *test)
> +{
> +	struct arm_smmu_ste s1_ste;
> +	struct arm_smmu_ste s2_ste;
> +
> +	arm_smmu_test_make_nested_cdtable_ste(&s1_ste,
> +					      STRTAB_STE_1_S1DSS_BYPASS,
> +					      fake_cdtab_dma_addr,
> +					      ARM_SMMU_MASTER_TEST_ATS);
> +	arm_smmu_test_make_s2_ste(&s2_ste, 0);
> +	arm_smmu_v3_test_ste_expect_hitless_transition(test, &s2_ste, &s1_ste,
> +						       NUM_EXPECTED_SYNCS(2));

It would be better to add comments explaining why the number of syncs differs
between the reverse transitions.

> +}
> +
>   static void arm_smmu_v3_write_cd_test_sva_clear(struct kunit *test)
>   {
>   	struct arm_smmu_cd cd = {};
> @@ -601,6 +660,8 @@ static struct kunit_case arm_smmu_v3_test_cases[] = {
>   	KUNIT_CASE(arm_smmu_v3_write_cd_test_s1_change_asid),
>   	KUNIT_CASE(arm_smmu_v3_write_ste_test_s1_to_s2_stall),
>   	KUNIT_CASE(arm_smmu_v3_write_ste_test_s2_to_s1_stall),
> +	KUNIT_CASE(arm_smmu_v3_write_ste_test_nested_s1dssbypass_to_s1bypass),
> +	KUNIT_CASE(arm_smmu_v3_write_ste_test_nested_s1bypass_to_s1dssbypass),
>   	KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_clear),
>   	KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_release),
>   	{},

Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>

Thanks.
Shuai


* Re: [PATCH rc v2 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage
  2025-12-08  3:43   ` Shuai Xue
@ 2025-12-09 21:04     ` Nicolin Chen
  2025-12-10  1:53       ` Shuai Xue
  0 siblings, 1 reply; 11+ messages in thread
From: Nicolin Chen @ 2025-12-09 21:04 UTC (permalink / raw)
  To: Shuai Xue
  Cc: jgg, will, robin.murphy, joro, linux-arm-kernel, iommu,
	linux-kernel, skolothumtho, praan

On Mon, Dec 08, 2025 at 11:43:41AM +0800, Shuai Xue wrote:
> Hi, Nicolin,
> 
> Nit. Instead of duplicating this code, we can leverage the existing
> arm_smmu_test_make_cdtable_ste() helper here.

Thanks for the review. I squashed the following changes:

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
index 1672e75ebffc2..197b8b55fe7a2 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
@@ -33,8 +33,12 @@ static struct mm_struct sva_mm = {
 enum arm_smmu_test_master_feat {
 	ARM_SMMU_MASTER_TEST_ATS = BIT(0),
 	ARM_SMMU_MASTER_TEST_STALL = BIT(1),
+	ARM_SMMU_MASTER_TEST_NESTED = BIT(2),
 };
 
+static void arm_smmu_test_make_s2_ste(struct arm_smmu_ste *ste,
+				      enum arm_smmu_test_master_feat feat);
+
 static bool arm_smmu_entry_differs_in_used_bits(const __le64 *entry,
 						const __le64 *used_bits,
 						const __le64 *target,
@@ -198,6 +202,17 @@ static void arm_smmu_test_make_cdtable_ste(struct arm_smmu_ste *ste,
 	};
 
 	arm_smmu_make_cdtable_ste(ste, &master, ats_enabled, s1dss);
+	if (feat & ARM_SMMU_MASTER_TEST_NESTED) {
+		struct arm_smmu_ste s2ste;
+		int i;
+
+		arm_smmu_test_make_s2_ste(&s2ste, ARM_SMMU_MASTER_TEST_ATS);
+		ste->data[0] |= cpu_to_le64(
+			FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_NESTED));
+		ste->data[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
+		for (i = 2; i < NUM_ENTRY_QWORDS; i++)
+			ste->data[i] = s2ste.data[i];
+	}
 }
 
 static void arm_smmu_v3_write_ste_test_bypass_to_abort(struct kunit *test)
@@ -555,46 +570,17 @@ static void arm_smmu_v3_write_ste_test_s2_to_s1_stall(struct kunit *test)
 						       NUM_EXPECTED_SYNCS(3));
 }
 
-static void arm_smmu_test_make_nested_cdtable_ste(
-	struct arm_smmu_ste *ste, unsigned int s1dss, const dma_addr_t dma_addr,
-	enum arm_smmu_test_master_feat feat)
-{
-	bool stall_enabled = feat & ARM_SMMU_MASTER_TEST_STALL;
-	bool ats_enabled = feat & ARM_SMMU_MASTER_TEST_ATS;
-	struct arm_smmu_ste s1ste;
-
-	struct arm_smmu_master master = {
-		.ats_enabled = ats_enabled,
-		.cd_table.cdtab_dma = dma_addr,
-		.cd_table.s1cdmax = 0xFF,
-		.cd_table.s1fmt = STRTAB_STE_0_S1FMT_64K_L2,
-		.smmu = &smmu,
-		.stall_enabled = stall_enabled,
-	};
-
-	arm_smmu_test_make_s2_ste(ste, ARM_SMMU_MASTER_TEST_ATS);
-	arm_smmu_make_cdtable_ste(&s1ste, &master, ats_enabled, s1dss);
-
-	ste->data[0] = cpu_to_le64(
-		STRTAB_STE_0_V |
-		FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_NESTED));
-	ste->data[0] |= s1ste.data[0] & ~cpu_to_le64(STRTAB_STE_0_CFG);
-	ste->data[1] |= s1ste.data[1];
-	/* Merge events for DoS mitigations on eventq */
-	ste->data[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
-}
-
 static void
 arm_smmu_v3_write_ste_test_nested_s1dssbypass_to_s1bypass(struct kunit *test)
 {
 	struct arm_smmu_ste s1_ste;
 	struct arm_smmu_ste s2_ste;
 
-	arm_smmu_test_make_nested_cdtable_ste(&s1_ste,
-					      STRTAB_STE_1_S1DSS_BYPASS,
-					      fake_cdtab_dma_addr,
-					      ARM_SMMU_MASTER_TEST_ATS);
+	arm_smmu_test_make_cdtable_ste(
+		&s1_ste, STRTAB_STE_1_S1DSS_BYPASS, fake_cdtab_dma_addr,
+		ARM_SMMU_MASTER_TEST_ATS | ARM_SMMU_MASTER_TEST_NESTED);
 	arm_smmu_test_make_s2_ste(&s2_ste, 0);
+	/* Expect an additional sync to unset ignored bits: EATS and MEV */
 	arm_smmu_v3_test_ste_expect_hitless_transition(test, &s1_ste, &s2_ste,
 						       NUM_EXPECTED_SYNCS(3));
 }
@@ -605,10 +591,9 @@ arm_smmu_v3_write_ste_test_nested_s1bypass_to_s1dssbypass(struct kunit *test)
 	struct arm_smmu_ste s1_ste;
 	struct arm_smmu_ste s2_ste;
 
-	arm_smmu_test_make_nested_cdtable_ste(&s1_ste,
-					      STRTAB_STE_1_S1DSS_BYPASS,
-					      fake_cdtab_dma_addr,
-					      ARM_SMMU_MASTER_TEST_ATS);
+	arm_smmu_test_make_cdtable_ste(
+		&s1_ste, STRTAB_STE_1_S1DSS_BYPASS, fake_cdtab_dma_addr,
+		ARM_SMMU_MASTER_TEST_ATS | ARM_SMMU_MASTER_TEST_NESTED);
 	arm_smmu_test_make_s2_ste(&s2_ste, 0);
 	arm_smmu_v3_test_ste_expect_hitless_transition(test, &s2_ste, &s1_ste,
 						       NUM_EXPECTED_SYNCS(2));


* Re: [PATCH rc v2 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage
  2025-12-09 21:04     ` Nicolin Chen
@ 2025-12-10  1:53       ` Shuai Xue
  0 siblings, 0 replies; 11+ messages in thread
From: Shuai Xue @ 2025-12-10  1:53 UTC (permalink / raw)
  To: Nicolin Chen
  Cc: jgg, will, robin.murphy, joro, linux-arm-kernel, iommu,
	linux-kernel, skolothumtho, praan



On 2025/12/10 05:04, Nicolin Chen wrote:
> On Mon, Dec 08, 2025 at 11:43:41AM +0800, Shuai Xue wrote:
>> Hi, Nicolin,
>>
>> Nit. Instead of duplicating this code, we can leverage the existing
>> arm_smmu_test_make_cdtable_ste() helper here.
> 
> Thanks for the review. I squashed the following changes:
> 
> [... quoted diff snipped; identical to the squashed diff in the parent message ...]

Nice work. LGTM.

Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>

Thanks.
Shuai



end of thread, other threads:[~2025-12-10  1:54 UTC | newest]

Thread overview: 11+ messages -- links below jump to the message on this page --
2025-12-07 20:49 [PATCH rc v2 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases Nicolin Chen
2025-12-07 20:49 ` [PATCH rc v2 1/4] iommu/arm-smmu-v3: Add ignored bits to fix STE update sequence Nicolin Chen
2025-12-08  2:33   ` Shuai Xue
2025-12-07 20:49 ` [PATCH rc v2 2/4] iommu/arm-smmu-v3: Ignore STE MEV when computing the " Nicolin Chen
2025-12-08  2:33   ` Shuai Xue
2025-12-07 20:49 ` [PATCH rc v2 3/4] iommu/arm-smmu-v3: Ignore STE EATS " Nicolin Chen
2025-12-08  2:33   ` Shuai Xue
2025-12-07 20:49 ` [PATCH rc v2 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage Nicolin Chen
2025-12-08  3:43   ` Shuai Xue
2025-12-09 21:04     ` Nicolin Chen
2025-12-10  1:53       ` Shuai Xue
