* [PATCH rc v4 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases
@ 2025-12-17 4:25 Nicolin Chen
2025-12-17 4:25 ` [PATCH rc v4 1/4] iommu/arm-smmu-v3: Add update_safe bits to fix STE update sequence Nicolin Chen
` (3 more replies)
0 siblings, 4 replies; 13+ messages in thread
From: Nicolin Chen @ 2025-12-17 4:25 UTC (permalink / raw)
To: jgg, will, robin.murphy
Cc: smostafa, joro, linux-arm-kernel, iommu, linux-kernel,
skolothumtho, praan, xueshuai
Occasional C_BAD_STE errors were observed in nesting setups where a device
attached to a nested bypass/identity domain enables PASID.
This occurred when the physical STE was updated from S2-only mode to S1+S2
nesting mode, but the update failed to take the hitless path that it was
supposed to. Instead, it cleared the STE.V bit to load the CD table, while
the default substream was still actively performing DMA.
It was later found that the diff algorithm in arm_smmu_entry_qword_diff()
flagged an additional critical word due to the MEV and EATS fields differing
between the S2-only and S1+S2 modes.
Both fields are either well managed or non-critical, so mark them as
"update_safe" to relax the qword diff algorithm.
Additionally, add KUnit test coverage for these nesting STE cases.
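For reference, a condensed sketch of the relaxed diff logic (based on the
arm_smmu_entry_qword_diff() change in patch 1; illustrative, not the
verbatim driver code):

	static u8 qword_diff_sketch(const u64 *entry, u64 *cur_used,
				    const u64 *target, const u64 *target_used,
				    u64 *safe, u64 *unused_update,
				    unsigned int nqwords)
	{
		u8 used_qword_diff = 0;
		unsigned int i;

		for (i = 0; i != nqwords; i++) {
			/* Only bits used by both entries may be relaxed */
			safe[i] &= target_used[i] & cur_used[i];
			/* Treat update_safe bits (MEV/EATS) as unused */
			cur_used[i] &= ~safe[i];
			/* Unused bits can move to the target value early */
			unused_update[i] = (entry[i] & cur_used[i]) |
					   (target[i] & ~cur_used[i]);
			/* A remaining used-bit change forces a V=0 update */
			if ((unused_update[i] & target_used[i]) != target[i])
				used_qword_diff |= 1 << i;
		}
		return used_qword_diff;
	}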
This is on Github:
https://github.com/nicolinc/iommufd/commits/smmuv3_ste_fixes/
A host kernel must apply this series to fix the bug.
Changelog
v4:
* s/ignored/update_safe
* Change entry_set to void
v3:
https://lore.kernel.org/all/cover.1765334526.git.nicolinc@nvidia.com/
* Add Reviewed-by from Shuai
* Add inline comments in the nested test cases
* Reuse arm_smmu_test_make_cdtable_ste() for nested test cases
v2:
https://lore.kernel.org/all/cover.1765140287.git.nicolinc@nvidia.com/
* Fix kunit tests
* Update commit message and inline comments
* Keep MEV/EATS in used list by masking them away using ignored_bits
v1:
https://lore.kernel.org/all/cover.1764982046.git.nicolinc@nvidia.com/
Jason Gunthorpe (3):
iommu/arm-smmu-v3: Add update_safe bits to fix STE update sequence
iommu/arm-smmu-v3: Mark STE MEV safe when computing the update
sequence
iommu/arm-smmu-v3: Mark STE EATS safe when computing the update
sequence
Nicolin Chen (1):
iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 2 +
.../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c | 64 ++++++++++++++++++-
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 46 +++++++++++--
3 files changed, 102 insertions(+), 10 deletions(-)
--
2.43.0
* [PATCH rc v4 1/4] iommu/arm-smmu-v3: Add update_safe bits to fix STE update sequence
2025-12-17 4:25 [PATCH rc v4 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases Nicolin Chen
@ 2025-12-17 4:25 ` Nicolin Chen
2025-12-18 16:40 ` Mostafa Saleh
2025-12-17 4:26 ` [PATCH rc v4 2/4] iommu/arm-smmu-v3: Mark STE MEV safe when computing the " Nicolin Chen
` (2 subsequent siblings)
3 siblings, 1 reply; 13+ messages in thread
From: Nicolin Chen @ 2025-12-17 4:25 UTC (permalink / raw)
To: jgg, will, robin.murphy
Cc: smostafa, joro, linux-arm-kernel, iommu, linux-kernel,
skolothumtho, praan, xueshuai
From: Jason Gunthorpe <jgg@nvidia.com>
C_BAD_STE was observed when updating a nested STE from S1-bypass mode to
S1DSS-bypass mode. As both modes enable S2, their used bits are slightly
different from those of the normal S1-bypass and S1DSS-bypass modes. As a
result, fields like MEV and EATS in the S2 used list marked word 1 as a
critical word that required an STE.V=0 transition, breaking the hitless
update.
However, neither MEV nor EATS is critical for an STE update: one controls
the merging of event records, and the other controls ATS, which the driver
manages at the same time via pci_enable_ats().
Add arm_smmu_get_ste_update_safe() to allow the STE update algorithm to
relax those fields, avoiding the STE update breakage.
After this change, entry_set has no caller checking its return value, so
change it to void.
Note that this change is required for both the MEV and EATS fields, which
were introduced in different kernel versions, so add get_update_safe()
first. MEV and EATS will be added to arm_smmu_get_ste_update_safe() in
separate patches.
Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
Cc: stable@vger.kernel.org
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 2 ++
.../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c | 18 ++++++++++---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 27 ++++++++++++++-----
3 files changed, 37 insertions(+), 10 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index ae23aacc3840..a6c976fa9df2 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -900,6 +900,7 @@ struct arm_smmu_entry_writer {
struct arm_smmu_entry_writer_ops {
void (*get_used)(const __le64 *entry, __le64 *used);
+ void (*get_update_safe)(__le64 *safe_bits);
void (*sync)(struct arm_smmu_entry_writer *writer);
};
@@ -911,6 +912,7 @@ void arm_smmu_make_s2_domain_ste(struct arm_smmu_ste *target,
#if IS_ENABLED(CONFIG_KUNIT)
void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits);
+void arm_smmu_get_ste_update_safe(__le64 *safe_bits);
void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, __le64 *cur,
const __le64 *target);
void arm_smmu_get_cd_used(const __le64 *ent, __le64 *used_bits);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
index d2671bfd3798..5db14718fdd6 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
@@ -38,13 +38,16 @@ enum arm_smmu_test_master_feat {
static bool arm_smmu_entry_differs_in_used_bits(const __le64 *entry,
const __le64 *used_bits,
const __le64 *target,
+ const __le64 *safe,
unsigned int length)
{
bool differs = false;
unsigned int i;
for (i = 0; i < length; i++) {
- if ((entry[i] & used_bits[i]) != target[i])
+ __le64 used = used_bits[i] & ~safe[i];
+
+ if ((entry[i] & used) != (target[i] & used))
differs = true;
}
return differs;
@@ -56,12 +59,17 @@ arm_smmu_test_writer_record_syncs(struct arm_smmu_entry_writer *writer)
struct arm_smmu_test_writer *test_writer =
container_of(writer, struct arm_smmu_test_writer, writer);
__le64 *entry_used_bits;
+ __le64 *safe;
entry_used_bits = kunit_kzalloc(
test_writer->test, sizeof(*entry_used_bits) * NUM_ENTRY_QWORDS,
GFP_KERNEL);
KUNIT_ASSERT_NOT_NULL(test_writer->test, entry_used_bits);
+ safe = kunit_kzalloc(test_writer->test,
+ sizeof(*safe) * NUM_ENTRY_QWORDS, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test_writer->test, safe);
+
pr_debug("STE value is now set to: ");
print_hex_dump_debug(" ", DUMP_PREFIX_NONE, 16, 8,
test_writer->entry,
@@ -79,14 +87,17 @@ arm_smmu_test_writer_record_syncs(struct arm_smmu_entry_writer *writer)
* configuration.
*/
writer->ops->get_used(test_writer->entry, entry_used_bits);
+ if (writer->ops->get_update_safe)
+ writer->ops->get_update_safe(safe);
KUNIT_EXPECT_FALSE(
test_writer->test,
arm_smmu_entry_differs_in_used_bits(
test_writer->entry, entry_used_bits,
- test_writer->init_entry, NUM_ENTRY_QWORDS) &&
+ test_writer->init_entry, safe,
+ NUM_ENTRY_QWORDS) &&
arm_smmu_entry_differs_in_used_bits(
test_writer->entry, entry_used_bits,
- test_writer->target_entry,
+ test_writer->target_entry, safe,
NUM_ENTRY_QWORDS));
}
}
@@ -106,6 +117,7 @@ arm_smmu_v3_test_debug_print_used_bits(struct arm_smmu_entry_writer *writer,
static const struct arm_smmu_entry_writer_ops test_ste_ops = {
.sync = arm_smmu_test_writer_record_syncs,
.get_used = arm_smmu_get_ste_used,
+ .get_update_safe = arm_smmu_get_ste_update_safe,
};
static const struct arm_smmu_entry_writer_ops test_cd_ops = {
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index d16d35c78c06..8dbf4ad5b51e 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1082,6 +1082,12 @@ void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
}
EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_used);
+VISIBLE_IF_KUNIT
+void arm_smmu_get_ste_update_safe(__le64 *safe_bits)
+{
+}
+EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_update_safe);
+
/*
* Figure out if we can do a hitless update of entry to become target. Returns a
* bit mask where 1 indicates that qword needs to be set disruptively.
@@ -1094,13 +1100,22 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
{
__le64 target_used[NUM_ENTRY_QWORDS] = {};
__le64 cur_used[NUM_ENTRY_QWORDS] = {};
+ __le64 safe[NUM_ENTRY_QWORDS] = {};
u8 used_qword_diff = 0;
unsigned int i;
writer->ops->get_used(entry, cur_used);
writer->ops->get_used(target, target_used);
+ if (writer->ops->get_update_safe)
+ writer->ops->get_update_safe(safe);
for (i = 0; i != NUM_ENTRY_QWORDS; i++) {
+ /*
+ * Safe is only used for bits that are used by both entries,
+ * otherwise it is sequenced according to the unused entry.
+ */
+ safe[i] &= target_used[i] & cur_used[i];
+
/*
* Check that masks are up to date, the make functions are not
* allowed to set a bit to 1 if the used function doesn't say it
@@ -1109,6 +1124,7 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
WARN_ON_ONCE(target[i] & ~target_used[i]);
/* Bits can change because they are not currently being used */
+ cur_used[i] &= ~safe[i];
unused_update[i] = (entry[i] & cur_used[i]) |
(target[i] & ~cur_used[i]);
/*
@@ -1121,7 +1137,7 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
return used_qword_diff;
}
-static bool entry_set(struct arm_smmu_entry_writer *writer, __le64 *entry,
+static void entry_set(struct arm_smmu_entry_writer *writer, __le64 *entry,
const __le64 *target, unsigned int start,
unsigned int len)
{
@@ -1137,7 +1153,6 @@ static bool entry_set(struct arm_smmu_entry_writer *writer, __le64 *entry,
if (changed)
writer->ops->sync(writer);
- return changed;
}
/*
@@ -1207,12 +1222,9 @@ void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, __le64 *entry,
entry_set(writer, entry, target, 0, 1);
} else {
/*
- * No inuse bit changed. Sanity check that all unused bits are 0
- * in the entry. The target was already sanity checked by
- * compute_qword_diff().
+ * No inuse bit changed, though safe bits may have changed.
*/
- WARN_ON_ONCE(
- entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS));
+ entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS);
}
}
EXPORT_SYMBOL_IF_KUNIT(arm_smmu_write_entry);
@@ -1543,6 +1555,7 @@ static void arm_smmu_ste_writer_sync_entry(struct arm_smmu_entry_writer *writer)
static const struct arm_smmu_entry_writer_ops arm_smmu_ste_writer_ops = {
.sync = arm_smmu_ste_writer_sync_entry,
.get_used = arm_smmu_get_ste_used,
+ .get_update_safe = arm_smmu_get_ste_update_safe,
};
static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
--
2.43.0
* [PATCH rc v4 2/4] iommu/arm-smmu-v3: Mark STE MEV safe when computing the update sequence
2025-12-17 4:25 [PATCH rc v4 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases Nicolin Chen
2025-12-17 4:25 ` [PATCH rc v4 1/4] iommu/arm-smmu-v3: Add update_safe bits to fix STE update sequence Nicolin Chen
@ 2025-12-17 4:26 ` Nicolin Chen
2025-12-18 16:40 ` Mostafa Saleh
2025-12-17 4:26 ` [PATCH rc v4 3/4] iommu/arm-smmu-v3: Mark STE EATS " Nicolin Chen
2025-12-17 4:26 ` [PATCH rc v4 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage Nicolin Chen
3 siblings, 1 reply; 13+ messages in thread
From: Nicolin Chen @ 2025-12-17 4:26 UTC (permalink / raw)
To: jgg, will, robin.murphy
Cc: smostafa, joro, linux-arm-kernel, iommu, linux-kernel,
skolothumtho, praan, xueshuai
From: Jason Gunthorpe <jgg@nvidia.com>
Nested CD tables set the MEV bit to try to reduce multi-fault spamming on
the hypervisor. Since MEV is in STE word 1 this causes a breaking update
sequence that is not required and impacts real workloads.
For the purposes of STE updates the value of MEV doesn't matter: whether it
is set/cleared early or late, it just results in a change to the fault
reports, which must be supported by the kernel anyhow. The spec says:
Note: Software must expect, and be able to deal with, coalesced fault
records even when MEV == 0.
So mark STE MEV safe when computing the update sequence, to avoid creating
a breaking update.
Fixes: da0c56520e88 ("iommu/arm-smmu-v3: Set MEV bit in nested STE for DoS mitigations")
Cc: stable@vger.kernel.org
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 8dbf4ad5b51e..12a9669bcc83 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1085,6 +1085,16 @@ EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_used);
VISIBLE_IF_KUNIT
void arm_smmu_get_ste_update_safe(__le64 *safe_bits)
{
+ /*
+ * MEV does not meaningfully impact the operation of the HW, it only
+ * changes how many fault events are generated, thus we can relax it
+ * when computing the ordering. The spec notes the device can act like
+ * MEV=1 anyhow:
+ *
+ * Note: Software must expect, and be able to deal with, coalesced
+ * fault records even when MEV == 0.
+ */
+ safe_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
}
EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_update_safe);
--
2.43.0
* [PATCH rc v4 3/4] iommu/arm-smmu-v3: Mark STE EATS safe when computing the update sequence
2025-12-17 4:25 [PATCH rc v4 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases Nicolin Chen
2025-12-17 4:25 ` [PATCH rc v4 1/4] iommu/arm-smmu-v3: Add update_safe bits to fix STE update sequence Nicolin Chen
2025-12-17 4:26 ` [PATCH rc v4 2/4] iommu/arm-smmu-v3: Mark STE MEV safe when computing the " Nicolin Chen
@ 2025-12-17 4:26 ` Nicolin Chen
2025-12-18 16:42 ` Mostafa Saleh
2025-12-17 4:26 ` [PATCH rc v4 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage Nicolin Chen
3 siblings, 1 reply; 13+ messages in thread
From: Nicolin Chen @ 2025-12-17 4:26 UTC (permalink / raw)
To: jgg, will, robin.murphy
Cc: smostafa, joro, linux-arm-kernel, iommu, linux-kernel,
skolothumtho, praan, xueshuai
From: Jason Gunthorpe <jgg@nvidia.com>
If a VM wants to toggle EATS off at the same time as changing the CFG, the
hypervisor will see EATS change to 0 and insert a V=0 breaking update into
the STE even though the VM did not ask for that.
On bare metal, EATS is ignored when CFG=ABORT/BYPASS, which is why this
does not cause a problem until nesting, where CFG is always a variation of
S2 translation that does use EATS.
Relax the rules for EATS sequencing: we don't need it to be exact, because
the enclosing code will always disable ATS at the PCI device if we are
changing EATS. This ensures there are no ATS transactions that can race
with an EATS change, so we don't need to carefully sequence these bits.
Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
Cc: stable@vger.kernel.org
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 12a9669bcc83..a3b29ad20a82 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1095,6 +1095,15 @@ void arm_smmu_get_ste_update_safe(__le64 *safe_bits)
* fault records even when MEV == 0.
*/
safe_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
+
+ /*
+ * EATS is used to reject and control the ATS behavior of the device. If
+ * we are changing it away from 0 then we already trust the device to
+ * use ATS properly and we have sequenced the device's ATS enable in PCI
+ * config space to prevent it from issuing ATS while we are changing
+ * EATS.
+ */
+ safe_bits[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
}
EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_update_safe);
--
2.43.0
* [PATCH rc v4 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage
2025-12-17 4:25 [PATCH rc v4 0/4] iommu/arm-smmu-v3: Fix hitless STE update in nesting cases Nicolin Chen
` (2 preceding siblings ...)
2025-12-17 4:26 ` [PATCH rc v4 3/4] iommu/arm-smmu-v3: Mark STE EATS " Nicolin Chen
@ 2025-12-17 4:26 ` Nicolin Chen
2025-12-18 16:47 ` Mostafa Saleh
3 siblings, 1 reply; 13+ messages in thread
From: Nicolin Chen @ 2025-12-17 4:26 UTC (permalink / raw)
To: jgg, will, robin.murphy
Cc: smostafa, joro, linux-arm-kernel, iommu, linux-kernel,
skolothumtho, praan, xueshuai
An STE in the nested case requires both S1 and S2 fields, which makes it
different from the existing test cases.
Add coverage for the previously failing transitions between S2-only and
S1+S2 STEs.
Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
.../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c | 46 +++++++++++++++++++
1 file changed, 46 insertions(+)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
index 5db14718fdd6..8255a02f4efa 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
@@ -33,8 +33,12 @@ static struct mm_struct sva_mm = {
enum arm_smmu_test_master_feat {
ARM_SMMU_MASTER_TEST_ATS = BIT(0),
ARM_SMMU_MASTER_TEST_STALL = BIT(1),
+ ARM_SMMU_MASTER_TEST_NESTED = BIT(2),
};
+static void arm_smmu_test_make_s2_ste(struct arm_smmu_ste *ste,
+ enum arm_smmu_test_master_feat feat);
+
static bool arm_smmu_entry_differs_in_used_bits(const __le64 *entry,
const __le64 *used_bits,
const __le64 *target,
@@ -197,6 +201,17 @@ static void arm_smmu_test_make_cdtable_ste(struct arm_smmu_ste *ste,
};
arm_smmu_make_cdtable_ste(ste, &master, ats_enabled, s1dss);
+ if (feat & ARM_SMMU_MASTER_TEST_NESTED) {
+ struct arm_smmu_ste s2ste;
+ int i;
+
+ arm_smmu_test_make_s2_ste(&s2ste, ARM_SMMU_MASTER_TEST_ATS);
+ ste->data[0] |= cpu_to_le64(
+ FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_NESTED));
+ ste->data[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
+ for (i = 2; i < NUM_ENTRY_QWORDS; i++)
+ ste->data[i] = s2ste.data[i];
+ }
}
static void arm_smmu_v3_write_ste_test_bypass_to_abort(struct kunit *test)
@@ -554,6 +569,35 @@ static void arm_smmu_v3_write_ste_test_s2_to_s1_stall(struct kunit *test)
NUM_EXPECTED_SYNCS(3));
}
+static void
+arm_smmu_v3_write_ste_test_nested_s1dssbypass_to_s1bypass(struct kunit *test)
+{
+ struct arm_smmu_ste s1_ste;
+ struct arm_smmu_ste s2_ste;
+
+ arm_smmu_test_make_cdtable_ste(
+ &s1_ste, STRTAB_STE_1_S1DSS_BYPASS, fake_cdtab_dma_addr,
+ ARM_SMMU_MASTER_TEST_ATS | ARM_SMMU_MASTER_TEST_NESTED);
+ arm_smmu_test_make_s2_ste(&s2_ste, 0);
+ /* Expect an additional sync to unset ignored bits: EATS and MEV */
+ arm_smmu_v3_test_ste_expect_hitless_transition(test, &s1_ste, &s2_ste,
+ NUM_EXPECTED_SYNCS(3));
+}
+
+static void
+arm_smmu_v3_write_ste_test_nested_s1bypass_to_s1dssbypass(struct kunit *test)
+{
+ struct arm_smmu_ste s1_ste;
+ struct arm_smmu_ste s2_ste;
+
+ arm_smmu_test_make_cdtable_ste(
+ &s1_ste, STRTAB_STE_1_S1DSS_BYPASS, fake_cdtab_dma_addr,
+ ARM_SMMU_MASTER_TEST_ATS | ARM_SMMU_MASTER_TEST_NESTED);
+ arm_smmu_test_make_s2_ste(&s2_ste, 0);
+ arm_smmu_v3_test_ste_expect_hitless_transition(test, &s2_ste, &s1_ste,
+ NUM_EXPECTED_SYNCS(2));
+}
+
static void arm_smmu_v3_write_cd_test_sva_clear(struct kunit *test)
{
struct arm_smmu_cd cd = {};
@@ -600,6 +644,8 @@ static struct kunit_case arm_smmu_v3_test_cases[] = {
KUNIT_CASE(arm_smmu_v3_write_cd_test_s1_change_asid),
KUNIT_CASE(arm_smmu_v3_write_ste_test_s1_to_s2_stall),
KUNIT_CASE(arm_smmu_v3_write_ste_test_s2_to_s1_stall),
+ KUNIT_CASE(arm_smmu_v3_write_ste_test_nested_s1dssbypass_to_s1bypass),
+ KUNIT_CASE(arm_smmu_v3_write_ste_test_nested_s1bypass_to_s1dssbypass),
KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_clear),
KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_release),
{},
--
2.43.0
* Re: [PATCH rc v4 1/4] iommu/arm-smmu-v3: Add update_safe bits to fix STE update sequence
2025-12-17 4:25 ` [PATCH rc v4 1/4] iommu/arm-smmu-v3: Add update_safe bits to fix STE update sequence Nicolin Chen
@ 2025-12-18 16:40 ` Mostafa Saleh
2025-12-19 6:05 ` Nicolin Chen
0 siblings, 1 reply; 13+ messages in thread
From: Mostafa Saleh @ 2025-12-18 16:40 UTC (permalink / raw)
To: Nicolin Chen
Cc: jgg, will, robin.murphy, joro, linux-arm-kernel, iommu,
linux-kernel, skolothumtho, praan, xueshuai
On Tue, Dec 16, 2025 at 08:25:59PM -0800, Nicolin Chen wrote:
> From: Jason Gunthorpe <jgg@nvidia.com>
>
> C_BAD_STE was observed when updating a nested STE from S1-bypass mode to
> S1DSS-bypass mode. As both modes enable S2, their used bits are slightly
> different from those of the normal S1-bypass and S1DSS-bypass modes. As a
> result, fields like MEV and EATS in the S2 used list marked word 1 as a
> critical word that required an STE.V=0 transition, breaking the hitless
> update.
>
> However, neither MEV nor EATS is critical for an STE update: one controls
> the merging of event records, and the other controls ATS, which the driver
> manages at the same time via pci_enable_ats().
>
> Add arm_smmu_get_ste_update_safe() to allow the STE update algorithm to
> relax those fields, avoiding the STE update breakage.
>
> After this change, entry_set has no caller checking its return value, so
> change it to void.
>
> Note that this change is required for both the MEV and EATS fields, which
> were introduced in different kernel versions, so add get_update_safe()
> first. MEV and EATS will be added to arm_smmu_get_ste_update_safe() in
> separate patches.
>
> Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
> Cc: stable@vger.kernel.org
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 2 ++
> .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c | 18 ++++++++++---
> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 27 ++++++++++++++-----
> 3 files changed, 37 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> index ae23aacc3840..a6c976fa9df2 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> @@ -900,6 +900,7 @@ struct arm_smmu_entry_writer {
>
> struct arm_smmu_entry_writer_ops {
> void (*get_used)(const __le64 *entry, __le64 *used);
> + void (*get_update_safe)(__le64 *safe_bits);
> void (*sync)(struct arm_smmu_entry_writer *writer);
> };
>
> @@ -911,6 +912,7 @@ void arm_smmu_make_s2_domain_ste(struct arm_smmu_ste *target,
>
> #if IS_ENABLED(CONFIG_KUNIT)
> void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits);
> +void arm_smmu_get_ste_update_safe(__le64 *safe_bits);
> void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, __le64 *cur,
> const __le64 *target);
> void arm_smmu_get_cd_used(const __le64 *ent, __le64 *used_bits);
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> index d2671bfd3798..5db14718fdd6 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> @@ -38,13 +38,16 @@ enum arm_smmu_test_master_feat {
> static bool arm_smmu_entry_differs_in_used_bits(const __le64 *entry,
> const __le64 *used_bits,
> const __le64 *target,
> + const __le64 *safe,
> unsigned int length)
> {
> bool differs = false;
> unsigned int i;
>
> for (i = 0; i < length; i++) {
> - if ((entry[i] & used_bits[i]) != target[i])
> + __le64 used = used_bits[i] & ~safe[i];
> +
> + if ((entry[i] & used) != (target[i] & used))
> differs = true;
I have been looking at this: before, target was not masked at all, so I'd
expect it to be masked only with (~safe[i]) rather than with used.
But I think that is OK, as we only care about the used bits in the cur
entry and how they differ from target, and target is either the actual
target or the initial STE, so it mustn't have any unused bits set.
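For instance, with made-up values (purely illustrative, not a real STE
layout):

	u64 safe      = BIT(1);			/* pretend this is update-safe */
	u64 used_bits = BIT(1) | BIT(0);	/* get_used() of the entry */
	u64 used      = used_bits & ~safe;	/* == BIT(0) */

	/*
	 * entry and target may now differ in BIT(1) without counting as a
	 * difference, while BIT(0) must still match; masking target with
	 * used only strips safe/unused bits, since the make functions never
	 * set bits outside the used mask.
	 */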
So that should be fine I believe,
Reviewed-by: Mostafa Saleh <smostafa@google.com>
Thanks,
Mostafa
> }
> return differs;
> @@ -56,12 +59,17 @@ arm_smmu_test_writer_record_syncs(struct arm_smmu_entry_writer *writer)
> struct arm_smmu_test_writer *test_writer =
> container_of(writer, struct arm_smmu_test_writer, writer);
> __le64 *entry_used_bits;
> + __le64 *safe;
>
> entry_used_bits = kunit_kzalloc(
> test_writer->test, sizeof(*entry_used_bits) * NUM_ENTRY_QWORDS,
> GFP_KERNEL);
> KUNIT_ASSERT_NOT_NULL(test_writer->test, entry_used_bits);
>
> + safe = kunit_kzalloc(test_writer->test,
> + sizeof(*safe) * NUM_ENTRY_QWORDS, GFP_KERNEL);
> + KUNIT_ASSERT_NOT_NULL(test_writer->test, safe);
> +
> pr_debug("STE value is now set to: ");
> print_hex_dump_debug(" ", DUMP_PREFIX_NONE, 16, 8,
> test_writer->entry,
> @@ -79,14 +87,17 @@ arm_smmu_test_writer_record_syncs(struct arm_smmu_entry_writer *writer)
> * configuration.
> */
> writer->ops->get_used(test_writer->entry, entry_used_bits);
> + if (writer->ops->get_update_safe)
> + writer->ops->get_update_safe(safe);
> KUNIT_EXPECT_FALSE(
> test_writer->test,
> arm_smmu_entry_differs_in_used_bits(
> test_writer->entry, entry_used_bits,
> - test_writer->init_entry, NUM_ENTRY_QWORDS) &&
> + test_writer->init_entry, safe,
> + NUM_ENTRY_QWORDS) &&
> arm_smmu_entry_differs_in_used_bits(
> test_writer->entry, entry_used_bits,
> - test_writer->target_entry,
> + test_writer->target_entry, safe,
> NUM_ENTRY_QWORDS));
> }
> }
> @@ -106,6 +117,7 @@ arm_smmu_v3_test_debug_print_used_bits(struct arm_smmu_entry_writer *writer,
> static const struct arm_smmu_entry_writer_ops test_ste_ops = {
> .sync = arm_smmu_test_writer_record_syncs,
> .get_used = arm_smmu_get_ste_used,
> + .get_update_safe = arm_smmu_get_ste_update_safe,
> };
>
> static const struct arm_smmu_entry_writer_ops test_cd_ops = {
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index d16d35c78c06..8dbf4ad5b51e 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -1082,6 +1082,12 @@ void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
> }
> EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_used);
>
> +VISIBLE_IF_KUNIT
> +void arm_smmu_get_ste_update_safe(__le64 *safe_bits)
> +{
> +}
> +EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_update_safe);
> +
> /*
> * Figure out if we can do a hitless update of entry to become target. Returns a
> * bit mask where 1 indicates that qword needs to be set disruptively.
> @@ -1094,13 +1100,22 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
> {
> __le64 target_used[NUM_ENTRY_QWORDS] = {};
> __le64 cur_used[NUM_ENTRY_QWORDS] = {};
> + __le64 safe[NUM_ENTRY_QWORDS] = {};
> u8 used_qword_diff = 0;
> unsigned int i;
>
> writer->ops->get_used(entry, cur_used);
> writer->ops->get_used(target, target_used);
> + if (writer->ops->get_update_safe)
> + writer->ops->get_update_safe(safe);
>
> for (i = 0; i != NUM_ENTRY_QWORDS; i++) {
> + /*
> + * Safe is only used for bits that are used by both entries,
> + * otherwise it is sequenced according to the unused entry.
> + */
> + safe[i] &= target_used[i] & cur_used[i];
> +
> /*
> * Check that masks are up to date, the make functions are not
> * allowed to set a bit to 1 if the used function doesn't say it
> @@ -1109,6 +1124,7 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
> WARN_ON_ONCE(target[i] & ~target_used[i]);
>
> /* Bits can change because they are not currently being used */
> + cur_used[i] &= ~safe[i];
> unused_update[i] = (entry[i] & cur_used[i]) |
> (target[i] & ~cur_used[i]);
> /*
> @@ -1121,7 +1137,7 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
> return used_qword_diff;
> }
>
> -static bool entry_set(struct arm_smmu_entry_writer *writer, __le64 *entry,
> +static void entry_set(struct arm_smmu_entry_writer *writer, __le64 *entry,
> const __le64 *target, unsigned int start,
> unsigned int len)
> {
> @@ -1137,7 +1153,6 @@ static bool entry_set(struct arm_smmu_entry_writer *writer, __le64 *entry,
>
> if (changed)
> writer->ops->sync(writer);
> - return changed;
> }
>
> /*
> @@ -1207,12 +1222,9 @@ void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, __le64 *entry,
> entry_set(writer, entry, target, 0, 1);
> } else {
> /*
> - * No inuse bit changed. Sanity check that all unused bits are 0
> - * in the entry. The target was already sanity checked by
> - * compute_qword_diff().
> + * No inuse bit changed, though safe bits may have changed.
> */
> - WARN_ON_ONCE(
> - entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS));
> + entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS);
> }
> }
> EXPORT_SYMBOL_IF_KUNIT(arm_smmu_write_entry);
> @@ -1543,6 +1555,7 @@ static void arm_smmu_ste_writer_sync_entry(struct arm_smmu_entry_writer *writer)
> static const struct arm_smmu_entry_writer_ops arm_smmu_ste_writer_ops = {
> .sync = arm_smmu_ste_writer_sync_entry,
> .get_used = arm_smmu_get_ste_used,
> + .get_update_safe = arm_smmu_get_ste_update_safe,
> };
>
> static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
> --
> 2.43.0
>
* Re: [PATCH rc v4 2/4] iommu/arm-smmu-v3: Mark STE MEV safe when computing the update sequence
2025-12-17 4:26 ` [PATCH rc v4 2/4] iommu/arm-smmu-v3: Mark STE MEV safe when computing the " Nicolin Chen
@ 2025-12-18 16:40 ` Mostafa Saleh
0 siblings, 0 replies; 13+ messages in thread
From: Mostafa Saleh @ 2025-12-18 16:40 UTC (permalink / raw)
To: Nicolin Chen
Cc: jgg, will, robin.murphy, joro, linux-arm-kernel, iommu,
linux-kernel, skolothumtho, praan, xueshuai
On Tue, Dec 16, 2025 at 08:26:00PM -0800, Nicolin Chen wrote:
> From: Jason Gunthorpe <jgg@nvidia.com>
>
> Nested CD tables set the MEV bit to try to reduce multi-fault spamming on
> the hypervisor. Since MEV is in STE word 1 this causes a breaking update
> sequence that is not required and impacts real workloads.
>
> For the purposes of STE updates the value of MEV doesn't matter: whether it
> is set/cleared early or late, it just results in a change to the fault
> reports, which must be supported by the kernel anyhow. The spec says:
>
> Note: Software must expect, and be able to deal with, coalesced fault
> records even when MEV == 0.
>
> So mark STE MEV safe when computing the update sequence, to avoid creating
> a breaking update.
>
> Fixes: da0c56520e88 ("iommu/arm-smmu-v3: Set MEV bit in nested STE for DoS mitigations")
> Cc: stable@vger.kernel.org
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
Reviewed-by: Mostafa Saleh <smostafa@google.com>
Thanks,
Mostafa
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index 8dbf4ad5b51e..12a9669bcc83 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -1085,6 +1085,16 @@ EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_used);
> VISIBLE_IF_KUNIT
> void arm_smmu_get_ste_update_safe(__le64 *safe_bits)
> {
> + /*
> + * MEV does not meaningfully impact the operation of the HW, it only
> + * changes how many fault events are generated, thus we can relax it
> + * when computing the ordering. The spec notes the device can act like
> + * MEV=1 anyhow:
> + *
> + * Note: Software must expect, and be able to deal with, coalesced
> + * fault records even when MEV == 0.
> + */
> + safe_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
> }
> EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_update_safe);
>
> --
> 2.43.0
>
* Re: [PATCH rc v4 3/4] iommu/arm-smmu-v3: Mark STE EATS safe when computing the update sequence
2025-12-17 4:26 ` [PATCH rc v4 3/4] iommu/arm-smmu-v3: Mark STE EATS " Nicolin Chen
@ 2025-12-18 16:42 ` Mostafa Saleh
2025-12-18 17:32 ` Nicolin Chen
0 siblings, 1 reply; 13+ messages in thread
From: Mostafa Saleh @ 2025-12-18 16:42 UTC (permalink / raw)
To: Nicolin Chen
Cc: jgg, will, robin.murphy, joro, linux-arm-kernel, iommu,
linux-kernel, skolothumtho, praan, xueshuai
On Tue, Dec 16, 2025 at 08:26:01PM -0800, Nicolin Chen wrote:
> From: Jason Gunthorpe <jgg@nvidia.com>
>
> If a VM wants to toggle EATS off at the same time as changing the CFG, the
> hypervisor will see EATS change to 0 and insert a V=0 breaking update into
> the STE even though the VM did not ask for that.
>
> On bare metal, EATS is ignored when CFG=ABORT/BYPASS, which is why this
> does not cause a problem until nesting, where CFG is always a variation of
> S2 translation that does use EATS.
>
> Relax the rules for EATS sequencing: we don't need it to be exact, because
> the enclosing code will always disable ATS at the PCI device if we are
> changing EATS. This ensures there are no ATS transactions that can race
> with an EATS change, so we don't need to carefully sequence these bits.
>
> Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
> Cc: stable@vger.kernel.org
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index 12a9669bcc83..a3b29ad20a82 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -1095,6 +1095,15 @@ void arm_smmu_get_ste_update_safe(__le64 *safe_bits)
> * fault records even when MEV == 0.
> */
> safe_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
> +
> + /*
> + * EATS is used to reject and control the ATS behavior of the device. If
> + * we are changing it away from 0 then we already trust the device to
> + * use ATS properly and we have sequenced the device's ATS enable in PCI
> + * config space to prevent it from issuing ATS while we are changing
> + * EATS.
> + */
I am not sure about this one. Is it only about trusting the device?
I'd be worried about cases where we switch domains, which means that the
HW briefly observes EATS=1 while it was not intended, especially since
EATS is in a different DWORD from S2TTB and CDptr. All the IOMMUFD/VFIO
stuff makes it harder to reason about. But I can't come up with an
example that breaks this.
Thanks,
Mostafa
> + safe_bits[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
> }
> EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_update_safe);
>
> --
> 2.43.0
>
* Re: [PATCH rc v4 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage
2025-12-17 4:26 ` [PATCH rc v4 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage Nicolin Chen
@ 2025-12-18 16:47 ` Mostafa Saleh
2025-12-18 17:35 ` Nicolin Chen
0 siblings, 1 reply; 13+ messages in thread
From: Mostafa Saleh @ 2025-12-18 16:47 UTC (permalink / raw)
To: Nicolin Chen
Cc: jgg, will, robin.murphy, joro, linux-arm-kernel, iommu,
linux-kernel, skolothumtho, praan, xueshuai
On Tue, Dec 16, 2025 at 08:26:02PM -0800, Nicolin Chen wrote:
> An STE in the nested case requires both S1 and S2 fields, which makes it
> different from the existing test cases.
>
> Add coverage for the previously failing transitions between S2-only and
> S1+S2 STEs.
>
> Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
> .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c | 46 +++++++++++++++++++
> 1 file changed, 46 insertions(+)
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> index 5db14718fdd6..8255a02f4efa 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> @@ -33,8 +33,12 @@ static struct mm_struct sva_mm = {
> enum arm_smmu_test_master_feat {
> ARM_SMMU_MASTER_TEST_ATS = BIT(0),
> ARM_SMMU_MASTER_TEST_STALL = BIT(1),
> + ARM_SMMU_MASTER_TEST_NESTED = BIT(2),
> };
>
> +static void arm_smmu_test_make_s2_ste(struct arm_smmu_ste *ste,
> + enum arm_smmu_test_master_feat feat);
> +
> static bool arm_smmu_entry_differs_in_used_bits(const __le64 *entry,
> const __le64 *used_bits,
> const __le64 *target,
> @@ -197,6 +201,17 @@ static void arm_smmu_test_make_cdtable_ste(struct arm_smmu_ste *ste,
> };
>
> arm_smmu_make_cdtable_ste(ste, &master, ats_enabled, s1dss);
> + if (feat & ARM_SMMU_MASTER_TEST_NESTED) {
> + struct arm_smmu_ste s2ste;
> + int i;
> +
> + arm_smmu_test_make_s2_ste(&s2ste, ARM_SMMU_MASTER_TEST_ATS);
Shouldn't that be conditional on "ats_enabled"? I see the callers of the
new tests already set ARM_SMMU_MASTER_TEST_ATS.
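Something like this, perhaps (untested, just to illustrate):

	arm_smmu_test_make_s2_ste(&s2ste,
				  ats_enabled ? ARM_SMMU_MASTER_TEST_ATS : 0);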
Thanks,
Mostafa
> + ste->data[0] |= cpu_to_le64(
> + FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_NESTED));
> + ste->data[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
> + for (i = 2; i < NUM_ENTRY_QWORDS; i++)
> + ste->data[i] = s2ste.data[i];
> + }
> }
>
> static void arm_smmu_v3_write_ste_test_bypass_to_abort(struct kunit *test)
> @@ -554,6 +569,35 @@ static void arm_smmu_v3_write_ste_test_s2_to_s1_stall(struct kunit *test)
> NUM_EXPECTED_SYNCS(3));
> }
>
> +static void
> +arm_smmu_v3_write_ste_test_nested_s1dssbypass_to_s1bypass(struct kunit *test)
> +{
> + struct arm_smmu_ste s1_ste;
> + struct arm_smmu_ste s2_ste;
> +
> + arm_smmu_test_make_cdtable_ste(
> + &s1_ste, STRTAB_STE_1_S1DSS_BYPASS, fake_cdtab_dma_addr,
> + ARM_SMMU_MASTER_TEST_ATS | ARM_SMMU_MASTER_TEST_NESTED);
> + arm_smmu_test_make_s2_ste(&s2_ste, 0);
> + /* Expect an additional sync to unset ignored bits: EATS and MEV */
> + arm_smmu_v3_test_ste_expect_hitless_transition(test, &s1_ste, &s2_ste,
> + NUM_EXPECTED_SYNCS(3));
> +}
> +
> +static void
> +arm_smmu_v3_write_ste_test_nested_s1bypass_to_s1dssbypass(struct kunit *test)
> +{
> + struct arm_smmu_ste s1_ste;
> + struct arm_smmu_ste s2_ste;
> +
> + arm_smmu_test_make_cdtable_ste(
> + &s1_ste, STRTAB_STE_1_S1DSS_BYPASS, fake_cdtab_dma_addr,
> + ARM_SMMU_MASTER_TEST_ATS | ARM_SMMU_MASTER_TEST_NESTED);
> + arm_smmu_test_make_s2_ste(&s2_ste, 0);
> + arm_smmu_v3_test_ste_expect_hitless_transition(test, &s2_ste, &s1_ste,
> + NUM_EXPECTED_SYNCS(2));
> +}
> +
> static void arm_smmu_v3_write_cd_test_sva_clear(struct kunit *test)
> {
> struct arm_smmu_cd cd = {};
> @@ -600,6 +644,8 @@ static struct kunit_case arm_smmu_v3_test_cases[] = {
> KUNIT_CASE(arm_smmu_v3_write_cd_test_s1_change_asid),
> KUNIT_CASE(arm_smmu_v3_write_ste_test_s1_to_s2_stall),
> KUNIT_CASE(arm_smmu_v3_write_ste_test_s2_to_s1_stall),
> + KUNIT_CASE(arm_smmu_v3_write_ste_test_nested_s1dssbypass_to_s1bypass),
> + KUNIT_CASE(arm_smmu_v3_write_ste_test_nested_s1bypass_to_s1dssbypass),
> KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_clear),
> KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_release),
> {},
> --
> 2.43.0
>
* Re: [PATCH rc v4 3/4] iommu/arm-smmu-v3: Mark STE EATS safe when computing the update sequence
2025-12-18 16:42 ` Mostafa Saleh
@ 2025-12-18 17:32 ` Nicolin Chen
2025-12-18 18:01 ` Jason Gunthorpe
0 siblings, 1 reply; 13+ messages in thread
From: Nicolin Chen @ 2025-12-18 17:32 UTC (permalink / raw)
To: Mostafa Saleh
Cc: jgg, will, robin.murphy, joro, linux-arm-kernel, iommu,
linux-kernel, skolothumtho, praan, xueshuai
On Thu, Dec 18, 2025 at 04:42:40PM +0000, Mostafa Saleh wrote:
> On Tue, Dec 16, 2025 at 08:26:01PM -0800, Nicolin Chen wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> >
> > If a VM wants to toggle EATS off at the same time as changing the CFG, the
> > hypervisor will see EATS change to 0 and insert a V=0 breaking update into
> > the STE even though the VM did not ask for that.
> >
> > On bare metal, EATS is ignored when CFG=ABORT/BYPASS, which is why this
> > does not cause a problem until nesting, where CFG is always a variation of
> > S2 translation that does use EATS.
> >
> > Relax the rules for EATS sequencing: we don't need it to be exact, because
> > the enclosing code will always disable ATS at the PCI device if we are
> > changing EATS. This ensures there are no ATS transactions that can race
> > with an EATS change, so we don't need to carefully sequence these bits.
> >
> > Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> > Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
> > Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> > ---
> > drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 9 +++++++++
> > 1 file changed, 9 insertions(+)
> >
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > index 12a9669bcc83..a3b29ad20a82 100644
> > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > @@ -1095,6 +1095,15 @@ void arm_smmu_get_ste_update_safe(__le64 *safe_bits)
> > * fault records even when MEV == 0.
> > */
> > safe_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
> > +
> > + /*
> > + * EATS is used to reject and control the ATS behavior of the device. If
> > + * we are changing it away from 0 then we already trust the device to
> > + * use ATS properly and we have sequenced the device's ATS enable in PCI
> > + * config space to prevent it from issuing ATS while we are changing
> > + * EATS.
> > + */
>
> I am not sure about this one. Is it only about trusting the device?
>
> I'd be worried about cases where we switch domains, which means that the
> HW briefly observes EATS=1 while it was not intended, especially since
> EATS is in a different DWORD from S2TTB and CDptr. All the IOMMUFD/VFIO
> stuff makes it harder to reason about. But I can't come up with an
> example that breaks this.
Hmm..
I think the last line, that the driver controls pci_enable/disable_ats(),
should justify the whole thing? Are you worried about the device still
doing ATS after pci_disable_ats()?
Thanks
Nicolin
* Re: [PATCH rc v4 4/4] iommu/arm-smmu-v3-test: Add nested s1bypass/s1dssbypass coverage
2025-12-18 16:47 ` Mostafa Saleh
@ 2025-12-18 17:35 ` Nicolin Chen
0 siblings, 0 replies; 13+ messages in thread
From: Nicolin Chen @ 2025-12-18 17:35 UTC (permalink / raw)
To: Mostafa Saleh
Cc: jgg, will, robin.murphy, joro, linux-arm-kernel, iommu,
linux-kernel, skolothumtho, praan, xueshuai
On Thu, Dec 18, 2025 at 04:47:38PM +0000, Mostafa Saleh wrote:
> On Tue, Dec 16, 2025 at 08:26:02PM -0800, Nicolin Chen wrote:
> > An STE in the nested case requires both S1 and S2 fields, which makes it
> > different from the existing test cases.
> >
> > Add coverage for the previously failing transitions between S2-only and
> > S1+S2 STEs.
> >
> > Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
> > Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> > ---
> > .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c | 46 +++++++++++++++++++
> > 1 file changed, 46 insertions(+)
> >
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> > index 5db14718fdd6..8255a02f4efa 100644
> > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> > @@ -33,8 +33,12 @@ static struct mm_struct sva_mm = {
> > enum arm_smmu_test_master_feat {
> > ARM_SMMU_MASTER_TEST_ATS = BIT(0),
> > ARM_SMMU_MASTER_TEST_STALL = BIT(1),
> > + ARM_SMMU_MASTER_TEST_NESTED = BIT(2),
> > };
> >
> > +static void arm_smmu_test_make_s2_ste(struct arm_smmu_ste *ste,
> > + enum arm_smmu_test_master_feat feat);
> > +
> > static bool arm_smmu_entry_differs_in_used_bits(const __le64 *entry,
> > const __le64 *used_bits,
> > const __le64 *target,
> > @@ -197,6 +201,17 @@ static void arm_smmu_test_make_cdtable_ste(struct arm_smmu_ste *ste,
> > };
> >
> > arm_smmu_make_cdtable_ste(ste, &master, ats_enabled, s1dss);
> > + if (feat & ARM_SMMU_MASTER_TEST_NESTED) {
> > + struct arm_smmu_ste s2ste;
> > + int i;
> > +
> > + arm_smmu_test_make_s2_ste(&s2ste, ARM_SMMU_MASTER_TEST_ATS);
>
> Shouldn't that be conditional on "ats_enabled"? I see the callers of the
> new tests already set ARM_SMMU_MASTER_TEST_ATS.
I will fix that.
Thanks
Nicolin
* Re: [PATCH rc v4 3/4] iommu/arm-smmu-v3: Mark STE EATS safe when computing the update sequence
2025-12-18 17:32 ` Nicolin Chen
@ 2025-12-18 18:01 ` Jason Gunthorpe
0 siblings, 0 replies; 13+ messages in thread
From: Jason Gunthorpe @ 2025-12-18 18:01 UTC (permalink / raw)
To: Nicolin Chen
Cc: Mostafa Saleh, will, robin.murphy, joro, linux-arm-kernel, iommu,
linux-kernel, skolothumtho, praan, xueshuai
On Thu, Dec 18, 2025 at 09:32:43AM -0800, Nicolin Chen wrote:
> > > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > > index 12a9669bcc83..a3b29ad20a82 100644
> > > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > > @@ -1095,6 +1095,15 @@ void arm_smmu_get_ste_update_safe(__le64 *safe_bits)
> > > * fault records even when MEV == 0.
> > > */
> > > safe_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
> > > +
> > > + /*
> > > + * EATS is used to reject and control the ATS behavior of the device. If
> > > + * we are changing it away from 0 then we already trust the device to
> > > + * use ATS properly and we have sequenced the device's ATS enable in PCI
> > > + * config space to prevent it from issuing ATS while we are changing
> > > + * EATS.
> > > + */
> >
> > I am not sure about this one. Is it only about trusting the device?
Yes. The purpose of EATS=0 is to prevent the device from using ATS at
all - critically, including using translated TLPs, e.g. because it is an
untrusted device and the OS wants to prevent it from attacking the
system with direct access to physical memory.
If the device is trusted, then once we disable ATS it must stop issuing
ATS, so EATS=0 should never trigger a fault.
> > I'd be worried about cases where we switch domains, which means that the
> > HW briefly observes EATS=1 while it was not intended, especially since
> > EATS is in a different DWORD from S2TTB and CDptr.
Well, no, it means EATS is enabled a little bit earlier or disabled a
little bit later; it doesn't mean it was not intended.
The point is our rules for ATS say that the ATC is empty at this
moment and the device is not permitted to do any ATS fetches because
we won't issue any flushes.
Thus there can be no concurrent ATS traffic and we don't need to
exactly sequence EATS with the translation.
With virtualization the hypervisor is still the exclusive owner of ATS
and guarantees that EATS enable/disable is sequenced correctly with
ATC invalidation.
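Roughly, the ordering we rely on looks like this (a sketch with
simplified helper names, not the exact attach path):

	/* Turning ATS off around an STE change: */
	pci_disable_ats(pdev);		/* device must stop issuing ATS */
	atc_invalidate(master);		/* ATC is now empty, nothing races */
	write_ste(master, &target);	/* EATS may flip early or late */

	/* Turning ATS on: */
	write_ste(master, &target);	/* EATS is set before the device... */
	pci_enable_ats(pdev, ps);	/* ...is allowed to issue ATS */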
> I think the last line, that the driver controls pci_enable/disable_ats(),
> should justify the whole thing? Are you worried about the device still
> doing ATS after pci_disable_ats()?
Exactly right, and we can't worry about that, because that would mean the
whole ATC coherency system is broken.
Jason
* Re: [PATCH rc v4 1/4] iommu/arm-smmu-v3: Add update_safe bits to fix STE update sequence
2025-12-18 16:40 ` Mostafa Saleh
@ 2025-12-19 6:05 ` Nicolin Chen
0 siblings, 0 replies; 13+ messages in thread
From: Nicolin Chen @ 2025-12-19 6:05 UTC (permalink / raw)
To: Mostafa Saleh
Cc: jgg, will, robin.murphy, joro, linux-arm-kernel, iommu,
linux-kernel, skolothumtho, praan, xueshuai
Hi Mostafa,
On Thu, Dec 18, 2025 at 04:40:01PM +0000, Mostafa Saleh wrote:
> On Tue, Dec 16, 2025 at 08:25:59PM -0800, Nicolin Chen wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> >
> > C_BAD_STE was observed when updating a nested STE from S1-bypass mode to
> > S1DSS-bypass mode. As both modes enable S2, their used bits are slightly
> > different from those of the normal S1-bypass and S1DSS-bypass modes. As a
> > result, fields like MEV and EATS in the S2 used list marked word 1 as a
> > critical word that required an STE.V=0 transition, breaking the hitless
> > update.
> >
> > However, neither MEV nor EATS is critical for an STE update: one controls
> > the merging of event records, and the other controls ATS, which the driver
> > manages at the same time via pci_enable_ats().
> >
> > Add arm_smmu_get_ste_update_safe() to allow the STE update algorithm to
> > relax those fields, avoiding the STE update breakage.
> >
> > After this change, entry_set has no caller checking its return value, so
> > change it to void.
> >
> > Note that this change is required for both the MEV and EATS fields, which
> > were introduced in different kernel versions, so add get_update_safe()
> > first. MEV and EATS will be added to arm_smmu_get_ste_update_safe() in
> > separate patches.
> >
> > Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> > Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
> > Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> Reviewed-by: Mostafa Saleh <smostafa@google.com>
I failed to add your two review tags to the v5..
Would you mind replying to v5 with your tags once again?
Sorry for the inconvenience!
Nicolin