iommu.lists.linux-foundation.org archive mirror
* [PATCH v3 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array
@ 2025-10-15 19:42 Nicolin Chen
  2025-10-15 19:42 ` [PATCH v3 1/7] iommu/arm-smmu-v3: Explicitly set smmu_domain->stage for SVA Nicolin Chen
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-15 19:42 UTC (permalink / raw)
  To: will, jgg
  Cc: jean-philippe, robin.murphy, joro, balbirs, miko.lenczewski,
	peterz, kevin.tian, praan, linux-arm-kernel, iommu, linux-kernel

This work is based on Jason's design and algorithm. The implementation
follows his initial draft and subsequent revisions as well.

The new arm_smmu_invs array is an RCU-protected array, mutated when a
device attaches to the domain, and iterated when an invalidation is
required for IOPTE changes in this domain. This keeps the current
invalidation efficiency: an smp_mb() followed by a conditional rwlock
replaces the atomic/spinlock combination.

New data structures are defined for the array and its entries, each entry
representing an invalidation operation, such as S1_ASID, S2_VMID, or ATS.
The algorithm adds and deletes array entries efficiently, and keeps the
array sorted so as to group similar invalidations into batches.

During an invalidation, a new invalidation function iterates domain->invs
and converts each entry to the corresponding invalidation command(s). This
new function is fully compatible with all the existing use cases, allowing
a simple rework/replacement.
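
For reference, a minimal sketch of the reader-side fast path (mirroring
the arm_smmu_domain_inv_range() added later in this series; batching,
trash-entry skipping and command building are omitted):

	smp_mb();	/* pairs with the attach side, see race 2) below */
	rcu_read_lock();
	invs = rcu_dereference(smmu_domain->invs);
	if (invs->has_ats)	/* fences a concurrent ATS disable */
		read_lock(&invs->rwlock);
	for (cur = invs->inv; cur != invs->inv + READ_ONCE(invs->num_invs); cur++) {
		/* convert cur->type/cur->id to TLBI/ATC_INV commands */
	}
	if (invs->has_ats)
		read_unlock(&invs->rwlock);
	rcu_read_unlock();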

Some races to keep in mind:

1) A domain can be shared across SMMU instances. When an SMMU instance is
   removed, the updated invs array has to be synced via synchronize_rcu()
   to prevent a concurrent invalidation routine that is still accessing
   the old array from issuing commands to the removed SMMU instance.

2) When there are concurrent IOPTE changes (followed by invalidations) and
   a domain attachment, the new attachment must not go out of sync at the
   HW level, meaning that the STE store and the invalidation array load
   must be sequenced by the CPU's memory model.

3) When an ATS-enabled device attaches to a blocking domain, the core code
   requires a hard fence to ensure all ATS invalidations to the device are
   completed. Relying on RCU alone would require calling synchronize_rcu(),
   which can be too slow. Instead, when ATS is in use, hold a conditional
   rwlock until all concurrent invalidations are finished, as sketched
   below.
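
   A minimal sketch of that fence, based on arm_smmu_invs_unref() and the
   reader in this series (the write lock cannot be taken while a reader is
   still pushing ATC invalidations):

	/* invalidation fast path */
	read_lock(&invs->rwlock);
	/* ... push ATC_INV commands and a sync ... */
	read_unlock(&invs->rwlock);

	/* detach path, in arm_smmu_invs_unref() */
	write_lock_irqsave(&invs->rwlock, flags);	/* waits for readers */
	WRITE_ONCE(invs->num_invs, num_invs);
	write_unlock_irqrestore(&invs->rwlock, flags);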

Related future work and dependent projects:

 * NVIDIA is building systems with > 10 SMMU instances where > 8 are being
   used concurrently in a single VM. So having 8 copies of an identical S2
   page table is not efficient. Instead, all vSMMU instances should check
   compatibility on a shared S2 iopt, to eliminate 7 copies.

   Previous attempt based on the list/spinlock design:
     iommu/arm-smmu-v3: Allocate vmid per vsmmu instead of s2_parent
     https://lore.kernel.org/all/cover.1744692494.git.nicolinc@nvidia.com/
   can now adopt this invs array, avoiding the addition of complex
   lists/locks.

 * The guest support for BTM requires temporarily invalidating two ASIDs
   for a single instance. When it renumbers ASIDs this can now be done via
   the invs array.

 * SVA with multiple devices used by a single process (NVIDIA today has
   4-8) sequentially iterates the invalidations through all instances.
   This ignores the HW concurrency available in each instance. It would be
   nice not to spin on each sync but to move on and issue batches to the
   other instances as well. Reducing to a single SVA domain shared across
   instances is a prerequisite for looking at this.

This is on Github:
https://github.com/nicolinc/iommufd/commits/arm_smmu_invs-v3

Changelog
v3:
 * Add Reviewed/Acked-by from Jason and Balbir
 * Rebase on v6.18-rc1
 * Drop arm_smmu_invs_dbg()
 * Improve kdocs and inline/commit comments
 * Rename arm_smmu_invs_cmp to arm_smmu_inv_cmp
 * Rename arm_smmu_invs_merge_cmp to arm_smmu_invs_cmp
 * Call arm_smmu_invs_flush_iotlb_tags() from arm_smmu_invs_unref()
 * Unconditionally trim the invs->num_invs inside arm_smmu_invs_unref(),
   and simplify arm_smmu_install_old_domain_invs().
v2:
 https://lore.kernel.org/all/cover.1757373449.git.nicolinc@nvidia.com/
 * Rebase on v6.17-rc5
 * Improve kdocs and inline comments
 * Add arm_smmu_invs_dbg() for tracing
 * Use users refcount to replace todel flag
 * Initialize num_invs in arm_smmu_invs_alloc()
 * Add a struct arm_smmu_inv_state to group invs pointers
 * Add in struct arm_smmu_invs two flags (has_ats and old)
 * Rename master->invs to master->build_invs, and sort the array
 * Rework arm_smmu_domain_inv_range() and arm_smmu_invs_end_batch()
 * Copy entries by struct arm_smmu_inv in arm_smmu_master_build_invs()
 * Add arm_smmu_invs_flush_iotlb_tags() for IOTLB flush by last device
 * Rework three invs mutation helpers, and prioritize use the in-place
   mutation for detach
 * Take writer's lock unconditionally but keep it short, and only take
   reader's lock conditionally on a has_ats flag
v1:
 https://lore.kernel.org/all/cover.1755131672.git.nicolinc@nvidia.com/

Thanks
Nicolin

Jason Gunthorpe (1):
  iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array

Nicolin Chen (6):
  iommu/arm-smmu-v3: Explicitly set smmu_domain->stage for SVA
  iommu/arm-smmu-v3: Add an inline arm_smmu_domain_free()
  iommu/arm-smmu-v3: Pre-allocate a per-master invalidation array
  iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters
  iommu/arm-smmu-v3: Add arm_smmu_invs based arm_smmu_domain_inv_range()
  iommu/arm-smmu-v3: Perform per-domain invalidations using
    arm_smmu_invs

 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   | 135 ++-
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   |  32 +-
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c  |  93 ++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 839 +++++++++++++++---
 4 files changed, 924 insertions(+), 175 deletions(-)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH v3 1/7] iommu/arm-smmu-v3: Explicitly set smmu_domain->stage for SVA
  2025-10-15 19:42 [PATCH v3 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
@ 2025-10-15 19:42 ` Nicolin Chen
  2025-10-15 19:42 ` [PATCH v3 2/7] iommu/arm-smmu-v3: Add an inline arm_smmu_domain_free() Nicolin Chen
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-15 19:42 UTC (permalink / raw)
  To: will, jgg
  Cc: jean-philippe, robin.murphy, joro, balbirs, miko.lenczewski,
	peterz, kevin.tian, praan, linux-arm-kernel, iommu, linux-kernel

Both the ARM_SMMU_DOMAIN_S1 case and the SVA case use an ASID, requiring
ASID-based invalidation commands to flush the TLB.

Define an ARM_SMMU_DOMAIN_SVA stage, making it clear that the SVA case
shares the same path as the ARM_SMMU_DOMAIN_S1 case, which will become a
part of the routine that builds the new per-domain invalidation array.

There is no functional change.

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Balbir Singh <balbirs@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h     | 1 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 1 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c     | 3 +++
 3 files changed, 5 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index ae23aacc38402..5c0b38595d209 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -858,6 +858,7 @@ struct arm_smmu_master {
 enum arm_smmu_domain_stage {
 	ARM_SMMU_DOMAIN_S1 = 0,
 	ARM_SMMU_DOMAIN_S2,
+	ARM_SMMU_DOMAIN_SVA,
 };
 
 struct arm_smmu_domain {
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index 59a480974d80f..6097f1f540d87 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -346,6 +346,7 @@ struct iommu_domain *arm_smmu_sva_domain_alloc(struct device *dev,
 	 * ARM_SMMU_FEAT_RANGE_INV is present
 	 */
 	smmu_domain->domain.pgsize_bitmap = PAGE_SIZE;
+	smmu_domain->stage = ARM_SMMU_DOMAIN_SVA;
 	smmu_domain->smmu = smmu;
 
 	ret = xa_alloc(&arm_smmu_asid_xa, &asid, smmu_domain,
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 2a8b46b948f05..0312bb79f1247 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3064,6 +3064,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		arm_smmu_install_ste_for_dev(master, &target);
 		arm_smmu_clear_cd(master, IOMMU_NO_PASID);
 		break;
+	default:
+		WARN_ON(true);
+		break;
 	}
 
 	arm_smmu_attach_commit(&state);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 2/7] iommu/arm-smmu-v3: Add an inline arm_smmu_domain_free()
  2025-10-15 19:42 [PATCH v3 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
  2025-10-15 19:42 ` [PATCH v3 1/7] iommu/arm-smmu-v3: Explicitly set smmu_domain->stage for SVA Nicolin Chen
@ 2025-10-15 19:42 ` Nicolin Chen
  2025-10-15 19:42 ` [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array Nicolin Chen
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-15 19:42 UTC (permalink / raw)
  To: will, jgg
  Cc: jean-philippe, robin.murphy, joro, balbirs, miko.lenczewski,
	peterz, kevin.tian, praan, linux-arm-kernel, iommu, linux-kernel

There will be a few more things to free than the smmu_domain itself. So
keep a simple inline function in the header to share across files.

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h     | 5 +++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 2 +-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c     | 4 ++--
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 5c0b38595d209..96a23ca633cb6 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -954,6 +954,11 @@ extern struct mutex arm_smmu_asid_lock;
 
 struct arm_smmu_domain *arm_smmu_domain_alloc(void);
 
+static inline void arm_smmu_domain_free(struct arm_smmu_domain *smmu_domain)
+{
+	kfree(smmu_domain);
+}
+
 void arm_smmu_clear_cd(struct arm_smmu_master *master, ioasid_t ssid);
 struct arm_smmu_cd *arm_smmu_get_cd_ptr(struct arm_smmu_master *master,
 					u32 ssid);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index 6097f1f540d87..fc601b494e0af 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -365,6 +365,6 @@ struct iommu_domain *arm_smmu_sva_domain_alloc(struct device *dev,
 err_asid:
 	xa_erase(&arm_smmu_asid_xa, smmu_domain->cd.asid);
 err_free:
-	kfree(smmu_domain);
+	arm_smmu_domain_free(smmu_domain);
 	return ERR_PTR(ret);
 }
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 0312bb79f1247..00d43080efaa8 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2492,7 +2492,7 @@ static void arm_smmu_domain_free_paging(struct iommu_domain *domain)
 			ida_free(&smmu->vmid_map, cfg->vmid);
 	}
 
-	kfree(smmu_domain);
+	arm_smmu_domain_free(smmu_domain);
 }
 
 static int arm_smmu_domain_finalise_s1(struct arm_smmu_device *smmu,
@@ -3353,7 +3353,7 @@ arm_smmu_domain_alloc_paging_flags(struct device *dev, u32 flags,
 	return &smmu_domain->domain;
 
 err_free:
-	kfree(smmu_domain);
+	arm_smmu_domain_free(smmu_domain);
 	return ERR_PTR(ret);
 }
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
  2025-10-15 19:42 [PATCH v3 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
  2025-10-15 19:42 ` [PATCH v3 1/7] iommu/arm-smmu-v3: Explicitly set smmu_domain->stage for SVA Nicolin Chen
  2025-10-15 19:42 ` [PATCH v3 2/7] iommu/arm-smmu-v3: Add an inline arm_smmu_domain_free() Nicolin Chen
@ 2025-10-15 19:42 ` Nicolin Chen
  2025-10-16 19:18   ` kernel test robot
                     ` (3 more replies)
  2025-10-15 19:42 ` [PATCH v3 4/7] iommu/arm-smmu-v3: Pre-allocate a per-master invalidation array Nicolin Chen
                   ` (3 subsequent siblings)
  6 siblings, 4 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-15 19:42 UTC (permalink / raw)
  To: will, jgg
  Cc: jean-philippe, robin.murphy, joro, balbirs, miko.lenczewski,
	peterz, kevin.tian, praan, linux-arm-kernel, iommu, linux-kernel

From: Jason Gunthorpe <jgg@nvidia.com>

Create a new data structure to hold an array of invalidations that need to
be performed for the domain based on what masters are attached, to replace
the single smmu pointer and linked list of masters in the current design.

Each array entry holds one of the invalidation actions - S1_ASID, S2_VMID,
ATS or their variants - with the information needed to feed invalidation
commands to HW.
It is structured so that multiple SMMUs can participate in the same array,
removing one key limitation of the current system.

To maximize performance, a sorted array is used as the data structure. It
allows grouping SYNCs together to parallelize invalidations. For instance,
it will group all the ATS entries after the ASID/VMID entry, so they will
all be pushed to the PCI devices in parallel with one SYNC.

To minimize the locking cost on the invalidation fast path (reader of the
invalidation array), the array is managed with RCU.

Provide a set of APIs to add/delete entries to/from an array, which cover
cannot-fail attach cases, e.g. attaching to arm_smmu_blocked_domain. Also
add kunit coverage for those APIs.
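
A rough usage sketch under arm_smmu_asid_lock (the real install helpers
come in a later patch of this series; "to_merge" is a placeholder here):

	old_invs = rcu_dereference_protected(smmu_domain->invs,
					     lockdep_is_held(&arm_smmu_asid_lock));
	new_invs = arm_smmu_invs_merge(old_invs, to_merge);
	if (IS_ERR(new_invs))
		return PTR_ERR(new_invs);
	rcu_assign_pointer(smmu_domain->invs, new_invs);
	kfree_rcu(old_invs, rcu);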

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Co-developed-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  90 +++++++
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c  |  93 +++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 242 ++++++++++++++++++
 3 files changed, 425 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 96a23ca633cb6..d079c66a41e94 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -649,6 +649,93 @@ struct arm_smmu_cmdq_batch {
 	int				num;
 };
 
+/*
+ * The order here also determines the sequence in which commands are sent to the
+ * command queue. E.g. TLBI must be done before ATC_INV.
+ */
+enum arm_smmu_inv_type {
+	INV_TYPE_S1_ASID,
+	INV_TYPE_S2_VMID,
+	INV_TYPE_S2_VMID_S1_CLEAR,
+	INV_TYPE_ATS,
+	INV_TYPE_ATS_FULL,
+};
+
+struct arm_smmu_inv {
+	struct arm_smmu_device *smmu;
+	u8 type;
+	u8 size_opcode;
+	u8 nsize_opcode;
+	u32 id; /* ASID or VMID or SID */
+	union {
+		size_t pgsize; /* ARM_SMMU_FEAT_RANGE_INV */
+		u32 ssid; /* INV_TYPE_ATS */
+	};
+
+	refcount_t users; /* users=0 to mark as a trash to be purged */
+};
+
+static inline bool arm_smmu_inv_is_ats(struct arm_smmu_inv *inv)
+{
+	return inv->type == INV_TYPE_ATS || inv->type == INV_TYPE_ATS_FULL;
+}
+
+/**
+ * struct arm_smmu_invs - Per-domain invalidation array
+ * @num_invs: number of invalidations in the flexible array
+ * @rwlock: optional rwlock to fence ATS operations
+ * @has_ats: flag set if the array contains an INV_TYPE_ATS or INV_TYPE_ATS_FULL
+ * @rcu: rcu head for kfree_rcu()
+ * @inv: flexible invalidation array
+ *
+ * The arm_smmu_invs is an RCU data structure. During a ->attach_dev callback,
+ * arm_smmu_invs_merge(), arm_smmu_invs_unref() and arm_smmu_invs_purge() will
+ * be used to allocate a new copy of an old array for addition and deletion in
+ * the old domain's and new domain's invs arrays.
+ *
+ * arm_smmu_invs_unref() mutates a given array by internally reducing the
+ * users counts of the given entries. This exists to support a no-fail routine
+ * like attaching to an IOMMU_DOMAIN_BLOCKED. It can pair with a follow-up
+ * arm_smmu_invs_purge() call to generate a new clean array.
+ *
+ * A concurrent invalidation thread will push every invalidation described in
+ * the array into the command queue for each invalidation event. It is designed
+ * like this to optimize the invalidation fast path by avoiding locks.
+ *
+ * A domain can be shared across SMMU instances. When an instance gets removed,
+ * it would delete all the entries that belong to that SMMU instance. Then, a
+ * synchronize_rcu() would have to be called to sync the array, to prevent any
+ * concurrent invalidation thread accessing the old array from issuing commands
+ * to the command queue of a removed SMMU instance.
+ */
+struct arm_smmu_invs {
+	size_t num_invs;
+	rwlock_t rwlock;
+	bool has_ats;
+	struct rcu_head rcu;
+	struct arm_smmu_inv inv[];
+};
+
+static inline struct arm_smmu_invs *arm_smmu_invs_alloc(size_t num_invs)
+{
+	struct arm_smmu_invs *new_invs;
+
+	new_invs = kzalloc(struct_size(new_invs, inv, num_invs), GFP_KERNEL);
+	if (!new_invs)
+		return ERR_PTR(-ENOMEM);
+	rwlock_init(&new_invs->rwlock);
+	new_invs->num_invs = num_invs;
+	return new_invs;
+}
+
+struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
+					  struct arm_smmu_invs *to_merge);
+size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
+			   struct arm_smmu_invs *to_unref,
+			   void (*flush_fn)(struct arm_smmu_inv *inv));
+struct arm_smmu_invs *arm_smmu_invs_purge(struct arm_smmu_invs *invs,
+					  size_t num_trashes);
+
 struct arm_smmu_evtq {
 	struct arm_smmu_queue		q;
 	struct iopf_queue		*iopf;
@@ -875,6 +962,8 @@ struct arm_smmu_domain {
 
 	struct iommu_domain		domain;
 
+	struct arm_smmu_invs __rcu	*invs;
+
 	/* List of struct arm_smmu_master_domain */
 	struct list_head		devices;
 	spinlock_t			devices_lock;
@@ -956,6 +1045,7 @@ struct arm_smmu_domain *arm_smmu_domain_alloc(void);
 
 static inline void arm_smmu_domain_free(struct arm_smmu_domain *smmu_domain)
 {
+	kfree_rcu(smmu_domain->invs, rcu);
 	kfree(smmu_domain);
 }
 
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
index d2671bfd37981..a37a55480b3ff 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
@@ -567,6 +567,98 @@ static void arm_smmu_v3_write_cd_test_sva_release(struct kunit *test)
 						      NUM_EXPECTED_SYNCS(2));
 }
 
+static void arm_smmu_v3_invs_test_verify(struct kunit *test,
+					 struct arm_smmu_invs *invs, int num,
+					 const int *ids, const int *users)
+{
+	KUNIT_EXPECT_EQ(test, invs->num_invs, num);
+	while (num--) {
+		KUNIT_EXPECT_EQ(test, invs->inv[num].id, ids[num]);
+		KUNIT_EXPECT_EQ(test, refcount_read(&invs->inv[num].users),
+				users[num]);
+	}
+}
+
+static struct arm_smmu_invs invs1 = {
+	.num_invs = 3,
+	.inv = { { .type = INV_TYPE_S2_VMID, .id = 1, },
+		 { .type = INV_TYPE_S2_VMID, .id = 2, },
+		 { .type = INV_TYPE_S2_VMID, .id = 3, }, },
+};
+
+static struct arm_smmu_invs invs2 = {
+	.num_invs = 3,
+	.inv = { { .type = INV_TYPE_S2_VMID, .id = 1, }, /* duplicated */
+		 { .type = INV_TYPE_ATS, .id = 4, },
+		 { .type = INV_TYPE_ATS, .id = 5, }, },
+};
+
+static struct arm_smmu_invs invs3 = {
+	.num_invs = 3,
+	.inv = { { .type = INV_TYPE_S2_VMID, .id = 1, }, /* duplicated */
+		 { .type = INV_TYPE_ATS, .id = 5, }, /* recover a trash */
+		 { .type = INV_TYPE_ATS, .id = 6, }, },
+};
+
+static void arm_smmu_v3_invs_test(struct kunit *test)
+{
+	const int results1[2][3] = { { 1, 2, 3, }, { 1, 1, 1, }, };
+	const int results2[2][5] = { { 1, 2, 3, 4, 5, }, { 2, 1, 1, 1, 1, }, };
+	const int results3[2][3] = { { 1, 2, 3, }, { 1, 1, 1, }, };
+	const int results4[2][5] = { { 1, 2, 3, 5, 6, }, { 2, 1, 1, 1, 1, }, };
+	const int results5[2][5] = { { 1, 2, 3, 5, 6, }, { 1, 0, 0, 1, 1, }, };
+	const int results6[2][3] = { { 1, 5, 6, }, { 1, 1, 1, }, };
+	struct arm_smmu_invs *test_a, *test_b;
+	size_t num_trashes;
+
+	/* New array */
+	test_a = arm_smmu_invs_alloc(0);
+	KUNIT_EXPECT_EQ(test, test_a->num_invs, 0);
+
+	/* Test1: merge invs1 (new array) */
+	test_b = arm_smmu_invs_merge(test_a, &invs1);
+	kfree(test_a);
+	arm_smmu_v3_invs_test_verify(test, test_b, ARRAY_SIZE(results1[0]),
+				     results1[0], results1[1]);
+
+	/* Test2: merge invs2 (new array) */
+	test_a = arm_smmu_invs_merge(test_b, &invs2);
+	kfree(test_b);
+	arm_smmu_v3_invs_test_verify(test, test_a, ARRAY_SIZE(results2[0]),
+				     results2[0], results2[1]);
+
+	/* Test3: unref invs2 (same array) */
+	num_trashes = arm_smmu_invs_unref(test_a, &invs2, NULL);
+	arm_smmu_v3_invs_test_verify(test, test_a, ARRAY_SIZE(results3[0]),
+				     results3[0], results3[1]);
+	KUNIT_EXPECT_EQ(test, num_trashes, 0);
+
+	/* Test4: merge invs3 (new array) */
+	test_b = arm_smmu_invs_merge(test_a, &invs3);
+	kfree(test_a);
+	arm_smmu_v3_invs_test_verify(test, test_b, ARRAY_SIZE(results4[0]),
+				     results4[0], results4[1]);
+
+	/* Test5: unref invs1 (same array) */
+	num_trashes = arm_smmu_invs_unref(test_b, &invs1, NULL);
+	arm_smmu_v3_invs_test_verify(test, test_b, ARRAY_SIZE(results5[0]),
+				     results5[0], results5[1]);
+	KUNIT_EXPECT_EQ(test, num_trashes, 2);
+
+	/* Test6: purge test_b (new array) */
+	test_a = arm_smmu_invs_purge(test_b, num_trashes);
+	kfree(test_b);
+	arm_smmu_v3_invs_test_verify(test, test_a, ARRAY_SIZE(results6[0]),
+				     results6[0], results6[1]);
+
+	/* Test7: unref invs3 (same array) */
+	num_trashes = arm_smmu_invs_unref(test_a, &invs3, NULL);
+	KUNIT_EXPECT_EQ(test, test_a->num_invs, 0);
+	KUNIT_EXPECT_EQ(test, num_trashes, 0);
+
+	kfree(test_a);
+}
+
 static struct kunit_case arm_smmu_v3_test_cases[] = {
 	KUNIT_CASE(arm_smmu_v3_write_ste_test_bypass_to_abort),
 	KUNIT_CASE(arm_smmu_v3_write_ste_test_abort_to_bypass),
@@ -590,6 +682,7 @@ static struct kunit_case arm_smmu_v3_test_cases[] = {
 	KUNIT_CASE(arm_smmu_v3_write_ste_test_s2_to_s1_stall),
 	KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_clear),
 	KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_release),
+	KUNIT_CASE(arm_smmu_v3_invs_test),
 	{},
 };
 
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 00d43080efaa8..2a8a0c76af67b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -26,6 +26,7 @@
 #include <linux/pci.h>
 #include <linux/pci-ats.h>
 #include <linux/platform_device.h>
+#include <linux/sort.h>
 #include <linux/string_choices.h>
 #include <kunit/visibility.h>
 #include <uapi/linux/iommufd.h>
@@ -1015,6 +1016,239 @@ static void arm_smmu_page_response(struct device *dev, struct iopf_fault *unused
 	 */
 }
 
+/* Invalidation array manipulation functions */
+static int arm_smmu_inv_cmp(const struct arm_smmu_inv *l,
+			    const struct arm_smmu_inv *r)
+{
+	if (l->smmu != r->smmu)
+		return cmp_int((uintptr_t)l->smmu, (uintptr_t)r->smmu);
+	if (l->type != r->type)
+		return cmp_int(l->type, r->type);
+	return cmp_int(l->id, r->id);
+}
+
+/*
+ * Compare items of two sorted arrays. If one side is past the end of its
+ * array, return a result that lets the other side run out the iteration.
+ */
+static inline int arm_smmu_invs_cmp(const struct arm_smmu_invs *l, size_t l_idx,
+				    const struct arm_smmu_invs *r, size_t r_idx)
+{
+	if (l_idx != l->num_invs && r_idx != r->num_invs)
+		return arm_smmu_inv_cmp(&l->inv[l_idx], &r->inv[r_idx]);
+	if (l_idx != l->num_invs)
+		return -1;
+	return 1;
+}
+
+/**
+ * arm_smmu_invs_merge() - Merge @to_merge into @invs and generate a new array
+ * @invs: the base invalidation array
+ * @to_merge: an array of invalidations to merge
+ *
+ * Return: a newly allocated array on success, or ERR_PTR
+ *
+ * This function must be serialized (by a lock) against arm_smmu_invs_unref()
+ * and arm_smmu_invs_purge(); no lockdep assertion is made for the KUNIT test.
+ *
+ * Both @invs and @to_merge must be sorted, to ensure the returned array will be
+ * sorted as well.
+ *
+ * The caller is responsible for freeing both @invs and the returned new one.
+ *
+ * Entries marked as trash will be purged in the returned array.
+ */
+VISIBLE_IF_KUNIT
+struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
+					  struct arm_smmu_invs *to_merge)
+{
+	struct arm_smmu_invs *new_invs;
+	struct arm_smmu_inv *new;
+	size_t num_trashes = 0;
+	size_t num_adds = 0;
+	size_t i, j;
+
+	for (i = j = 0; i != invs->num_invs || j != to_merge->num_invs;) {
+		int cmp = arm_smmu_invs_cmp(invs, i, to_merge, j);
+
+		/* Skip any unwanted trash entry */
+		if (cmp < 0 && !refcount_read(&invs->inv[i].users)) {
+			num_trashes++;
+			i++;
+			continue;
+		}
+
+		if (cmp < 0) {
+			/* not found in to_merge, leave alone */
+			i++;
+		} else if (cmp == 0) {
+			/* same item */
+			i++;
+			j++;
+		} else {
+			/* unique to to_merge */
+			num_adds++;
+			j++;
+		}
+	}
+
+	new_invs = arm_smmu_invs_alloc(invs->num_invs - num_trashes + num_adds);
+	if (IS_ERR(new_invs))
+		return new_invs;
+
+	new = new_invs->inv;
+	for (i = j = 0; i != invs->num_invs || j != to_merge->num_invs;) {
+		int cmp = arm_smmu_invs_cmp(invs, i, to_merge, j);
+
+		if (cmp <= 0 && !refcount_read(&invs->inv[i].users)) {
+			i++;
+			continue;
+		}
+
+		if (cmp < 0) {
+			*new = invs->inv[i];
+			i++;
+		} else if (cmp == 0) {
+			*new = invs->inv[i];
+			refcount_inc(&new->users);
+			i++;
+			j++;
+		} else {
+			*new = to_merge->inv[j];
+			refcount_set(&new->users, 1);
+			j++;
+		}
+
+		if (new != new_invs->inv)
+			WARN_ON_ONCE(arm_smmu_inv_cmp(new - 1, new) == 1);
+		new++;
+	}
+
+	WARN_ON(new != new_invs->inv + new_invs->num_invs);
+
+	return new_invs;
+}
+EXPORT_SYMBOL_IF_KUNIT(arm_smmu_invs_merge);
+
+/**
+ * arm_smmu_invs_unref() - Find in @invs for all entries in @to_unref, decrease
+ *                         the user counts without deletions
+ * @invs: the base invalidation array
+ * @to_unref: an array of invalidations whose user counts will be decreased
+ * @flush_fn: A callback function to invoke, when an entry's user count reduces
+ *            to 0
+ *
+ * Return: the number of trash entries in the array, for arm_smmu_invs_purge()
+ *
+ * This function will not fail. Any entry with users=0 will be marked as trash.
+ * All trailing trash entries in the array will be dropped and the size of the
+ * array will be trimmed accordingly. Any trash entry in-between will remain in
+ * the @invs until being completely deleted by the next arm_smmu_invs_merge()
+ * or arm_smmu_invs_purge() call.
+ *
+ * This function must be serialized (by a lock) against arm_smmu_invs_merge()
+ * and arm_smmu_invs_purge(); no lockdep assertion is made for the KUNIT test.
+ *
+ * Note that the final @invs->num_invs might not reflect the actual number of
+ * invalidations due to trash entries. Any reader should take the read lock and
+ * check each entry's users counter while iterating up to the last entry.
+ */
+VISIBLE_IF_KUNIT
+size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
+			   struct arm_smmu_invs *to_unref,
+			   void (*flush_fn)(struct arm_smmu_inv *inv))
+{
+	unsigned long flags;
+	size_t num_trashes = 0;
+	size_t num_invs = 0;
+	size_t i, j;
+
+	for (i = j = 0; i != invs->num_invs || j != to_unref->num_invs;) {
+		int cmp = arm_smmu_invs_cmp(invs, i, to_unref, j);
+
+		/* Skip any existing trash entry */
+		if (cmp <= 0 && !refcount_read(&invs->inv[i].users)) {
+			num_trashes++;
+			i++;
+			continue;
+		}
+
+		if (cmp < 0) {
+			/* not found in to_unref, leave alone */
+			i++;
+			num_invs = i;
+		} else if (cmp == 0) {
+			/* same item */
+			if (refcount_dec_and_test(&invs->inv[i].users)) {
+				/* KUNIT test doesn't pass in a flush_fn */
+				if (flush_fn)
+					flush_fn(&invs->inv[i]);
+				num_trashes++;
+			} else {
+				num_invs = i + 1;
+			}
+			i++;
+			j++;
+		} else {
+			/* item in to_unref is not in invs or already a trash */
+			WARN_ON(true);
+			j++;
+		}
+	}
+
+	/* Exclude any trailing trash */
+	num_trashes -= invs->num_invs - num_invs;
+
+	/* The lock is required to fence concurrent ATS operations. */
+	write_lock_irqsave(&invs->rwlock, flags);
+	WRITE_ONCE(invs->num_invs, num_invs); /* Remove trailing trash entries */
+	write_unlock_irqrestore(&invs->rwlock, flags);
+
+	return num_trashes;
+}
+EXPORT_SYMBOL_IF_KUNIT(arm_smmu_invs_unref);
+
+/**
+ * arm_smmu_invs_purge() - Purge all the trash entries in the @invs
+ * @invs: the base invalidation array
+ * @num_trashes: expected number of trash entries, typically returned by a prior
+ *               arm_smmu_invs_unref() call
+ *
+ * Return: a newly allocated array with all the trash entries removed on
+ *         success, or NULL on failure
+ *
+ * This function must be serialized (by a lock) against arm_smmu_invs_merge()
+ * and arm_smmu_invs_unref(); no lockdep assertion is made for the KUNIT test.
+ *
+ * The caller is responsible for freeing both @invs and the returned new one.
+ */
+VISIBLE_IF_KUNIT
+struct arm_smmu_invs *arm_smmu_invs_purge(struct arm_smmu_invs *invs,
+					  size_t num_trashes)
+{
+	struct arm_smmu_invs *new_invs;
+	size_t i, j;
+
+	if (WARN_ON(invs->num_invs < num_trashes))
+		return NULL;
+
+	new_invs = arm_smmu_invs_alloc(invs->num_invs - num_trashes);
+	if (IS_ERR(new_invs))
+		return NULL;
+
+	for (i = j = 0; i != invs->num_invs; i++) {
+		if (!refcount_read(&invs->inv[i].users))
+			continue;
+		new_invs->inv[j] = invs->inv[i];
+		j++;
+	}
+
+	WARN_ON(j != new_invs->num_invs);
+	return new_invs;
+}
+EXPORT_SYMBOL_IF_KUNIT(arm_smmu_invs_purge);
+
 /* Context descriptor manipulation functions */
 void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
 {
@@ -2462,13 +2696,21 @@ static bool arm_smmu_enforce_cache_coherency(struct iommu_domain *domain)
 struct arm_smmu_domain *arm_smmu_domain_alloc(void)
 {
 	struct arm_smmu_domain *smmu_domain;
+	struct arm_smmu_invs *new_invs;
 
 	smmu_domain = kzalloc(sizeof(*smmu_domain), GFP_KERNEL);
 	if (!smmu_domain)
 		return ERR_PTR(-ENOMEM);
 
+	new_invs = arm_smmu_invs_alloc(0);
+	if (IS_ERR(new_invs)) {
+		kfree(smmu_domain);
+		return ERR_CAST(new_invs);
+	}
+
 	INIT_LIST_HEAD(&smmu_domain->devices);
 	spin_lock_init(&smmu_domain->devices_lock);
+	rcu_assign_pointer(smmu_domain->invs, new_invs);
 
 	return smmu_domain;
 }
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 4/7] iommu/arm-smmu-v3: Pre-allocate a per-master invalidation array
  2025-10-15 19:42 [PATCH v3 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
                   ` (2 preceding siblings ...)
  2025-10-15 19:42 ` [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array Nicolin Chen
@ 2025-10-15 19:42 ` Nicolin Chen
  2025-10-15 19:42 ` [PATCH v3 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters Nicolin Chen
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-15 19:42 UTC (permalink / raw)
  To: will, jgg
  Cc: jean-philippe, robin.murphy, joro, balbirs, miko.lenczewski,
	peterz, kevin.tian, praan, linux-arm-kernel, iommu, linux-kernel

When a master moves from an old domain to a new domain, it needs to build
invalidation arrays to delete and add entries from/onto the invalidation
arrays of those two domains, passed in via the to_unref and to_merge
arguments of arm_smmu_invs_unref() and arm_smmu_invs_merge() respectively.

Since master->num_streams can differ across masters, memory would have to
be allocated when building a to_merge/to_unref array, and that allocation
might fail with -ENOMEM.

On the other hand, an attachment to arm_smmu_blocked_domain must not fail,
so it is best to avoid any memory allocation in that path.

Pre-allocate a fixed-size invalidation array for every master. This array
will be used as scratch space, filled dynamically when building a to_merge
or to_unref invs array. Sort fwspec->ids in ascending order to satisfy the
sorted-input requirement of arm_smmu_invs_merge().
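
As a worked example of the sizing rule: a master with fwspec->num_ids == 3
attached to a nesting parent S2 needs at most 1 S2_VMID + 1
S2_VMID_S1_CLEAR + 3 ATS_FULL entries, i.e. 2 + num_ids = 5; a master
without ATS needs at most 2 entries (1 ASID entry, or the VMID pair).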

Co-developed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  8 ++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 27 +++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index d079c66a41e94..c43b2ffef8a4d 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -930,6 +930,14 @@ struct arm_smmu_master {
 	struct arm_smmu_device		*smmu;
 	struct device			*dev;
 	struct arm_smmu_stream		*streams;
+	/*
+	 * Scratch memory for a to_merge or to_unref array to build a per-domain
+	 * invalidation array. It is pre-allocated with enough entries for all
+	 * possible build scenarios. It can be used by only one caller at a time,
+	 * until the arm_smmu_invs_merge/unref() finishes. Protected by the
+	 * iommu_group mutex.
+	 */
+	struct arm_smmu_invs		*build_invs;
 	struct arm_smmu_vmaster		*vmaster; /* use smmu->streams_mutex */
 	/* Locked by the iommu core using the group mutex */
 	struct arm_smmu_ctx_desc_cfg	cd_table;
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 2a8a0c76af67b..97f52130992cd 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3687,12 +3687,22 @@ static int arm_smmu_init_sid_strtab(struct arm_smmu_device *smmu, u32 sid)
 	return 0;
 }
 
+static int arm_smmu_ids_cmp(const void *_l, const void *_r)
+{
+	const typeof_member(struct iommu_fwspec, ids[0]) *l = _l;
+	const typeof_member(struct iommu_fwspec, ids[0]) *r = _r;
+
+	return cmp_int(*l, *r);
+}
+
 static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
 				  struct arm_smmu_master *master)
 {
 	int i;
 	int ret = 0;
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev);
+	bool ats_supported = dev_is_pci(master->dev) &&
+			     pci_ats_supported(to_pci_dev(master->dev));
 
 	master->streams = kcalloc(fwspec->num_ids, sizeof(*master->streams),
 				  GFP_KERNEL);
@@ -3700,6 +3710,21 @@ static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
 		return -ENOMEM;
 	master->num_streams = fwspec->num_ids;
 
+	if (!ats_supported) {
+		/* Base case has 1 ASID entry or maximum 2 VMID entries */
+		master->build_invs = arm_smmu_invs_alloc(2);
+	} else {
+		/* Put the ids into order for sorted to_merge/to_unref arrays */
+		sort_nonatomic(fwspec->ids, fwspec->num_ids,
+			       sizeof(fwspec->ids[0]), arm_smmu_ids_cmp, NULL);
+		/* ATS case adds num_ids of entries, on top of the base case */
+		master->build_invs = arm_smmu_invs_alloc(2 + fwspec->num_ids);
+	}
+	if (IS_ERR(master->build_invs)) {
+		kfree(master->streams);
+		return PTR_ERR(master->build_invs);
+	}
+
 	mutex_lock(&smmu->streams_mutex);
 	for (i = 0; i < fwspec->num_ids; i++) {
 		struct arm_smmu_stream *new_stream = &master->streams[i];
@@ -3737,6 +3762,7 @@ static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
 		for (i--; i >= 0; i--)
 			rb_erase(&master->streams[i].node, &smmu->streams);
 		kfree(master->streams);
+		kfree(master->build_invs);
 	}
 	mutex_unlock(&smmu->streams_mutex);
 
@@ -3758,6 +3784,7 @@ static void arm_smmu_remove_master(struct arm_smmu_master *master)
 	mutex_unlock(&smmu->streams_mutex);
 
 	kfree(master->streams);
+	kfree(master->build_invs);
 }
 
 static struct iommu_device *arm_smmu_probe_device(struct device *dev)
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters
  2025-10-15 19:42 [PATCH v3 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
                   ` (3 preceding siblings ...)
  2025-10-15 19:42 ` [PATCH v3 4/7] iommu/arm-smmu-v3: Pre-allocate a per-master invalidation array Nicolin Chen
@ 2025-10-15 19:42 ` Nicolin Chen
  2025-10-17 16:03   ` kernel test robot
  2025-10-15 19:42 ` [PATCH v3 6/7] iommu/arm-smmu-v3: Add arm_smmu_invs based arm_smmu_domain_inv_range() Nicolin Chen
  2025-10-15 19:42 ` [PATCH v3 7/7] iommu/arm-smmu-v3: Perform per-domain invalidations using arm_smmu_invs Nicolin Chen
  6 siblings, 1 reply; 20+ messages in thread
From: Nicolin Chen @ 2025-10-15 19:42 UTC (permalink / raw)
  To: will, jgg
  Cc: jean-philippe, robin.murphy, joro, balbirs, miko.lenczewski,
	peterz, kevin.tian, praan, linux-arm-kernel, iommu, linux-kernel

Update the invs array with the invalidations required by each domain type
during attachment operations.

Only an SVA domain or a paging domain will have an invs array:
 a. SVA domain will add an INV_TYPE_S1_ASID per SMMU and an INV_TYPE_ATS
    per SID

 b. Non-nesting-parent paging domain with no ATS-enabled master will add
    a single INV_TYPE_S1_ASID or INV_TYPE_S2_VMID per SMMU

 c. Non-nesting-parent paging domain with ATS-enabled master(s) will do
    (b) and add an INV_TYPE_ATS per SID

 d. Nesting-parent paging domain will add an INV_TYPE_S2_VMID followed by
    an INV_TYPE_S2_VMID_S1_CLEAR per vSMMU. For an ATS-enabled master, it
    will add an INV_TYPE_ATS_FULL per SID

 Note that case #d prepares for a future implementation of VMID allocation,
 which requires a follow-up series for S2 domain sharing: when a nesting
 parent domain is attached through a vSMMU instance using a nested domain,
 the VMID will be allocated per vSMMU instance, v.s. currently per S2
 domain.
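
For illustration of case #c, an ATS-enabled master with two SIDs attached
to an S1 paging domain would contribute a sorted run like this, with
hypothetical asid/sid/ssid values:

	{ .type = INV_TYPE_S1_ASID, .id = asid }
	{ .type = INV_TYPE_ATS,     .id = sid0, .ssid = ssid }
	{ .type = INV_TYPE_ATS,     .id = sid1, .ssid = ssid }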

The per-domain invalidation is not needed until the domain is attached to
a master (when it can start to use the TLB). This makes it possible to
attach the domain to multiple SMMUs and avoids unnecessary invalidation
overhead during teardown if no STEs/CDs refer to the domain. It also means
that when the last device is detached, the old domain must flush its ASID
or VMID, since any new iommu_unmap() call would not trigger invalidations
given an empty domain->invs array.

Introduce some arm_smmu_invs helper functions for building scratch arrays,
preparing and installing old/new domain's invalidation arrays.

Co-developed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  17 ++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 242 +++++++++++++++++++-
 2 files changed, 258 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index c43b2ffef8a4d..b1dbcc6747baa 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -1093,6 +1093,21 @@ static inline bool arm_smmu_master_canwbs(struct arm_smmu_master *master)
 	       IOMMU_FWSPEC_PCI_RC_CANWBS;
 }
 
+/**
+ * struct arm_smmu_inv_state - Per-domain invalidation array state
+ * @invs_ptr: points to the domain->invs (unwinding nesting/etc.) or is NULL if
+ *            no change should be made
+ * @old_invs: the original invs array
+ * @new_invs: for new domain, this is the new invs array to update domain->invs;
+ *            for old domain, this is the master->build_invs to pass in as the
+ *            to_unref argument to an arm_smmu_invs_unref() call
+ */
+struct arm_smmu_inv_state {
+	struct arm_smmu_invs **invs_ptr;
+	struct arm_smmu_invs *old_invs;
+	struct arm_smmu_invs *new_invs;
+};
+
 struct arm_smmu_attach_state {
 	/* Inputs */
 	struct iommu_domain *old_domain;
@@ -1102,6 +1117,8 @@ struct arm_smmu_attach_state {
 	ioasid_t ssid;
 	/* Resulting state */
 	struct arm_smmu_vmaster *vmaster;
+	struct arm_smmu_inv_state old_domain_invst;
+	struct arm_smmu_inv_state new_domain_invst;
 	bool ats_enabled;
 };
 
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 97f52130992cd..9f7945c98e3c7 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3052,6 +3052,97 @@ static void arm_smmu_disable_iopf(struct arm_smmu_master *master,
 		iopf_queue_remove_device(master->smmu->evtq.iopf, master->dev);
 }
 
+/*
+ * Use the preallocated scratch array at master->build_invs, to build a to_merge
+ * or to_unref array, to pass into a following arm_smmu_invs_merge/unref() call.
+ *
+ * Do not free the returned invs array. It is reused, and will be overwritten by
+ * the next arm_smmu_master_build_invs() call.
+ */
+static struct arm_smmu_invs *
+arm_smmu_master_build_invs(struct arm_smmu_master *master, bool ats_enabled,
+			   ioasid_t ssid, struct arm_smmu_domain *smmu_domain)
+{
+	const bool e2h = master->smmu->features & ARM_SMMU_FEAT_E2H;
+	struct arm_smmu_invs *build_invs = master->build_invs;
+	const bool nesting = smmu_domain->nest_parent;
+	struct arm_smmu_inv *cur;
+
+	iommu_group_mutex_assert(master->dev);
+
+	cur = build_invs->inv;
+
+	switch (smmu_domain->stage) {
+	case ARM_SMMU_DOMAIN_SVA:
+	case ARM_SMMU_DOMAIN_S1:
+		*cur = (struct arm_smmu_inv){
+			.smmu = master->smmu,
+			.type = INV_TYPE_S1_ASID,
+			.id = smmu_domain->cd.asid,
+			.size_opcode = e2h ? CMDQ_OP_TLBI_EL2_VA :
+					     CMDQ_OP_TLBI_NH_VA,
+			.nsize_opcode = e2h ? CMDQ_OP_TLBI_EL2_ASID :
+					      CMDQ_OP_TLBI_NH_ASID
+		};
+		break;
+	case ARM_SMMU_DOMAIN_S2:
+		*cur = (struct arm_smmu_inv){
+			.smmu = master->smmu,
+			.type = INV_TYPE_S2_VMID,
+			.id = smmu_domain->s2_cfg.vmid,
+			.size_opcode = CMDQ_OP_TLBI_S2_IPA,
+			.nsize_opcode = CMDQ_OP_TLBI_S12_VMALL,
+		};
+		break;
+	default:
+		WARN_ON(true);
+		return NULL;
+	}
+
+	/* Range-based invalidation requires the leaf pgsize for calculation */
+	if (master->smmu->features & ARM_SMMU_FEAT_RANGE_INV)
+		cur->pgsize = __ffs(smmu_domain->domain.pgsize_bitmap);
+	cur++;
+
+	/* All the nested S1 ASIDs have to be flushed when S2 parent changes */
+	if (nesting) {
+		*cur = (struct arm_smmu_inv){
+			.smmu = master->smmu,
+			.type = INV_TYPE_S2_VMID_S1_CLEAR,
+			.id = smmu_domain->s2_cfg.vmid,
+			.size_opcode = CMDQ_OP_TLBI_NH_ALL,
+			.nsize_opcode = CMDQ_OP_TLBI_NH_ALL,
+		};
+		cur++;
+	}
+
+	if (ats_enabled) {
+		size_t i;
+
+		for (i = 0; i < master->num_streams; i++) {
+			/*
+			 * If an S2 used as a nesting parent is changed we have
+			 * no option but to completely flush the ATC.
+			 */
+			*cur = (struct arm_smmu_inv){
+				.smmu = master->smmu,
+				.type = nesting ? INV_TYPE_ATS_FULL :
+						  INV_TYPE_ATS,
+				.id = master->streams[i].id,
+				.ssid = ssid,
+				.size_opcode = CMDQ_OP_ATC_INV,
+				.nsize_opcode = CMDQ_OP_ATC_INV,
+			};
+			cur++;
+		}
+	}
+
+	/* Note this build_invs must have been sorted */
+
+	build_invs->num_invs = cur - build_invs->inv;
+	return build_invs;
+}
+
 static void arm_smmu_remove_master_domain(struct arm_smmu_master *master,
 					  struct iommu_domain *domain,
 					  ioasid_t ssid)
@@ -3081,6 +3172,146 @@ static void arm_smmu_remove_master_domain(struct arm_smmu_master *master,
 	kfree(master_domain);
 }
 
+/*
+ * During attachment, the updates of the two domain->invs arrays are sequenced:
+ *  1. new domain updates its invs array, merging master->build_invs
+ *  2. new domain starts to include the master during its invalidation
+ *  3. master updates its STE switching from the old domain to the new domain
+ *  4. old domain still includes the master during its invalidation
+ *  5. old domain updates its invs array, unreferencing master->build_invs
+ *
+ * For 1 and 5, prepare the two updated arrays in advance, handling anything
+ * that can possibly fail, so the actual update in either 1 or 5 won't fail.
+ * arm_smmu_asid_lock ensures that the old invs in the domains are intact while
+ * we are sequencing to update them.
+ */
+static int arm_smmu_attach_prepare_invs(struct arm_smmu_attach_state *state,
+					struct arm_smmu_domain *new_smmu_domain)
+{
+	struct arm_smmu_domain *old_smmu_domain =
+		to_smmu_domain_devices(state->old_domain);
+	struct arm_smmu_master *master = state->master;
+	ioasid_t ssid = state->ssid;
+
+	/* A re-attach case doesn't need to update invs array */
+	if (new_smmu_domain == old_smmu_domain)
+		return 0;
+
+	/*
+	 * At this point a NULL domain indicates the domain doesn't use the
+	 * IOTLB, see to_smmu_domain_devices().
+	 */
+	if (new_smmu_domain) {
+		struct arm_smmu_inv_state *invst = &state->new_domain_invst;
+		struct arm_smmu_invs *build_invs;
+
+		invst->invs_ptr = &new_smmu_domain->invs;
+		invst->old_invs = rcu_dereference_protected(
+			new_smmu_domain->invs,
+			lockdep_is_held(&arm_smmu_asid_lock));
+		build_invs = arm_smmu_master_build_invs(
+			master, state->ats_enabled, ssid, new_smmu_domain);
+		if (!build_invs)
+			return -EINVAL;
+
+		invst->new_invs =
+			arm_smmu_invs_merge(invst->old_invs, build_invs);
+		if (IS_ERR(invst->new_invs))
+			return PTR_ERR(invst->new_invs);
+	}
+
+	if (old_smmu_domain) {
+		struct arm_smmu_inv_state *invst = &state->old_domain_invst;
+
+		invst->invs_ptr = &old_smmu_domain->invs;
+		invst->old_invs = rcu_dereference_protected(
+			old_smmu_domain->invs,
+			lockdep_is_held(&arm_smmu_asid_lock));
+		/* For old_smmu_domain, new_invs points to master->build_invs */
+		invst->new_invs = arm_smmu_master_build_invs(
+			master, master->ats_enabled, ssid, old_smmu_domain);
+	}
+
+	return 0;
+}
+
+/* Must be installed before arm_smmu_install_ste_for_dev() */
+static void
+arm_smmu_install_new_domain_invs(struct arm_smmu_attach_state *state)
+{
+	struct arm_smmu_inv_state *invst = &state->new_domain_invst;
+
+	if (!invst->invs_ptr)
+		return;
+
+	rcu_assign_pointer(*invst->invs_ptr, invst->new_invs);
+	/*
+	 * We are committed to updating the STE. Ensure the invalidation array
+	 * is visible to concurrent map/unmap threads, and acquire any racing
+	 * IOPTE updates.
+	 */
+	smp_mb();
+	kfree_rcu(invst->old_invs, rcu);
+}
+
+/*
+ * When an array entry's users count reaches zero, it means the ASID/VMID is no
+ * longer being invalidated by map/unmap and must be cleaned. The rule is that
+ * all ASIDs/VMIDs not in an invalidation array are left cleared in the IOTLB.
+ */
+static void arm_smmu_invs_flush_iotlb_tags(struct arm_smmu_inv *inv)
+{
+	struct arm_smmu_cmdq_ent cmd = {};
+
+	switch (inv->type) {
+	case INV_TYPE_S1_ASID:
+		cmd.tlbi.asid = inv->id;
+		break;
+	case INV_TYPE_S2_VMID:
+		/* S2_VMID using nsize_opcode covers S2_VMID_S1_CLEAR */
+		cmd.tlbi.vmid = inv->id;
+		break;
+	default:
+		return;
+	}
+
+	cmd.opcode = inv->nsize_opcode;
+	arm_smmu_cmdq_issue_cmd_with_sync(inv->smmu, &cmd);
+}
+
+/* Should be installed after arm_smmu_install_ste_for_dev() */
+static void
+arm_smmu_install_old_domain_invs(struct arm_smmu_attach_state *state)
+{
+	struct arm_smmu_inv_state *invst = &state->old_domain_invst;
+	struct arm_smmu_invs *old_invs = invst->old_invs;
+	struct arm_smmu_invs *new_invs;
+	size_t num_trashes;
+
+	lockdep_assert_held(&arm_smmu_asid_lock);
+
+	if (!invst->invs_ptr)
+		return;
+
+	num_trashes = arm_smmu_invs_unref(old_invs, invst->new_invs,
+					  arm_smmu_invs_flush_iotlb_tags);
+	if (!num_trashes)
+		return;
+
+	new_invs = arm_smmu_invs_purge(old_invs, num_trashes);
+	if (!new_invs)
+		return;
+
+	rcu_assign_pointer(*invst->invs_ptr, new_invs);
+	/*
+	 * We are committed to updating the STE. Ensure the invalidation array
+	 * is visible to concurrent map/unmap threads, and acquire any racing
+	 * IOPTE updates.
+	 */
+	smp_mb();
+	kfree_rcu(old_invs, rcu);
+}
+
 /*
  * Start the sequence to attach a domain to a master. The sequence contains three
  * steps:
@@ -3138,12 +3369,16 @@ int arm_smmu_attach_prepare(struct arm_smmu_attach_state *state,
 				     arm_smmu_ats_supported(master);
 	}
 
+	ret = arm_smmu_attach_prepare_invs(state, smmu_domain);
+	if (ret)
+		return ret;
+
 	if (smmu_domain) {
 		if (new_domain->type == IOMMU_DOMAIN_NESTED) {
 			ret = arm_smmu_attach_prepare_vmaster(
 				state, to_smmu_nested_domain(new_domain));
 			if (ret)
-				return ret;
+				goto err_unprepare_invs;
 		}
 
 		master_domain = kzalloc(sizeof(*master_domain), GFP_KERNEL);
@@ -3191,6 +3426,8 @@ int arm_smmu_attach_prepare(struct arm_smmu_attach_state *state,
 			atomic_inc(&smmu_domain->nr_ats_masters);
 		list_add(&master_domain->devices_elm, &smmu_domain->devices);
 		spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+		arm_smmu_install_new_domain_invs(state);
 	}
 
 	if (!state->ats_enabled && master->ats_enabled) {
@@ -3210,6 +3447,8 @@ int arm_smmu_attach_prepare(struct arm_smmu_attach_state *state,
 	kfree(master_domain);
 err_free_vmaster:
 	kfree(state->vmaster);
+err_unprepare_invs:
+	kfree(state->new_domain_invst.new_invs);
 	return ret;
 }
 
@@ -3241,6 +3480,7 @@ void arm_smmu_attach_commit(struct arm_smmu_attach_state *state)
 	}
 
 	arm_smmu_remove_master_domain(master, state->old_domain, state->ssid);
+	arm_smmu_install_old_domain_invs(state);
 	master->ats_enabled = state->ats_enabled;
 }
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 6/7] iommu/arm-smmu-v3: Add arm_smmu_invs based arm_smmu_domain_inv_range()
  2025-10-15 19:42 [PATCH v3 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
                   ` (4 preceding siblings ...)
  2025-10-15 19:42 ` [PATCH v3 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters Nicolin Chen
@ 2025-10-15 19:42 ` Nicolin Chen
  2025-10-15 19:42 ` [PATCH v3 7/7] iommu/arm-smmu-v3: Perform per-domain invalidations using arm_smmu_invs Nicolin Chen
  6 siblings, 0 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-15 19:42 UTC (permalink / raw)
  To: will, jgg
  Cc: jean-philippe, robin.murphy, joro, balbirs, miko.lenczewski,
	peterz, kevin.tian, praan, linux-arm-kernel, iommu, linux-kernel

Each smmu_domain now has an arm_smmu_invs that specifies the invalidation
steps to perform after any change to the IOPTEs. This includes support for
basic ASID/VMID, the special case for nesting, and ATC invalidations.

Introduce a new arm_smmu_domain_inv helper iterating smmu_domain->invs to
convert the invalidation array to commands. An invalidation request with
no size specified means a full flush rather than a range-based one.

Take advantage of the sorted array to batch compatible operations to the
same SMMU. For instance, ATC invalidations for multiple SIDs can be pushed
as one batch.

ATC invalidations must be completed before the driver disables ATS;
otherwise the device is permitted to ignore any racing invalidation, which
would cause an SMMU timeout. The sequencing is done with a rwlock, where
holding the write side of the rwlock means that there are no outstanding
ATC invalidations. If ATS is not used the rwlock is ignored, similar to
the existing code.
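
As an illustration (hypothetical layout), a domain spanning two SMMUs with
ATS enabled on the first one is cut into three batches, each submitted
with its own sync:

	[ smmu0: S1_ASID ] [ smmu0: ATS sid0, ATS sid1 ] [ smmu1: S1_ASID ]
	    TLBI + sync            ATC_INV + sync             TLBI + sync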

Co-developed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |   9 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 218 ++++++++++++++++++--
 2 files changed, 214 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index b1dbcc6747baa..ab166de50f3e1 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -1078,6 +1078,15 @@ void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
 int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
 			    unsigned long iova, size_t size);
 
+void arm_smmu_domain_inv_range(struct arm_smmu_domain *smmu_domain,
+			       unsigned long iova, size_t size,
+			       unsigned int granule, bool leaf);
+
+static inline void arm_smmu_domain_inv(struct arm_smmu_domain *smmu_domain)
+{
+	arm_smmu_domain_inv_range(smmu_domain, 0, 0, 0, false);
+}
+
 void __arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu,
 			      struct arm_smmu_cmdq *cmdq);
 int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 9f7945c98e3c7..e74c788c23673 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2500,23 +2500,19 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	arm_smmu_atc_inv_domain(smmu_domain, 0, 0);
 }
 
-static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
-				     unsigned long iova, size_t size,
-				     size_t granule,
-				     struct arm_smmu_domain *smmu_domain)
+static void arm_smmu_cmdq_batch_add_range(struct arm_smmu_device *smmu,
+					  struct arm_smmu_cmdq_batch *cmds,
+					  struct arm_smmu_cmdq_ent *cmd,
+					  unsigned long iova, size_t size,
+					  size_t granule, size_t pgsize)
 {
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	unsigned long end = iova + size, num_pages = 0, tg = 0;
+	unsigned long end = iova + size, num_pages = 0, tg = pgsize;
 	size_t inv_range = granule;
-	struct arm_smmu_cmdq_batch cmds;
 
 	if (!size)
 		return;
 
 	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
-		/* Get the leaf page size */
-		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
-
 		num_pages = size >> tg;
 
 		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
@@ -2536,8 +2532,6 @@ static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
 			num_pages++;
 	}
 
-	arm_smmu_cmdq_batch_init(smmu, &cmds, cmd);
-
 	while (iova < end) {
 		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
 			/*
@@ -2565,9 +2559,26 @@ static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
 		}
 
 		cmd->tlbi.addr = iova;
-		arm_smmu_cmdq_batch_add(smmu, &cmds, cmd);
+		arm_smmu_cmdq_batch_add(smmu, cmds, cmd);
 		iova += inv_range;
 	}
+}
+
+static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
+				     unsigned long iova, size_t size,
+				     size_t granule,
+				     struct arm_smmu_domain *smmu_domain)
+{
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_cmdq_batch cmds;
+	size_t pgsize;
+
+	/* Get the leaf page size */
+	pgsize = __ffs(smmu_domain->domain.pgsize_bitmap);
+
+	arm_smmu_cmdq_batch_init(smmu, &cmds, cmd);
+	arm_smmu_cmdq_batch_add_range(smmu, &cmds, cmd, iova, size, granule,
+				      pgsize);
 	arm_smmu_cmdq_batch_submit(smmu, &cmds);
 }
 
@@ -2623,6 +2634,187 @@ void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
 	__arm_smmu_tlb_inv_range(&cmd, iova, size, granule, smmu_domain);
 }
 
+static bool arm_smmu_inv_size_too_big(struct arm_smmu_device *smmu, size_t size,
+				      size_t granule)
+{
+	size_t max_tlbi_ops;
+
+	/* 0 size means invalidate all */
+	if (!size || size == SIZE_MAX)
+		return true;
+
+	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV)
+		return false;
+
+	/*
+	 * Borrowed from the MAX_TLBI_OPS in arch/arm64/include/asm/tlbflush.h,
+	 * this is used as a threshold to replace "size_opcode" commands with a
+	 * single "nsize_opcode" command, when SMMU doesn't implement the range
+	 * invalidation feature, where there can be too many per-granule TLBIs,
+	 * resulting in a soft lockup.
+	 */
+	max_tlbi_ops = 1 << (ilog2(granule) - 3);
+	return size >= max_tlbi_ops * granule;
+}
+
+/* Used by non INV_TYPE_ATS* invalidations */
+static void arm_smmu_inv_to_cmdq_batch(struct arm_smmu_inv *inv,
+				       struct arm_smmu_cmdq_batch *cmds,
+				       struct arm_smmu_cmdq_ent *cmd,
+				       unsigned long iova, size_t size,
+				       unsigned int granule)
+{
+	if (arm_smmu_inv_size_too_big(inv->smmu, size, granule)) {
+		cmd->opcode = inv->nsize_opcode;
+		/* nsize_opcode always needs a sync, no batching */
+		arm_smmu_cmdq_issue_cmd_with_sync(inv->smmu, cmd);
+		return;
+	}
+
+	cmd->opcode = inv->size_opcode;
+	arm_smmu_cmdq_batch_add_range(inv->smmu, cmds, cmd, iova, size, granule,
+				      inv->pgsize);
+}
+
+static inline bool arm_smmu_invs_end_batch(struct arm_smmu_inv *cur,
+					   struct arm_smmu_inv *next)
+{
+	/* Changing smmu means changing command queue */
+	if (cur->smmu != next->smmu)
+		return true;
+	/* The batch for S2 TLBI must be done before nested S1 ASIDs */
+	if (cur->type != INV_TYPE_S2_VMID_S1_CLEAR &&
+	    next->type == INV_TYPE_S2_VMID_S1_CLEAR)
+		return true;
+	/* ATS must be after a sync of the S1/S2 invalidations */
+	if (!arm_smmu_inv_is_ats(cur) && arm_smmu_inv_is_ats(next))
+		return true;
+	return false;
+}
+
+void arm_smmu_domain_inv_range(struct arm_smmu_domain *smmu_domain,
+			       unsigned long iova, size_t size,
+			       unsigned int granule, bool leaf)
+{
+	struct arm_smmu_cmdq_batch cmds = {};
+	struct arm_smmu_invs *invs;
+	struct arm_smmu_inv *cur;
+	struct arm_smmu_inv *end;
+	bool locked = false;
+
+	/*
+	 * An invalidation request must follow some IOPTE change and then load
+	 * an invalidation array. In the meantime, a domain attachment mutates
+	 * the array and then stores an STE/CD asking SMMU HW to acquire those
+	 * changed IOPTEs. In other words, these two are interdependent and can
+	 * race.
+	 *
+	 * In a race, the RCU design (with its underlying memory barriers)
+	 * ensures the invalidation array is always updated before it is loaded.
+	 *
+	 * smp_mb() is used here, paired with the smp_mb() following the array
+	 * update in a concurrent attach, to ensure:
+	 *  - HW sees the new IOPTEs if it walks after STE installation
+	 *  - Invalidation thread sees the updated array with the new ASID.
+	 *
+	 *  [CPU0]                        | [CPU1]
+	 *                                |
+	 *  change IOPTEs and TLB flush:  |
+	 *  arm_smmu_domain_inv_range() { | arm_smmu_install_new_domain_invs {
+	 *    ...                         |   rcu_assign_pointer(new_invs);
+	 *    smp_mb(); // ensure IOPTEs  |   smp_mb(); // ensure new_invs
+	 *    ...                         |   kfree_rcu(old_invs, rcu);
+	 *    // load invalidation array  | }
+	 *    invs = rcu_dereference();   | arm_smmu_install_ste_for_dev {
+	 *                                |   STE = TTB0 // read new IOPTEs
+	 */
+	smp_mb();
+
+	rcu_read_lock();
+	invs = rcu_dereference(smmu_domain->invs);
+
+	/*
+	 * Avoid locking unless ATS is being used. No ATC invalidation can be
+	 * going on after a domain is detached.
+	 */
+	if (invs->has_ats) {
+		read_lock(&invs->rwlock);
+		locked = true;
+	}
+
+	cur = invs->inv;
+	end = cur + READ_ONCE(invs->num_invs);
+	/* Skip any leading entries marked as trash */
+	for (; cur != end; cur++)
+		if (refcount_read(&cur->users))
+			break;
+	while (cur != end) {
+		struct arm_smmu_device *smmu = cur->smmu;
+		struct arm_smmu_cmdq_ent cmd = {
+			/*
+			 * Pick size_opcode for arm_smmu_get_cmdq() to select
+			 * the CMDQ. It may be changed to nsize_opcode later,
+			 * which resolves to the same CMDQ pointer.
+			 */
+			.opcode = cur->size_opcode,
+		};
+		struct arm_smmu_inv *next;
+
+		if (!cmds.num)
+			arm_smmu_cmdq_batch_init(smmu, &cmds, &cmd);
+
+		switch (cur->type) {
+		case INV_TYPE_S1_ASID:
+			cmd.tlbi.asid = cur->id;
+			cmd.tlbi.leaf = leaf;
+			arm_smmu_inv_to_cmdq_batch(cur, &cmds, &cmd, iova, size,
+						   granule);
+			break;
+		case INV_TYPE_S2_VMID:
+			cmd.tlbi.vmid = cur->id;
+			cmd.tlbi.leaf = leaf;
+			arm_smmu_inv_to_cmdq_batch(cur, &cmds, &cmd, iova, size,
+						   granule);
+			break;
+		case INV_TYPE_S2_VMID_S1_CLEAR:
+			/* CMDQ_OP_TLBI_S12_VMALL already flushed S1 entries */
+			if (arm_smmu_inv_size_too_big(cur->smmu, size, granule))
+				break;	/* a continue would not advance cur */
+			cmd.tlbi.vmid = cur->id;
+			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
+			break;
+		case INV_TYPE_ATS:
+			arm_smmu_atc_inv_to_cmd(cur->ssid, iova, size, &cmd);
+			cmd.atc.sid = cur->id;
+			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
+			break;
+		case INV_TYPE_ATS_FULL:
+			arm_smmu_atc_inv_to_cmd(IOMMU_NO_PASID, 0, 0, &cmd);
+			cmd.atc.sid = cur->id;
+			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
+			break;
+		default:
+			WARN_ON_ONCE(1);
+			break;	/* a continue would not advance cur */
+		}
+
+		/* Skip any trash entries in between */
+		for (next = cur + 1; next != end; next++)
+			if (refcount_read(&next->users))
+				break;
+
+		if (cmds.num &&
+		    (next == end || arm_smmu_invs_end_batch(cur, next))) {
+			arm_smmu_cmdq_batch_submit(smmu, &cmds);
+			cmds.num = 0;
+		}
+		cur = next;
+	}
+	if (locked)
+		read_unlock(&invs->rwlock);
+	rcu_read_unlock();
+}
+
 static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
 					 unsigned long iova, size_t granule,
 					 void *cookie)
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 7/7] iommu/arm-smmu-v3: Perform per-domain invalidations using arm_smmu_invs
  2025-10-15 19:42 [PATCH v3 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
                   ` (5 preceding siblings ...)
  2025-10-15 19:42 ` [PATCH v3 6/7] iommu/arm-smmu-v3: Add arm_smmu_invs based arm_smmu_domain_inv_range() Nicolin Chen
@ 2025-10-15 19:42 ` Nicolin Chen
  6 siblings, 0 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-15 19:42 UTC (permalink / raw)
  To: will, jgg
  Cc: jean-philippe, robin.murphy, joro, balbirs, miko.lenczewski,
	peterz, kevin.tian, praan, linux-arm-kernel, iommu, linux-kernel

Replace the old invalidation functions with arm_smmu_domain_inv_range() in
all the existing invalidation routines, and delete the old functions.

The new arm_smmu_domain_inv_range() handles the CMDQ_MAX_TLBI_OPS threshold
as well, so drop it from the SVA function.

Since arm_smmu_cmdq_batch_add_range() has only one caller now, and it must
be given a valid size, add a WARN_ON_ONCE to catch any missed case.

Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |   7 -
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   |  29 +--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 165 +-----------------
 3 files changed, 11 insertions(+), 190 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index ab166de50f3e1..2b24e3a9bc8d4 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -1071,13 +1071,6 @@ int arm_smmu_set_pasid(struct arm_smmu_master *master,
 		       struct arm_smmu_domain *smmu_domain, ioasid_t pasid,
 		       struct arm_smmu_cd *cd, struct iommu_domain *old);
 
-void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
-void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
-				 size_t granule, bool leaf,
-				 struct arm_smmu_domain *smmu_domain);
-int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
-			    unsigned long iova, size_t size);
-
 void arm_smmu_domain_inv_range(struct arm_smmu_domain *smmu_domain,
 			       unsigned long iova, size_t size,
 			       unsigned int granule, bool leaf);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index fc601b494e0af..048b53f79b144 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -122,15 +122,6 @@ void arm_smmu_make_sva_cd(struct arm_smmu_cd *target,
 }
 EXPORT_SYMBOL_IF_KUNIT(arm_smmu_make_sva_cd);
 
-/*
- * Cloned from the MAX_TLBI_OPS in arch/arm64/include/asm/tlbflush.h, this
- * is used as a threshold to replace per-page TLBI commands to issue in the
- * command queue with an address-space TLBI command, when SMMU w/o a range
- * invalidation feature handles too many per-page TLBI commands, which will
- * otherwise result in a soft lockup.
- */
-#define CMDQ_MAX_TLBI_OPS		(1 << (PAGE_SHIFT - 3))
-
 static void arm_smmu_mm_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn,
 						struct mm_struct *mm,
 						unsigned long start,
@@ -146,21 +137,8 @@ static void arm_smmu_mm_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn,
 	 * range. So do a simple translation here by calculating size correctly.
 	 */
 	size = end - start;
-	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_RANGE_INV)) {
-		if (size >= CMDQ_MAX_TLBI_OPS * PAGE_SIZE)
-			size = 0;
-	} else {
-		if (size == ULONG_MAX)
-			size = 0;
-	}
-
-	if (!size)
-		arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_domain->cd.asid);
-	else
-		arm_smmu_tlb_inv_range_asid(start, size, smmu_domain->cd.asid,
-					    PAGE_SIZE, false, smmu_domain);
 
-	arm_smmu_atc_inv_domain(smmu_domain, start, size);
+	arm_smmu_domain_inv_range(smmu_domain, start, size, PAGE_SIZE, false);
 }
 
 static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
@@ -191,8 +169,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 	}
 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
-	arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_domain->cd.asid);
-	arm_smmu_atc_inv_domain(smmu_domain, 0, 0);
+	arm_smmu_domain_inv(smmu_domain);
 }
 
 static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn)
@@ -301,7 +278,7 @@ static void arm_smmu_sva_domain_free(struct iommu_domain *domain)
 	/*
 	 * Ensure the ASID is empty in the iommu cache before allowing reuse.
 	 */
-	arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_domain->cd.asid);
+	arm_smmu_domain_inv(smmu_domain);
 
 	/*
 	 * Notice that the arm_smmu_mm_arch_invalidate_secondary_tlbs op can
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index e74c788c23673..776e1ec88da7a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1250,16 +1250,6 @@ struct arm_smmu_invs *arm_smmu_invs_purge(struct arm_smmu_invs *invs,
 EXPORT_SYMBOL_IF_KUNIT(arm_smmu_invs_purge);
 
 /* Context descriptor manipulation functions */
-void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
-{
-	struct arm_smmu_cmdq_ent cmd = {
-		.opcode	= smmu->features & ARM_SMMU_FEAT_E2H ?
-			CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID,
-		.tlbi.asid = asid,
-	};
-
-	arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
-}
 
 /*
  * Based on the value of ent report which bits of the STE the HW will access. It
@@ -2414,74 +2404,10 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master,
 	return arm_smmu_cmdq_batch_submit(master->smmu, &cmds);
 }
 
-int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
-			    unsigned long iova, size_t size)
-{
-	struct arm_smmu_master_domain *master_domain;
-	int i;
-	unsigned long flags;
-	struct arm_smmu_cmdq_ent cmd = {
-		.opcode = CMDQ_OP_ATC_INV,
-	};
-	struct arm_smmu_cmdq_batch cmds;
-
-	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
-		return 0;
-
-	/*
-	 * Ensure that we've completed prior invalidation of the main TLBs
-	 * before we read 'nr_ats_masters' in case of a concurrent call to
-	 * arm_smmu_enable_ats():
-	 *
-	 *	// unmap()			// arm_smmu_enable_ats()
-	 *	TLBI+SYNC			atomic_inc(&nr_ats_masters);
-	 *	smp_mb();			[...]
-	 *	atomic_read(&nr_ats_masters);	pci_enable_ats() // writel()
-	 *
-	 * Ensures that we always see the incremented 'nr_ats_masters' count if
-	 * ATS was enabled at the PCI device before completion of the TLBI.
-	 */
-	smp_mb();
-	if (!atomic_read(&smmu_domain->nr_ats_masters))
-		return 0;
-
-	arm_smmu_cmdq_batch_init(smmu_domain->smmu, &cmds, &cmd);
-
-	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
-	list_for_each_entry(master_domain, &smmu_domain->devices,
-			    devices_elm) {
-		struct arm_smmu_master *master = master_domain->master;
-
-		if (!master->ats_enabled)
-			continue;
-
-		if (master_domain->nested_ats_flush) {
-			/*
-			 * If a S2 used as a nesting parent is changed we have
-			 * no option but to completely flush the ATC.
-			 */
-			arm_smmu_atc_inv_to_cmd(IOMMU_NO_PASID, 0, 0, &cmd);
-		} else {
-			arm_smmu_atc_inv_to_cmd(master_domain->ssid, iova, size,
-						&cmd);
-		}
-
-		for (i = 0; i < master->num_streams; i++) {
-			cmd.atc.sid = master->streams[i].id;
-			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
-		}
-	}
-	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
-
-	return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
-}
-
 /* IO_PGTABLE API */
 static void arm_smmu_tlb_inv_context(void *cookie)
 {
 	struct arm_smmu_domain *smmu_domain = cookie;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_cmdq_ent cmd;
 
 	/*
 	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
@@ -2490,14 +2416,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	 * insertion to guarantee those are observed before the TLBI. Do be
 	 * careful, 007.
 	 */
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		arm_smmu_tlb_inv_asid(smmu, smmu_domain->cd.asid);
-	} else {
-		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
-		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
-		arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
-	}
-	arm_smmu_atc_inv_domain(smmu_domain, 0, 0);
+	arm_smmu_domain_inv(smmu_domain);
 }
 
 static void arm_smmu_cmdq_batch_add_range(struct arm_smmu_device *smmu,
@@ -2509,7 +2428,7 @@ static void arm_smmu_cmdq_batch_add_range(struct arm_smmu_device *smmu,
 	unsigned long end = iova + size, num_pages = 0, tg = pgsize;
 	size_t inv_range = granule;
 
-	if (!size)
+	if (WARN_ON_ONCE(!size))
 		return;
 
 	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
@@ -2564,76 +2483,6 @@ static void arm_smmu_cmdq_batch_add_range(struct arm_smmu_device *smmu,
 	}
 }
 
-static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
-				     unsigned long iova, size_t size,
-				     size_t granule,
-				     struct arm_smmu_domain *smmu_domain)
-{
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_cmdq_batch cmds;
-	size_t pgsize;
-
-	/* Get the leaf page size */
-	pgsize = __ffs(smmu_domain->domain.pgsize_bitmap);
-
-	arm_smmu_cmdq_batch_init(smmu, &cmds, cmd);
-	arm_smmu_cmdq_batch_add_range(smmu, &cmds, cmd, iova, size, granule,
-				      pgsize);
-	arm_smmu_cmdq_batch_submit(smmu, &cmds);
-}
-
-static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
-					  size_t granule, bool leaf,
-					  struct arm_smmu_domain *smmu_domain)
-{
-	struct arm_smmu_cmdq_ent cmd = {
-		.tlbi = {
-			.leaf	= leaf,
-		},
-	};
-
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		cmd.opcode	= smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H ?
-				  CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
-		cmd.tlbi.asid	= smmu_domain->cd.asid;
-	} else {
-		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
-		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
-	}
-	__arm_smmu_tlb_inv_range(&cmd, iova, size, granule, smmu_domain);
-
-	if (smmu_domain->nest_parent) {
-		/*
-		 * When the S2 domain changes all the nested S1 ASIDs have to be
-		 * flushed too.
-		 */
-		cmd.opcode = CMDQ_OP_TLBI_NH_ALL;
-		arm_smmu_cmdq_issue_cmd_with_sync(smmu_domain->smmu, &cmd);
-	}
-
-	/*
-	 * Unfortunately, this can't be leaf-only since we may have
-	 * zapped an entire table.
-	 */
-	arm_smmu_atc_inv_domain(smmu_domain, iova, size);
-}
-
-void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
-				 size_t granule, bool leaf,
-				 struct arm_smmu_domain *smmu_domain)
-{
-	struct arm_smmu_cmdq_ent cmd = {
-		.opcode	= smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H ?
-			  CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA,
-		.tlbi = {
-			.asid	= asid,
-			.leaf	= leaf,
-		},
-	};
-
-	__arm_smmu_tlb_inv_range(&cmd, iova, size, granule, smmu_domain);
-}
-
 static bool arm_smmu_inv_size_too_big(struct arm_smmu_device *smmu, size_t size,
 				      size_t granule)
 {
@@ -2828,7 +2677,9 @@ static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
 static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
 				  size_t granule, void *cookie)
 {
-	arm_smmu_tlb_inv_range_domain(iova, size, granule, false, cookie);
+	struct arm_smmu_domain *smmu_domain = cookie;
+
+	arm_smmu_domain_inv_range(smmu_domain, iova, size, granule, false);
 }
 
 static const struct iommu_flush_ops arm_smmu_flush_ops = {
@@ -4072,9 +3923,9 @@ static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
 	if (!gather->pgsize)
 		return;
 
-	arm_smmu_tlb_inv_range_domain(gather->start,
-				      gather->end - gather->start + 1,
-				      gather->pgsize, true, smmu_domain);
+	arm_smmu_domain_inv_range(smmu_domain, gather->start,
+				  gather->end - gather->start + 1,
+				  gather->pgsize, true);
 }
 
 static phys_addr_t
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
  2025-10-15 19:42 ` [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array Nicolin Chen
@ 2025-10-16 19:18   ` kernel test robot
  2025-10-16 21:31   ` Nicolin Chen
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 20+ messages in thread
From: kernel test robot @ 2025-10-16 19:18 UTC (permalink / raw)
  To: Nicolin Chen, will, jgg
  Cc: llvm, oe-kbuild-all, jean-philippe, robin.murphy, joro, balbirs,
	miko.lenczewski, peterz, kevin.tian, praan, linux-arm-kernel,
	iommu, linux-kernel

Hi Nicolin,

kernel test robot noticed the following build warnings:

[auto build test WARNING on soc/for-next]
[also build test WARNING on linus/master v6.18-rc1 next-20251016]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Nicolin-Chen/iommu-arm-smmu-v3-Explicitly-set-smmu_domain-stage-for-SVA/20251016-034754
base:   https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git for-next
patch link:    https://lore.kernel.org/r/345bb7703ebd19992694758b47e371900267fa0e.1760555863.git.nicolinc%40nvidia.com
patch subject: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
config: arm64-randconfig-001-20251017 (https://download.01.org/0day-ci/archive/20251017/202510170345.lO7fR7ao-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251017/202510170345.lO7fR7ao-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510170345.lO7fR7ao-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1170:7: warning: variable 'cmp' is uninitialized when used here [-Wuninitialized]
    1170 |                 if (cmp <= 0 && !refcount_read(&invs->inv[i].users)) {
         |                     ^~~
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1167:10: note: initialize the variable 'cmp' to silence this warning
    1167 |                 int cmp;
         |                        ^
         |                         = 0
   1 warning generated.


vim +/cmp +1170 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c

  1132	
  1133	/**
  1134	 * arm_smmu_invs_unref() - Find all entries of @to_unref in @invs and decrease
  1135	 *                         their user counts, without deleting any entries
  1136	 * @invs: the base invalidation array
  1137	 * @to_unref: an array of invalidations whose user counts should be decreased
  1138	 * @flush_fn: A callback function to invoke, when an entry's user count reduces
  1139	 *            to 0
  1140	 *
  1141	 * Return: the number of trash entries in the array, for arm_smmu_invs_purge()
  1142	 *
  1143	 * This function will not fail. Any entry with users=0 will be marked as trash.
  1144	 * All trailing trash entries in the array will be dropped, and the size of
  1145	 * the array will be trimmed accordingly. All trash entries in between will
  1146	 * remain in @invs until being completely deleted by the next call to
  1147	 * arm_smmu_invs_merge() or arm_smmu_invs_purge().
  1148	 *
  1149	 * This function must be locked and serialized with arm_smmu_invs_merge() and
  1150	 * arm_smmu_invs_purge(), but does not lockdep-assert on any mutex, for KUNIT.
  1151	 *
  1152	 * Note that the final @invs->num_invs might not reflect the actual number of
  1153	 * invalidations due to trash entries. Any reader should take the read lock to
  1154	 * iterate over each entry, up to the last one, and check its users counter.
  1155	 */
  1156	VISIBLE_IF_KUNIT
  1157	size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
  1158				   struct arm_smmu_invs *to_unref,
  1159				   void (*flush_fn)(struct arm_smmu_inv *inv))
  1160	{
  1161		unsigned long flags;
  1162		size_t num_trashes = 0;
  1163		size_t num_invs = 0;
  1164		size_t i, j;
  1165	
  1166		for (i = j = 0; i != invs->num_invs || j != to_unref->num_invs;) {
  1167			int cmp;
  1168	
  1169			/* Skip any existing trash entry */
> 1170			if (cmp <= 0 && !refcount_read(&invs->inv[i].users)) {
  1171				num_trashes++;
  1172				i++;
  1173				continue;
  1174			}
  1175	
  1176			cmp = arm_smmu_invs_cmp(invs, i, to_unref, j);
  1177			if (cmp < 0) {
  1178				/* not found in to_unref, leave alone */
  1179				i++;
  1180				num_invs = i;
  1181			} else if (cmp == 0) {
  1182				/* same item */
  1183				if (refcount_dec_and_test(&invs->inv[i].users)) {
  1184					/* KUNIT test doesn't pass in a flush_fn */
  1185					if (flush_fn)
  1186						flush_fn(&invs->inv[i]);
  1187					num_trashes++;
  1188				} else {
  1189					num_invs = i + 1;
  1190				}
  1191				i++;
  1192				j++;
  1193			} else {
  1194			/* item in to_unref is not in invs or is already trash */
  1195				WARN_ON(true);
  1196				j++;
  1197			}
  1198		}
  1199	
  1200		/* Exclude any trailing trash */
  1201		num_trashes -= invs->num_invs - num_invs;
  1202	
  1203		/* The lock is required to fence concurrent ATS operations. */
  1204		write_lock_irqsave(&invs->rwlock, flags);
  1205		WRITE_ONCE(invs->num_invs, num_invs); /* Remove tailing trash entries */
  1206		write_unlock_irqrestore(&invs->rwlock, flags);
  1207	
  1208		return num_trashes;
  1209	}
  1210	EXPORT_SYMBOL_IF_KUNIT(arm_smmu_invs_unref);
  1211	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
  2025-10-15 19:42 ` [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array Nicolin Chen
  2025-10-16 19:18   ` kernel test robot
@ 2025-10-16 21:31   ` Nicolin Chen
  2025-10-17 13:47   ` kernel test robot
  2025-10-27 12:06   ` kernel test robot
  3 siblings, 0 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-16 21:31 UTC (permalink / raw)
  To: will, jgg
  Cc: jean-philippe, robin.murphy, joro, balbirs, miko.lenczewski,
	peterz, kevin.tian, praan, linux-arm-kernel, iommu, linux-kernel

On Wed, Oct 15, 2025 at 12:42:48PM -0700, Nicolin Chen wrote:
> +size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
> +			   struct arm_smmu_invs *to_unref,
> +			   void (*flush_fn)(struct arm_smmu_inv *inv))
> +{
> +	unsigned long flags;
> +	size_t num_trashes = 0;
> +	size_t num_invs = 0;
> +	size_t i, j;
> +
> +	for (i = j = 0; i != invs->num_invs || j != to_unref->num_invs;) {
> +		int cmp;
> +
> +		/* Skip any existing trash entry */
> +		if (cmp <= 0 && !refcount_read(&invs->inv[i].users)) {
> +			num_trashes++;
> +			i++;
> +			continue;
> +		}
> +
> +		cmp = arm_smmu_invs_cmp(invs, i, to_unref, j);

The arm_smmu_invs_cmp() call should be moved up to the "int cmp"
declaration, before the if block that reads cmp.
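
Something like this, computing cmp at the top of the loop the same way
arm_smmu_invs_merge() does (untested sketch):

	for (i = j = 0; i != invs->num_invs || j != to_unref->num_invs;) {
		int cmp = arm_smmu_invs_cmp(invs, i, to_unref, j);

		/* Skip any existing trash entry */
		if (cmp <= 0 && !refcount_read(&invs->inv[i].users)) {
			num_trashes++;
			i++;
			continue;
		}
		...
	}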

Will fix in v4.

Nicolin

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
  2025-10-15 19:42 ` [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array Nicolin Chen
  2025-10-16 19:18   ` kernel test robot
  2025-10-16 21:31   ` Nicolin Chen
@ 2025-10-17 13:47   ` kernel test robot
  2025-10-17 20:12     ` Nicolin Chen
  2025-10-27 12:06   ` kernel test robot
  3 siblings, 1 reply; 20+ messages in thread
From: kernel test robot @ 2025-10-17 13:47 UTC (permalink / raw)
  To: Nicolin Chen, will, jgg
  Cc: oe-kbuild-all, jean-philippe, robin.murphy, joro, balbirs,
	miko.lenczewski, peterz, kevin.tian, praan, linux-arm-kernel,
	iommu, linux-kernel

Hi Nicolin,

kernel test robot noticed the following build warnings:

[auto build test WARNING on soc/for-next]
[also build test WARNING on linus/master v6.18-rc1 next-20251016]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Nicolin-Chen/iommu-arm-smmu-v3-Explicitly-set-smmu_domain-stage-for-SVA/20251016-034754
base:   https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git for-next
patch link:    https://lore.kernel.org/r/345bb7703ebd19992694758b47e371900267fa0e.1760555863.git.nicolinc%40nvidia.com
patch subject: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
config: arm64-randconfig-r123-20251017 (https://download.01.org/0day-ci/archive/20251017/202510172156.WHU485ad-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 754ebc6ebb9fb9fbee7aef33478c74ea74949853)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251017/202510172156.WHU485ad-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510172156.WHU485ad-lkp@intel.com/

sparse warnings: (new ones prefixed by >>)
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c: note: in included file:
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct callback_head *head @@     got struct callback_head [noderef] __rcu * @@
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     expected struct callback_head *head
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     got struct callback_head [noderef] __rcu *
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: cast removes address space '__rcu' of expression
--
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c: note: in included file (through arch/arm64/include/asm/atomic.h, include/linux/atomic.h, include/asm-generic/bitops/atomic.h, ...):
   arch/arm64/include/asm/cmpxchg.h:168:1: sparse: sparse: cast truncates bits from constant value (ffffffff80000000 becomes 0)
   arch/arm64/include/asm/cmpxchg.h:168:1: sparse: sparse: cast truncates bits from constant value (ffffffff80000000 becomes 0)
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c: note: in included file:
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct callback_head *head @@     got struct callback_head [noderef] __rcu * @@
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     expected struct callback_head *head
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     got struct callback_head [noderef] __rcu *
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: cast removes address space '__rcu' of expression
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct callback_head *head @@     got struct callback_head [noderef] __rcu * @@
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     expected struct callback_head *head
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     got struct callback_head [noderef] __rcu *
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: cast removes address space '__rcu' of expression

vim +1048 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h

  1045	
  1046	static inline void arm_smmu_domain_free(struct arm_smmu_domain *smmu_domain)
  1047	{
> 1048		kfree_rcu(smmu_domain->invs, rcu);
  1049		kfree(smmu_domain);
  1050	}
  1051	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters
  2025-10-15 19:42 ` [PATCH v3 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters Nicolin Chen
@ 2025-10-17 16:03   ` kernel test robot
  2025-10-17 21:11     ` Nicolin Chen
  0 siblings, 1 reply; 20+ messages in thread
From: kernel test robot @ 2025-10-17 16:03 UTC (permalink / raw)
  To: Nicolin Chen, will, jgg
  Cc: oe-kbuild-all, jean-philippe, robin.murphy, joro, balbirs,
	miko.lenczewski, peterz, kevin.tian, praan, linux-arm-kernel,
	iommu, linux-kernel

Hi Nicolin,

kernel test robot noticed the following build warnings:

[auto build test WARNING on soc/for-next]
[also build test WARNING on linus/master v6.18-rc1 next-20251016]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Nicolin-Chen/iommu-arm-smmu-v3-Explicitly-set-smmu_domain-stage-for-SVA/20251016-034754
base:   https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git for-next
patch link:    https://lore.kernel.org/r/14d76eebae359825442a96c0ffa13687de792063.1760555863.git.nicolinc%40nvidia.com
patch subject: [PATCH v3 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters
config: arm64-randconfig-r123-20251017 (https://download.01.org/0day-ci/archive/20251017/202510172340.XyneWIPI-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 754ebc6ebb9fb9fbee7aef33478c74ea74949853)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251017/202510172340.XyneWIPI-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510172340.XyneWIPI-lkp@intel.com/

sparse warnings: (new ones prefixed by >>)
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse: sparse: incorrect type in assignment (different address spaces) @@     expected struct arm_smmu_invs **invs_ptr @@     got struct arm_smmu_invs [noderef] __rcu ** @@
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse:     expected struct arm_smmu_invs **invs_ptr
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse:     got struct arm_smmu_invs [noderef] __rcu **
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3226:33: sparse: sparse: incorrect type in assignment (different address spaces) @@     expected struct arm_smmu_invs **invs_ptr @@     got struct arm_smmu_invs [noderef] __rcu ** @@
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3226:33: sparse:     expected struct arm_smmu_invs **invs_ptr
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3226:33: sparse:     got struct arm_smmu_invs [noderef] __rcu **
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3247:9: sparse: sparse: incompatible types in comparison expression (different address spaces):
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3247:9: sparse:    struct arm_smmu_invs [noderef] __rcu *
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3247:9: sparse:    struct arm_smmu_invs *
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3305:9: sparse: sparse: incompatible types in comparison expression (different address spaces):
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3305:9: sparse:    struct arm_smmu_invs [noderef] __rcu *
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3305:9: sparse:    struct arm_smmu_invs *
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c: note: in included file (through arch/arm64/include/asm/atomic.h, include/linux/atomic.h, include/asm-generic/bitops/atomic.h, ...):
   arch/arm64/include/asm/cmpxchg.h:168:1: sparse: sparse: cast truncates bits from constant value (ffffffff80000000 becomes 0)
   arch/arm64/include/asm/cmpxchg.h:168:1: sparse: sparse: cast truncates bits from constant value (ffffffff80000000 becomes 0)
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c: note: in included file:
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1056:9: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct callback_head *head @@     got struct callback_head [noderef] __rcu * @@
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1056:9: sparse:     expected struct callback_head *head
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1056:9: sparse:     got struct callback_head [noderef] __rcu *
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1056:9: sparse: sparse: cast removes address space '__rcu' of expression
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1056:9: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct callback_head *head @@     got struct callback_head [noderef] __rcu * @@
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1056:9: sparse:     expected struct callback_head *head
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1056:9: sparse:     got struct callback_head [noderef] __rcu *
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1056:9: sparse: sparse: cast removes address space '__rcu' of expression

vim +3208 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c

  3174	
  3175	/*
  3176	 * During attachment, the updates of the two domain->invs arrays are sequenced:
  3177	 *  1. new domain updates its invs array, merging master->build_invs
  3178	 *  2. new domain starts to include the master during its invalidation
  3179	 *  3. master updates its STE switching from the old domain to the new domain
  3180	 *  4. old domain still includes the master during its invalidation
  3181	 *  5. old domain updates its invs array, unreferencing master->build_invs
  3182	 *
  3183	 * For 1 and 5, prepare the two updated arrays in advance, handling anything
  3184	 * that can possibly fail, so the actual updates in steps 1 and 5 won't fail.
  3185	 * arm_smmu_asid_lock ensures that the old invs in the domains are intact while
  3186	 * we are sequencing to update them.
  3187	 */
  3188	static int arm_smmu_attach_prepare_invs(struct arm_smmu_attach_state *state,
  3189						struct arm_smmu_domain *new_smmu_domain)
  3190	{
  3191		struct arm_smmu_domain *old_smmu_domain =
  3192			to_smmu_domain_devices(state->old_domain);
  3193		struct arm_smmu_master *master = state->master;
  3194		ioasid_t ssid = state->ssid;
  3195	
  3196		/* A re-attach case doesn't need to update invs array */
  3197		if (new_smmu_domain == old_smmu_domain)
  3198			return 0;
  3199	
  3200		/*
  3201		 * At this point a NULL domain indicates the domain doesn't use the
  3202		 * IOTLB, see to_smmu_domain_devices().
  3203		 */
  3204		if (new_smmu_domain) {
  3205			struct arm_smmu_inv_state *invst = &state->new_domain_invst;
  3206			struct arm_smmu_invs *build_invs;
  3207	
> 3208			invst->invs_ptr = &new_smmu_domain->invs;
  3209			invst->old_invs = rcu_dereference_protected(
  3210				new_smmu_domain->invs,
  3211				lockdep_is_held(&arm_smmu_asid_lock));
  3212			build_invs = arm_smmu_master_build_invs(
  3213				master, state->ats_enabled, ssid, new_smmu_domain);
  3214			if (!build_invs)
  3215				return -EINVAL;
  3216	
  3217			invst->new_invs =
  3218				arm_smmu_invs_merge(invst->old_invs, build_invs);
  3219			if (IS_ERR(invst->new_invs))
  3220				return PTR_ERR(invst->new_invs);
  3221		}
  3222	
  3223		if (old_smmu_domain) {
  3224			struct arm_smmu_inv_state *invst = &state->old_domain_invst;
  3225	
  3226			invst->invs_ptr = &old_smmu_domain->invs;
  3227			invst->old_invs = rcu_dereference_protected(
  3228				old_smmu_domain->invs,
  3229				lockdep_is_held(&arm_smmu_asid_lock));
  3230			/* For old_smmu_domain, new_invs points to master->build_invs */
  3231			invst->new_invs = arm_smmu_master_build_invs(
  3232				master, master->ats_enabled, ssid, old_smmu_domain);
  3233		}
  3234	
  3235		return 0;
  3236	}
  3237	
  3238	/* Must be installed before arm_smmu_install_ste_for_dev() */
  3239	static void
  3240	arm_smmu_install_new_domain_invs(struct arm_smmu_attach_state *state)
  3241	{
  3242		struct arm_smmu_inv_state *invst = &state->new_domain_invst;
  3243	
  3244		if (!invst->invs_ptr)
  3245			return;
  3246	
> 3247		rcu_assign_pointer(*invst->invs_ptr, invst->new_invs);
  3248		/*
  3249		 * We are committed to updating the STE. Ensure the invalidation array
  3250	 * is visible to concurrent map/unmap threads, and acquire any racing
  3251		 * IOPTE updates.
  3252		 */
  3253		smp_mb();
  3254		kfree_rcu(invst->old_invs, rcu);
  3255	}
  3256	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
  2025-10-17 13:47   ` kernel test robot
@ 2025-10-17 20:12     ` Nicolin Chen
  2025-10-20 12:10       ` Jason Gunthorpe
  0 siblings, 1 reply; 20+ messages in thread
From: Nicolin Chen @ 2025-10-17 20:12 UTC (permalink / raw)
  To: kernel test robot
  Cc: will, jgg, oe-kbuild-all, jean-philippe, robin.murphy, joro,
	balbirs, miko.lenczewski, peterz, kevin.tian, praan,
	linux-arm-kernel, iommu, linux-kernel

On Fri, Oct 17, 2025 at 09:47:07PM +0800, kernel test robot wrote:
>    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c: note: in included file:
> >> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct callback_head *head @@     got struct callback_head [noderef] __rcu * @@
>    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     expected struct callback_head *head
>    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     got struct callback_head [noderef] __rcu *
> >> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: cast removes address space '__rcu' of expression
...
>   1045	
>   1046	static inline void arm_smmu_domain_free(struct arm_smmu_domain *smmu_domain)
>   1047	{
> > 1048		kfree_rcu(smmu_domain->invs, rcu);

Looks like it should be:
 static inline void arm_smmu_domain_free(struct arm_smmu_domain *smmu_domain)
 {
-       kfree_rcu(smmu_domain->invs, rcu);
+       struct arm_smmu_invs *invs = rcu_dereference(smmu_domain->invs);
+
+       kfree_rcu(invs, rcu);
        kfree(smmu_domain);
 }

Will fix.

Thanks
Nicolin

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters
  2025-10-17 16:03   ` kernel test robot
@ 2025-10-17 21:11     ` Nicolin Chen
  2025-10-20 12:12       ` Jason Gunthorpe
  0 siblings, 1 reply; 20+ messages in thread
From: Nicolin Chen @ 2025-10-17 21:11 UTC (permalink / raw)
  To: kernel test robot
  Cc: will, jgg, oe-kbuild-all, jean-philippe, robin.murphy, joro,
	balbirs, miko.lenczewski, peterz, kevin.tian, praan,
	linux-arm-kernel, iommu, linux-kernel

On Sat, Oct 18, 2025 at 12:03:27AM +0800, kernel test robot wrote:
> sparse warnings: (new ones prefixed by >>)
> >> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse: sparse: incorrect type in assignment (different address spaces) @@     expected struct arm_smmu_invs **invs_ptr @@     got struct arm_smmu_invs [noderef] __rcu ** @@
>    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse:     expected struct arm_smmu_invs **invs_ptr
>    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse:     got struct arm_smmu_invs [noderef] __rcu **
...
> > 3208			invst->invs_ptr = &new_smmu_domain->invs;
...
> > 3247		rcu_assign_pointer(*invst->invs_ptr, invst->new_invs);

Looks like we need:

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 8906c1625f428..398d8beb8f862 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -1105,7 +1105,7 @@ static inline bool arm_smmu_master_canwbs(struct arm_smmu_master *master)
  *            to_unref argument to an arm_smmu_invs_unref() call
  */
 struct arm_smmu_inv_state {
-       struct arm_smmu_invs **invs_ptr;
+       struct arm_smmu_invs __rcu **invs_ptr;
        struct arm_smmu_invs *old_invs;
        struct arm_smmu_invs *new_invs;
 };

Will fix in v4.

Thanks
Nicolin

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
  2025-10-17 20:12     ` Nicolin Chen
@ 2025-10-20 12:10       ` Jason Gunthorpe
  2025-10-20 19:16         ` Nicolin Chen
  0 siblings, 1 reply; 20+ messages in thread
From: Jason Gunthorpe @ 2025-10-20 12:10 UTC (permalink / raw)
  To: Nicolin Chen
  Cc: kernel test robot, will, oe-kbuild-all, jean-philippe,
	robin.murphy, joro, balbirs, miko.lenczewski, peterz, kevin.tian,
	praan, linux-arm-kernel, iommu, linux-kernel

On Fri, Oct 17, 2025 at 01:12:02PM -0700, Nicolin Chen wrote:
> On Fri, Oct 17, 2025 at 09:47:07PM +0800, kernel test robot wrote:
> >    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c: note: in included file:
> > >> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct callback_head *head @@     got struct callback_head [noderef] __rcu * @@
> >    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     expected struct callback_head *head
> >    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     got struct callback_head [noderef] __rcu *
> > >> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: cast removes address space '__rcu' of expression
> ...
> >   1045	
> >   1046	static inline void arm_smmu_domain_free(struct arm_smmu_domain *smmu_domain)
> >   1047	{
> > > 1048		kfree_rcu(smmu_domain->invs, rcu);
> 
> Looks like it should be:
>  static inline void arm_smmu_domain_free(struct arm_smmu_domain *smmu_domain)
>  {
> -       kfree_rcu(smmu_domain->invs, rcu);
> +       struct arm_smmu_invs *invs = rcu_dereference(smmu_domain->invs);

rcu_dereference_protected(, true), since we know there is no concurrency
here..

Jason

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters
  2025-10-17 21:11     ` Nicolin Chen
@ 2025-10-20 12:12       ` Jason Gunthorpe
  2025-10-20 19:05         ` Nicolin Chen
  0 siblings, 1 reply; 20+ messages in thread
From: Jason Gunthorpe @ 2025-10-20 12:12 UTC (permalink / raw)
  To: Nicolin Chen
  Cc: kernel test robot, will, oe-kbuild-all, jean-philippe,
	robin.murphy, joro, balbirs, miko.lenczewski, peterz, kevin.tian,
	praan, linux-arm-kernel, iommu, linux-kernel

On Fri, Oct 17, 2025 at 02:11:34PM -0700, Nicolin Chen wrote:
> On Sat, Oct 18, 2025 at 12:03:27AM +0800, kernel test robot wrote:
> > sparse warnings: (new ones prefixed by >>)
> > >> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse: sparse: incorrect type in assignment (different address spaces) @@     expected struct arm_smmu_invs **invs_ptr @@     got struct arm_smmu_invs [noderef] __rcu ** @@
> >    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse:     expected struct arm_smmu_invs **invs_ptr
> >    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse:     got struct arm_smmu_invs [noderef] __rcu **
> ...
> > > 3208			invst->invs_ptr = &new_smmu_domain->invs;
> ...
> > > 3247		rcu_assign_pointer(*invst->invs_ptr, invst->new_invs);
> 
> Looks like we need:
> 
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> index 8906c1625f428..398d8beb8f862 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> @@ -1105,7 +1105,7 @@ static inline bool arm_smmu_master_canwbs(struct arm_smmu_master *master)
>   *            to_unref argument to an arm_smmu_invs_unref() call
>   */
>  struct arm_smmu_inv_state {
> -       struct arm_smmu_invs **invs_ptr;
> +       struct arm_smmu_invs __rcu **invs_ptr;
>         struct arm_smmu_invs *old_invs;
>         struct arm_smmu_invs *new_invs;
>  };

Isn't it:

 struct arm_smmu_invs * __rcu *invs_ptr;

This is a pointer to an RCU-controlled pointer, not an RCU-controlled
pointer to a pointer..

Jason

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters
  2025-10-20 12:12       ` Jason Gunthorpe
@ 2025-10-20 19:05         ` Nicolin Chen
  0 siblings, 0 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-20 19:05 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kernel test robot, will, oe-kbuild-all, jean-philippe,
	robin.murphy, joro, balbirs, miko.lenczewski, peterz, kevin.tian,
	praan, linux-arm-kernel, iommu, linux-kernel

On Mon, Oct 20, 2025 at 09:12:25AM -0300, Jason Gunthorpe wrote:
> On Fri, Oct 17, 2025 at 02:11:34PM -0700, Nicolin Chen wrote:
> > On Sat, Oct 18, 2025 at 12:03:27AM +0800, kernel test robot wrote:
> > > sparse warnings: (new ones prefixed by >>)
> > > >> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse: sparse: incorrect type in assignment (different address spaces) @@     expected struct arm_smmu_invs **invs_ptr @@     got struct arm_smmu_invs [noderef] __rcu ** @@
> > >    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse:     expected struct arm_smmu_invs **invs_ptr
> > >    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3208:33: sparse:     got struct arm_smmu_invs [noderef] __rcu **
> > ...
> > > > 3208			invst->invs_ptr = &new_smmu_domain->invs;
> > ...
> > > > 3247		rcu_assign_pointer(*invst->invs_ptr, invst->new_invs);
> > 
> > Looks like we need:
> > 
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> > index 8906c1625f428..398d8beb8f862 100644
> > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> > @@ -1105,7 +1105,7 @@ static inline bool arm_smmu_master_canwbs(struct arm_smmu_master *master)
> >   *            to_unref argument to an arm_smmu_invs_unref() call
> >   */
> >  struct arm_smmu_inv_state {
> > -       struct arm_smmu_invs **invs_ptr;
> > +       struct arm_smmu_invs __rcu **invs_ptr;
> >         struct arm_smmu_invs *old_invs;
> >         struct arm_smmu_invs *new_invs;
> >  };
> 
> Isn't it:
> 
>  struct arm_smmu_invs * __rcu *invs_ptr;

Hmm, sparse warns this:

drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3257:33: warning: incorrect type in assignment (different address spaces)
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3257:33:    expected struct arm_smmu_invs *[noderef] __rcu *invs_ptr
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3257:33:    got struct arm_smmu_invs [noderef] __rcu **
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3275:33: warning: incorrect type in assignment (different address spaces)
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3275:33:    expected struct arm_smmu_invs *[noderef] __rcu *invs_ptr
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3275:33:    got struct arm_smmu_invs [noderef] __rcu **
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3296:9: error: incompatible types in comparison expression (different address spaces):
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3296:9:    struct arm_smmu_invs [noderef] __rcu *
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3296:9:    struct arm_smmu_invs *
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3354:9: error: incompatible types in comparison expression (different address spaces):
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3354:9:    struct arm_smmu_invs [noderef] __rcu *
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3354:9:    struct arm_smmu_invs *

3257: invst->invs_ptr = &new_smmu_domain->invs;
3296: rcu_assign_pointer(*invst->invs_ptr, invst->new_invs);
3354: rcu_assign_pointer(*invst->invs_ptr, new_invs);

But no warning with "struct arm_smmu_invs __rcu **invs_ptr".
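
Which makes sense given the field declaration: taking the address of an
__rcu-annotated pointer yields exactly that double-pointer type (sketch,
assuming the v3 declaration of the field):

	struct arm_smmu_domain {
		...
		struct arm_smmu_invs __rcu *invs;
	};

	/* &smmu_domain->invs has type "struct arm_smmu_invs __rcu **" */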

Thanks
Nicolin

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
  2025-10-20 12:10       ` Jason Gunthorpe
@ 2025-10-20 19:16         ` Nicolin Chen
  0 siblings, 0 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-20 19:16 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kernel test robot, will, oe-kbuild-all, jean-philippe,
	robin.murphy, joro, balbirs, miko.lenczewski, peterz, kevin.tian,
	praan, linux-arm-kernel, iommu, linux-kernel

On Mon, Oct 20, 2025 at 09:10:56AM -0300, Jason Gunthorpe wrote:
> On Fri, Oct 17, 2025 at 01:12:02PM -0700, Nicolin Chen wrote:
> > On Fri, Oct 17, 2025 at 09:47:07PM +0800, kernel test robot wrote:
> > >    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c: note: in included file:
> > > >> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct callback_head *head @@     got struct callback_head [noderef] __rcu * @@
> > >    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     expected struct callback_head *head
> > >    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse:     got struct callback_head [noderef] __rcu *
> > > >> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:1048:9: sparse: sparse: cast removes address space '__rcu' of expression
> > ...
> > >   1045	
> > >   1046	static inline void arm_smmu_domain_free(struct arm_smmu_domain *smmu_domain)
> > >   1047	{
> > > > 1048		kfree_rcu(smmu_domain->invs, rcu);
> > 
> > Looks like it should be:
> >  static inline void arm_smmu_domain_free(struct arm_smmu_domain *smmu_domain)
> >  {
> > -       kfree_rcu(smmu_domain->invs, rcu);
> > +       struct arm_smmu_invs *invs = rcu_dereference(smmu_domain->invs);
> 
> rcu_dereference_protected(, true), since we know there is no concurrency
> here..

Oh right, it's outside rcu_read_lock().
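
So v4 will end up with something like (untested sketch):

 static inline void arm_smmu_domain_free(struct arm_smmu_domain *smmu_domain)
 {
-	kfree_rcu(smmu_domain->invs, rcu);
+	struct arm_smmu_invs *invs =
+		rcu_dereference_protected(smmu_domain->invs, true);
+
+	kfree_rcu(invs, rcu);
 	kfree(smmu_domain);
 }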

Thanks
Nicolin

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
  2025-10-15 19:42 ` [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array Nicolin Chen
                     ` (2 preceding siblings ...)
  2025-10-17 13:47   ` kernel test robot
@ 2025-10-27 12:06   ` kernel test robot
  2025-10-27 16:38     ` Nicolin Chen
  3 siblings, 1 reply; 20+ messages in thread
From: kernel test robot @ 2025-10-27 12:06 UTC (permalink / raw)
  To: Nicolin Chen, will, jgg
  Cc: oe-kbuild-all, jean-philippe, robin.murphy, joro, balbirs,
	miko.lenczewski, peterz, kevin.tian, praan, linux-arm-kernel,
	iommu, linux-kernel

Hi Nicolin,

kernel test robot noticed the following build errors:

[auto build test ERROR on soc/for-next]
[also build test ERROR on linus/master v6.18-rc3 next-20251027]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Nicolin-Chen/iommu-arm-smmu-v3-Explicitly-set-smmu_domain-stage-for-SVA/20251016-034754
base:   https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git for-next
patch link:    https://lore.kernel.org/r/345bb7703ebd19992694758b47e371900267fa0e.1760555863.git.nicolinc%40nvidia.com
patch subject: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
config: arm64-defconfig (https://download.01.org/0day-ci/archive/20251027/202510271909.iEzPjNv4-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 15.1.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251027/202510271909.iEzPjNv4-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510271909.iEzPjNv4-lkp@intel.com/

All errors (new ones prefixed by >>):

>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1062:23: error: static declaration of 'arm_smmu_invs_merge' follows non-static declaration
    1062 | struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
         |                       ^~~~~~~~~~~~~~~~~~~
   In file included from drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:34:
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:731:23: note: previous declaration of 'arm_smmu_invs_merge' with type 'struct arm_smmu_invs *(struct arm_smmu_invs *, struct arm_smmu_invs *)'
     731 | struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
         |                       ^~~~~~~~~~~~~~~~~~~
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1157:8: error: static declaration of 'arm_smmu_invs_unref' follows non-static declaration
    1157 | size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
         |        ^~~~~~~~~~~~~~~~~~~
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:733:8: note: previous declaration of 'arm_smmu_invs_unref' with type 'size_t(struct arm_smmu_invs *, struct arm_smmu_invs *, void (*)(struct arm_smmu_inv *))' {aka 'long unsigned int(struct arm_smmu_invs *, struct arm_smmu_invs *, void (*)(struct arm_smmu_inv *))'}
     733 | size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
         |        ^~~~~~~~~~~~~~~~~~~
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1227:23: error: static declaration of 'arm_smmu_invs_purge' follows non-static declaration
    1227 | struct arm_smmu_invs *arm_smmu_invs_purge(struct arm_smmu_invs *invs,
         |                       ^~~~~~~~~~~~~~~~~~~
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:736:23: note: previous declaration of 'arm_smmu_invs_purge' with type 'struct arm_smmu_invs *(struct arm_smmu_invs *, size_t)' {aka 'struct arm_smmu_invs *(struct arm_smmu_invs *, long unsigned int)'}
     736 | struct arm_smmu_invs *arm_smmu_invs_purge(struct arm_smmu_invs *invs,
         |                       ^~~~~~~~~~~~~~~~~~~
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1227:23: warning: 'arm_smmu_invs_purge' defined but not used [-Wunused-function]
    1227 | struct arm_smmu_invs *arm_smmu_invs_purge(struct arm_smmu_invs *invs,
         |                       ^~~~~~~~~~~~~~~~~~~
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1157:8: warning: 'arm_smmu_invs_unref' defined but not used [-Wunused-function]
    1157 | size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
         |        ^~~~~~~~~~~~~~~~~~~
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1062:23: warning: 'arm_smmu_invs_merge' defined but not used [-Wunused-function]
    1062 | struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
         |                       ^~~~~~~~~~~~~~~~~~~


vim +/arm_smmu_invs_merge +1062 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c

  1043	
  1044	/**
  1045	 * arm_smmu_invs_merge() - Merge @to_merge into @invs and generate a new array
  1046	 * @invs: the base invalidation array
  1047	 * @to_merge: an array of invalidations to merge
  1048	 *
  1049	 * Return: a newly allocated array on success, or ERR_PTR
  1050	 *
  1051	 * This function must be locked and serialized with arm_smmu_invs_unref() and
  1052	 * arm_smmu_invs_purge(), but does not lockdep-assert on any lock, for KUNIT.
  1053	 *
  1054	 * Both @invs and @to_merge must be sorted, to ensure the returned array will be
  1055	 * sorted as well.
  1056	 *
  1057	 * Caller is responsible for freeing @invs and the returned new one.
  1058	 *
  1059	 * Entries marked as trash will be purged from the returned array.
  1060	 */
  1061	VISIBLE_IF_KUNIT
> 1062	struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
  1063						  struct arm_smmu_invs *to_merge)
  1064	{
  1065		struct arm_smmu_invs *new_invs;
  1066		struct arm_smmu_inv *new;
  1067		size_t num_trashes = 0;
  1068		size_t num_adds = 0;
  1069		size_t i, j;
  1070	
  1071		for (i = j = 0; i != invs->num_invs || j != to_merge->num_invs;) {
  1072			int cmp = arm_smmu_invs_cmp(invs, i, to_merge, j);
  1073	
  1074			/* Skip any unwanted trash entry */
  1075			if (cmp < 0 && !refcount_read(&invs->inv[i].users)) {
  1076				num_trashes++;
  1077				i++;
  1078				continue;
  1079			}
  1080	
  1081			if (cmp < 0) {
  1082				/* not found in to_merge, leave alone */
  1083				i++;
  1084			} else if (cmp == 0) {
  1085				/* same item */
  1086				i++;
  1087				j++;
  1088			} else {
  1089				/* unique to to_merge */
  1090				num_adds++;
  1091				j++;
  1092			}
  1093		}
  1094	
  1095		new_invs = arm_smmu_invs_alloc(invs->num_invs - num_trashes + num_adds);
  1096		if (IS_ERR(new_invs))
  1097			return new_invs;
  1098	
  1099		new = new_invs->inv;
  1100		for (i = j = 0; i != invs->num_invs || j != to_merge->num_invs;) {
  1101			int cmp = arm_smmu_invs_cmp(invs, i, to_merge, j);
  1102	
  1103			if (cmp <= 0 && !refcount_read(&invs->inv[i].users)) {
  1104				i++;
  1105				continue;
  1106			}
  1107	
  1108			if (cmp < 0) {
  1109				*new = invs->inv[i];
  1110				i++;
  1111			} else if (cmp == 0) {
  1112				*new = invs->inv[i];
  1113				refcount_inc(&new->users);
  1114				i++;
  1115				j++;
  1116			} else {
  1117				*new = to_merge->inv[j];
  1118				refcount_set(&new->users, 1);
  1119				j++;
  1120			}
  1121	
  1122			if (new != new_invs->inv)
  1123				WARN_ON_ONCE(arm_smmu_inv_cmp(new - 1, new) == 1);
  1124			new++;
  1125		}
  1126	
  1127		WARN_ON(new != new_invs->inv + new_invs->num_invs);
  1128	
  1129		return new_invs;
  1130	}
  1131	EXPORT_SYMBOL_IF_KUNIT(arm_smmu_invs_merge);
  1132	
  1133	/**
  1134	 * arm_smmu_invs_unref() - Look up all the entries of @to_unref in @invs, and
  1135	 *                         decrease their user counts without deleting entries
  1136	 * @invs: the base invalidation array
  1137	 * @to_unref: an array of invalidations whose user counts will be decreased
  1138	 * @flush_fn: a callback function to invoke when an entry's user count drops
  1139	 *            to 0
  1140	 *
  1141	 * Return: the number of remaining trash entries, for arm_smmu_invs_purge()
  1142	 *
  1143	 * This function does not fail. Any entry whose user count drops to 0 will be
  1144	 * marked as trash. All trailing trash entries in the array will be dropped,
  1145	 * and the array size will be trimmed accordingly. Any trash entry in between
  1146	 * will remain in @invs until it is deleted by the next arm_smmu_invs_merge()
  1147	 * or arm_smmu_invs_purge() call.
  1148	 *
  1149	 * This function must be serialized by a lock against arm_smmu_invs_merge() and
  1150	 * arm_smmu_invs_purge(); no lockdep assertion is made, so KUNIT can call it.
  1151	 *
  1152	 * Note that the final @invs->num_invs might not reflect the actual number of
  1153	 * valid invalidations, due to trash entries. A reader should take the read
  1154	 * lock and check the users counter of each entry while iterating the array.
  1155	 */
  1156	VISIBLE_IF_KUNIT
> 1157	size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
  1158				   struct arm_smmu_invs *to_unref,
  1159				   void (*flush_fn)(struct arm_smmu_inv *inv))
  1160	{
  1161		unsigned long flags;
  1162		size_t num_trashes = 0;
  1163		size_t num_invs = 0;
  1164		size_t i, j;
  1165	
  1166		for (i = j = 0; i != invs->num_invs || j != to_unref->num_invs;) {
  1167		int cmp;
  1168	
  1169		cmp = arm_smmu_invs_cmp(invs, i, to_unref, j);
  1170	
  1171		/* Skip any existing trash entry */
  1172		if (cmp <= 0 && !refcount_read(&invs->inv[i].users)) {
  1173			num_trashes++;
  1174			i++;
  1175			continue;
  1176		}
  1177			if (cmp < 0) {
  1178				/* not found in to_unref, leave alone */
  1179				i++;
  1180				num_invs = i;
  1181			} else if (cmp == 0) {
  1182				/* same item */
  1183				if (refcount_dec_and_test(&invs->inv[i].users)) {
  1184					/* KUNIT test doesn't pass in a flush_fn */
  1185					if (flush_fn)
  1186						flush_fn(&invs->inv[i]);
  1187					num_trashes++;
  1188				} else {
  1189					num_invs = i + 1;
  1190				}
  1191				i++;
  1192				j++;
  1193			} else {
  1194			/* item in to_unref was not found in invs or is already trash */
  1195				WARN_ON(true);
  1196				j++;
  1197			}
  1198		}
  1199	
  1200		/* Exclude any trailing trash */
  1201		num_trashes -= invs->num_invs - num_invs;
  1202	
  1203		/* The lock is required to fence concurrent ATS operations. */
  1204		write_lock_irqsave(&invs->rwlock, flags);
  1205		WRITE_ONCE(invs->num_invs, num_invs); /* Remove trailing trash entries */
  1206		write_unlock_irqrestore(&invs->rwlock, flags);
  1207	
  1208		return num_trashes;
  1209	}
  1210	EXPORT_SYMBOL_IF_KUNIT(arm_smmu_invs_unref);
  1211	
  1212	/**
  1213	 * arm_smmu_invs_purge() - Purge all the trash entries in the @invs
  1214	 * @invs: the base invalidation array
  1215	 * @num_trashes: expected number of trash entries, typically returned by a prior
  1216	 *               arm_smmu_invs_unref() call
  1217	 *
  1218	 * Return: a newly allocated array with all the trash entries removed on
  1219	 *         success, or NULL on failure
  1220	 *
  1221	 * This function must be serialized by a lock against arm_smmu_invs_merge() and
  1222	 * arm_smmu_invs_unref(); no lockdep assertion is made, so KUNIT can call it.
  1223	 *
  1224	 * The caller is responsible for freeing both @invs and the returned array.
  1225	 */
  1226	VISIBLE_IF_KUNIT
> 1227	struct arm_smmu_invs *arm_smmu_invs_purge(struct arm_smmu_invs *invs,
  1228						  size_t num_trashes)
  1229	{
  1230		struct arm_smmu_invs *new_invs;
  1231		size_t i, j;
  1232	
  1233		if (WARN_ON(invs->num_invs < num_trashes))
  1234			return NULL;
  1235	
  1236		new_invs = arm_smmu_invs_alloc(invs->num_invs - num_trashes);
  1237		if (IS_ERR(new_invs))
  1238			return NULL;
  1239	
  1240		for (i = j = 0; i != invs->num_invs; i++) {
  1241			if (!refcount_read(&invs->inv[i].users))
  1242				continue;
  1243			new_invs->inv[j] = invs->inv[i];
  1244			j++;
  1245		}
  1246	
  1247		WARN_ON(j != new_invs->num_invs);
  1248		return new_invs;
  1249	}
  1250	EXPORT_SYMBOL_IF_KUNIT(arm_smmu_invs_purge);
  1251	
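
For context, a rough sketch of how a caller might combine these three
helpers under one serializing lock. Every name below that is not in the
quoted code (domain->invs as the RCU-published pointer, domain->init_mutex
as the serializing lock, a struct rcu_head field named "rcu" inside
struct arm_smmu_invs, and the example_* functions) is an assumption for
illustration only, not taken from this patch:

/*
 * Hypothetical attach path: publish a merged copy of the array, then
 * free the old one after a grace period, since concurrent readers may
 * still be iterating it under rcu_read_lock().
 */
static int example_attach(struct arm_smmu_domain *domain,
			  struct arm_smmu_invs *to_merge)
{
	struct arm_smmu_invs *old_invs, *new_invs;

	lockdep_assert_held(&domain->init_mutex);

	old_invs = rcu_dereference_protected(domain->invs,
			lockdep_is_held(&domain->init_mutex));
	new_invs = arm_smmu_invs_merge(old_invs, to_merge);
	if (IS_ERR(new_invs))
		return PTR_ERR(new_invs);

	rcu_assign_pointer(domain->invs, new_invs);
	kfree_rcu(old_invs, rcu);
	return 0;
}

/*
 * Hypothetical detach path: arm_smmu_invs_unref() never fails, and
 * purging is best effort -- on allocation failure the trash entries
 * simply stay until the next merge/purge call collects them.
 */
static void example_detach(struct arm_smmu_domain *domain,
			   struct arm_smmu_invs *to_unref,
			   void (*flush_fn)(struct arm_smmu_inv *inv))
{
	struct arm_smmu_invs *old_invs, *new_invs;
	size_t num_trashes;

	lockdep_assert_held(&domain->init_mutex);

	old_invs = rcu_dereference_protected(domain->invs,
			lockdep_is_held(&domain->init_mutex));
	num_trashes = arm_smmu_invs_unref(old_invs, to_unref, flush_fn);
	if (!num_trashes)
		return;

	new_invs = arm_smmu_invs_purge(old_invs, num_trashes);
	if (!new_invs)
		return;	/* trash remains; reaped by a later merge/purge */

	rcu_assign_pointer(domain->invs, new_invs);
	kfree_rcu(old_invs, rcu);
}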

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
  2025-10-27 12:06   ` kernel test robot
@ 2025-10-27 16:38     ` Nicolin Chen
  0 siblings, 0 replies; 20+ messages in thread
From: Nicolin Chen @ 2025-10-27 16:38 UTC (permalink / raw)
  To: kernel test robot
  Cc: will, jgg, oe-kbuild-all, jean-philippe, robin.murphy, joro,
	balbirs, miko.lenczewski, peterz, kevin.tian, praan,
	linux-arm-kernel, iommu, linux-kernel

On Mon, Oct 27, 2025 at 08:06:03PM +0800, kernel test robot wrote:
> >> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1062:23: error: static declaration of 'arm_smmu_invs_merge' follows non-static declaration
>     1062 | struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
>          |                       ^~~~~~~~~~~~~~~~~~~
>    In file included from drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:34:
>    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h:731:23: note: previous declaration of 'arm_smmu_invs_merge' with type 'struct arm_smmu_invs *(struct arm_smmu_invs *, struct arm_smmu_invs *)'
>      731 | struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
>          |                       ^~~~~~~~~~~~~~~~~~~

These should be added under "#if IS_ENABLED(CONFIG_KUNIT)" in the
header.
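
That is because VISIBLE_IF_KUNIT (from <kunit/visibility.h>) expands to
nothing when CONFIG_KUNIT is enabled but to "static" otherwise, so the
unguarded prototypes in arm-smmu-v3.h conflict with the static
definitions. Roughly like this (a sketch of the intended change, not the
actual v4 hunk):

#if IS_ENABLED(CONFIG_KUNIT)
struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
					  struct arm_smmu_invs *to_merge);
size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
			   struct arm_smmu_invs *to_unref,
			   void (*flush_fn)(struct arm_smmu_inv *inv));
struct arm_smmu_invs *arm_smmu_invs_purge(struct arm_smmu_invs *invs,
					  size_t num_trashes);
#endif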

I will fix this and send a v4 today.

Nicolin

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2025-10-27 16:39 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-15 19:42 [PATCH v3 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
2025-10-15 19:42 ` [PATCH v3 1/7] iommu/arm-smmu-v3: Explicitly set smmu_domain->stage for SVA Nicolin Chen
2025-10-15 19:42 ` [PATCH v3 2/7] iommu/arm-smmu-v3: Add an inline arm_smmu_domain_free() Nicolin Chen
2025-10-15 19:42 ` [PATCH v3 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array Nicolin Chen
2025-10-16 19:18   ` kernel test robot
2025-10-16 21:31   ` Nicolin Chen
2025-10-17 13:47   ` kernel test robot
2025-10-17 20:12     ` Nicolin Chen
2025-10-20 12:10       ` Jason Gunthorpe
2025-10-20 19:16         ` Nicolin Chen
2025-10-27 12:06   ` kernel test robot
2025-10-27 16:38     ` Nicolin Chen
2025-10-15 19:42 ` [PATCH v3 4/7] iommu/arm-smmu-v3: Pre-allocate a per-master invalidation array Nicolin Chen
2025-10-15 19:42 ` [PATCH v3 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters Nicolin Chen
2025-10-17 16:03   ` kernel test robot
2025-10-17 21:11     ` Nicolin Chen
2025-10-20 12:12       ` Jason Gunthorpe
2025-10-20 19:05         ` Nicolin Chen
2025-10-15 19:42 ` [PATCH v3 6/7] iommu/arm-smmu-v3: Add arm_smmu_invs based arm_smmu_domain_inv_range() Nicolin Chen
2025-10-15 19:42 ` [PATCH v3 7/7] iommu/arm-smmu-v3: Perform per-domain invalidations using arm_smmu_invs Nicolin Chen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).