From: Will Deacon <will@kernel.org>
To: Nicolin Chen <nicolinc@nvidia.com>
Cc: jean-philippe@linaro.org, robin.murphy@arm.com, joro@8bytes.org,
	jgg@nvidia.com, balbirs@nvidia.com, miko.lenczewski@arm.com,
	peterz@infradead.org, kevin.tian@intel.com, praan@google.com,
	linux-arm-kernel@lists.infradead.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
Date: Mon, 24 Nov 2025 21:42:31 +0000
Message-ID: <aSTRRyTBh1nATwBa@willie-the-truck>
In-Reply-To: <eea7bffde13574e099212e3b3823a0f192d6aec3.1762588839.git.nicolinc@nvidia.com>

On Sat, Nov 08, 2025 at 12:08:04AM -0800, Nicolin Chen wrote:
> From: Jason Gunthorpe <jgg@nvidia.com>
> 
> Create a new data structure to hold an array of invalidations that need to
> be performed for the domain based on what masters are attached, to replace
> the single smmu pointer and linked list of masters in the current design.
> 
> Each array entry holds one of the invalidation actions - S1_ASID, S2_VMID,
> ATS or their variants - with the information needed to feed invalidation
> commands to HW.
> It is structured so that multiple SMMUs can participate in the same array,
> removing one key limitation of the current system.
> 
> To maximize performance, a sorted array is used as the data structure. It
> allows grouping SYNCs together to parallelize invalidations. For instance,
> it will group all the ATS entries after the ASID/VMID entry, so they will
> all be pushed to the PCI devices in parallel with one SYNC.
> 
> To minimize the locking cost on the invalidation fast path (reader of the
> invalidation array), the array is managed with RCU.
> 
> Provide a set of APIs to add/delete entries to/from an array, which cover
> cannot-fail attach cases, e.g. attaching to arm_smmu_blocked_domain. Also
> add kunit coverage for those APIs.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> Co-developed-by: Nicolin Chen <nicolinc@nvidia.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  91 +++++++
>  .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c  |  93 +++++++
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 248 ++++++++++++++++++
>  3 files changed, 432 insertions(+)
> 
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> index 96a23ca633cb6..757158b9ea655 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> @@ -649,6 +649,85 @@ struct arm_smmu_cmdq_batch {
>  	int				num;
>  };
>  
> +/*
> + * The order here also determines the sequence in which commands are sent to the
> + * command queue. E.g. TLBI must be done before ATC_INV.
> + */
> +enum arm_smmu_inv_type {
> +	INV_TYPE_S1_ASID,
> +	INV_TYPE_S2_VMID,
> +	INV_TYPE_S2_VMID_S1_CLEAR,
> +	INV_TYPE_ATS,
> +	INV_TYPE_ATS_FULL,
> +};
> +
> +struct arm_smmu_inv {
> +	struct arm_smmu_device *smmu;
> +	u8 type;
> +	u8 size_opcode;
> +	u8 nsize_opcode;
> +	u32 id; /* ASID or VMID or SID */
> +	union {
> +		size_t pgsize; /* ARM_SMMU_FEAT_RANGE_INV */
> +		u32 ssid; /* INV_TYPE_ATS */
> +	};
> +
> +	refcount_t users; /* users=0 marks the entry as trash to be purged */
> +};
> +
> +static inline bool arm_smmu_inv_is_ats(struct arm_smmu_inv *inv)
> +{
> +	return inv->type == INV_TYPE_ATS || inv->type == INV_TYPE_ATS_FULL;
> +}
> +
> +/**
> + * struct arm_smmu_invs - Per-domain invalidation array
> + * @num_invs: number of invalidations in the flexible array
> + * @rwlock: optional rwlock to fence ATS operations
> + * @has_ats: flag if the array contains an INV_TYPE_ATS or INV_TYPE_ATS_FULL
> + * @rcu: rcu head for kfree_rcu()
> + * @inv: flexible invalidation array
> + *
> + * struct arm_smmu_invs is an RCU-protected data structure. During a
> + * ->attach_dev callback, arm_smmu_invs_merge(), arm_smmu_invs_unref() and
> + * arm_smmu_invs_purge() are used to allocate new copies of the old domain's
> + * and the new domain's invs arrays, with entries added or deleted as needed.
> + *
> + * arm_smmu_invs_unref() mutates a given array in place, by internally reducing
> + * the users counts of the given entries. This exists to support no-fail
> + * routines such as attaching to an IOMMU_DOMAIN_BLOCKED domain, and it can be
> + * paired with a follow-up arm_smmu_invs_purge() call to generate a new clean
> + * array.
> + *
> + * Concurrent invalidation threads will push every invalidation described in
> + * the array into the command queue for each invalidation event. It is designed
> + * like this to optimize the invalidation fast path by avoiding locks.
> + *
> + * A domain can be shared across SMMU instances. When an instance gets removed,
> + * it would delete all the entries that belong to that SMMU instance. Then, a
> + * synchronize_rcu() would have to be called to sync the array, to prevent any
> + * concurrent invalidation thread accessing the old array from issuing commands
> + * to the command queue of a removed SMMU instance.
> + */
> +struct arm_smmu_invs {
> +	size_t num_invs;
> +	rwlock_t rwlock;
> +	bool has_ats;
> +	struct rcu_head rcu;
> +	struct arm_smmu_inv inv[];
> +};

Can you use __counted_by(num_invs) here?
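
i.e. something like this (untested):

struct arm_smmu_invs {
	size_t num_invs;
	rwlock_t rwlock;
	bool has_ats;
	struct rcu_head rcu;
	struct arm_smmu_inv inv[] __counted_by(num_invs);
};

num_invs just needs to be assigned before inv[] is indexed, which already
looks to be the case in arm_smmu_invs_alloc().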

> +
> +static inline struct arm_smmu_invs *arm_smmu_invs_alloc(size_t num_invs)
> +{
> +	struct arm_smmu_invs *new_invs;
> +
> +	new_invs = kzalloc(struct_size(new_invs, inv, num_invs), GFP_KERNEL);
> +	if (!new_invs)
> +		return ERR_PTR(-ENOMEM);

Just return NULL on failure like most allocator functions?
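
i.e. something like this, with callers then checking for NULL instead of
using IS_ERR() (untested):

static inline struct arm_smmu_invs *arm_smmu_invs_alloc(size_t num_invs)
{
	struct arm_smmu_invs *new_invs;

	new_invs = kzalloc(struct_size(new_invs, inv, num_invs), GFP_KERNEL);
	if (!new_invs)
		return NULL;

	rwlock_init(&new_invs->rwlock);
	new_invs->num_invs = num_invs;
	return new_invs;
}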

> +	rwlock_init(&new_invs->rwlock);
> +	new_invs->num_invs = num_invs;
> +	return new_invs;
> +}
> +

[...]

> +VISIBLE_IF_KUNIT
> +struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
> +					  struct arm_smmu_invs *to_merge)
> +{
> +	struct arm_smmu_invs *new_invs;
> +	struct arm_smmu_inv *new;
> +	size_t num_trashes = 0;
> +	size_t num_adds = 0;
> +	size_t i, j;
> +
> +	for (i = j = 0; i < invs->num_invs || j < to_merge->num_invs;) {

Maybe worth having a simple iterator macro for this?
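
Untested, and the name is made up, but perhaps something along the lines of:

/* Walk two sorted invs arrays in lockstep; the loop body advances i and j. */
#define for_each_invs_pair(i, j, invs, to_merge)			\
	for ((i) = 0, (j) = 0;						\
	     (i) < (invs)->num_invs || (j) < (to_merge)->num_invs;)

which would at least keep the two walks in arm_smmu_invs_merge() from
drifting apart.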

> +		int cmp;
> +
> +		/* Skip any trash entry */
> +		if (i < invs->num_invs && !refcount_read(&invs->inv[i].users)) {
> +			num_trashes++;
> +			i++;
> +			continue;
> +		}
> +
> +		cmp = arm_smmu_invs_cmp(invs, i, to_merge, j);
> +		if (cmp < 0) {
> +			/* not found in to_merge, leave alone */
> +			i++;
> +		} else if (cmp == 0) {
> +			/* same item */
> +			i++;
> +			j++;
> +		} else {
> +			/* unique to to_merge */
> +			num_adds++;
> +			j++;
> +		}
> +	}
> +
> +	new_invs = arm_smmu_invs_alloc(invs->num_invs - num_trashes + num_adds);
> +	if (IS_ERR(new_invs))
> +		return new_invs;
> +
> +	new = new_invs->inv;
> +	for (i = j = 0; i < invs->num_invs || j < to_merge->num_invs;) {
> +		int cmp;
> +
> +		if (i < invs->num_invs && !refcount_read(&invs->inv[i].users)) {
> +			i++;
> +			continue;
> +		}
> +
> +		cmp = arm_smmu_invs_cmp(invs, i, to_merge, j);
> +		if (cmp < 0) {
> +			*new = invs->inv[i];
> +			i++;
> +		} else if (cmp == 0) {
> +			*new = invs->inv[i];
> +			refcount_inc(&new->users);
> +			i++;
> +			j++;
> +		} else {
> +			*new = to_merge->inv[j];
> +			refcount_set(&new->users, 1);
> +			j++;
> +		}
> +
> +		/*
> +		 * Check that the new array is sorted. This also validates that
> +		 * to_merge is sorted.
> +		 */
> +		if (new != new_invs->inv)
> +			WARN_ON_ONCE(arm_smmu_inv_cmp(new - 1, new) == 1);
> +		new++;
> +	}
> +
> +	WARN_ON(new != new_invs->inv + new_invs->num_invs);
> +
> +	return new_invs;
> +}
> +EXPORT_SYMBOL_IF_KUNIT(arm_smmu_invs_merge);

There's nothing really SMMU-specific about this data structure manipulation.
Do you think we can abstract the invalidation array concept into a library
which other IOMMU drivers could use too?
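
Hand-waving, and all of the names below are invented, but I'm imagining a
generic refcounted, sorted array where the driver supplies the entry payload
and the comparator, e.g.:

/* Purely illustrative; nothing like this exists today. */
struct inv_entry {
	refcount_t	users;
	/* driver-specific payload follows */
};

typedef int (*inv_cmp_t)(const struct inv_entry *l, const struct inv_entry *r);

struct inv_array {
	size_t		num_entries;
	size_t		entry_size;
	inv_cmp_t	cmp;
	struct rcu_head	rcu;
	u8		entries[];
};

struct inv_array *inv_array_merge(struct inv_array *arr,
				  struct inv_array *to_merge);
void inv_array_unref(struct inv_array *arr, struct inv_array *to_unref);
struct inv_array *inv_array_purge(struct inv_array *arr);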

Will


Thread overview: 25+ messages
2025-11-08  8:08 [PATCH v5 0/7] iommu/arm-smmu-v3: Introduce an RCU-protected invalidation array Nicolin Chen
2025-11-08  8:08 ` [PATCH v5 1/7] iommu/arm-smmu-v3: Explicitly set smmu_domain->stage for SVA Nicolin Chen
2025-11-08  8:08 ` [PATCH v5 2/7] iommu/arm-smmu-v3: Add an inline arm_smmu_domain_free() Nicolin Chen
2025-11-08  8:08 ` [PATCH v5 3/7] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array Nicolin Chen
2025-11-24 21:42   ` Will Deacon [this message]
2025-11-24 22:41     ` Nicolin Chen
2025-11-24 23:03       ` Jason Gunthorpe
2025-11-26  1:07         ` Nicolin Chen
2025-11-25  4:14     ` Nicolin Chen
2025-11-25 13:43       ` Jason Gunthorpe
2025-11-25 16:20         ` Nicolin Chen
2025-11-08  8:08 ` [PATCH v5 4/7] iommu/arm-smmu-v3: Pre-allocate a per-master invalidation array Nicolin Chen
2025-11-24 21:42   ` Will Deacon
2025-11-24 22:43     ` Nicolin Chen
2025-11-24 23:08       ` Jason Gunthorpe
2025-11-24 23:31         ` Nicolin Chen
2025-11-25  7:43           ` Nicolin Chen
2025-11-25 13:07           ` Jason Gunthorpe
2025-11-08  8:08 ` [PATCH v5 5/7] iommu/arm-smmu-v3: Populate smmu_domain->invs when attaching masters Nicolin Chen
2025-11-24 21:43   ` Will Deacon
2025-11-24 23:13     ` Jason Gunthorpe
2025-11-24 23:19       ` Nicolin Chen
2025-11-26  0:56       ` Nicolin Chen
2025-11-08  8:08 ` [PATCH v5 6/7] iommu/arm-smmu-v3: Add arm_smmu_invs based arm_smmu_domain_inv_range() Nicolin Chen
2025-11-08  8:08 ` [PATCH v5 7/7] iommu/arm-smmu-v3: Perform per-domain invalidations using arm_smmu_invs Nicolin Chen
