From: Jason Gunthorpe <jgg@nvidia.com>
To: Nicolin Chen <nicolinc@nvidia.com>
Cc: will@kernel.org, robin.murphy@arm.com, joro@8bytes.org,
jean-philippe@linaro.org, miko.lenczewski@arm.com,
balbirs@nvidia.com, peterz@infradead.org, smostafa@google.com,
kevin.tian@intel.com, praan@google.com,
linux-arm-kernel@lists.infradead.org, iommu@lists.linux.dev,
linux-kernel@vger.kernel.org, patches@lists.linux.dev
Subject: Re: [PATCH rfcv2 4/8] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
Date: Wed, 24 Sep 2025 18:29:12 -0300
Message-ID: <20250924212912.GP2617119@nvidia.com>
In-Reply-To: <80310b98efa4bd7e95d7b3ca302f40d4d69e59c5.1757373449.git.nicolinc@nvidia.com>
On Mon, Sep 08, 2025 at 04:26:58PM -0700, Nicolin Chen wrote:
> +/**
> + * arm_smmu_invs_merge() - Merge @to_merge into @invs and generate a new array
> + * @invs: the base invalidation array
> + * @to_merge: an array of invlidations to merge
> + *
> + * Return: a newly allocated array on success, or ERR_PTR
> + *
> + * This function must be locked and serialized with arm_smmu_invs_unref() and
> + * arm_smmu_invs_purge(), but do not lockdep on any lock for KUNIT test.
> + *
> + * Either @invs or @to_merge must be sorted itself. This ensures the returned
s/Either/Both/
A merge sort like this requires both lists to be sorted.
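To spell the invariant out with a toy two-pointer merge (nothing SMMU-specific,
just the generic pattern):

	while (i < na && j < nb) {
		/*
		 * Each step consumes the smaller head element, so the output
		 * is sorted only if both inputs already are. With an unsorted
		 * input such as a = {3, 1}, the walk never looks back and the
		 * result comes out unsorted.
		 */
		if (a[i] <= b[j])
			out[k++] = a[i++];
		else
			out[k++] = b[j++];
	}
	while (i < na)
		out[k++] = a[i++];
	while (j < nb)
		out[k++] = b[j++];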
> +struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
> + struct arm_smmu_invs *to_merge)
> +{
> + struct arm_smmu_invs *new_invs;
> + struct arm_smmu_inv *new;
> + size_t num_adds = 0;
> + size_t num_dels = 0;
> + size_t i, j;
> +
> + for (i = j = 0; i != invs->num_invs || j != to_merge->num_invs;) {
> + int cmp = arm_smmu_invs_merge_cmp(invs, i, to_merge, j);
> +
> + if (cmp < 0) {
> + /* no found in to_merge, leave alone but delete trash */
s/no/not/
> + if (!refcount_read(&invs->inv[i].users))
> + num_dels++;
> + i++;
The handling of users should be consistent across all of these merge-sort
loops. The one below in unref is the best one:
+ int cmp;
+
+ if (!refcount_read(&invs->inv[i].users)) {
+ num_dels++;
+ i++;
+ continue;
+ }
+
+ cmp = arm_smmu_invs_merge_cmp(invs, i, to_unref, j);
Make all of these loops look like that
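i.e. something along these lines for the counting pass in merge (untested
sketch, just reshuffling the branch order; the i != invs->num_invs guard is
my addition, since the hoisted users check would otherwise read past the end
of @invs once it is exhausted):

	for (i = j = 0; i != invs->num_invs || j != to_merge->num_invs;) {
		int cmp;

		/* drop dead entries from the base array before comparing */
		if (i != invs->num_invs &&
		    !refcount_read(&invs->inv[i].users)) {
			num_dels++;
			i++;
			continue;
		}

		cmp = arm_smmu_invs_merge_cmp(invs, i, to_merge, j);
		if (cmp < 0) {
			/* not found in to_merge, keep the entry as-is */
			i++;
		}
		/* cmp == 0 and cmp > 0 handled as before */
	}

One behavioural difference to double check: a dead entry that also shows up
in @to_merge would now be dropped and re-added instead of being reused in
place.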
> +
> + WARN_ON(new != new_invs->inv + new_invs->num_invs);
> +
> + return new_invs;
A debugging check that the output list is sorted would be a nice touch
for robustness.
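For example, at the end of the functions that build a new array, something
like this (sketch only, perhaps under a debug kconfig; I'm reusing
arm_smmu_invs_merge_cmp() as the element comparator since I don't know if a
dedicated one exists):

	/* debug check: the generated array must come out sorted */
	for (i = 1; i < new_invs->num_invs; i++)
		WARN_ON(arm_smmu_invs_merge_cmp(new_invs, i - 1,
						new_invs, i) > 0);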
I think this looks OK and has turned out to be pretty simple.
I've been thinking about generalizing it into core code, and I think it would
hold up well there too?
Jason