From: Nicolin Chen <nicolinc@nvidia.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: <will@kernel.org>, <robin.murphy@arm.com>, <joro@8bytes.org>,
<jean-philippe@linaro.org>, <miko.lenczewski@arm.com>,
<balbirs@nvidia.com>, <peterz@infradead.org>,
<smostafa@google.com>, <kevin.tian@intel.com>, <praan@google.com>,
<zhangzekun11@huawei.com>, <linux-arm-kernel@lists.infradead.org>,
<iommu@lists.linux.dev>, <linux-kernel@vger.kernel.org>,
<patches@lists.linux.dev>
Subject: Re: [PATCH rfcv1 4/8] iommu/arm-smmu-v3: Introduce a per-domain arm_smmu_invs array
Date: Sat, 6 Sep 2025 01:16:45 -0700
Message-ID: <aLvt7WBgvVsAD7wC@nvidia.com>
In-Reply-To: <20250827200002.GD2206304@nvidia.com>
On Wed, Aug 27, 2025 at 05:00:02PM -0300, Jason Gunthorpe wrote:
> On Wed, Aug 13, 2025 at 06:25:35PM -0700, Nicolin Chen wrote:
> > +struct arm_smmu_invs *arm_smmu_invs_add(struct arm_smmu_invs *old_invs,
> > +					struct arm_smmu_invs *add_invs)
> > +{
>
> It turns out it is fairly easy and cheap to sort add_invs by sorting
> the ids during probe:
I have integrated this and also renamed these three helpers:
+struct arm_smmu_invs *arm_smmu_invs_merge(struct arm_smmu_invs *invs,
+					   struct arm_smmu_invs *to_merge);
+size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
+			   struct arm_smmu_invs *to_unref);
+struct arm_smmu_invs *arm_smmu_invs_purge(struct arm_smmu_invs *invs,
+					  size_t num_dels);
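For context, here is roughly how I expect the three helpers to be called
in the attach/detach paths. This is only a sketch against my WIP tree:
the smmu_domain->invs pointer, the smmu_domain->lock it is updated
under, and the rcu head embedded in struct arm_smmu_invs are all
assumptions on my side and may still change:

	/* Attach: build a merged copy and publish it over RCU */
	old_invs = rcu_dereference_protected(smmu_domain->invs,
					     lockdep_is_held(&smmu_domain->lock));
	new_invs = arm_smmu_invs_merge(old_invs, master->invs);
	if (IS_ERR(new_invs))
		return PTR_ERR(new_invs);
	rcu_assign_pointer(smmu_domain->invs, new_invs);
	kfree_rcu(old_invs, rcu);

	/* Detach: drop users in place; compact only if something hit zero */
	old_invs = rcu_dereference_protected(smmu_domain->invs,
					     lockdep_is_held(&smmu_domain->lock));
	num_dels = arm_smmu_invs_unref(old_invs, master->invs);
	if (num_dels) {
		new_invs = arm_smmu_invs_purge(old_invs, num_dels);
		if (!IS_ERR(new_invs)) {
			rcu_assign_pointer(smmu_domain->invs, new_invs);
			kfree_rcu(old_invs, rcu);
		}
		/* on allocation failure the zero-user entries just stay */
	}

One nice property of the unref/purge split: unref never allocates, so
the detach path itself cannot fail on memory, and a purge failure can
fall back to leaving the dead entries in place.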
> @@ -3983,6 +3989,14 @@ static int arm_smmu_init_sid_strtab(struct arm_smmu_device *smmu, u32 sid)
>  	return 0;
>  }
>
> +static int arm_smmu_ids_cmp(const void *_l, const void *_r)
> +{
> +	const typeof_member(struct iommu_fwspec, ids[0]) *l = _l;
> +	const typeof_member(struct iommu_fwspec, ids[0]) *r = _r;
> +
> +	return cmp_int(*l, *r);
> +}
> +
>  static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
>  				  struct arm_smmu_master *master)
>  {
> @@ -4011,6 +4025,13 @@ static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
>  		return PTR_ERR(master->invs);
>  	}
>
> +	/*
> +	 * Put the ids into order so that arm_smmu_build_invs() can trivially
> +	 * generate sorted lists.
> +	 */
> +	sort_nonatomic(fwspec->ids, fwspec->num_ids, sizeof(fwspec->ids[0]),
> +		       arm_smmu_ids_cmp, NULL);
> +
>  	mutex_lock(&smmu->streams_mutex);
>  	for (i = 0; i < fwspec->num_ids; i++) {
>  		struct arm_smmu_stream *new_stream = &master->streams[i];
> Then arm_smmu_build_invs() trivially makes sorted lists.
>
> So if old_invs and add_invs are both sorted lists, we can use a
> variation on the classic merge of two sorted lists, which is both
> simpler to understand and faster:
>
> /*
>  * Merge compare of two sorted list items. If one side is past the end of
>  * its list, return the other side to let it run out the iteration.
>  */
> static inline int arm_smmu_invs_merge_cmp(const struct arm_smmu_invs *lhs,
> 					  size_t lhs_idx,
> 					  const struct arm_smmu_invs *rhs,
> 					  size_t rhs_idx)
> {
> 	if (lhs_idx != lhs->num_invs && rhs_idx != rhs->num_invs)
> 		return arm_smmu_invs_cmp(&lhs->inv[lhs_idx],
> 					 &rhs->inv[rhs_idx]);
> 	if (lhs_idx != lhs->num_invs)
> 		return -1;
> 	return 1;
> }
>
> struct arm_smmu_invs *arm_smmu_invs_add(struct arm_smmu_invs *invs,
> 					struct arm_smmu_invs *add_invs)
> {
> 	struct arm_smmu_invs *new_invs;
> 	struct arm_smmu_inv *new;
> 	size_t to_add = 0;
> 	size_t to_del = 0;
> 	size_t i, j;
>
> 	for (i = 0, j = 0; i != invs->num_invs || j != add_invs->num_invs;) {
> 		int cmp = arm_smmu_invs_merge_cmp(invs, i, add_invs, j);
>
> 		if (cmp < 0) {
> 			/* not found in add_invs, kept unless it is dead */
> 			if (!refcount_read(&invs->inv[i].users))
> 				to_del++;
> 			i++;
> 		} else if (cmp == 0) {
> 			/* same item */
> 			i++;
> 			j++;
> 		} else {
> 			/* unique to add_invs */
> 			to_add++;
> 			j++;
> 		}
> 	}
>
> 	new_invs = arm_smmu_invs_alloc(invs->num_invs + to_add - to_del);
> 	if (IS_ERR(new_invs))
> 		return new_invs;
>
> 	new = new_invs->inv;
> 	for (i = 0, j = 0; i != invs->num_invs || j != add_invs->num_invs;) {
> 		int cmp = arm_smmu_invs_merge_cmp(invs, i, add_invs, j);
>
> 		/* skip dead entries; one also in add_invs is re-added fresh */
> 		if (cmp <= 0 && !refcount_read(&invs->inv[i].users)) {
> 			i++;
> 			continue;
> 		}
>
> 		if (cmp < 0) {
> 			*new = invs->inv[i];
> 			i++;
> 		} else if (cmp == 0) {
> 			*new = invs->inv[i];
> 			refcount_inc(&new->users);
> 			i++;
> 			j++;
> 		} else {
> 			*new = add_invs->inv[j];
> 			refcount_set(&new->users, 1);
> 			j++;
> 		}
> 		if (arm_smmu_inv_is_ats(new))
> 			new_invs->has_ats = true;
> 		new++;
> 	}
>
> 	WARN_ON(new != new_invs->inv + new_invs->num_invs);
>
> 	/*
> 	 * A sorted array allows batching invalidations together for fewer
> 	 * SYNCs. Also, ATS must follow the ASID/VMID invalidation SYNC.
> 	 */
> 	sort_nonatomic(new_invs->inv, new_invs->num_invs,
> 		       sizeof(new_invs->inv[0]), arm_smmu_invs_cmp, NULL);
> 	return new_invs;
> }
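
This merge handles a dead entry that reappears in add_invs nicely. A
quick trace with a made-up array, writing each entry as id:users:
merging invs = {A:2, B:0, C:1} with add_invs = {B, D} counts to_add=1
and to_del=0 in the first pass (B compares equal, so it is not counted
as a deletion), sizing the new array at 4. The second pass copies A and
C, skips the dead B and re-adds it from add_invs with users=1, and
appends D with users=1, ending with {A:2, B:1, C:1, D:1} as expected.
(One small fix folded in above: the first pass has to advance i in the
dead-entry case too, or it never terminates.)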
>
> size_t arm_smmu_invs_dec(struct arm_smmu_invs *invs,
> 			 struct arm_smmu_invs *dec_invs)
> {
> 	size_t to_del = 0;
> 	size_t i, j;
>
> 	for (i = 0, j = 0; i != invs->num_invs || j != dec_invs->num_invs;) {
> 		int cmp = arm_smmu_invs_merge_cmp(invs, i, dec_invs, j);
>
> 		if (cmp < 0) {
> 			/* not found in dec_invs, leave alone */
> 			i++;
> 		} else if (cmp == 0) {
> 			/* same item */
> 			if (refcount_dec_and_test(&invs->inv[i].users)) {
> 				dec_invs->inv[j].todel = true;
> 				to_del++;
> 			}
> 			i++;
> 			j++;
> 		} else {
> 			/* item in dec_invs is not in invs? */
> 			WARN_ON(true);
> 			j++;
> 		}
> 	}
> 	return to_del;
> }
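
For completeness, below is roughly what I have for the purge step that
pairs with this. A sketch from my WIP only: it assumes num_dels is the
value returned by arm_smmu_invs_unref() and that the entries to drop
are exactly the ones whose users count reached zero:

struct arm_smmu_invs *arm_smmu_invs_purge(struct arm_smmu_invs *invs,
					  size_t num_dels)
{
	struct arm_smmu_invs *new_invs;
	struct arm_smmu_inv *new;
	size_t i;

	new_invs = arm_smmu_invs_alloc(invs->num_invs - num_dels);
	if (IS_ERR(new_invs))
		return new_invs;

	new = new_invs->inv;
	for (i = 0; i != invs->num_invs; i++) {
		/* drop entries whose users count reached zero */
		if (!refcount_read(&invs->inv[i].users))
			continue;
		*new = invs->inv[i];
		if (arm_smmu_inv_is_ats(new))
			new_invs->has_ats = true;
		new++;
	}
	WARN_ON(new != new_invs->inv + new_invs->num_invs);
	return new_invs;
}

Since the input array is already sorted and purge only removes entries,
the result stays sorted with no extra sort_nonatomic() pass.

Thanks!
Nicolin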