From: Jacob Pan <jacob.pan@linux.microsoft.com>
To: Nicolin Chen <nicolinc@nvidia.com>
Cc: <linux-kernel@vger.kernel.org>,
"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
Will Deacon <will@kernel.org>, Jason Gunthorpe <jgg@nvidia.com>,
Robin Murphy <robin.murphy@arm.com>,
Zhang Yu <zhangyu1@linux.microsoft.com>,
Jean-Philippe Brucker <jean-philippe@linaro.org>,
Alexander Grest <Alexander.Grest@microsoft.com>
Subject: Re: [PATCH 2/2] iommu/arm-smmu-v3: Improve CMDQ lock fairness and efficiency
Date: Tue, 7 Oct 2025 11:16:06 -0700
Message-ID: <20251007111606.00005849@linux.microsoft.com>
In-Reply-To: <aORn/vKfVL88q05w@nvidia.com>
On Mon, 6 Oct 2025 18:08:14 -0700
Nicolin Chen <nicolinc@nvidia.com> wrote:
> On Wed, Sep 24, 2025 at 10:54:38AM -0700, Jacob Pan wrote:
> > static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
> > {
> > -	int val;
> > -
> > 	/*
> > -	 * We can try to avoid the cmpxchg() loop by simply incrementing the
> > -	 * lock counter. When held in exclusive state, the lock counter is set
> > -	 * to INT_MIN so these increments won't hurt as the value will remain
> > -	 * negative.
> > +	 * We can simply increment the lock counter. When held in exclusive
> > +	 * state, the lock counter is set to INT_MIN so these increments won't
> > +	 * hurt as the value will remain negative.
>
> It seems to me that the change at the first statement is not very
> necessary.
>
I can delete "We can simply increment the lock counter." since it is
obvious. But dropping the mention of the cmpxchg() loop from the comment
matches the code change that follows.
> > +	 * This will also signal the exclusive locker that there are shared
> > +	 * waiters. Once the exclusive locker releases the lock, the sign bit
> > +	 * will be cleared and our increment will make the lock counter
> > +	 * positive, allowing us to proceed.
> > 	 */
> > if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
> > return;
> >
> > -	do {
> > -		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
> > -	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
> > +	atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
>
> The returned value is not captured for anything. Is this read()
> necessary? If so, a line of comments elaborating it?
We don't need the return value. How about this explanation?
/*
* Someone else is holding the lock in exclusive state, so wait
* for them to finish. Since we already incremented the lock counter,
* no exclusive lock can be acquired until we finish. We don't need
* the return value since we only care that the exclusive lock is
* released (i.e. the lock counter is non-negative).
*/
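For context, here is roughly how the whole function would read with the
patch applied and the comment above folded in. This is just a sketch
assembled from the hunks quoted in this thread, not the exact code I
will post:

	static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
	{
		/*
		 * Increment the lock counter. When held in exclusive state
		 * the counter is INT_MIN, so the increment keeps the value
		 * negative and also signals the exclusive locker that
		 * shared waiters exist.
		 */
		if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
			return;

		/*
		 * Wait for the exclusive locker to clear the sign bit. Our
		 * increment is already accounted for, so the counter then
		 * turns positive and we hold the shared lock; the return
		 * value of the read is unused.
		 */
		atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
	}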
> > +/*
> > + * Only clear the sign bit when releasing the exclusive lock. This
> > + * will allow any shared_lock() waiters to proceed without the
> > + * possibility of entering the exclusive lock in a tight loop.
> > + */
> > #define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)	\
> > ({									\
> > -	atomic_set_release(&cmdq->lock, 0);				\
> > +	atomic_fetch_and_release(~INT_MIN, &cmdq->lock);		\
>
> By a quick skim, the whole thing looks quite smart to me. But I
> need some time to revisit and perhaps test it as well.
>
> It's also important to get feedback from Will. Both patches touch
> code of his that has been running for years already.
Definitely; your review is much appreciated. I think part of the reason
this has gone unnoticed is that the cmdq is usually quite large, so a
full queue is a rare case.
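To make the fairness point concrete: clearing only the sign bit preserves
the increments made by shared waiters while the exclusive lock was held,
so they own the lock the moment it is released instead of racing a fresh
exclusive acquisition. A toy user-space program (plain C, not the kernel
code) showing the counter arithmetic:

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		int lock = INT_MIN;	/* exclusive lock held */

		lock += 2;		/* two shared waiters increment */
		/* value is still negative, so both waiters keep spinning */

		lock &= ~INT_MIN;	/* release: clear only the sign bit */
		printf("%d\n", lock);	/* prints 2: waiter increments survive */
		return 0;
	}

With the old atomic_set_release(&cmdq->lock, 0), those increments would
be discarded, which is why the old shared_lock() needed the cmpxchg()
loop to re-add them after each release.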