linux-kernel.vger.kernel.org archive mirror
From: Jacob Pan <jacob.pan@linux.microsoft.com>
To: Mostafa Saleh <smostafa@google.com>
Cc: linux-kernel@vger.kernel.org,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	Will Deacon <will@kernel.org>, Jason Gunthorpe <jgg@nvidia.com>,
	Robin Murphy <robin.murphy@arm.com>,
	Nicolin Chen <nicolinc@nvidia.com>,
	Zhang Yu <zhangyu1@linux.microsoft.com>,
	Jean-Philippe Brucker <jean-philippe@linaro.org>,
	Alexander Grest <Alexander.Grest@microsoft.com>
Subject: Re: [PATCH 2/2] iommu/arm-smmu-v3: Improve CMDQ lock fairness and efficiency
Date: Sat, 18 Oct 2025 22:32:28 -0700	[thread overview]
Message-ID: <20251018223228.00005eff@linux.microsoft.com> (raw)
In-Reply-To: <aPIiuLj9c4IJlmIn@google.com>

On Fri, 17 Oct 2025 11:04:24 +0000
Mostafa Saleh <smostafa@google.com> wrote:

> On Wed, Sep 24, 2025 at 10:54:38AM -0700, Jacob Pan wrote:
> > From: Alexander Grest <Alexander.Grest@microsoft.com>
> > 
> > The SMMU CMDQ lock is highly contentious when multiple CPUs issue
> > commands on an architecture with a small queue size, e.g. 256
> > entries.
> > 
> > The lock has the following states:
> >  - 0:		Unlocked
> >  - >0:		Shared lock held with count
> >  - INT_MIN+N:	Exclusive lock held, where N is the # of shared waiters
> >  - INT_MIN:	Exclusive lock held, no shared waiters
> > 
> > When multiple CPUs are polling for space in the queue, they attempt
> > to grab the exclusive lock to update the cons pointer from the
> > hardware. If they fail to get the lock, they spin until the cons
> > pointer is updated by another CPU.
> > 
> > The current code allows the possibility of shared lock starvation
> > if there is a constant stream of CPUs trying to grab the exclusive
> > lock. This leads to severe latency issues and soft lockups.
> > 
> > To mitigate this, release the exclusive lock by clearing only the
> > sign bit while retaining the shared lock waiter count, so that the
> > shared lock waiters are not starved.
> > 
> > Also delete the cmpxchg() loop used to acquire the shared lock, as
> > it is not needed: the waiters can see the positive lock count and
> > proceed immediately after the exclusive lock is released.
> > 
> > The exclusive lock is not starved either, since submitters try the
> > exclusive lock first whenever new space becomes available.
> > 
> > In a staged test where 32 CPUs issue SVA invalidations
> > simultaneously on a system with a 256-entry queue, the madvise
> > (MADV_DONTNEED) latency dropped by 50% with this patch, with no
> > soft lockups observed.
> > 
> > Signed-off-by: Alexander Grest <Alexander.Grest@microsoft.com>
> > Signed-off-by: Jacob Pan <jacob.pan@linux.microsoft.com>
> > ---
> >  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 24 ++++++++++++++----------
> >  1 file changed, 14 insertions(+), 10 deletions(-)
> > 
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > index 9b63525c13bb..9b7c01b731df 100644
> > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > @@ -481,20 +481,19 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
> >   */
> >  static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
> >  {
> > -	int val;
> > -
> >  	/*
> > -	 * We can try to avoid the cmpxchg() loop by simply incrementing the
> > -	 * lock counter. When held in exclusive state, the lock counter is set
> > -	 * to INT_MIN so these increments won't hurt as the value will remain
> > -	 * negative.
> > +	 * We can simply increment the lock counter. When held in exclusive
> > +	 * state, the lock counter is set to INT_MIN so these increments won't
> > +	 * hurt as the value will remain negative. This will also signal the
> > +	 * exclusive locker that there are shared waiters. Once the exclusive
> > +	 * locker releases the lock, the sign bit will be cleared and our
> > +	 * increment will make the lock counter positive, allowing us to
> > +	 * proceed.
> >  	 */
> >  	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
> >  		return;
> >  
> > -	do {
> > -		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
> > -	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
> > +	atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
> 
> I think that should be "VAL > 0", as it is guaranteed that we hold
> the shared lock at this point.
> 
Indeed, will do.

Though there is no functional difference: since we have already done the
increment, VAL can never be 0 by the time we reach this line.
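
To make that concrete, this is roughly what the shared lock path looks
like with the change folded in (a sketch against this patch, using the
VAL > 0 condition you suggested, not a verbatim copy of the file):

static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
{
	/*
	 * Our increment is already in the counter at this point, even if
	 * the exclusive holder owns the lock (INT_MIN + N). Once the
	 * releaser clears the sign bit, the counter is therefore at
	 * least 1, so waiting for VAL > 0 is sufficient.
	 */
	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
		return;

	atomic_cond_read_relaxed(&cmdq->lock, VAL > 0);
}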

> Otherwise,
> Reviewed-by: Mostafa Saleh <smostafa@google.com>
> 
> Thanks,
> Mostafa
> 
> >  }
> >  
> >  static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
> > @@ -521,9 +520,14 @@ static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
> >  	__ret;							\
> >  })
> >  
> > +/*
> > + * Only clear the sign bit when releasing the exclusive lock this will
> > + * allow any shared_lock() waiters to proceed without the possibility
> > + * of entering the exclusive lock in a tight loop.
> > + */
> >  #define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
> >  ({									\
> > -	atomic_set_release(&cmdq->lock, 0);				\
> > +	atomic_fetch_and_release(~INT_MIN, &cmdq->lock);		\
> >  	local_irq_restore(flags);					\
> >  })
> >  
> > -- 
> > 2.43.0
> >   
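
As a closing note on the fairness argument: after this patch the release
side only clears the sign bit, so the waiter count that shared_lock()
callers have already accumulated survives the unlock. A rough sketch of
the resulting release path (taken from the hunk above, with an added
comment, not the full file):

/*
 * Clearing only INT_MIN leaves a positive counter behind whenever N
 * shared waiters have incremented it, so they proceed immediately
 * instead of racing for the exclusive lock again.
 */
#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
({									\
	atomic_fetch_and_release(~INT_MIN, &cmdq->lock);		\
	local_irq_restore(flags);					\
})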


Thread overview: 20+ messages
2025-09-24 17:54 [PATCH 0/2] SMMU v3 CMDQ fix and improvement Jacob Pan
2025-09-24 17:54 ` [PATCH 1/2] iommu/arm-smmu-v3: Fix CMDQ timeout warning Jacob Pan
2025-10-07  0:44   ` Nicolin Chen
2025-10-07 16:12     ` Jacob Pan
2025-10-07 16:32       ` Nicolin Chen
2025-09-24 17:54 ` [PATCH 2/2] iommu/arm-smmu-v3: Improve CMDQ lock fairness and efficiency Jacob Pan
2025-10-07  1:08   ` Nicolin Chen
2025-10-07 18:16     ` Jacob Pan
2025-10-17 11:04   ` Mostafa Saleh
2025-10-19  5:32     ` Jacob Pan [this message]
2025-10-06 15:14 ` [PATCH 0/2] SMMU v3 CMDQ fix and improvement Jacob Pan
2025-10-16 15:31 ` Jacob Pan
2025-10-17 10:57 ` Mostafa Saleh
2025-10-17 13:51   ` Jason Gunthorpe
2025-10-17 14:44     ` Robin Murphy
2025-10-17 16:50     ` Jacob Pan
2025-10-20 12:02       ` Jason Gunthorpe
2025-10-20 18:57         ` Jacob Pan
2025-10-21 11:45           ` Robin Murphy
2025-10-21 20:37             ` Jacob Pan
