From: Jacob Pan <jacob.pan@linux.microsoft.com>
To: Nicolin Chen <nicolinc@nvidia.com>
Cc: <linux-kernel@vger.kernel.org>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	Will Deacon <will@kernel.org>, Joerg Roedel <joro@8bytes.org>,
	Mostafa Saleh <smostafa@google.com>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Robin Murphy <robin.murphy@arm.com>,
	Zhang Yu <zhangyu1@linux.microsoft.com>,
	Jean-Philippe Brucker <jean-philippe@linaro.org>,
	Alexander Grest <Alexander.Grest@microsoft.com>
Subject: Re: [PATCH v2 2/2] iommu/arm-smmu-v3: Improve CMDQ lock fairness and efficiency
Date: Mon, 3 Nov 2025 17:08:48 -0800	[thread overview]
Message-ID: <20251103170848.000023aa@linux.microsoft.com> (raw)
In-Reply-To: <aQQYIkC9lmQnd27S@Asurada-Nvidia>

Hi Nicolin,

On Thu, 30 Oct 2025 19:00:02 -0700
Nicolin Chen <nicolinc@nvidia.com> wrote:

> On Mon, Oct 20, 2025 at 03:43:53PM -0700, Jacob Pan wrote:
> > From: Alexander Grest <Alexander.Grest@microsoft.com>
> > 
> > The SMMU CMDQ lock is highly contentious when there are multiple
> > CPUs issuing commands on an architecture with small queue sizes,
> > e.g. 256 entries.
> 
> As Robin pointed out, a 256-entry queue is itself not quite normal,
> so the justification here might still not be very convincing.
> 
> I'd suggest avoiding the phrase "an architecture with small queue
> sizes" and focusing on the issue itself -- potential starvation.
> "256-entry" can be mentioned as the test setup used to reproduce
> the issue.
> 
> > The lock has the following states:
> >  - 0:		Unlocked
> >  - >0:		Shared lock held with count
> >  - INT_MIN+N:	Exclusive lock held, where N is the # of shared waiters
> >  - INT_MIN:	Exclusive lock held, no shared waiters
> > 
> > When multiple CPUs are polling for space in the queue, they attempt
> > to grab the exclusive lock to update the cons pointer from the
> > hardware. If they fail to get the lock, they spin until the cons
> > pointer is updated by another CPU.
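
(For reference, the update path described above looks roughly like the
following today -- a simplified sketch of
arm_smmu_cmdq_poll_until_not_full(), with timeout handling omitted and
the function renamed for illustration:)

	static int poll_until_not_full_sketch(struct arm_smmu_cmdq *cmdq,
					      struct arm_smmu_ll_queue *llq)
	{
		unsigned long flags;

		/* The trylock winner refreshes cons from the hardware. */
		if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
			WRITE_ONCE(cmdq->q.llq.cons,
				   readl_relaxed(cmdq->q.cons_reg));
			arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
			llq->val = READ_ONCE(cmdq->q.llq.val);
			return 0;
		}

		/* Losers spin until the winner's update becomes visible. */
		do {
			llq->val = READ_ONCE(cmdq->q.llq.val);
		} while (queue_full(llq));

		return 0;
	}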
> > 
> > The current code allows the possibility of shared lock starvation
> > if there is a constant stream of CPUs trying to grab the exclusive
> > lock. This leads to severe latency issues and soft lockups.  
> 
> It'd be nicer to have a graph to show how the starvation might
> happen due to a race:
> 
> CPU0 (exclusive)  | CPU1 (shared)     | CPU2 (exclusive)    | `cmdq->lock`
> --------------------------------------------------------------------------
> trylock() //takes |                   |                     | 0
>                   | shared_lock()     |                     | INT_MIN
>                   | fetch_inc()       |                     | INT_MIN
>                   | no return         |                     | INT_MIN + 1
>                   | spins // VAL >= 0 |                     | INT_MIN + 1
> unlock()          | spins...          |                     | INT_MIN + 1
> set_release(0)    | spins...          |                     | 0  <-- BUG?
Not sure we can call it a bug, but it definitely opens the door to
starving the shared lock.

> (done)            | (sees 0)          | trylock() // takes  | 0
>                   | *exits loop*      | cmpxchg(0, INT_MIN) | 0
>                   |                   | *cuts in*           | INT_MIN
>                   | cmpxchg(0, 1)     |                     | INT_MIN
>                   | fails // != 0     |                     | INT_MIN
>                   | spins // VAL >= 0 |                     | INT_MIN
>                   | *starved*         |                     | INT_MIN
>
Thanks for the graph, will incorporate. The starved shared lock also
prevents the cmdq from advancing, which perpetuates the situation of
!queue_has_space(&llq, n + sync).
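
To make the window concrete, the shared lock slow path is roughly the
following (a simplified sketch of arm_smmu_cmdq_shared_lock() as I
read it, renamed for illustration):

	static void shared_lock_sketch(struct arm_smmu_cmdq *cmdq)
	{
		int val;

		/* Fast path: non-negative count, no exclusive holder. */
		if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
			return;

		/*
		 * Slow path: our "+1" is already in the count (the
		 * INT_MIN + 1 in the graph). Wait for the value to turn
		 * non-negative, then race to re-install the "+1".
		 */
		do {
			val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
		} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
	}

Since set_release(0) throws away the waiter's "+1", the gap between
the waiter seeing 0 and its cmpxchg(0, 1) landing is exactly where
CPU2's cmpxchg(0, INT_MIN) cuts in.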
 
> And point out that it should have preserved the "+1" from CPU1
> instead of nuking the entire cmdq->lock to 0.
> 
Will do. Preserving the "+1" is useful for preventing back-to-back
exclusive lock acquisitions; nuking the lock to 0 discards that
information.
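
For clarity, with the patch the release conceptually becomes:

	/*
	 * Clear only the sign bit: INT_MIN + N becomes N, so the N
	 * spinning shared waiters immediately hold the shared lock,
	 * and a back-to-back exclusive cmpxchg(0, INT_MIN) fails
	 * because the count now reads N != 0.
	 */
	atomic_fetch_and_release(~INT_MIN, &cmdq->lock);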

> > In a staged test where 32 CPUs issue SVA invalidations
> > simultaneously on a system with a 256-entry queue, the
> > madvise(MADV_DONTNEED) latency dropped by 50% with this patch, and
> > no soft lockups were observed.
> 
> This might not be very useful per Robin's remarks. I'd drop it.
> 
Will do.

> > Reviewed-by: Mostafa Saleh <smostafa@google.com>
> > Signed-off-by: Alexander Grest <Alexander.Grest@microsoft.com>
> > Signed-off-by: Jacob Pan <jacob.pan@linux.microsoft.com>  
> 
> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
> 
> > @@ -500,9 +506,14 @@ static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
> >  	__ret;								\
> >  })
> >  
> > +/*
> > + * Only clear the sign bit when releasing the exclusive lock; this will
> > + * allow any shared_lock() waiters to proceed without the possibility
> > + * of entering the exclusive lock in a tight loop.
> > + */
> >  #define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
> >  ({									\
> > -	atomic_set_release(&cmdq->lock, 0);				\
> > +	atomic_fetch_and_release(~INT_MIN, &cmdq->lock);			\
> 
> Align the trailing spacing with the other lines, please.
> 
> Nicolin



Thread overview: 12+ messages
2025-10-20 22:43 [PATCH v2 0/2] SMMU v3 CMDQ fix and improvement Jacob Pan
2025-10-20 22:43 ` [PATCH v2 1/2] iommu/arm-smmu-v3: Fix CMDQ timeout warning Jacob Pan
2025-10-30 22:41   ` Nicolin Chen
2025-11-03 23:16     ` Jacob Pan
2025-11-04  1:23       ` Nicolin Chen
2025-11-04 18:25         ` Jacob Pan
2025-11-04 18:48           ` Nicolin Chen
2025-11-04 19:37             ` Jacob Pan
2025-10-20 22:43 ` [PATCH v2 2/2] iommu/arm-smmu-v3: Improve CMDQ lock fairness and efficiency Jacob Pan
2025-10-31  2:00   ` Nicolin Chen
2025-11-04  1:08     ` Jacob Pan [this message]
2025-10-30 15:42 ` [PATCH v2 0/2] SMMU v3 CMDQ fix and improvement Jacob Pan
