From: Nicolin Chen <nicolinc@nvidia.com>
To: Jacob Pan <jacob.pan@linux.microsoft.com>
Cc: <linux-kernel@vger.kernel.org>,
"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
Will Deacon <will@kernel.org>, Joerg Roedel <joro@8bytes.org>,
Mostafa Saleh <smostafa@google.com>,
Jason Gunthorpe <jgg@nvidia.com>,
Robin Murphy <robin.murphy@arm.com>,
Zhang Yu <zhangyu1@linux.microsoft.com>,
Jean-Philippe Brucker <jean-philippe@linaro.org>,
Alexander Grest <Alexander.Grest@microsoft.com>
Subject: Re: [PATCH v2 2/2] iommu/arm-smmu-v3: Improve CMDQ lock fairness and efficiency
Date: Thu, 30 Oct 2025 19:00:02 -0700 [thread overview]
Message-ID: <aQQYIkC9lmQnd27S@Asurada-Nvidia> (raw)
In-Reply-To: <20251020224353.1408-3-jacob.pan@linux.microsoft.com>
On Mon, Oct 20, 2025 at 03:43:53PM -0700, Jacob Pan wrote:
> From: Alexander Grest <Alexander.Grest@microsoft.com>
>
> The SMMU CMDQ lock is highly contentious when there are multiple CPUs
> issuing commands on an architecture with small queue sizes, e.g. 256
> entries.
As Robin pointed out, a 256-entry queue is itself not quite normal,
so the justification here might still not be very convincing..
I'd suggest avoiding "an architecture with small queue sizes" and
focusing on the issue itself -- potential starvation. "256-entry"
can be mentioned as the testing setup used to reproduce the issue.
> The lock has the following states:
> - 0: Unlocked
> - >0: Shared lock held with count
> - INT_MIN+N: Exclusive lock held, where N is the # of shared waiters
> - INT_MIN: Exclusive lock held, no shared waiters
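For context, the primitives implementing these states look roughly
like this (paraphrased from arm-smmu-v3.c and trimmed, so not the
exact upstream code):

static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
{
	int val;

	/*
	 * Try to take the shared lock with a plain increment: while the
	 * exclusive lock is held the counter is negative (INT_MIN + N),
	 * so the increment stays negative and merely records a waiter.
	 */
	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
		return;

	/* Otherwise wait for the exclusive holder to clear the sign bit */
	do {
		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
}

/* The exclusive lock can only be taken when the counter reads 0 */
#define arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)		\
({									\
	bool __ret;							\
	local_irq_save(flags);						\
	__ret = !atomic_cmpxchg_relaxed(&cmdq->lock, 0, INT_MIN);	\
	if (!__ret)							\
		local_irq_restore(flags);				\
	__ret;								\
})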
>
> When multiple CPUs are polling for space in the queue, they attempt to
> grab the exclusive lock to update the cons pointer from the hardware. If
> they fail to get the lock, they will spin until the cons pointer is
> updated by another CPU.
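The not-full polling path that takes the exclusive lock to refresh
the cons pointer looks roughly like this (signature and body
simplified for illustration, not verbatim upstream code):

static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_cmdq *cmdq,
					     struct arm_smmu_ll_queue *llq)
{
	unsigned long flags;

	/* One winner re-reads the hardware cons pointer for everybody */
	if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
		llq->val = READ_ONCE(cmdq->q.llq.val);
		return 0;
	}

	/* Losers spin until the winner's update makes the queue not-full */
	...
}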
>
> The current code allows the possibility of shared lock starvation
> if there is a constant stream of CPUs trying to grab the exclusive lock.
> This leads to severe latency issues and soft lockups.
It'd be nicer to have a diagram showing how the starvation can
happen due to a race:
CPU0 (exclusive)  | CPU1 (shared)     | CPU2 (exclusive)    | `cmdq->lock`
------------------+-------------------+---------------------+--------------
trylock() //takes |                   |                     | 0
                  | shared_lock()     |                     | INT_MIN
                  | fetch_inc()       |                     | INT_MIN
                  | no return         |                     | INT_MIN + 1
                  | spins // VAL >= 0 |                     | INT_MIN + 1
unlock()          | spins...          |                     | INT_MIN + 1
set_release(0)    | spins...          |                     | 0 <-- BUG?
(done)            | (sees 0)          | trylock() // takes  | 0
                  | *exits loop*      | cmpxchg(0, INT_MIN) | 0
                  |                   | *cuts in*           | INT_MIN
                  | cmpxchg(0, 1)     |                     | INT_MIN
                  | fails // != 0     |                     | INT_MIN
                  | spins // VAL >= 0 |                     | INT_MIN
                  | *starved*         |                     | INT_MIN
And point out that the unlock should have preserved the "+1" from
CPU1 instead of nuking the entire cmdq->lock to 0.
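To make that concrete, here is a hypothetical walk-through of the
two unlock flavours with one shared waiter pending:

	int lock = INT_MIN + 1;	/* exclusive held, CPU1's +1 pending */

	/* old: atomic_set_release(&cmdq->lock, 0) */
	lock = 0;		/* waiter count nuked; CPU2 can cut in */

	/* new: atomic_fetch_and_release(~INT_MIN, &cmdq->lock) */
	lock &= ~INT_MIN;	/* (INT_MIN + 1) & ~INT_MIN == 1 */

With the fix, lock == 1 after the exclusive unlock, so CPU2's
cmpxchg(0, INT_MIN) fails and CPU1 cannot be starved.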
> In a staged test where 32 CPUs issue SVA invalidations simultaneously on
> a system with a 256-entry queue, the madvise (MADV_DONTNEED) latency
> dropped by 50% with this patch, with no soft lockups.
This might not be very useful per Robin's remarks. I'd drop it.
> Reviewed-by: Mostafa Saleh <smostafa@google.com>
> Signed-off-by: Alexander Grest <Alexander.Grest@microsoft.com>
> Signed-off-by: Jacob Pan <jacob.pan@linux.microsoft.com>
Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
> @@ -500,9 +506,14 @@ static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
> __ret; \
> })
>
> +/*
> + * Only clear the sign bit when releasing the exclusive lock; this will
> + * allow any shared_lock() waiters to proceed without the possibility
> + * of entering the exclusive lock in a tight loop.
> + */
> #define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags) \
> ({ \
> - atomic_set_release(&cmdq->lock, 0); \
> + atomic_fetch_and_release(~INT_MIN, &cmdq->lock); \
Please align the trailing backslash with the other lines.
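i.e. something like this, with the continuation backslash padded to
the same column as the surrounding lines (whitespace illustrative):

+	atomic_fetch_and_release(~INT_MIN, &cmdq->lock);		\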
Nicolin