From: Will Deacon <will@kernel.org>
To: leo.jiang1224@foxmail.com
Cc: robin.murphy@arm.com, joro@8bytes.org, iommu@lists.linux.dev,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH] iommu/arm-smmu-v3: Stop queue allocation retry at PAGE_SIZE
Date: Tue, 21 Apr 2026 16:26:42 +0100
Message-ID: <aeeXMrCvOIwvvWy-@willie-the-truck>
In-Reply-To: <tencent_F6E384A40D990A279B460A0CDE1927FDF509@qq.com>

On Sat, Apr 18, 2026 at 01:31:43PM +0800, leo.jiang1224@foxmail.com wrote:
> From: LoserJL <leo.jiang1224@foxmail.com>
> 
> In arm_smmu_init_one_queue(), the loop reduces max_n_shift if
> dmam_alloc_coherent() fails. However, since dmam_alloc_coherent()
> allocates at least PAGE_SIZE, retrying with a smaller size after
> a PAGE_SIZE failure is logically redundant.
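> 
> For reference, the retry loop in question looks roughly like this (a
> simplified sketch rather than a verbatim quote of the source):
> 
> 	do {
> 		qsz = ((1 << q->llq.max_n_shift) * dwords) << 3;
> 		q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma,
> 					      GFP_KERNEL);
> 		if (q->base || qsz < PAGE_SIZE)
> 			break;
> 
> 		/* Halve the queue depth and retry with a smaller size. */
> 		q->llq.max_n_shift--;
> 	} while (1);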
> 
> Moreover, if a sub-page retry were to succeed (for example because
> memory was freed concurrently), the hardware would be programmed with
> a smaller queue depth even though a full page had still been
> allocated. That leaves part of the allocated page unused and
> needlessly limits the queue depth available to the hardware.
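> 
> For example, assuming 4KiB pages and 2-dword (16-byte) queue entries,
> qsz equals PAGE_SIZE at max_n_shift == 8:
> 
> 	qsz = ((1 << 8) * 2) << 3 = 4096 = PAGE_SIZE
> 
> A retry at max_n_shift == 7 would request only 2048 bytes, still
> consume at least a full page from dmam_alloc_coherent(), yet program
> only 128 entries into the hardware.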
> 
> Terminate the loop once qsz reaches PAGE_SIZE, so we never retry with
> a size below the allocator's minimum granule.
> 
> Signed-off-by: LoserJL <leo.jiang1224@foxmail.com>

No pseudonyms, please.

> ---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index e8d7dbe495f0..e0ec118ff560 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -4418,7 +4418,14 @@ int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
>  		qsz = ((1 << q->llq.max_n_shift) * dwords) << 3;
>  		q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma,
>  					      GFP_KERNEL);
> -		if (q->base || qsz < PAGE_SIZE)
> +		/*
> +		 * If allocation succeeds, we're done. If it fails, only retry
> +		 * if the requested size is still larger than a page. Since
> +		 * dmam_alloc_coherent() allocates at least PAGE_SIZE, retrying
> +		 * with a sub-page size is logically redundant and could lead
> +		 * to sub-optimal hardware configuration.

What do you mean by "sub-optimal hardware configuration"? I think you can
probably just drop this comment.

> +		 */
> +		if (q->base || qsz <= PAGE_SIZE)
>  			break;

I think this part is fine.

Will


Thread overview: 6+ messages
2026-04-18  5:31 [PATCH] iommu/arm-smmu-v3: Stop queue allocation retry at PAGE_SIZE leo.jiang1224
2026-04-21 15:26 ` Will Deacon [this message]
2026-04-21 15:56 ` Robin Murphy
2026-04-21 16:38   ` Will Deacon
2026-04-22  9:13     ` Leo Jiang
2026-04-22  9:28       ` [PATCH v2] iommu/arm-smmu-v3: Limit queue allocation retry boundary to PAGE_SIZE Leo Jiang
