From: Nicolin Chen <nicolinc@nvidia.com>
To: Robin Murphy <robin.murphy@arm.com>
Cc: <will@kernel.org>, <jgg@nvidia.com>, <joro@8bytes.org>,
<jean-philippe@linaro.org>, <apopple@nvidia.com>,
<linux-kernel@vger.kernel.org>,
<linux-arm-kernel@lists.infradead.org>, <iommu@lists.linux.dev>
Subject: Re: [PATCH 1/3] iommu/io-pgtable-arm: Add nents_per_pgtable in struct io_pgtable_cfg
Date: Tue, 29 Aug 2023 13:15:34 -0700
Message-ID: <ZO5R5i4n2WI2GnKQ@Asurada-Nvidia>
In-Reply-To: <61f9b371-7c45-26b1-ec0f-600765280c89@arm.com>
On Tue, Aug 29, 2023 at 04:37:00PM +0100, Robin Murphy wrote:
> On 2023-08-22 17:42, Nicolin Chen wrote:
> > On Tue, Aug 22, 2023 at 10:19:21AM +0100, Robin Murphy wrote:
> >
> > > > out_free_data:
> > > > @@ -1071,6 +1073,7 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
> > > > ARM_MALI_LPAE_TTBR_ADRMODE_TABLE;
> > > > if (cfg->coherent_walk)
> > > > cfg->arm_mali_lpae_cfg.transtab |= ARM_MALI_LPAE_TTBR_SHARE_OUTER;
> > > > + cfg->nents_per_pgtable = 1 << data->bits_per_level;
> > >
> > > The result of this highly complex and expensive calculation is clearly
> > > redundant with the existing bits_per_level field, so why do we need to
> > > waste space storing when the driver could simply use bits_per_level?
> >
> > bits_per_level is in the private struct arm_lpae_io_pgtable, while
> > drivers can only access struct io_pgtable_cfg. Are you suggesting
> > to move bits_per_level out of the private struct arm_lpae_io_pgtable
> > to the public struct io_pgtable_cfg?
> >
> > Or am I missing another bits_per_level?
>
> Bleh, apologies, I always confuse myself trying to remember the fiddly
> design of io-pgtable data. However, I think this then ends up proving
> the opposite point - the number of pages per table only happens to be a
> fixed constant for certain formats like LPAE, but does not necessarily
> generalise. For instance for a single v7s config it would be 1024 or 256
> or 16 depending on what has actually been unmapped.
>
> The mechanism as proposed implicitly assumes LPAE format, so I still
> think we're better off making that assumption explicit. And at that
> point arm-smmu-v3 can then freely admit it already knows the number is
> simply 1/8th of the domain page size.
Hmm, I am not getting that "1/8th" part; would you mind elaborating?
Also, what we need for max_tlbi_ops is really just an arbitrary threshold.
And I think it can be independent of the page size, i.e. a 4K pgsize and
a 64K pgsize could use the same max_tlbi_ops number, because what
ultimately impacts the latency is the number of loop iterations spent
building/issuing commands.
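To illustrate why I think the loop count is what matters, here is a
minimal standalone sketch (issue_tlbi_cmd() is a hypothetical stand-in
for building and queueing one CMD_TLBI_VA, not the actual driver code):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in: build and queue a single CMD_TLBI_VA command. */
static void issue_tlbi_cmd(uint64_t iova)
{
	(void)iova;
}

/*
 * Without ARM_SMMU_FEAT_RANGE_INV, one command is needed per granule,
 * so the cost scales with size / granule whether the granule is 4K or
 * 64K; this loop is the latency hot spot.
 */
static void tlb_inv_range_sketch(uint64_t iova, size_t size, size_t granule)
{
	size_t i, num_cmds = size / granule;

	for (i = 0; i < num_cmds; i++, iova += granule)
		issue_tlbi_cmd(iova);
}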
So, combining that with your point above that nents_per_pgtable doesn't
generalize the way the MMU tlbflush code assumes, perhaps we could just
decouple max_tlbi_ops from the pgtable and pgsize, and instead define
something like this in the SMMUv3 driver:
/*
 * A request for a large number of TLBI commands can incur significant
 * overhead and latency on SMMUs without ARM_SMMU_FEAT_RANGE_INV. Set a
 * threshold on that number, beyond which the driver switches to a
 * single full-range command instead.
 */
#define MAX_TLBI_OPS 8192
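And roughly how I would imagine applying it (a sketch only, reusing the
MAX_TLBI_OPS define above; inv_domain_all() and inv_range_cmds() are
hypothetical stand-ins for the driver's full-domain and per-granule
invalidation paths):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the driver's existing invalidation paths. */
static void inv_domain_all(void)
{
}

static void inv_range_cmds(uint64_t iova, size_t size, size_t granule)
{
	(void)iova; (void)size; (void)granule;
}

static void tlb_inv_sketch(uint64_t iova, size_t size, size_t granule,
			   bool has_range_inv)
{
	/*
	 * If per-granule invalidation would exceed the threshold on an
	 * SMMU lacking ARM_SMMU_FEAT_RANGE_INV, issue one full-range
	 * command (e.g. CMD_TLBI_NH_ASID) instead of thousands of
	 * CMD_TLBI_VA commands.
	 */
	if (!has_range_inv && size / granule > MAX_TLBI_OPS)
		inv_domain_all();
	else
		inv_range_cmds(iova, size, granule);
}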
Any thoughts?
Thanks
Nicolin
Thread overview: 34+ messages
2023-08-22 8:45 [PATCH 0/3] iommu/arm-smmu-v3: Reduce latency in __arm_smmu_tlb_inv_range() Nicolin Chen
2023-08-22 8:45 ` [PATCH 1/3] iommu/io-pgtable-arm: Add nents_per_pgtable in struct io_pgtable_cfg Nicolin Chen
2023-08-22 9:19 ` Robin Murphy
2023-08-22 16:42 ` Nicolin Chen
2023-08-29 15:37 ` Robin Murphy
2023-08-29 20:15 ` Nicolin Chen [this message]
2023-08-29 21:25 ` Robin Murphy
2023-08-29 22:15 ` Nicolin Chen
2023-08-30 21:49 ` Will Deacon
2023-08-31 17:39 ` Nicolin Chen
2023-09-01 0:08 ` Nicolin Chen
2023-09-01 18:02 ` Robin Murphy
2023-09-01 18:23 ` Nicolin Chen
2024-01-20 19:59 ` Nicolin Chen
2024-01-22 13:01 ` Jason Gunthorpe
2024-01-22 17:24 ` Nicolin Chen
2024-01-22 17:57 ` Jason Gunthorpe
2024-01-24 0:11 ` Nicolin Chen
2024-01-25 13:55 ` Jason Gunthorpe
2024-01-25 17:23 ` Nicolin Chen
2024-01-25 17:47 ` Jason Gunthorpe
2024-01-25 19:55 ` Nicolin Chen
[not found] ` <098d64da-ecf5-4a23-bff9-a04840726ef0@huawei.com>
2024-01-25 5:09 ` Nicolin Chen
2023-08-22 8:45 ` [PATCH 2/3] iommu/arm-smmu-v3: Add an arm_smmu_tlb_inv_domain helper Nicolin Chen
2023-08-22 9:40 ` Robin Murphy
2023-08-22 17:03 ` Nicolin Chen
2023-08-29 21:54 ` Robin Murphy
2023-08-29 23:03 ` Nicolin Chen
2023-08-22 8:45 ` [PATCH 3/3] iommu/arm-smmu-v3: Add a max_tlbi_ops for __arm_smmu_tlb_inv_range() Nicolin Chen
2023-08-22 9:30 ` Robin Murphy
2023-08-22 16:32 ` Nicolin Chen
2023-08-22 23:04 ` Nicolin Chen
2023-08-29 22:40 ` Robin Murphy
2023-08-29 23:14 ` Nicolin Chen