From: Jason Gunthorpe <jgg@nvidia.com>
To: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>,
iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
linux-arm-kernel@lists.infradead.org,
Lu Baolu <baolu.lu@linux.intel.com>,
Jean-Philippe Brucker <jean-philippe@linaro.org>,
Joerg Roedel <jroedel@suse.de>, Moritz Fischer <mdf@kernel.org>,
Moritz Fischer <moritzf@google.com>,
Michael Shavit <mshavit@google.com>,
Nicolin Chen <nicolinc@nvidia.com>,
patches@lists.linux.dev,
Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>,
Mostafa Saleh <smostafa@google.com>,
Zhangfei Gao <zhangfei.gao@linaro.org>
Subject: Re: [PATCH v5 01/17] iommu/arm-smmu-v3: Make STE programming independent of the callers
Date: Thu, 15 Feb 2024 17:17:39 -0400 [thread overview]
Message-ID: <20240215211739.GN1088888@nvidia.com> (raw)
In-Reply-To: <02fac0ab-07ac-448e-ae4e-26788ed4fce9@arm.com>
On Thu, Feb 15, 2024 at 06:42:37PM +0000, Robin Murphy wrote:
> > > > @@ -48,6 +48,21 @@ enum arm_smmu_msi_index {
> > > >  	ARM_SMMU_MAX_MSIS,
> > > >  };
> > > > +struct arm_smmu_entry_writer_ops;
> > > > +struct arm_smmu_entry_writer {
> > > > +	const struct arm_smmu_entry_writer_ops *ops;
> > > > +	struct arm_smmu_master *master;
> > > > +};
> > > > +
> > > > +struct arm_smmu_entry_writer_ops {
> > > > +	unsigned int num_entry_qwords;
> > > > +	__le64 v_bit;
> > > > +	void (*get_used)(const __le64 *entry, __le64 *used);
> > > > +	void (*sync)(struct arm_smmu_entry_writer *writer);
> > > > +};
> > >
> > > Can we avoid the indirection for now, please? I'm sure we'll want it later
> > > when you extend this to CDs, but for the initial support it just makes it
> > > more difficult to follow the flow. Should be a trivial thing to drop, I
> > > hope.
> >
> > We can.
>
> Ack, the abstraction is really hard to follow, and much of that
> seems entirely self-inflicted in the amount of recalculating
> information which was in-context in a previous step but then thrown
> away.
I'm not sure I understand this, can you be more specific? I don't know
what it is you see us throwing away.
> And as best I can tell I think it will still end up doing more CFGIs
> than needed.
I think we've minimized the number of steps, and Michael did check it,
even pushing tests for the popular scenarios into the kunit. He found a
case where it was not optimal and it was improved.
Mostafa asked about the extra syncs, and you can read my reply
explaining why they are there. We both agreed the syncs are necessary.
The only extra thing I know of is the zeroing of fields. Perhaps we
don't have to do this, but I think we should. Operating with the STE
in a known state seems like the conservative choice.
Regardless, if you have a case in mind where there are extra steps,
let's try it in the kunit and check.
This is not a performance path, so I wouldn't invest too much in this
question.
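To make the ordering question concrete, here is a rough, self-contained
sketch of the idea (not the driver code; the names and NUM_QWORDS are
made up for illustration): compare the used bits of the current and
target entries and count how many qwords actually change.

#include <stdint.h>

#define NUM_QWORDS 8	/* illustrative only; an STE is 8 qwords */

typedef void (*get_used_fn)(const uint64_t *entry, uint64_t *used);

static unsigned int qwords_needing_update(const uint64_t *cur,
					  const uint64_t *target,
					  get_used_fn get_used)
{
	uint64_t cur_used[NUM_QWORDS] = {0}, target_used[NUM_QWORDS] = {0};
	unsigned int i, changed = 0;

	get_used(cur, cur_used);
	get_used(target, target_used);

	for (i = 0; i != NUM_QWORDS; i++) {
		/* Only bits the HW actually inspects matter for ordering */
		uint64_t used = cur_used[i] | target_used[i];

		if ((cur[i] & used) != (target[i] & used))
			changed++;
	}
	return changed;
}

Zero changed qwords means nothing to write and no sync; one changed
qword can be done with a single 64-bit store and one sync, i.e.
hitless; more than one may force the transition through V=0 with the
extra stores and syncs that implies.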
> Keeping a single monolithic check-and-update function will be *so* much
> easier to understand and maintain.
The ops are used by the kunit test suite and I think the kunit is
valuable.
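As an illustration of why that helps (this is not the actual kunit
code, just a sketch reusing the writer structs quoted above; the
test_writer names are made up), a test can plug in a sync op that only
counts steps instead of issuing CFGI/CMD_SYNC:

#include <linux/container_of.h>

struct test_writer {
	struct arm_smmu_entry_writer writer;	/* ops + master, as quoted above */
	unsigned int num_syncs;
};

static void test_writer_sync(struct arm_smmu_entry_writer *writer)
{
	struct test_writer *tw =
		container_of(writer, struct test_writer, writer);

	/* No hardware is touched; just record how many sync steps ran */
	tw->num_syncs++;
}

That makes it cheap to assert on the exact number of syncs for each
transition scenario.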
Further, I've been looking at the AMD driver; it has the same problem
to solve for its DTE and can use this same solution. Intel has >128
bit structures as well. I have already drafted an exploration of using
this algorithm in the AMD driver.
I can see a future where we move this into shared core code. In that
case the driver only provides the used and sync operations, which I
think is a low burden on the driver for solving such a tricky shared
problem. There is some more shared complexity on x86, which needs to
use 128-bit stores if the CPU supports those instructions.
IOW this approach is nice and valuable outside ARM. I would like to
move in a direction where we simply use this shared code for all
multi-qword HW descriptors. We've certainly invested enough in
building it and none of the three drivers have anything better.
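To be concrete about what such shared core code could look like, a
purely hypothetical shape (none of these names exist today) is that
the core owns the update ordering and each driver only fills in a
small ops struct:

#include <linux/types.h>

struct hw_entry_writer_ops {
	unsigned int num_qwords;	/* e.g. 8 for an SMMUv3 STE */
	void (*get_used)(const __le64 *entry, __le64 *used);
	void (*sync)(void *cookie);
	/* Optional: atomic 128-bit store for x86 CPUs that support it */
	void (*set_pair)(__le64 *entry, __le64 lo, __le64 hi);
};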
> As far as CDs go, anything we might reasonably want to change in a
> live CD is all in the first word so I don't see any value in
Changing from one S1 configuration to another requires updating two
qwords in the CD, and that requires the V=0 flow that the current
arm_smmu_write_ctx_desc() doesn't do. It is not that
arm_smmu_write_ctx_desc() needs to be prettier; it needs more
functionality.
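For reference, the V=0 flow for a multi-qword CD change is roughly the
following. This is only an illustration, not the patch itself;
replace_cd_via_v0() and cd_sync() are made-up stand-ins, with cd_sync()
standing for the CFGI_CD + CMD_SYNC step:

static void replace_cd_via_v0(__le64 *cd, const __le64 *new_cd,
			      unsigned int num_qwords,
			      void (*cd_sync)(void))
{
	unsigned int i;

	/*
	 * 1. Make the entry invalid so the HW never observes a
	 * half-updated CD.
	 */
	WRITE_ONCE(cd[0], 0);
	cd_sync();

	/* 2. Fill in the remaining qwords while the entry is invalid */
	for (i = 1; i != num_qwords; i++)
		WRITE_ONCE(cd[i], new_cd[i]);
	cd_sync();

	/* 3. Publish qword 0, which carries the V bit, last */
	WRITE_ONCE(cd[0], new_cd[0]);
	cd_sync();
}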
> > > > +static void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
> > > >  {
> > > > +	unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0]));
> > > > +
> > > > +	used_bits[0] = cpu_to_le64(STRTAB_STE_0_V);
> > > > +	if (!(ent[0] & cpu_to_le64(STRTAB_STE_0_V)))
> > > > +		return;
> > > > +
> > > > +	/*
> > > > +	 * See 13.5 Summary of attribute/permission configuration fields for the
> > > > +	 * SHCFG behavior. It is only used for BYPASS, including S1DSS BYPASS,
> > > > +	 * and S2 only.
> > > > +	 */
> > > > +	if (cfg == STRTAB_STE_0_CFG_BYPASS ||
> > > > +	    cfg == STRTAB_STE_0_CFG_S2_TRANS ||
> > > > +	    (cfg == STRTAB_STE_0_CFG_S1_TRANS &&
> > > > +	     FIELD_GET(STRTAB_STE_1_S1DSS, le64_to_cpu(ent[1])) ==
> > > > +		     STRTAB_STE_1_S1DSS_BYPASS))
> > > > +		used_bits[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG);
> > >
> > > Huh, SHCFG is really getting in the way here, isn't it?
> >
> > I wouldn't say that.. It is just a complicated bit of the spec. One of
> > the things we recently did was to audit all the cache settings and, at
> > least, we then realized that SHCFG was being subtly used by S2 as
> > well..
>
> Yeah, that really shouldn't be subtle; incoming attributes are replaced by
> S1 translation, thus they are relevant to not-S1 configs.
That is a really nice way to summarize the spec! But my remark was
more about the code, where it isn't so obvious what value it intended
SHCFG to have in the S2 case.
This doesn't really change anything about this patch: we'd still have
the above hunk to accurately reflect the SHCFG usage, and we'd still
set SHCFG to 0 in the S1 cases where it isn't used by the HW, just like
today.
> I think it's likely to be significantly more straightforward to give up on
> the switch statement and jump straight into the more architectural paradigm
> at this level, e.g.
I've thought about that, and I can make an effort to do it; the later
nesting change would probably look nicer in this style.
Thanks,
Jason