From: Mostafa Saleh <smostafa@google.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
linux-arm-kernel@lists.infradead.org,
Robin Murphy <robin.murphy@arm.com>,
Will Deacon <will@kernel.org>, Eric Auger <eric.auger@redhat.com>,
Moritz Fischer <mdf@kernel.org>,
Michael Shavit <mshavit@google.com>,
Nicolin Chen <nicolinc@nvidia.com>,
patches@lists.linux.dev,
Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Subject: Re: [PATCH v3 04/19] iommu/arm-smmu-v3: Make STE programming independent of the callers
Date: Mon, 29 Jan 2024 20:48:45 +0000
Message-ID: <ZbgPLYN30I1V3axi@google.com>
In-Reply-To: <20240129194910.GB1455070@nvidia.com>
On Mon, Jan 29, 2024 at 03:49:10PM -0400, Jason Gunthorpe wrote:
> On Mon, Jan 29, 2024 at 07:10:47PM +0000, Mostafa Saleh wrote:
>
> > > Going forward this will use a V=0 transition instead of cycling through
> > > ABORT if a hitful change is required. This seems more appropriate as ABORT
> > > will fail DMAs without any logging, but dropping a DMA due to transient
> > > V=0 is probably signaling a bug, so the C_BAD_STE is valuable.
> > Would the driver do anything in that case, or would it just print the log message?
>
> Just log, AFAIK.
>
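To make sure I follow, the hitful path would then look roughly like the
below (a minimal sketch only, assuming a hypothetical ste_sync() helper
for the CFGI/sync step; this is not the patch's actual code):

	static void write_ste_hitful(__le64 *ste, const __le64 *target,
				     size_t ndw)
	{
		size_t i;

		/* Clear V so the SMMU stops using the stale entry. A DMA
		 * arriving now raises a logged C_BAD_STE event instead of
		 * being silently terminated as with ABORT. */
		ste[0] &= ~cpu_to_le64(STRTAB_STE_0_V);
		ste_sync(ste);

		/* Rewrite everything else while the entry is invalid. */
		for (i = 1; i != ndw; i++)
			ste[i] = target[i];
		ste_sync(ste);

		/* Publish the final qword 0, including V=1. */
		ste[0] = target[0];
		ste_sync(ste);
	}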
> > > +static bool arm_smmu_write_entry_step(__le64 *cur, const __le64 *cur_used,
> > > + const __le64 *target,
> > > + const __le64 *target_used, __le64 *step,
> > > + __le64 v_bit,
> > I think this is confusing here. I believe we have this as an argument because
> > this function will be used for the CD later; however, for this series it is
> > unnecessary. IMHO, it should be removed and added in another patch for the CD
> > rework.
>
> It is a lot of code churn to do that, even more so in the new version.
>
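Fair enough. For the record, my understanding of the intent is that the
shared stepper's final V-flip differs only in which qword-0 bit it sets
per entry type, e.g. (illustrative only; the helper name is made up,
the macros are the driver's existing ones):

	/* Hypothetical helper: publish an entry by setting its valid bit. */
	static void entry_set_valid(__le64 *qword0, __le64 v_bit)
	{
		*qword0 |= v_bit;
	}

	/* STE: entry_set_valid(&ste->data[0], cpu_to_le64(STRTAB_STE_0_V));
	 * CD:  entry_set_valid(&cd->data[0], cpu_to_le64(CTXDESC_CD_0_V)); */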
> > > + used_bits->data[0] |= cpu_to_le64(STRTAB_STE_0_CFG);
> > > + switch (FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent->data[0]))) {
> > > + case STRTAB_STE_0_CFG_ABORT:
> > > + break;
> > > + case STRTAB_STE_0_CFG_BYPASS:
> > > + used_bits->data[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG);
> > > + break;
> > > + case STRTAB_STE_0_CFG_S1_TRANS:
> > > + used_bits->data[0] |= cpu_to_le64(STRTAB_STE_0_S1FMT |
> > > + STRTAB_STE_0_S1CTXPTR_MASK |
> > > + STRTAB_STE_0_S1CDMAX);
> > > + used_bits->data[1] |=
> > > + cpu_to_le64(STRTAB_STE_1_S1DSS | STRTAB_STE_1_S1CIR |
> > > + STRTAB_STE_1_S1COR | STRTAB_STE_1_S1CSH |
> > > + STRTAB_STE_1_S1STALLD | STRTAB_STE_1_STRW);
> > > + used_bits->data[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
> > > + break;
> > AFAIU, this is missing something like this (passing in smmu->features):
> >
> > 	used_bits->data[2] |= features & ARM_SMMU_FEAT_NESTING ?
> > 			      cpu_to_le64(STRTAB_STE_2_S2VMID) : 0;
> >
> > As the SMMUv3 manual says:
> > “For a Non-secure STE when stage 2 is implemented (SMMU_IDR0.S2P == 1)
> > translations resulting from a StreamWorld == NS-EL1 configuration are
> > VMID-tagged with S2VMID when either of stage 1 (Config[0] == 1) or stage 2
> > (Config[1] == 1) provide translation.”
> >
> > Which means that in the case of an S1=>S2 switch (or vice versa) this
> > algorithm will ignore the VMID while it is in use.
Yes. In that case we would consider S2VMID even for stage-1-only instances;
even though it should never change there, the algorithm would take the same
steps. I guess it might still look confusing, but I have no strong opinion.
>
> Ah, yes, that is a small miss, thanks. I don't think we need the
> features test though; S2VMID doesn't mean something different if the
> feature is not present.
>
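For clarity, one possible shape of the fix (a sketch of what was agreed
above, not the posted patch), dropping the features test as suggested:

	case STRTAB_STE_0_CFG_S1_TRANS:
		...
		used_bits->data[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
		/* NS-EL1 translations are VMID-tagged whenever stage 1 or
		 * stage 2 translates, so S2VMID is used here as well. */
		used_bits->data[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID);
		break;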
> > > +static void arm_smmu_write_ste(struct arm_smmu_device *smmu, u32 sid,
> > > + struct arm_smmu_ste *ste,
> > > + const struct arm_smmu_ste *target)
> > > +{
> > > + struct arm_smmu_ste target_used;
> > > + int i;
> > > +
> > > + arm_smmu_get_ste_used(target, &target_used);
> > > + /* Masks in arm_smmu_get_ste_used() are up to date */
> > > + for (i = 0; i != ARRAY_SIZE(target->data); i++)
> > > + WARN_ON_ONCE(target->data[i] & ~target_used.data[i]);
> > In what situation would this be triggered? Is that for future-proofing?
> > Maybe we can move it to arm_smmu_get_ste_used()?
>
> Yes, to prevent people from making an error down the road.
>
> It can't be in ste_used due to how this specific algorithm works
> iteratively.
>
> And in the v4 version it still wouldn't be a good idea at this point
> due to how the series slowly migrates STE and CD programming
> over. There are cases where the current STE will not have been written
> by this code and may not pass this test.
>
> Thanks,
> Jason
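Makes sense. Sketching what I understand the constraint to be
(hypothetical control flow, not the series' code): the target is
validated once up front, while the used bits of the live entry are
re-derived after every step, where set-but-unused bits are legitimate:

	/* Validate the final target exactly once. */
	arm_smmu_get_ste_used(target, &target_used);
	for (i = 0; i != ARRAY_SIZE(target->data); i++)
		WARN_ON_ONCE(target->data[i] & ~target_used.data[i]);

	do {
		/* A WARN inside arm_smmu_get_ste_used() would also fire
		 * here, on intermediate in-flight values. */
		arm_smmu_get_ste_used(ste, &cur_used);
		/* ...compute and sync one safe step towards target... */
	} while (memcmp(ste->data, target->data, sizeof(ste->data)));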