From: Jason Gunthorpe <jgg@nvidia.com>
To: Will Deacon <will@kernel.org>
Cc: iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
	linux-arm-kernel@lists.infradead.org,
	Robin Murphy <robin.murphy@arm.com>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Jean-Philippe Brucker <jean-philippe@linaro.org>,
	Joerg Roedel <jroedel@suse.de>, Moritz Fischer <mdf@kernel.org>,
	Moritz Fischer <moritzf@google.com>,
	Michael Shavit <mshavit@google.com>,
	Nicolin Chen <nicolinc@nvidia.com>,
	patches@lists.linux.dev,
	Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>,
	Mostafa Saleh <smostafa@google.com>,
	Zhangfei Gao <zhangfei.gao@linaro.org>
Subject: Re: [PATCH v5 01/17] iommu/arm-smmu-v3: Make STE programming independent of the callers
Date: Thu, 15 Feb 2024 12:01:35 -0400
Message-ID: <20240215160135.GL1088888@nvidia.com>
In-Reply-To: <20240215134952.GA690@willie-the-truck>

On Thu, Feb 15, 2024 at 01:49:53PM +0000, Will Deacon wrote:
> Hi Jason,
> 
> On Tue, Feb 06, 2024 at 11:12:38AM -0400, Jason Gunthorpe wrote:
> > As the comment in arm_smmu_write_strtab_ent() explains, this routine has
> > been limited to only work correctly in certain scenarios that the caller
> > must ensure. Generally the caller must put the STE into ABORT or BYPASS
> > before attempting to program it to something else.
> 
> This is looking pretty good now, but I have a few comments inline.

Ok

> > @@ -48,6 +48,21 @@ enum arm_smmu_msi_index {
> >  	ARM_SMMU_MAX_MSIS,
> >  };
> >  
> > +struct arm_smmu_entry_writer_ops;
> > +struct arm_smmu_entry_writer {
> > +	const struct arm_smmu_entry_writer_ops *ops;
> > +	struct arm_smmu_master *master;
> > +};
> > +
> > +struct arm_smmu_entry_writer_ops {
> > +	unsigned int num_entry_qwords;
> > +	__le64 v_bit;
> > +	void (*get_used)(const __le64 *entry, __le64 *used);
> > +	void (*sync)(struct arm_smmu_entry_writer *writer);
> > +};
> 
> Can we avoid the indirection for now, please? I'm sure we'll want it later
> when you extend this to CDs, but for the initial support it just makes it
> more difficult to follow the flow. Should be a trivial thing to drop, I
> hope.

We can.
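
For the STE flavour the hooks collapse back into the existing helpers
anyway - ops->get_used() is just arm_smmu_get_ste_used() and
ops->sync() is arm_smmu_sync_ste_for_sid(), which in the current
driver is:

static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
{
	struct arm_smmu_cmdq_ent cmd = {
		.opcode	= CMDQ_OP_CFGI_STE,
		.cfgi	= {
			.sid	= sid,
			.leaf	= true,
		},
	};

	/* Invalidate the cached copy of this STE and wait for completion */
	arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
}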

> > +static void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
> >  {
> > +	unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0]));
> > +
> > +	used_bits[0] = cpu_to_le64(STRTAB_STE_0_V);
> > +	if (!(ent[0] & cpu_to_le64(STRTAB_STE_0_V)))
> > +		return;
> > +
> > +	/*
> > +	 * See 13.5 Summary of attribute/permission configuration fields for the
> > +	 * SHCFG behavior. It is only used for BYPASS, including S1DSS BYPASS,
> > +	 * and S2 only.
> > +	 */
> > +	if (cfg == STRTAB_STE_0_CFG_BYPASS ||
> > +	    cfg == STRTAB_STE_0_CFG_S2_TRANS ||
> > +	    (cfg == STRTAB_STE_0_CFG_S1_TRANS &&
> > +	     FIELD_GET(STRTAB_STE_1_S1DSS, le64_to_cpu(ent[1])) ==
> > +		     STRTAB_STE_1_S1DSS_BYPASS))
> > +		used_bits[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG);
> 
> Huh, SHCFG is really getting in the way here, isn't it? 

I wouldn't say that.. It is just a complicated bit of the spec. One of
the things we recently did was audit all the cache settings, and at
least then we realized that SHCFG is subtly used by S2 as well..

Not sure if that was intentional or if the spec just doesn't call out
that S2 uses the value too.

From that perspective I view this layout of the used bits as valuable.
It forces the kind of reflection and rigor that I think is helpful. The
fact that we found a thing to improve on by inspection is proof of its
worth to me.

> I think it also means we don't have a "hitless" transition from
> stage-2 translation -> bypass.

Hmm, I didn't notice that. The kunit passed:

[    0.511483] 1..1
[    0.511510]     KTAP version 1
[    0.511551]     # Subtest: arm-smmu-v3-kunit-test
[    0.511592]     # module: arm_smmu_v3_test
[    0.511594]     1..10
[    0.511910]     ok 1 arm_smmu_v3_write_ste_test_bypass_to_abort
[    0.512110]     ok 2 arm_smmu_v3_write_ste_test_abort_to_bypass
[    0.512386]     ok 3 arm_smmu_v3_write_ste_test_cdtable_to_abort
[    0.512631]     ok 4 arm_smmu_v3_write_ste_test_abort_to_cdtable
[    0.512874]     ok 5 arm_smmu_v3_write_ste_test_cdtable_to_bypass
[    0.513075]     ok 6 arm_smmu_v3_write_ste_test_bypass_to_cdtable
[    0.513275]     ok 7 arm_smmu_v3_write_ste_test_cdtable_s1dss_change
[    0.513466]     ok 8 arm_smmu_v3_write_ste_test_s1dssbypass_to_stebypass
[    0.513672]     ok 9 arm_smmu_v3_write_ste_test_stebypass_to_s1dssbypass
[    0.514148]     ok 10 arm_smmu_v3_write_ste_test_non_hitless

Which I see is because it did not test the S2 case...
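
Once the SHCFG story is sorted out, covering it would look something
like the below - a sketch only, the helper names are guessed from the
pattern of Michael's existing tests, so treat it as pseudocode:

/* Hypothetical extra case for the suite: S2 -> bypass should be hitless */
static void arm_smmu_v3_write_ste_test_s2_to_bypass(struct kunit *test)
{
	struct arm_smmu_ste ste;

	/* Build an STE in S2 translate mode, then cross to bypass */
	arm_smmu_test_make_s2_ste(&ste, true);
	arm_smmu_v3_test_ste_expect_hitless_transition(test, &ste,
						       &bypass_ste, 2);
}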

> I'm inclined to leave it set to "use incoming" all the time; the
> only difference I can see is if you have stage-2 translation and a
> master emitting outer-shareable transactions, in which case they'd now
> be outer-shareable instead of inner-shareable, which I think is harmless.

Broadly it seems to me to make sense that the iommu would try to have
a consistent translation - that bypass and S2 use different
cacheability doesn't seem great. But isn't the current S2 value of 0
"non-shareable"?

> Additionally, it looks like there's an existing buglet here in that we
> shouldn't set SHCFG if SMMU_IDR1.ATTR_TYPES_OVR == 0.

Ah, because the spec says RES0.. I'll add these two to the pile of
random stuff in part 3
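
Roughly this, I imagine - a sketch only, where the feature flag and
helper are invented here and the IDR1 bit position is my reading of
the spec, so double check it:

#define IDR1_ATTR_TYPES_OVR	(1 << 27)	/* SMMU_IDR1.ATTR_TYPES_OVR, assumed */

/* Only install SHCFG when the HW implements the attribute override */
static void arm_smmu_ste_set_shcfg_incoming(struct arm_smmu_device *smmu,
					    struct arm_smmu_ste *target)
{
	if (!(smmu->features & ARM_SMMU_FEAT_ATTR_TYPES_OVR))
		return;		/* SHCFG is RES0 on this implementation */
	target->data[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG,
						  STRTAB_STE_1_SHCFG_INCOMING));
}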

> > +	used_bits[0] |= cpu_to_le64(STRTAB_STE_0_CFG);
> > +	switch (cfg) {
> > +	case STRTAB_STE_0_CFG_ABORT:
> > +	case STRTAB_STE_0_CFG_BYPASS:
> > +		break;
> > +	case STRTAB_STE_0_CFG_S1_TRANS:
> > +		used_bits[0] |= cpu_to_le64(STRTAB_STE_0_S1FMT |
> > +					    STRTAB_STE_0_S1CTXPTR_MASK |
> > +					    STRTAB_STE_0_S1CDMAX);
> > +		used_bits[1] |=
> > +			cpu_to_le64(STRTAB_STE_1_S1DSS | STRTAB_STE_1_S1CIR |
> > +				    STRTAB_STE_1_S1COR | STRTAB_STE_1_S1CSH |
> > +				    STRTAB_STE_1_S1STALLD | STRTAB_STE_1_STRW);
> > +		used_bits[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
> > +		used_bits[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID);
> > +		break;
> > +	case STRTAB_STE_0_CFG_S2_TRANS:
> > +		used_bits[1] |=
> > +			cpu_to_le64(STRTAB_STE_1_EATS);
> > +		used_bits[2] |=
> > +			cpu_to_le64(STRTAB_STE_2_S2VMID | STRTAB_STE_2_VTCR |
> > +				    STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2ENDI |
> > +				    STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2R);
> > +		used_bits[3] |= cpu_to_le64(STRTAB_STE_3_S2TTB_MASK);
> > +		break;
> 
> With SHCFG fixed, can we go a step further with this and simply identify
> the live qwords directly, rather than on a field-by-field basis? I think
> we should be able to do the same "hitless" transitions you want with the
> coarser granularity.

Not naively - Michael's excellent unit test shows it. My understanding
of your idea was roughly this:

void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
{
	unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0]));

	used_bits[0] = U64_MAX;
	if (!(ent[0] & cpu_to_le64(STRTAB_STE_0_V)))
		return;

	/*
	 * See 13.5 Summary of attribute/permission configuration fields for the
	 * SHCFG behavior. It is only used for BYPASS, including S1DSS BYPASS,
	 * and S2 only.
	 */
	if (cfg == STRTAB_STE_0_CFG_BYPASS ||
	    cfg == STRTAB_STE_0_CFG_S2_TRANS ||
	    (cfg == STRTAB_STE_0_CFG_S1_TRANS &&
	     FIELD_GET(STRTAB_STE_1_S1DSS, le64_to_cpu(ent[1])) ==
		     STRTAB_STE_1_S1DSS_BYPASS))
		used_bits[1] |= U64_MAX;

	used_bits[0] |= U64_MAX;
	switch (cfg) {
	case STRTAB_STE_0_CFG_ABORT:
	case STRTAB_STE_0_CFG_BYPASS:
		break;
	case STRTAB_STE_0_CFG_S1_TRANS:
		used_bits[0] |= U64_MAX;
		used_bits[1] |= U64_MAX;
		used_bits[2] |= U64_MAX;
		break;
	case STRTAB_STE_0_CFG_NESTED:
		used_bits[0] |= U64_MAX;
		used_bits[1] |= U64_MAX;
		fallthrough;
	case STRTAB_STE_0_CFG_S2_TRANS:
		used_bits[1] |= U64_MAX;
		used_bits[2] |= U64_MAX;
		used_bits[3] |= U64_MAX;
		break;

	default:
		memset(used_bits, 0xFF, sizeof(struct arm_smmu_ste));
		WARN_ON(true);
	}
}

And the failures:

[    0.500676]     ok 1 arm_smmu_v3_write_ste_test_bypass_to_abort
[    0.500818]     ok 2 arm_smmu_v3_write_ste_test_abort_to_bypass
[    0.501014]     ok 3 arm_smmu_v3_write_ste_test_cdtable_to_abort
[    0.501197]     ok 4 arm_smmu_v3_write_ste_test_abort_to_cdtable
[    0.501340]     # arm_smmu_v3_write_ste_test_cdtable_to_bypass: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:128
[    0.501340]     Expected test_writer.invalid_entry_written == !hitless, but
[    0.501340]         test_writer.invalid_entry_written == 1 (0x1)
[    0.501340]         !hitless == 0 (0x0)
[    0.501489]     not ok 5 arm_smmu_v3_write_ste_test_cdtable_to_bypass
[    0.501787]     # arm_smmu_v3_write_ste_test_bypass_to_cdtable: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:128
[    0.501787]     Expected test_writer.invalid_entry_written == !hitless, but
[    0.501787]         test_writer.invalid_entry_written == 1 (0x1)
[    0.501787]         !hitless == 0 (0x0)
[    0.501931]     not ok 6 arm_smmu_v3_write_ste_test_bypass_to_cdtable
[    0.502274]     ok 7 arm_smmu_v3_write_ste_test_cdtable_s1dss_change
[    0.502397]     # arm_smmu_v3_write_ste_test_s1dssbypass_to_stebypass: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:128
[    0.502397]     Expected test_writer.invalid_entry_written == !hitless, but
[    0.502397]         test_writer.invalid_entry_written == 1 (0x1)
[    0.502397]         !hitless == 0 (0x0)
[    0.502473]     # arm_smmu_v3_write_ste_test_s1dssbypass_to_stebypass: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:129
[    0.502473]     Expected test_writer.num_syncs == num_syncs_expected, but
[    0.502473]         test_writer.num_syncs == 3 (0x3)
[    0.502473]         num_syncs_expected == 2 (0x2)
[    0.502784]     not ok 8 arm_smmu_v3_write_ste_test_s1dssbypass_to_stebypass
[    0.503073]     # arm_smmu_v3_write_ste_test_stebypass_to_s1dssbypass: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:128
[    0.503073]     Expected test_writer.invalid_entry_written == !hitless, but
[    0.503073]         test_writer.invalid_entry_written == 1 (0x1)
[    0.503073]         !hitless == 0 (0x0)
[    0.503176]     # arm_smmu_v3_write_ste_test_stebypass_to_s1dssbypass: EXPECTATION FAILED at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c:129
[    0.503176]     Expected test_writer.num_syncs == num_syncs_expected, but
[    0.503176]         test_writer.num_syncs == 3 (0x3)
[    0.503176]         num_syncs_expected == 2 (0x2)
[    0.503464]     not ok 9 arm_smmu_v3_write_ste_test_stebypass_to_s1dssbypass
[    0.503807]     ok 10 arm_smmu_v3_write_ste_test_non_hitless

BYPASS -> S1 requires changing overlapping bits in qword 1. The
programming sequence would look like this:

start qw[1] = SHCFG_INCOMING
      qw[1] = SHCFG_INCOMING | S1DSS
      qw[0] = S1 mode
      qw[1] = S1DSS

The two states share qw[1], and BYPASS ignores all of it except
SHCFG_INCOMING. Since bypass has its qw[1] marked as used due to the
SHCFG, there is no way to express that it is not looking at the other
bits.

We'd have to start doing really hacky things like removing SHCFG as a
used field entirely - but I think if you do that you break the entire
logic of the design, and you also go backwards to programming that
only works if STEs are constructed in certain ways.
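
To be concrete about what the field-granular used bits buy: the update
logic only needs the V=0 break-before-make path when more than one
qword has differing bits that both the current and the target state
observe. Simplified from the patch (4-qword STE assumed, helper name
invented for illustration):

/*
 * A transition is hitless if at most one qword contains differing bits
 * that both states care about: that single qword can be flipped with
 * one 64-bit store, and every other change lands in bits the observing
 * state ignores.
 */
static bool arm_smmu_ste_change_is_hitless(const __le64 *cur,
					   const __le64 *target)
{
	__le64 cur_used[4] = {}, target_used[4] = {};
	unsigned int i, critical_qwords = 0;

	arm_smmu_get_ste_used(cur, cur_used);
	arm_smmu_get_ste_used(target, target_used);

	for (i = 0; i != 4; i++) {
		/* Bits that differ and are observed by both states */
		if ((cur[i] ^ target[i]) & cur_used[i] & target_used[i])
			critical_qwords++;
	}
	return critical_qwords <= 1;
}

With qword-granular used bits, BYPASS marks all of qw[1] used because
of SHCFG, so the S1DSS difference in qw[1] becomes critical too and
the transition degrades to break-before-make - which is exactly what
the failing tests above show.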

Thanks,
Jason
