linux-arm-kernel.lists.infradead.org archive mirror
From: Jason Gunthorpe <jgg@nvidia.com>
To: Michael Shavit <mshavit@google.com>
Cc: iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
	linux-arm-kernel@lists.infradead.org,
	Robin Murphy <robin.murphy@arm.com>,
	Will Deacon <will@kernel.org>, Nicolin Chen <nicolinc@nvidia.com>
Subject: Re: [PATCH 04/19] iommu/arm-smmu-v3: Make STE programming independent of the callers
Date: Fri, 20 Oct 2023 08:39:18 -0300	[thread overview]
Message-ID: <20231020113918.GD3952@nvidia.com> (raw)
In-Reply-To: <CAKHBV270uJjdj4zeK-NNt+j3TiqoGDdLUQ5uRrLw9PN0ysCDbw@mail.gmail.com>

On Fri, Oct 20, 2023 at 04:23:44PM +0800, Michael Shavit wrote:
> The comment helps a lot thank you.
> 
> I do still have some final reservations: wouldn't it be clearer with
> the loop unrolled? After all, it's only 3 steps in the worst case.
> Something like:

I thought about that, but a big point for me was to consolidate the
algorithm between the CD and STE paths. Inlining everything makes that
much more difficult to achieve. In fact, my first sketches tried to
write it unrolled.
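Roughly, the shared piece is a helper that only sees qword arrays plus
their used-bit masks, so the CD path can pass its own arrays and
length. A standalone userspace sketch (invented names, not the actual
patch code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the sharable core: fill in every bit the current
 * configuration ignores, and report how many qwords still differ in
 * bits the HW is actively using - i.e. how many disruptive writes
 * remain.  Nothing here is STE-specific.
 */
static int entry_write_unused_bits(uint64_t *cur, const uint64_t *cur_used,
				   const uint64_t *target,
				   const uint64_t *target_used, int len)
{
	int critical = 0;
	int i;

	for (i = 0; i != len; i++) {
		/* Keep bits the HW may be reading, fill in the rest */
		cur[i] = (cur[i] & cur_used[i]) | (target[i] & ~cur_used[i]);
		if ((cur[i] & target_used[i]) !=
		    (target[i] & target_used[i]))
			critical++;
	}
	return critical;
}
```

In the kernel the stores would be WRITE_ONCE() with a sync after each
pass; the point is only that the loop is independent of which table it
is programming.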

> +       arm_smmu_get_ste_used(target, &target_used);
> +       arm_smmu_get_ste_used(cur, &cur_used);
> +       if (!hitless_possible(target, target_used, cur, cur_used)) {

Knowing whether a hitless update is possible requires the same loop as
the step function to calculate it.
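Concretely, something like this (again an invented sketch, not patch
code): a transition can be hitless only if at most one qword has to
change bits that both configurations use, and finding that out is
exactly the scan the step function already performs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch: everything outside the shared used bits can be written
 * while the HW ignores it, so the transition is hitless iff at most
 * one qword needs its live bits changed (one atomic 64-bit store).
 */
static bool hitless_possible(const uint64_t *cur, const uint64_t *cur_used,
			     const uint64_t *target,
			     const uint64_t *target_used, int len)
{
	int dirty = 0;
	int i;

	for (i = 0; i != len; i++) {
		/* Bits live in both configurations that must change */
		uint64_t shared = cur_used[i] & target_used[i];

		if ((cur[i] & shared) != (target[i] & shared))
			dirty++;
	}
	return dirty <= 1;
}
```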

> +               target->data[0] = STRTAB_STE_0_V;
> +               arm_smmu_sync_ste_for_sid(smmu, sid);

I still like V=0 as I think we do want the event for this case.

> +               /*
> +                * The STE is now in abort where none of the bits except
> +                * STRTAB_STE_0_V and STRTAB_STE_0_CFG are accessed. This allows
> +                * all other words of the STE to be written without further
> +                * disruption.
> +                */
> +               arm_smmu_get_ste_used(cur, &cur_used);
> +       }
> +       /* write bits in all positions unused by the STE */
> +       for (i = 0; i != ARRAY_SIZE(cur->data); i++) {
> +               /* (should probably optimize this away if no write needed) */
> +               WRITE_ONCE(cur->data[i],
> +                          (cur->data[i] & cur_used[i]) |
> +                          (target->data[i] & ~cur_used[i]));
> +       }
> +       arm_smmu_sync_ste_for_sid(smmu, sid);

Yes, I wanted to avoid all the syncs if they are not required.

> +       /*
> +        * It should now be possible to make a single qword write to make
> +        * the new configuration take effect.
> +        */
> +       for (i = 0; i != ARRAY_SIZE(cur->data); i++) {
> +               if ((cur->data[i] & target_used[i]) !=
> +                   (target->data[i] & target_used[i]))
> +                       /*
> +                        * TODO:
> +                        * WARN_ONCE if this condition hits more than once in
> +                        * the loop
> +                        */
> +                       WRITE_ONCE(cur->data[i],
> +                                  (cur->data[i] & cur_used[i]) |
> +                                  (target->data[i] & ~cur_used[i]));
> +       }

> +       arm_smmu_sync_ste_for_sid(smmu, sid);

This sync needs to be optional too.

And there is another optional 4th pass to set the unused target values
to 0.

Basically you have captured the core algorithm, but I think that once
you fill in all the missing bits to reach the same functionality it
will be longer, and unsharable with the CD side.

You could perhaps take this approach and split it into 4 sharable step
functions:

 if (step1(cur, cur_used, target, target_used, len)) {
  arm_smmu_sync_ste_for_sid(smmu, sid);
  arm_smmu_get_ste_used(cur, &cur_used);
 }

 if (step2(cur, cur_used, target, target_used, len))
  arm_smmu_sync_ste_for_sid(smmu, sid);

 if (step3(cur, cur_used, target, target_used, len)) {
  arm_smmu_sync_ste_for_sid(smmu, sid);
  arm_smmu_get_ste_used(cur, &cur_used);
 }

 if (step4(cur, cur_used, target, target_used, len))
  arm_smmu_sync_ste_for_sid(smmu, sid);

To me this is inelegant: if we only need to do step 3 we have to
redundantly scan the array two times first, whereas the rolled-up
version goes directly to step 3.
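Roughly what I mean (an invented sketch, not patch code): a single
scan can classify which pass the entry actually needs next, so the
rolled-up loop never walks the array just to discover a pass has no
work:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: one scan decides the next pass the entry needs. */
enum entry_pass {
	PASS_DONE,		/* cur already equals target */
	PASS_UNUSED_BITS,	/* step 2: bits the HW ignores still differ */
	PASS_CRITICAL_QWORD,	/* step 3: an in-use qword must change */
};

static enum entry_pass next_pass(const uint64_t *cur,
				 const uint64_t *cur_used,
				 const uint64_t *target, int len)
{
	enum entry_pass pass = PASS_DONE;
	int i;

	for (i = 0; i != len; i++) {
		uint64_t diff = cur[i] ^ target[i];

		if (diff & ~cur_used[i])
			return PASS_UNUSED_BITS;
		if (diff & cur_used[i])
			pass = PASS_CRITICAL_QWORD;
	}
	return pass;
}
```

With step1-step4 split out, a transition that is already past step 2
still pays for two full scans before step 3's scan finds the work.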

However, this does convince me you've thought very carefully about this
and have not found a flaw in the design!

Thanks,
Jason

