From: Jason Gunthorpe <jgg@nvidia.com>
To: Michael Shavit <mshavit@google.com>
Cc: iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, will@kernel.org,
nicolinc@nvidia.com, tina.zhang@intel.com,
jean-philippe@linaro.org, robin.murphy@arm.com
Subject: Re: [RFC PATCH v1 3/8] iommu/arm-smmu-v3-sva: Allocate new ASID from installed_smmus
Date: Mon, 21 Aug 2023 11:26:27 -0300 [thread overview]
Message-ID: <ZON0E3KV46EEPw/p@nvidia.com> (raw)
In-Reply-To: <CAKHBV27PL=2jxOd0BoYdoBMTu_0rm4z_JP6iG+SVi5Ag7w2kWw@mail.gmail.com>
On Mon, Aug 21, 2023 at 10:16:54PM +0800, Michael Shavit wrote:
> On Mon, Aug 21, 2023 at 9:50 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> >
> > On Mon, Aug 21, 2023 at 09:38:40PM +0800, Michael Shavit wrote:
> > > On Mon, Aug 21, 2023 at 7:54 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > > >
> > > > On Mon, Aug 21, 2023 at 05:31:23PM +0800, Michael Shavit wrote:
> > > > > On Fri, Aug 18, 2023 at 2:38 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > > > > >
> > > > > > On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> > > > > > > Pick an ASID that is within the supported range of all SMMUs that the
> > > > > > > domain is installed to.
> > > > > > >
> > > > > > > Signed-off-by: Michael Shavit <mshavit@google.com>
> > > > > > > ---
> > > > > >
> > > > > > This seems like a pretty niche scenario, maybe we should just keep a
> > > > > > global for the max ASID?
> > > > > >
> > > > > > Otherwise we need code to change the ASID, even for non-SVA domains,
> > > > > > when the domain is installed in different devices, if the current ASID
> > > > > > is over the instance max.
> > > > >
> > > > > This RFC took the other easy way out for this problem by rejecting
> > > > > attaching a domain if its currently assigned ASID/VMID
> > > > > is out of range when attaching to a new SMMU. But I'm not sure
> > > > > which of the two options is the right trade-off.
> > > > > Especially if we move VMID to a global allocator (which I plan to add
> > > > > for v2), setting a global maximum for VMID of 256 sounds small.
> > > >
> > > > IMHO the simplest and best thing is to make both vmid and asid
> > > > local allocators. Then a lot of these problems disappear.
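
A minimal sketch of the per-instance allocator idea above, in plain C rather than kernel code (the real implementation would presumably use an ida/xarray; all names here are hypothetical). The point is that each SMMU instance owns its own ASID space, so the same numeric ASID can be live on two instances without conflict:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define ASID_BITS 8                /* e.g. an instance limited to 256 ASIDs */
#define NUM_ASIDS (1u << ASID_BITS)

/* Hypothetical per-instance state: the allocator lives in the instance
 * struct instead of a global xarray. */
struct smmu_instance {
	uint8_t asid_used[NUM_ASIDS];
};

/* Allocate the lowest free ASID from this instance, or -1 if exhausted. */
static int smmu_asid_alloc(struct smmu_instance *smmu)
{
	for (unsigned int i = 0; i < NUM_ASIDS; i++) {
		if (!smmu->asid_used[i]) {
			smmu->asid_used[i] = 1;
			return (int)i;
		}
	}
	return -1;
}

static void smmu_asid_free(struct smmu_instance *smmu, int asid)
{
	smmu->asid_used[asid] = 0;
}
```

With per-instance allocation, a domain attached to a 16-bit-ASID SMMU and an 8-bit-ASID SMMU simply gets a separate, in-range tag from each, which is why the out-of-range-on-attach problem disappears.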
> > >
> > > Well that does sound like the most flexible, but IMO quite a lot more
> > > complicated.
> > >
> > > I'll post a v2 RFC that removes the `iommu/arm-smmu-v3: Add list of
> > > installed_smmus` patch and uses a flat master list in smmu_domain as
> > > suggested by Robin, for comparison with the v1. But at a glance using a
> > > local allocator would require:
> >
> > > 1. Keeping that patch so we can track the asid/vmid for a domain on a
> > > per smmu instance
> >
> > You'd have to store the cache tag in the per-master struct on that
> > list and take it out of the domain struct.
> >
> > Ie the list of attached masters contains the per-master cache tag
> > instead of a global cache tag.
> >
> > The only place you need the cache tag is when iterating over the list
> > of masters, so it is OK.
> >
> > If the list of masters is sorted by smmu then the first master of each
> > smmu can be used to perform the cache tag invalidation, then the rest
> > of the list is the ATC invalidation.
> >
> > The looping code will be a bit ugly.
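
The loop shape being described might look like the following sketch (plain C with hypothetical names, not the kernel's actual structures): the domain keeps a flat attachment list sorted by SMMU instance, the first entry for each instance carries that instance's cache tag and triggers the TLB invalidation, and every entry gets an ATC invalidation:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical per-attachment record; smmu_id stands in for the
 * arm_smmu_device pointer the real code would compare. */
struct attachment {
	int smmu_id;
	int cache_tag;  /* per-instance ASID/VMID */
};

struct inval_counts {
	int tlb;        /* one per distinct SMMU instance */
	int atc;        /* one per attached master */
};

static struct inval_counts domain_invalidate(const struct attachment *list,
					     size_t n)
{
	struct inval_counts c = {0, 0};
	int prev_smmu = -1;

	for (size_t i = 0; i < n; i++) {
		if (list[i].smmu_id != prev_smmu) {
			/* first master on this instance: issue the TLB
			 * invalidation using this instance's cache tag */
			c.tlb++;
			prev_smmu = list[i].smmu_id;
		}
		/* every master still needs its ATC invalidated */
		c.atc++;
	}
	return c;
}
```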
>
> I suppose that could work... but I'm worried it's going to be messy,
> especially if we think about how the PASID feature would interact.
> With PASID, there could be multiple domains attached to a master. So
> we won't be able to store a single cache tag/asid for the currently
> attached domain on the arm_smmu_master.
I wasn't suggesting storing it in the arm_smmu_master; I was
suggesting storing it in the same place you store the per-master
PASID.
E.g. I expect that on attach the domain will allocate new memory to
store the pasid/cache tag/master/domain tuple and thread that memory
onto a list of attached masters.
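
Sketched out, that per-attach bookkeeping might look like this (plain C, hypothetical names and layout, not the kernel's actual structs): attach allocates one record holding the PASID, the per-instance cache tag, and back-pointers, and threads it on the domain's list:

```c
#include <assert.h>
#include <stdlib.h>

struct master;   /* stand-ins for arm_smmu_master / the domain struct */
struct domain;

/* Hypothetical per-attachment record, allocated at attach time. */
struct attached_master {
	struct master *master;
	struct domain *domain;
	int pasid;
	int cache_tag;                /* ASID/VMID local to master's SMMU */
	struct attached_master *next; /* domain's list of attached masters */
};

static struct attached_master *
domain_attach(struct domain *d, struct master *m, int pasid, int cache_tag,
	      struct attached_master **head)
{
	struct attached_master *am = calloc(1, sizeof(*am));

	if (!am)
		return NULL;
	am->master = m;
	am->domain = d;
	am->pasid = pasid;
	am->cache_tag = cache_tag;
	am->next = *head;
	*head = am;
	return am;
}
```

Because the cache tag lives in the attachment record rather than in the domain or the master, a master with several PASID-attached domains (or a domain spanning several SMMUs) simply holds several records, each with its own tag.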
> > > (on a loop over every smmu the domain in arm_smmu_mmu_notifier_get is
> > > attached to, which just at a glance looks headache inducing because of
> > > sva's piggybacking on the rid domain.)
> >
> > Not every smmu, just the one you are *currently* attaching to. We
> > don't care if the *other* smmu's have different ASIDs, maybe they are
> > not using BTM, or won't use SVA.
>
> I mean because the domain in arm_smmu_mmu_notifier_get is the RID
> domain (not the SVA domain, the same issue we discussed in the
> previous thread), which can be attached to multiple SMMUs.
Oh, that is totally nonsensical. I expect you will need to fix that
sooner rather than later. Once the CD table is moved and there is a
proper way to track the PASID, it should not be needed. It shouldn't
factor into the decision about where to put the ASID xarray.
Jason