From: Mostafa Saleh <smostafa@google.com>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev,
linux-arm-kernel@lists.infradead.org, iommu@lists.linux.dev,
maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
suzuki.poulose@arm.com, yuzenghui@huawei.com,
catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com,
jean-philippe@linaro.org, qperret@google.com, tabba@google.com,
mark.rutland@arm.com, praan@google.com
Subject: Re: [PATCH v3 29/29] iommu/arm-smmu-v3-kvm: Add IOMMU ops
Date: Wed, 13 Aug 2025 13:52:42 +0000 [thread overview]
Message-ID: <aJyYquSQl_OfNDA-@google.com> (raw)
In-Reply-To: <20250812134808.GC599331@ziepe.ca>
On Tue, Aug 12, 2025 at 10:48:08AM -0300, Jason Gunthorpe wrote:
> On Tue, Aug 12, 2025 at 12:37:08PM +0000, Mostafa Saleh wrote:
> > I see, but most of the code in KVM mode is exactly the same as in the
> > current driver, as the driver is not HW agnostic (unlike virtio-iommu).
> > In fact it does almost everything, and just delegates
> > attach_identity/blocked to the hypervisor.
>
> How is that possible? the kvm driver for smmuv3 should be very
> different than the iommu subsystem driver. That seemed to be what this
> series was showing.. Even the memory allocations and page table code
> were all modified?
>
> This delta seems to only get bigger as you move on toward having full
> emulation in the hypervisor.
>
> So, I'm confused what is being shared here and why are we trying so
> hard to force two different things to share some unclear amount of
> code?
>
Originally, this series (v1-v3, i.e. the current one) just split the
common code (SMMUv3 and io-pgtable) into separate files and added a
new driver, so there was a clear boundary.
But, as I mentioned, I started reworking the series based on the
feedback to have KVM as a mode in the existing driver.
IMO, there is value in either approach (single driver or two drivers):
a single driver is less intrusive to the current one, as it doesn't do
any splitting and just adds a few hooks (rough sketch below).
As I mentioned, I am open to both, not trying to force anything :)
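
To illustrate what "a few hooks" could look like (a hypothetical
sketch only; the function and hypercall names below are made up and
not from the actual patches): the host driver keeps doing all the
probing and configuration as today, and only the final attach is
delegated to the hypervisor, e.g.:

	/*
	 * Hypothetical sketch: is_kvm_mode(), kvm_iommu_attach_identity()
	 * and arm_smmu_attach_dev_identity_native() are illustrative
	 * names only, not from the actual series.
	 */
	static int arm_smmu_attach_dev_identity(struct iommu_domain *domain,
						struct device *dev)
	{
		struct arm_smmu_master *master = dev_iommu_priv_get(dev);

		if (is_kvm_mode())
			/* delegate the final attach to the hypervisor */
			return kvm_iommu_attach_identity(master);

		/* existing non-pKVM path, unchanged */
		return arm_smmu_attach_dev_identity_native(master);
	}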
> > In addition, with no standard iommu-binding, this driver has to be
> > enlightened somehow about how to deal with device operations.
> >
> > As mentioned before, when nesting is added, many of the hooks will be
> > removed anyway as KVM would rely on trap and emulate instead of HVCs.
> >
> > Otherwise, we can skip this series and I can post nesting directly
> > (which would be a relatively bigger one), that probably would rely
> > on the same concept of the driver bootstrapping the hypervisor driver.
>
> I think you end up with the same design I am suggesting here, it is
> nonsense to have one smmu driver when it is actually split into two
> instances where one is running inside the protected world and one is
> the normal iommu subsystem driver. That's not just bootstrapping, that
> is two full instances running in parallel that are really very
> different things.
But they don't do the same thing; they collaborate to configure the
IOMMUs.
That's also how the kernel boots: it starts as one object and then
splits between EL1 and EL2, which run in parallel. At run time KVM
decides whether it runs fully in EL2 or in split mode, and accordingly
issues either function calls or hypercalls. It makes sense for the KVM
drivers to follow the same model; see kvm_call_* for example.
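
Roughly (a simplified sketch from memory, not the exact macro; see
kvm_call_hyp() in arch/arm64/include/asm/kvm_host.h for the real
thing), the dispatch looks like:

	/*
	 * Simplified sketch: the same call site becomes either a
	 * direct function call (VHE, KVM fully at EL2) or an HVC into
	 * the hyp object (nVHE/pKVM split mode).
	 */
	#define kvm_call_hyp(f, ...)					\
	do {								\
		if (has_vhe())						\
			f(__VA_ARGS__);					\
		else							\
			kvm_call_hyp_nvhe(f, ##__VA_ARGS__);		\
	} while (0)

A KVM-mode SMMUv3 driver would use the same kind of indirection for
its hypervisor interactions.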
>
> > I am generally open to any path to move this forward, as Robin and
> > Will originally suggested the KVM mode in the upstream driver approach,
> > what do you think?
>
> Well, I'd like to see you succeed here too, but it these series are
> very big and seem a pretty invasive. I'm very keen we are still able
> to support the smmuv3 driver in all the non-pkvm configurations
> without hassle, and I don't want to become blocked on inscrutible pkvm
> problems because we have decided to share a few lines of code..
Makes sense, but making changes is part of evolving the driver; for
example, attaching devices looks nothing like it did 3 years ago when
I started working on this.
Much of the cmdq code has been reworked for Tegra, which is not part
of the SMMUv3 architecture.
IMHO, we can't just reject changes because they are invasive or
because some people don't care about them; instead we should judge
them on the maintainability of the new code and on whether they make
sense or not.
>
> Are you sure there isn't some inbetween you can do that allows getting
> to the full emulation of smmuv3 without so much churn to existing
> code?
>
> This is why I prefer adding new stuff that you then just erase vs
> mangling the existing drivers potentially forever..
>
What I can do is hold back v4 and revive my nesting patches, which
don't need any hooks in iommu_ops nor any hypercalls.
However, the tricky part would be the probe, as the hypervisor has to
know about the SMMUs before the kernel does anything significant with
them, while at the same time we don't want to duplicate the probe
path.
If that goes well, I can post something next week with nesting instead
of the current approach.
Thanks,
Mostafa
> Jason