From: Alexey Kardashevskiy <aik@amd.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Alexandre Ghiti <alex@ghiti.fr>, Anup Patel <anup@brainfault.org>,
Albert Ou <aou@eecs.berkeley.edu>,
Jonathan Corbet <corbet@lwn.net>,
iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
Justin Stitt <justinstitt@google.com>,
linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
linux-riscv@lists.infradead.org, llvm@lists.linux.dev,
Bill Wendling <morbo@google.com>,
Nathan Chancellor <nathan@kernel.org>,
Nick Desaulniers <nick.desaulniers+lkml@gmail.com>,
Miguel Ojeda <ojeda@kernel.org>,
Palmer Dabbelt <palmer@dabbelt.com>,
Paul Walmsley <pjw@kernel.org>,
Robin Murphy <robin.murphy@arm.com>,
Shuah Khan <shuah@kernel.org>,
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
Will Deacon <will@kernel.org>,
Alejandro Jimenez <alejandro.j.jimenez@oracle.com>,
James Gowans <jgowans@amazon.com>,
Kevin Tian <kevin.tian@intel.com>,
Michael Roth <michael.roth@amd.com>,
Pasha Tatashin <pasha.tatashin@soleen.com>,
patches@lists.linux.dev, Samiullah Khawaja <skhawaja@google.com>,
Vasant Hegde <vasant.hegde@amd.com>
Subject: Re: [PATCH v8 07/15] iommupt: Add map_pages op
Date: Mon, 19 Jan 2026 12:00:47 +1100
Message-ID: <e0514ec6-b428-4367-9e0d-cfb53cc64379@amd.com>
In-Reply-To: <20260117154347.GF1134360@nvidia.com>
On 18/1/26 02:43, Jason Gunthorpe wrote:
> On Sat, Jan 17, 2026 at 03:54:52PM +1100, Alexey Kardashevskiy wrote:
>
>> I am trying this with TEE-IO on AMD SEV and hitting problems.
>
> My understanding is that if you want to use SEV today you also have to
> use the kernel command line parameter to force 4k IOMMU pages?
No, not only 4K. By default I do not enforce any page size, so it is "everything but 512G". Only when the device is "accepted" do I change this: I unmap everything in QEMU, "accept" the device, then map everything again, but this time the IOMMU uses the (4K|2M) pagemask and takes the RMP entry sizes into account.
> So, I think your questions are about trying to enhance this to get
> larger pages in the IOMMU when possible?
I did test my 6-month-old code with 2MB pages + runtime smashing; it works fine, although it is ugly as it uses the page-size-recalculating hack I mentioned before.
>> Now, from time to time the guest will share 4K pages which makes the
>> host OS smash NPT's 2MB PDEs to 4K PTEs, and 2M RMP entries to 4K
>> RMP entries, and since the IOMMU performs RMP checks - IOMMU PDEs
>> have to use the same granularity as NPT and RMP.
>
> IMHO this is a bad hardware choice, it is going to make some very
> troublesome software, so sigh.
afaik the Other OS still does not use 2MB pages (or does, but not much?) and it runs on the same hw :)
Sure, we can force some rules in Linux to make the sw simpler though.
>> So I end up in a situation when QEMU asks to map, for example, 2GB
>> of guest RAM and I want most of it to be 2MB mappings, and only
>> handful of 2MB pages to be split into 4K pages. But it appears so
>> that the above enforces the same page size for entire range.
>
>> In the old IOMMU code, I handled it like this:
>>
>> https://github.com/AMDESE/linux-kvm/commit/0a40130987b7b65c367390d23821cc4ecaeb94bd#diff-f22bea128ddb136c3adc56bc09de9822a53ba1ca60c8be662a48c3143c511963L341
>>
>> tl;dr: I constantly re-calculate the page size while mapping.
>
> Doing it at mapping time doesn't seem right to me, AFAICT the RMP can
> change dynamically whenever the guest decides to change the
> private/shared status of memory?
The guest requests a page state conversion, which makes KVM change RMP entries and potentially smash huge pages. The guest only (in)validates the RMP entry but does not change ASID+GPA+other bits; the host does that. But yeah, a race is possible here.
> My expectation for AMD was that the VMM would be monitoring the RMP
> granularity and use cut or "increase/decrease page size" through
> iommupt to adjust the S2 mapping so it works with these RMP
> limitations.
>
> Those don't fully exist yet, but they are in the plans.
I remember the talks about hitless smashing, but in the case of RMPs an atomic xchg is not enough (we have a HW engine for that).
> It assumes that the VMM is continually aware of what all the RMP PTEs
> look like and when they are changing so it can make the required
> adjustments.
>
> The flow would be something like:
> 1) Create an IOAS
> 2) Create a HWPT. If there is some known upper bound on RMP/etc page
> size then limit the HWPT page size to the upper bound
> 3) Map stuff into the ioas
> 4) Build the RMP/etc and map ranges of page granularity
> 5) Call iommufd to adjust the page size within ranges
Say I hotplug a device into a VM with a mix of 4K and 2M RMP entries. QEMU will ask iommufd to map everything (and that would be 2M/1G); should QEMU then ask KVM to walk through the ranges and call iommufd directly to make the IO PDEs/PTEs match the RMP entries?
I mean, I have to do the KVM->iommufd part anyway when 2M->4K smashing happens at runtime, but the initial mapping could be simpler if iommufd could check the RMP.
> 6) Guest changes encrypted state so RMP changes
> 7) VMM adjusts the ranges of page granularity and calls iommufd with
> the updates
> 8) iommupt code increases/decreases page size as required.
>
> Does this seem reasonable?
It does.
For the time being I bypass the IOMMU and make KVM call another FW+HW DMA engine to smash IO PDEs.
>> I know, ideally we would only share memory in 2MB chunks but we are
>> not there yet as I do not know the early boot stage on x86 enough to
>
> Even 2M is too small, I'd expect real scenarios to want to get up to
> 1GB ??
Except for SWIOTLB, afaict there is really no good reason to ever share more than half a MB of memory; 1GB is just way too much waste imho. The biggest RMP entry size is 2M; the 1G RMP optimization is done quite differently. Thanks,
ps. I am still curious about:
> btw just realized - does the code check that the folio_size matches the IO page size? Or is batch_to_domain() expected to start a new batch if the next page size is not the same as the previous? With THP, we can have a mix of page sizes.
> Jason
--
Alexey