From: Samiullah Khawaja <skhawaja@google.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Alexandre Ghiti <alex@ghiti.fr>, Anup Patel <anup@brainfault.org>,
Albert Ou <aou@eecs.berkeley.edu>,
Jonathan Corbet <corbet@lwn.net>,
iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
Justin Stitt <justinstitt@google.com>,
linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
linux-riscv@lists.infradead.org, llvm@lists.linux.dev,
Bill Wendling <morbo@google.com>,
Nathan Chancellor <nathan@kernel.org>,
Nick Desaulniers <nick.desaulniers+lkml@gmail.com>,
Miguel Ojeda <ojeda@kernel.org>,
Palmer Dabbelt <palmer@dabbelt.com>,
Paul Walmsley <pjw@kernel.org>,
Robin Murphy <robin.murphy@arm.com>,
Shuah Khan <shuah@kernel.org>,
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
Will Deacon <will@kernel.org>,
Alexey Kardashevskiy <aik@amd.com>,
Alejandro Jimenez <alejandro.j.jimenez@oracle.com>,
James Gowans <jgowans@amazon.com>,
Kevin Tian <kevin.tian@intel.com>,
Michael Roth <michael.roth@amd.com>,
Pasha Tatashin <pasha.tatashin@soleen.com>,
patches@lists.linux.dev
Subject: Re: [PATCH v7 07/15] iommupt: Add map_pages op
Date: Tue, 28 Oct 2025 10:33:26 -0700
Message-ID: <CAAywjhS+-CNXTR3_EpVjsie3bmz_2szBR7nh53hA-dWCm5j1kA@mail.gmail.com>
In-Reply-To: <7-v7-ab019a8791e2+175b8-iommu_pt_jgg@nvidia.com>
On Thu, Oct 23, 2025 at 11:21 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> map is slightly complicated because it has to handle a number of special
> edge cases:
> - Overmapping a previously shared, but now empty, table level with an OA.
> Requires validating and freeing the possibly empty tables
> - Doing the above across an entire to-be-created contiguous entry
> - Installing a new shared table level concurrently with another thread
> - Expanding the table by adding more top levels
>
> Table expansion is a unique feature of AMDv1; this version is quite similar
> except that it also handles racing concurrent lockless maps. The table top
> pointer and starting level are encoded in a single uintptr_t which ensures
> we can READ_ONCE() without tearing. Any op will do the READ_ONCE() and use
> that fixed point as its starting point. Concurrent expansion is handled
> with a table global spinlock.
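
For readers following along, here is a minimal sketch of the encoding
pattern this paragraph describes, with invented names; the patch's real
helpers are _pt_top_set()/_pt_top_range() further down, so this is only
an illustration of the idea, not the actual layout:

  /*
   * Illustrative only: pack the starting level into the low,
   * always-zero alignment bits of the top table pointer so a single
   * READ_ONCE() yields a consistent (pointer, level) pair with no
   * tearing. Assumes kernel context for READ_ONCE().
   */
  #define EX_TOP_LEVEL_MASK 0x7UL

  struct ex_top {
          void *table;
          unsigned int level;
  };

  static inline uintptr_t ex_top_encode(void *table, unsigned int level)
  {
          return (uintptr_t)table | level;
  }

  static inline struct ex_top ex_read_top(const uintptr_t *top_of_table)
  {
          /* One torn-free snapshot; the whole op walks from this fixed point */
          uintptr_t top = READ_ONCE(*top_of_table);

          return (struct ex_top){
                  .table = (void *)(top & ~EX_TOP_LEVEL_MASK),
                  .level = top & EX_TOP_LEVEL_MASK,
          };
  }

The concurrent-expansion part then only has to publish a new encoded
value with WRITE_ONCE() once the HW is ready, which is what
increase_top() below does under the spinlock.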
>
> When inserting a new table entry map checks that the entire portion of the
> table is empty. This includes freeing any empty lower tables that will be
> overwritten by an OA. A separate free list is used while checking and
> collecting all the empty lower tables so that writing the new entry is
> uninterrupted: either the new entry is fully written or nothing changes.
>
> A special fast path for PAGE_SIZE is implemented that does a direct walk
> to the leaf level and installs a single entry. This gives ~15% improvement
> for iommu_map() when mapping lists of single pages.
>
> This version sits under the iommu_domain_ops as map_pages() but does not
> require the external page size calculation. The implementation is actually
> map_range() and can do arbitrary ranges, internally handling all the
> validation and supporting any arrangement of page sizes. A future series
> can optimize iommu_map() to take advantage of this.
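
Agreed this is a nicer interface. For context, the "external page size
calculation" being dropped is the usual chopping of a range into
(pgsize, pgcount) chunks that callers of map_pages() have to do today;
a rough sketch of that calculation (illustrative helper, not the core's
actual code) looks like:

  /*
   * Illustrative only: pick the largest page size from the bitmap that
   * both addresses are aligned to and that fits the remaining length
   * (len != 0 assumed). A caller loops, mapping one such chunk at a time.
   */
  static size_t ex_pick_pgsize(unsigned long pgsize_bitmap, unsigned long iova,
                               phys_addr_t paddr, size_t len)
  {
          unsigned long addr_merge = iova | paddr;
          /* Page sizes no larger than the remaining length ... */
          unsigned long sizes = pgsize_bitmap & GENMASK(__fls(len), 0);

          /* ... and no larger than the alignment of both addresses */
          if (addr_merge)
                  sizes &= GENMASK(__ffs(addr_merge), 0);

          return sizes ? BIT(__fls(sizes)) : 0;
  }

Since the implementation here picks the best size per leaf internally
(compute_best_pgsize() in this patch), a range-based iommu_map() can
drop that loop entirely, which would be a nice follow-up.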
>
> Tested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/iommu/generic_pt/iommu_pt.h | 501 +++++++++++++++++++++++++++-
> drivers/iommu/generic_pt/pt_iter.h | 2 +-
> include/linux/generic_pt/iommu.h | 59 ++++
> 3 files changed, 560 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/iommu/generic_pt/iommu_pt.h b/drivers/iommu/generic_pt/iommu_pt.h
> index e3d1b272723db0..f32e81509f4f09 100644
> --- a/drivers/iommu/generic_pt/iommu_pt.h
> +++ b/drivers/iommu/generic_pt/iommu_pt.h
> @@ -91,6 +91,23 @@ static __maybe_unused int make_range_u64(struct pt_common *common,
> ret; \
> })
>
> +static inline unsigned int compute_best_pgsize(struct pt_state *pts,
> + pt_oaddr_t oa)
> +{
> + struct pt_iommu *iommu_table = iommu_from_common(pts->range->common);
> +
> + if (!pt_can_have_leaf(pts))
> + return 0;
> +
> + /*
> + * The page size is limited by the domain's bitmap. This allows the core
> + * code to reduce the supported page sizes by changing the bitmap.
> + */
> + return pt_compute_best_pgsize(pt_possible_sizes(pts) &
> + iommu_table->domain.pgsize_bitmap,
> + pts->range->va, pts->range->last_va, oa);
> +}
> +
> static __always_inline int __do_iova_to_phys(struct pt_range *range, void *arg,
> unsigned int level,
> struct pt_table_p *table,
> @@ -147,6 +164,8 @@ EXPORT_SYMBOL_NS_GPL(DOMAIN_NS(iova_to_phys), "GENERIC_PT_IOMMU");
>
> struct pt_iommu_collect_args {
> struct iommu_pages_list free_list;
> + /* Fail if any OAs are within the range */
> + u8 check_mapped : 1;
> };
>
> static int __collect_tables(struct pt_range *range, void *arg,
> @@ -156,7 +175,7 @@ static int __collect_tables(struct pt_range *range, void *arg,
> struct pt_iommu_collect_args *collect = arg;
> int ret;
>
> - if (!pt_can_have_table(&pts))
> + if (!collect->check_mapped && !pt_can_have_table(&pts))
> return 0;
>
> for_each_pt_level_entry(&pts) {
> @@ -167,6 +186,8 @@ static int __collect_tables(struct pt_range *range, void *arg,
> return ret;
> continue;
> }
> + if (pts.type == PT_ENTRY_OA && collect->check_mapped)
> + return -EADDRINUSE;
> }
> return 0;
> }
> @@ -187,6 +208,477 @@ static inline struct pt_table_p *table_alloc_top(struct pt_common *common,
> log2_to_int(pt_top_memsize_lg2(common, top_of_table)));
> }
>
> +/* Allocate an interior table */
> +static inline struct pt_table_p *table_alloc(const struct pt_state *parent_pts,
> + gfp_t gfp)
> +{
> + struct pt_iommu *iommu_table =
> + iommu_from_common(parent_pts->range->common);
> + struct pt_state child_pts =
> + pt_init(parent_pts->range, parent_pts->level - 1, NULL);
> +
> + return iommu_alloc_pages_node_sz(
> + iommu_table->nid, gfp,
> + log2_to_int(pt_num_items_lg2(&child_pts) +
> + ilog2(PT_ITEM_WORD_SIZE)));
> +}
> +
> +static inline int pt_iommu_new_table(struct pt_state *pts,
> + struct pt_write_attrs *attrs)
> +{
> + struct pt_table_p *table_mem;
> + phys_addr_t phys;
> +
> + /* Given PA/VA/length can't be represented */
> + if (PT_WARN_ON(!pt_can_have_table(pts)))
> + return -ENXIO;
> +
> + table_mem = table_alloc(pts, attrs->gfp);
> + if (IS_ERR(table_mem))
> + return PTR_ERR(table_mem);
> +
> + phys = virt_to_phys(table_mem);
> + if (!pt_install_table(pts, phys, attrs)) {
> + iommu_free_pages(table_mem);
> + return -EAGAIN;
> + }
> +
> + if (IS_ENABLED(CONFIG_DEBUG_GENERIC_PT)) {
> + /*
> + * The underlying table format can't store this physical table
> + * address. This can happen when kunit tests tables outside their
> + * normal environment, where the CPU's addressing may be more limited.
> + */
> + pt_load_single_entry(pts);
> + if (PT_WARN_ON(pt_table_pa(pts) != phys)) {
> + pt_clear_entries(pts, ilog2(1));
> + iommu_free_pages(table_mem);
> + return -EINVAL;
> + }
> + }
> +
> + pts->table_lower = table_mem;
> + return 0;
> +}
> +
> +struct pt_iommu_map_args {
> + struct iommu_iotlb_gather *iotlb_gather;
> + struct pt_write_attrs attrs;
> + pt_oaddr_t oa;
> + unsigned int leaf_pgsize_lg2;
> + unsigned int leaf_level;
> +};
> +
> +/*
> + * This will recursively check any tables in the block to validate they are
> + * empty and then free them through the gather.
> + */
> +static int clear_contig(const struct pt_state *start_pts,
> + struct iommu_iotlb_gather *iotlb_gather,
> + unsigned int step, unsigned int pgsize_lg2)
> +{
> + struct pt_iommu *iommu_table =
> + iommu_from_common(start_pts->range->common);
> + struct pt_range range = *start_pts->range;
> + struct pt_state pts =
> + pt_init(&range, start_pts->level, start_pts->table);
> + struct pt_iommu_collect_args collect = { .check_mapped = true };
> + int ret;
> +
> + pts.index = start_pts->index;
> + pts.end_index = start_pts->index + step;
> + for (; _pt_iter_load(&pts); pt_next_entry(&pts)) {
> + if (pts.type == PT_ENTRY_TABLE) {
> + collect.free_list =
> + IOMMU_PAGES_LIST_INIT(collect.free_list);
> + ret = pt_walk_descend_all(&pts, __collect_tables,
> + &collect);
> + if (ret)
> + return ret;
> +
> + /*
> + * The table item must be cleared before we can update
> + * the gather
> + */
> + pt_clear_entries(&pts, ilog2(1));
> +
> + iommu_pages_list_add(&collect.free_list,
> + pt_table_ptr(&pts));
> + gather_range_pages(
> + iotlb_gather, iommu_table, range.va,
> + log2_to_int(pt_table_item_lg2sz(&pts)),
> + &collect.free_list);
> + } else if (pts.type != PT_ENTRY_EMPTY) {
> + return -EADDRINUSE;
> + }
> + }
> + return 0;
> +}
> +
> +static int __map_range_leaf(struct pt_range *range, void *arg,
> + unsigned int level, struct pt_table_p *table)
> +{
> + struct pt_state pts = pt_init(range, level, table);
> + struct pt_iommu_map_args *map = arg;
> + unsigned int leaf_pgsize_lg2 = map->leaf_pgsize_lg2;
> + unsigned int start_index;
> + pt_oaddr_t oa = map->oa;
> + unsigned int step;
> + bool need_contig;
> + int ret = 0;
> +
> + PT_WARN_ON(map->leaf_level != level);
> + PT_WARN_ON(!pt_can_have_leaf(&pts));
> +
> + step = log2_to_int_t(unsigned int,
> + leaf_pgsize_lg2 - pt_table_item_lg2sz(&pts));
> + need_contig = leaf_pgsize_lg2 != pt_table_item_lg2sz(&pts);
> +
> + _pt_iter_first(&pts);
> + start_index = pts.index;
> + do {
> + pts.type = pt_load_entry_raw(&pts);
> + if (pts.type != PT_ENTRY_EMPTY || need_contig) {
> + if (pts.index != start_index)
> + pt_index_to_va(&pts);
> + ret = clear_contig(&pts, map->iotlb_gather, step,
> + leaf_pgsize_lg2);
> + if (ret)
> + break;
> + }
> +
> + if (IS_ENABLED(CONFIG_DEBUG_GENERIC_PT)) {
> + pt_index_to_va(&pts);
> + PT_WARN_ON(compute_best_pgsize(&pts, oa) !=
> + leaf_pgsize_lg2);
> + }
> + pt_install_leaf_entry(&pts, oa, leaf_pgsize_lg2, &map->attrs);
> +
> + oa += log2_to_int(leaf_pgsize_lg2);
> + pts.index += step;
> + } while (pts.index < pts.end_index);
> +
> + map->oa = oa;
> + return ret;
> +}
> +
> +static int __map_range(struct pt_range *range, void *arg, unsigned int level,
> + struct pt_table_p *table)
> +{
> + struct pt_state pts = pt_init(range, level, table);
> + struct pt_iommu_map_args *map = arg;
> + int ret;
> +
> + PT_WARN_ON(map->leaf_level == level);
> + PT_WARN_ON(!pt_can_have_table(&pts));
> +
> + _pt_iter_first(&pts);
> +
> + /* Descend to a child table */
> + do {
> + pts.type = pt_load_entry_raw(&pts);
> +
> + if (pts.type != PT_ENTRY_TABLE) {
> + if (pts.type != PT_ENTRY_EMPTY)
> + return -EADDRINUSE;
> + ret = pt_iommu_new_table(&pts, &map->attrs);
> + if (ret) {
> + /*
> + * Racing with another thread installing a table
> + */
> + if (ret == -EAGAIN)
> + continue;
> + return ret;
> + }
> + } else {
> + pts.table_lower = pt_table_ptr(&pts);
> + }
> +
> + /*
> + * The already present table can possibly be shared with another
> + * concurrent map.
> + */
> + if (map->leaf_level == level - 1)
> + ret = pt_descend(&pts, arg, __map_range_leaf);
> + else
> + ret = pt_descend(&pts, arg, __map_range);
> + if (ret)
> + return ret;
> +
> + pts.index++;
> + pt_index_to_va(&pts);
> + if (pts.index >= pts.end_index)
> + break;
> + } while (true);
> + return 0;
> +}
> +
> +/*
> + * Fast path for the easy case of mapping a 4k page to an already allocated
> + * table. This is a common workload. If it returns -EAGAIN, run the full algorithm
> + * instead.
> + */
> +static __always_inline int __do_map_single_page(struct pt_range *range,
> + void *arg, unsigned int level,
> + struct pt_table_p *table,
> + pt_level_fn_t descend_fn)
> +{
> + struct pt_state pts = pt_init(range, level, table);
> + struct pt_iommu_map_args *map = arg;
> +
> + pts.type = pt_load_single_entry(&pts);
> + if (level == 0) {
> + if (pts.type != PT_ENTRY_EMPTY)
> + return -EADDRINUSE;
> + pt_install_leaf_entry(&pts, map->oa, PAGE_SHIFT,
> + &map->attrs);
> + map->oa += PAGE_SIZE;
> + return 0;
> + }
> + if (pts.type == PT_ENTRY_TABLE)
> + return pt_descend(&pts, arg, descend_fn);
> + /* Something else, use the slow path */
> + return -EAGAIN;
> +}
> +PT_MAKE_LEVELS(__map_single_page, __do_map_single_page);
> +
> +/*
> + * Add a table to the top, increasing the top level as much as necessary to
> + * encompass range.
> + */
> +static int increase_top(struct pt_iommu *iommu_table, struct pt_range *range,
> + struct pt_iommu_map_args *map)
> +{
> + struct iommu_pages_list free_list = IOMMU_PAGES_LIST_INIT(free_list);
> + struct pt_common *common = common_from_iommu(iommu_table);
> + uintptr_t top_of_table = READ_ONCE(common->top_of_table);
> + uintptr_t new_top_of_table = top_of_table;
> + struct pt_table_p *table_mem;
> + unsigned int new_level;
> + spinlock_t *domain_lock;
> + unsigned long flags;
> + int ret;
> +
> + while (true) {
> + struct pt_range top_range =
> + _pt_top_range(common, new_top_of_table);
> + struct pt_state pts = pt_init_top(&top_range);
> +
> + top_range.va = range->va;
> + top_range.last_va = range->last_va;
> +
> + if (!pt_check_range(&top_range) && map->leaf_level <= pts.level)
> + break;
> +
> + pts.level++;
> + if (pts.level > PT_MAX_TOP_LEVEL ||
> + pt_table_item_lg2sz(&pts) >= common->max_vasz_lg2) {
> + ret = -ERANGE;
> + goto err_free;
> + }
> +
> + new_level = pts.level;
> + table_mem = table_alloc_top(
> + common, _pt_top_set(NULL, pts.level), map->attrs.gfp);
> + if (IS_ERR(table_mem))
> + return PTR_ERR(table_mem);
> + iommu_pages_list_add(&free_list, table_mem);
> +
> + /* The new table always links to the lower table at index 0 */
> + top_range.va = 0;
> + top_range.top_level = new_level;
> + pts.table_lower = pts.table;
> + pts.table = table_mem;
> + pt_load_single_entry(&pts);
> + PT_WARN_ON(pts.index != 0);
> + pt_install_table(&pts, virt_to_phys(pts.table_lower),
> + &map->attrs);
> + new_top_of_table = _pt_top_set(pts.table, pts.level);
> + }
> +
> + /*
> + * top_of_table is write locked by the spinlock, but readers can use
> + * READ_ONCE() to get the value. Since we encode both the level and the
> + * pointer in one value, the lockless reader will always see something
> + * valid. The HW must be updated to the new level under the spinlock
> + * before top_of_table is updated so that concurrent readers don't map
> + * into the new level until it is fully functional. If another thread
> + * already updated it while we were working then throw everything away
> + * and try again.
> + */
> + domain_lock = iommu_table->driver_ops->get_top_lock(iommu_table);
> + spin_lock_irqsave(domain_lock, flags);
> + if (common->top_of_table != top_of_table) {
> + spin_unlock_irqrestore(domain_lock, flags);
> + ret = -EAGAIN;
> + goto err_free;
> + }
> +
> + /*
> + * We do not issue any flushes for change_top on the expectation that
> + * any walk cache will not become a problem by adding another layer to
> + * the tree. Misses will rewalk from the updated top pointer, hits
> + * continue to be correct. Negative caching is fine too since all the
> + * new IOVA added by the new top is non-present.
> + */
> + iommu_table->driver_ops->change_top(
> + iommu_table, virt_to_phys(table_mem), new_level);
> + WRITE_ONCE(common->top_of_table, new_top_of_table);
> + spin_unlock_irqrestore(domain_lock, flags);
> + return 0;
> +
> +err_free:
> + iommu_put_pages_list(&free_list);
> + return ret;
> +}
> +
> +static int check_map_range(struct pt_iommu *iommu_table, struct pt_range *range,
> + struct pt_iommu_map_args *map)
> +{
> + struct pt_common *common = common_from_iommu(iommu_table);
> + int ret;
> +
> + do {
> + ret = pt_check_range(range);
> + if (!pt_feature(common, PT_FEAT_DYNAMIC_TOP))
> + return ret;
> +
> + if (!ret && map->leaf_level <= range->top_level)
> + break;
> +
> + ret = increase_top(iommu_table, range, map);
> + if (ret && ret != -EAGAIN)
> + return ret;
> +
> + /* Reload the new top */
> + *range = pt_make_range(common, range->va, range->last_va);
> + } while (ret);
> + PT_WARN_ON(pt_check_range(range));
> + return 0;
> +}
> +
> +static int do_map(struct pt_range *range, bool single_page,
> + struct pt_iommu_map_args *map)
> +{
> + if (single_page) {
> + int ret;
> +
> + ret = pt_walk_range(range, __map_single_page, map);
> + if (ret != -EAGAIN)
> + return ret;
> + /* EAGAIN falls through to the full path */
> + }
> +
> + if (map->leaf_level == range->top_level)
> + return pt_walk_range(range, __map_range_leaf, map);
> + return pt_walk_range(range, __map_range, map);
> +}
> +
> +/**
> + * map_pages() - Install translation for an IOVA range
> + * @domain: Domain to manipulate
> + * @iova: IO virtual address to start
> + * @paddr: Physical/Output address to start
> + * @pgsize: Length of each page
> + * @pgcount: Length of the range in pgsize units starting from @iova
> + * @prot: A bitmap of IOMMU_READ/WRITE/CACHE/NOEXEC/MMIO
> + * @gfp: GFP flags for any memory allocations
> + * @mapped: Total bytes successfully mapped
> + *
> + * The range starting at IOVA will have paddr installed into it. The caller
> + * must specify a valid pgsize and pgcount to segment the range into compatible
> + * blocks.
> + *
> + * On error the caller will probably want to invoke unmap on the range from iova
> + * up to the amount indicated by @mapped to return the table back to an
> + * unchanged state.
> + *
> + * Context: The caller must hold a write range lock that includes the whole
> + * range.
> + *
> + * Returns: -ERRNO on failure, 0 on success. The number of bytes of VA that were
> + * mapped is added to @mapped; @mapped is not zeroed first.
> + */
> +int DOMAIN_NS(map_pages)(struct iommu_domain *domain, unsigned long iova,
> + phys_addr_t paddr, size_t pgsize, size_t pgcount,
> + int prot, gfp_t gfp, size_t *mapped)
> +{
> + struct pt_iommu *iommu_table =
> + container_of(domain, struct pt_iommu, domain);
> + pt_vaddr_t pgsize_bitmap = iommu_table->domain.pgsize_bitmap;
> + struct pt_common *common = common_from_iommu(iommu_table);
> + struct iommu_iotlb_gather iotlb_gather;
> + pt_vaddr_t len = pgsize * pgcount;
> + struct pt_iommu_map_args map = {
> + .iotlb_gather = &iotlb_gather,
> + .oa = paddr,
> + .leaf_pgsize_lg2 = vaffs(pgsize),
> + };
> + bool single_page = false;
> + struct pt_range range;
> + int ret;
> +
> + iommu_iotlb_gather_init(&iotlb_gather);
> +
> + if (WARN_ON(!(prot & (IOMMU_READ | IOMMU_WRITE))))
> + return -EINVAL;
> +
> + /* Check the paddr doesn't exceed what the table can store */
> + if ((sizeof(pt_oaddr_t) < sizeof(paddr) &&
> + (pt_vaddr_t)paddr > PT_VADDR_MAX) ||
> + (common->max_oasz_lg2 != PT_VADDR_MAX_LG2 &&
> + oalog2_div(paddr, common->max_oasz_lg2)))
> + return -ERANGE;
> +
> + ret = pt_iommu_set_prot(common, &map.attrs, prot);
> + if (ret)
> + return ret;
> + map.attrs.gfp = gfp;
> +
> + ret = make_range_no_check(common, &range, iova, len);
> + if (ret)
> + return ret;
> +
> + /* Calculate target page size and level for the leaves */
> + if (pt_has_system_page_size(common) && pgsize == PAGE_SIZE &&
> + pgcount == 1) {
> + PT_WARN_ON(!(pgsize_bitmap & PAGE_SIZE));
> + if (log2_mod(iova | paddr, PAGE_SHIFT))
> + return -ENXIO;
> + map.leaf_pgsize_lg2 = PAGE_SHIFT;
> + map.leaf_level = 0;
> + single_page = true;
> + } else {
> + map.leaf_pgsize_lg2 = pt_compute_best_pgsize(
> + pgsize_bitmap, range.va, range.last_va, paddr);
> + if (!map.leaf_pgsize_lg2)
> + return -ENXIO;
> + map.leaf_level =
> + pt_pgsz_lg2_to_level(common, map.leaf_pgsize_lg2);
> + }
> +
> + ret = check_map_range(iommu_table, &range, &map);
> + if (ret)
> + return ret;
> +
> + PT_WARN_ON(map.leaf_level > range.top_level);
> +
> + ret = do_map(&range, single_page, &map);
> +
> + /*
> + * Table levels were freed and replaced with large items; flush any walk
> + * cache that may refer to the freed levels.
> + */
> + if (!iommu_pages_list_empty(&iotlb_gather.freelist))
> + iommu_iotlb_sync(&iommu_table->domain, &iotlb_gather);
> +
> + /* Bytes successfully mapped */
> + PT_WARN_ON(!ret && map.oa - paddr != len);
> + *mapped += map.oa - paddr;
> + return ret;
> +}
> +EXPORT_SYMBOL_NS_GPL(DOMAIN_NS(map_pages), "GENERIC_PT_IOMMU");
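
One note on the kernel-doc error contract: my reading is that a caller
is expected to do roughly the following when the op partially fails
(sketch only; iommu_unmap() is standing in here for whatever rollback
the core actually performs, and SZ_4K/nr/prot are placeholder
parameters):

  static int ex_map_or_rollback(struct iommu_domain *domain, unsigned long iova,
                                phys_addr_t paddr, size_t nr)
  {
          size_t mapped = 0;
          int ret;

          ret = domain->ops->map_pages(domain, iova, paddr, SZ_4K, nr,
                                       IOMMU_READ | IOMMU_WRITE, GFP_KERNEL,
                                       &mapped);
          if (ret && mapped)
                  /* Return the table to its prior state, as the doc suggests */
                  iommu_unmap(domain, iova, mapped);
          return ret;
  }

The "not zeroed first" behaviour of @mapped matters for exactly this
pattern, so it is good that the Returns: section spells it out.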
> +
> struct pt_unmap_args {
> struct iommu_pages_list free_list;
> pt_vaddr_t unmapped;
> @@ -445,6 +937,7 @@ static void pt_iommu_zero(struct pt_iommu_table *fmt_table)
> memset_after(fmt_table, 0, iommu.domain);
>
> /* The caller can initialize some of these values */
> + iommu_table->driver_ops = cfg.driver_ops;
> iommu_table->nid = cfg.nid;
> }
>
> @@ -478,6 +971,12 @@ int pt_iommu_init(struct pt_iommu_table *fmt_table,
> if (ret)
> return ret;
>
> + if (pt_feature(common, PT_FEAT_DYNAMIC_TOP) &&
> + WARN_ON(!iommu_table->driver_ops ||
> + !iommu_table->driver_ops->change_top ||
> + !iommu_table->driver_ops->get_top_lock))
> + return -EINVAL;
> +
> if (pt_feature(common, PT_FEAT_SIGN_EXTEND) &&
> (pt_feature(common, PT_FEAT_FULL_VA) ||
> pt_feature(common, PT_FEAT_DYNAMIC_TOP)))
> diff --git a/drivers/iommu/generic_pt/pt_iter.h b/drivers/iommu/generic_pt/pt_iter.h
> index 87f4a26c1a417a..c0d8617cce2928 100644
> --- a/drivers/iommu/generic_pt/pt_iter.h
> +++ b/drivers/iommu/generic_pt/pt_iter.h
> @@ -612,7 +612,7 @@ static inline int __pt_make_level_fn_err(struct pt_range *range, void *arg,
> * This builds a function call tree that can be fully inlined.
> * The caller must provide a function body in an __always_inline function::
> *
> - * static __always_inline int do(struct pt_range *range, void *arg,
> + * static __always_inline int do_fn(struct pt_range *range, void *arg,
> * unsigned int level, struct pt_table_p *table,
> * pt_level_fn_t descend_fn)
> *
> diff --git a/include/linux/generic_pt/iommu.h b/include/linux/generic_pt/iommu.h
> index ceb6bc9cea37cd..0d59423024d57f 100644
> --- a/include/linux/generic_pt/iommu.h
> +++ b/include/linux/generic_pt/iommu.h
> @@ -11,6 +11,7 @@
>
> struct iommu_iotlb_gather;
> struct pt_iommu_ops;
> +struct pt_iommu_driver_ops;
>
> /**
> * DOC: IOMMU Radix Page Table
> @@ -43,6 +44,12 @@ struct pt_iommu {
> */
> const struct pt_iommu_ops *ops;
>
> + /**
> + * @driver_ops: Function pointers provided by the HW driver to help
> + * manage HW details like caches.
> + */
> + const struct pt_iommu_driver_ops *driver_ops;
> +
> /**
> * @nid: Node ID to use for table memory allocations. The IOMMU driver
> * may want to set the NID to the device's NID, if there are multiple
> @@ -84,6 +91,53 @@ struct pt_iommu_ops {
> void (*deinit)(struct pt_iommu *iommu_table);
> };
>
> +/**
> + * struct pt_iommu_driver_ops - HW IOTLB cache flushing operations
> + *
> + * The IOMMU driver should implement these using container_of(iommu_table) to
> + * get to its iommu_domain derived structure. All ops can be called in atomic
> + * contexts as they are buried under DMA API calls.
> + */
> +struct pt_iommu_driver_ops {
> + /**
> + * @change_top: Update the top of table pointer
> + * @iommu_table: Table to operate on
> + * @top_paddr: New CPU physical address of the top pointer
> + * @top_level: IOMMU PT level of the new top
> + *
> + * Called under the get_top_lock() spinlock. The driver must update all
> + * HW references to this domain with a new top address and
> + * configuration. On return, mappings placed in the new top must be
> + * reachable by the HW.
> + *
> + * top_level encodes the level in IOMMU PT format: level 0 is the
> + * smallest page size, increasing from there. This has to be translated
> + * to any HW specific format. During this call the new top will not be
> + * visible to any other API.
> + *
> + * This op is only used by PT_FEAT_DYNAMIC_TOP, and is required if
> + * enabled.
> + */
> + void (*change_top)(struct pt_iommu *iommu_table, phys_addr_t top_paddr,
> + unsigned int top_level);
> +
> + /**
> + * @get_top_lock: lock to hold when changing the table top
> + * @iommu_table: Table to operate on
> + *
> + * Return the lock to hold while changing the top of the page table
> + * that is stored in HW. The lock will be held prior to calling
> + * change_top() and released once the top is fully visible.
> + *
> + * Typically this would be a lock that protects the iommu_domain's
> + * attachment list.
> + *
> + * This op is only used by PT_FEAT_DYNAMIC_TOP, and is required if
> + * enabled.
> + */
> + spinlock_t *(*get_top_lock)(struct pt_iommu *iommu_table);
> +};
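
These two ops read clearly to me. Just to confirm my understanding, a
hypothetical driver-side wiring would look something like the below;
every ex_* name is invented for illustration, and a real driver would
embed its format struct rather than struct pt_iommu directly:

  struct ex_dev {
          struct list_head domain_item;
  };

  struct ex_domain {
          struct pt_iommu pt;             /* real drivers embed the format struct */
          spinlock_t attach_lock;         /* protects attached_devs */
          struct list_head attached_devs;
  };

  static spinlock_t *ex_get_top_lock(struct pt_iommu *iommu_table)
  {
          struct ex_domain *dom = container_of(iommu_table, struct ex_domain, pt);

          return &dom->attach_lock;
  }

  static void ex_hw_set_root(struct ex_dev *edev, phys_addr_t root_paddr,
                             unsigned int top_level)
  {
          /* Translate top_level to the HW encoding and reprogram this
           * device's table root / device table entry with root_paddr. */
  }

  static void ex_change_top(struct pt_iommu *iommu_table, phys_addr_t top_paddr,
                            unsigned int top_level)
  {
          struct ex_domain *dom = container_of(iommu_table, struct ex_domain, pt);
          struct ex_dev *edev;

          /* attach_lock is already held; generic code took it via get_top_lock() */
          list_for_each_entry(edev, &dom->attached_devs, domain_item)
                  ex_hw_set_root(edev, top_paddr, top_level);
  }

  static const struct pt_iommu_driver_ops ex_driver_ops = {
          .change_top = ex_change_top,
          .get_top_lock = ex_get_top_lock,
  };

If that matches the intent, then the documentation here is sufficient.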
> +
> static inline void pt_iommu_deinit(struct pt_iommu *iommu_table)
> {
> /*
> @@ -120,6 +174,10 @@ struct pt_iommu_cfg {
> #define IOMMU_PROTOTYPES(fmt) \
> phys_addr_t pt_iommu_##fmt##_iova_to_phys(struct iommu_domain *domain, \
> dma_addr_t iova); \
> + int pt_iommu_##fmt##_map_pages(struct iommu_domain *domain, \
> + unsigned long iova, phys_addr_t paddr, \
> + size_t pgsize, size_t pgcount, \
> + int prot, gfp_t gfp, size_t *mapped); \
> size_t pt_iommu_##fmt##_unmap_pages( \
> struct iommu_domain *domain, unsigned long iova, \
> size_t pgsize, size_t pgcount, \
> @@ -142,6 +200,7 @@ struct pt_iommu_cfg {
> */
> #define IOMMU_PT_DOMAIN_OPS(fmt) \
> .iova_to_phys = &pt_iommu_##fmt##_iova_to_phys, \
> + .map_pages = &pt_iommu_##fmt##_map_pages, \
> .unmap_pages = &pt_iommu_##fmt##_unmap_pages
>
> /*
> --
> 2.43.0
>
>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>