* [PATCH v3 0/2] Let iommupt manage changes in page size internally
@ 2026-02-27 19:30 Jason Gunthorpe
  2026-02-27 19:30 ` [PATCH v3 1/2] iommupt: Directly call iommupt's unmap_range() Jason Gunthorpe
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Jason Gunthorpe @ 2026-02-27 19:30 UTC (permalink / raw)
  To: iommu, Joerg Roedel, Robin Murphy, Will Deacon
  Cc: Kevin Tian, patches, Samiullah Khawaja

Currently the core code has helpers that use iommu_pgsize() to fragment
operations into single-page-size chunks, so each driver only needs a
single-page-size implementation. This kept the driver code simple.

However, iommupt has a single shared implementation for all formats so we
can accept a little more complexity. Have the core code directly call
iommupt with the requested range to map/unmap and rely on it to change the
page size across the range as required.

The iommupt unmap implementation already works this way, and the map
implementation can reset its walking parameters in place with a little
more code.

The net result is about a 5% performance bump in the simple iommupt
map/unmap benchmarks with aligned ranges, and probably more for
unaligned or oddly sized ranges that change page sizes.

Introduce an iommupt_from_domain() function as a general way to convert
an iommu_domain to a struct pt_iommu if it is an iommupt-based domain. I
expect to keep using this as more optimizations are introduced.

v3:
 - Rebase to v7.0-rc1
v2: https://patch.msgid.link/r/0-v2-973a6bdc820f+693-iommpt_map_direct_jgg@nvidia.com
 - Rebase to latest iommu tree
 - Adjust to the IOMMU_DEBUG_PAGEALLOC work
 - Fix missed trace calls for both map and unmap
 - Add a comment explaining the level changes
v1: https://patch.msgid.link/r/0-v1-d7be57da596d+3f8c0-iommpt_map_direct_jgg@nvidia.com

Jason Gunthorpe (2):
  iommupt: Directly call iommupt's unmap_range()
  iommupt: Avoid rewalking during map

 drivers/iommu/generic_pt/iommu_pt.h         | 162 +++++++++++---------
 drivers/iommu/generic_pt/kunit_generic_pt.h |  12 ++
 drivers/iommu/generic_pt/pt_iter.h          |  22 +++
 drivers/iommu/iommu.c                       |  66 ++++++--
 include/linux/generic_pt/iommu.h            |  69 +++++++--
 include/linux/iommu.h                       |   1 +
 6 files changed, 231 insertions(+), 101 deletions(-)


base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
-- 
2.43.0


* Re: [PATCH v3 2/2] iommupt: Avoid rewalking during map
@ 2026-02-28  1:00 kernel test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kernel test robot @ 2026-02-28  1:00 UTC (permalink / raw)
  To: oe-kbuild; +Cc: lkp, Dan Carpenter

BCC: lkp@intel.com
CC: oe-kbuild-all@lists.linux.dev
In-Reply-To: <2-v3-a1777ea76519+370f-iommpt_map_direct_jgg@nvidia.com>
References: <2-v3-a1777ea76519+370f-iommpt_map_direct_jgg@nvidia.com>
TO: Jason Gunthorpe <jgg@nvidia.com>
TO: iommu@lists.linux.dev
TO: Joerg Roedel <joro@8bytes.org>
TO: Robin Murphy <robin.murphy@arm.com>
TO: Will Deacon <will@kernel.org>
CC: Kevin Tian <kevin.tian@intel.com>
CC: patches@lists.linux.dev
CC: Samiullah Khawaja <skhawaja@google.com>

Hi Jason,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f]

url:    https://github.com/intel-lab-lkp/linux/commits/Jason-Gunthorpe/iommupt-Directly-call-iommupt-s-unmap_range/20260228-033653
base:   6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
patch link:    https://lore.kernel.org/r/2-v3-a1777ea76519%2B370f-iommpt_map_direct_jgg%40nvidia.com
patch subject: [PATCH v3 2/2] iommupt: Avoid rewalking during map
:::::: branch date: 5 hours ago
:::::: commit date: 5 hours ago
config: i386-randconfig-141-20260228 (https://download.01.org/0day-ci/archive/20260228/202602280840.ok5l06Wt-lkp@intel.com/config)
compiler: gcc-13 (Debian 13.3.0-16) 13.3.0
smatch version: v0.5.0-8994-gd50c5a4c

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Dan Carpenter <error27@gmail.com>
| Closes: https://lore.kernel.org/r/202602280840.ok5l06Wt-lkp@intel.com/

smatch warnings:
drivers/iommu/generic_pt/fmt/../iommu_pt.h:897 x86_64_map_range() warn: impossible condition '(paddr > 18446744073709551615) => (0-u32max > u64max)'
drivers/iommu/generic_pt/fmt/../iommu_pt.h:897 vtdss_map_range() warn: impossible condition '(paddr > 18446744073709551615) => (0-u32max > u64max)'
drivers/iommu/generic_pt/fmt/../iommu_pt.h:897 amdv1_map_range() warn: impossible condition '(paddr > 18446744073709551615) => (0-u32max > u64max)'

vim +897 drivers/iommu/generic_pt/fmt/../iommu_pt.h

dcd6a011a8d523 Jason Gunthorpe 2025-11-04  874  
c4359bc14c2605 Jason Gunthorpe 2026-02-27  875  static int NS(map_range)(struct pt_iommu *iommu_table, dma_addr_t iova,
c4359bc14c2605 Jason Gunthorpe 2026-02-27  876  			 phys_addr_t paddr, dma_addr_t len, unsigned int prot,
c4359bc14c2605 Jason Gunthorpe 2026-02-27  877  			 gfp_t gfp, size_t *mapped)
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  878  {
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  879  	pt_vaddr_t pgsize_bitmap = iommu_table->domain.pgsize_bitmap;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  880  	struct pt_common *common = common_from_iommu(iommu_table);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  881  	struct iommu_iotlb_gather iotlb_gather;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  882  	struct pt_iommu_map_args map = {
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  883  		.iotlb_gather = &iotlb_gather,
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  884  		.oa = paddr,
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  885  	};
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  886  	bool single_page = false;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  887  	struct pt_range range;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  888  	int ret;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  889  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  890  	iommu_iotlb_gather_init(&iotlb_gather);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  891  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  892  	if (WARN_ON(!(prot & (IOMMU_READ | IOMMU_WRITE))))
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  893  		return -EINVAL;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  894  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  895  	/* Check the paddr doesn't exceed what the table can store */
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  896  	if ((sizeof(pt_oaddr_t) < sizeof(paddr) &&
dcd6a011a8d523 Jason Gunthorpe 2025-11-04 @897  	     (pt_vaddr_t)paddr > PT_VADDR_MAX) ||
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  898  	    (common->max_oasz_lg2 != PT_VADDR_MAX_LG2 &&
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  899  	     oalog2_div(paddr, common->max_oasz_lg2)))
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  900  		return -ERANGE;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  901  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  902  	ret = pt_iommu_set_prot(common, &map.attrs, prot);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  903  	if (ret)
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  904  		return ret;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  905  	map.attrs.gfp = gfp;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  906  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  907  	ret = make_range_no_check(common, &range, iova, len);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  908  	if (ret)
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  909  		return ret;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  910  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  911  	/* Calculate target page size and level for the leaves */
c4359bc14c2605 Jason Gunthorpe 2026-02-27  912  	if (pt_has_system_page_size(common) && len == PAGE_SIZE) {
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  913  		PT_WARN_ON(!(pgsize_bitmap & PAGE_SIZE));
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  914  		if (log2_mod(iova | paddr, PAGE_SHIFT))
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  915  			return -ENXIO;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  916  		map.leaf_pgsize_lg2 = PAGE_SHIFT;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  917  		map.leaf_level = 0;
c4359bc14c2605 Jason Gunthorpe 2026-02-27  918  		map.num_leaves = 1;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  919  		single_page = true;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  920  	} else {
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  921  		map.leaf_pgsize_lg2 = pt_compute_best_pgsize(
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  922  			pgsize_bitmap, range.va, range.last_va, paddr);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  923  		if (!map.leaf_pgsize_lg2)
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  924  			return -ENXIO;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  925  		map.leaf_level =
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  926  			pt_pgsz_lg2_to_level(common, map.leaf_pgsize_lg2);
c4359bc14c2605 Jason Gunthorpe 2026-02-27  927  		map.num_leaves = pt_pgsz_count(pgsize_bitmap, range.va,
c4359bc14c2605 Jason Gunthorpe 2026-02-27  928  					       range.last_va, paddr,
c4359bc14c2605 Jason Gunthorpe 2026-02-27  929  					       map.leaf_pgsize_lg2);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  930  	}
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  931  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  932  	ret = check_map_range(iommu_table, &range, &map);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  933  	if (ret)
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  934  		return ret;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  935  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  936  	PT_WARN_ON(map.leaf_level > range.top_level);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  937  
efa03dab7ce4ed Jason Gunthorpe 2025-10-23  938  	ret = do_map(&range, common, single_page, &map);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  939  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  940  	/*
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  941  	 * Table levels were freed and replaced with large items, flush any walk
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  942  	 * cache that may refer to the freed levels.
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  943  	 */
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  944  	if (!iommu_pages_list_empty(&iotlb_gather.freelist))
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  945  		iommu_iotlb_sync(&iommu_table->domain, &iotlb_gather);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  946  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  947  	/* Bytes successfully mapped */
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  948  	PT_WARN_ON(!ret && map.oa - paddr != len);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  949  	*mapped += map.oa - paddr;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  950  	return ret;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  951  }
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  952  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


end of thread, other threads:[~2026-05-09 23:40 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-27 19:30 [PATCH v3 0/2] Let iommupt manage changes in page size internally Jason Gunthorpe
2026-02-27 19:30 ` [PATCH v3 1/2] iommupt: Directly call iommupt's unmap_range() Jason Gunthorpe
2026-02-27 19:30 ` [PATCH v3 2/2] iommupt: Avoid rewalking during map Jason Gunthorpe
2026-05-09 17:41   ` Josua Mayer
2026-05-09 19:40     ` Jason Gunthorpe
2026-05-09 20:25       ` Josua Mayer
2026-05-09 23:39         ` Jason Gunthorpe
2026-03-03  3:55 ` [PATCH v3 0/2] Let iommupt manage changes in page size internally Baolu Lu
2026-03-17 12:58 ` Joerg Roedel
  -- strict thread matches above, loose matches on Subject: below --
2026-02-28  1:00 [PATCH v3 2/2] iommupt: Avoid rewalking during map kernel test robot
