From: kernel test robot <lkp@intel.com>
To: oe-kbuild@lists.linux.dev
Cc: lkp@intel.com, Dan Carpenter <error27@gmail.com>
Subject: Re: [PATCH v3 2/2] iommupt: Avoid rewalking during map
Date: Sat, 28 Feb 2026 09:00:41 +0800
Message-ID: <202602280840.ok5l06Wt-lkp@intel.com>

BCC: lkp@intel.com
CC: oe-kbuild-all@lists.linux.dev
In-Reply-To: <2-v3-a1777ea76519+370f-iommpt_map_direct_jgg@nvidia.com>
References: <2-v3-a1777ea76519+370f-iommpt_map_direct_jgg@nvidia.com>
TO: Jason Gunthorpe <jgg@nvidia.com>
TO: iommu@lists.linux.dev
TO: Joerg Roedel <joro@8bytes.org>
TO: Robin Murphy <robin.murphy@arm.com>
TO: Will Deacon <will@kernel.org>
CC: Kevin Tian <kevin.tian@intel.com>
CC: patches@lists.linux.dev
CC: Samiullah Khawaja <skhawaja@google.com>

Hi Jason,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f]

url:    https://github.com/intel-lab-lkp/linux/commits/Jason-Gunthorpe/iommupt-Directly-call-iommupt-s-unmap_range/20260228-033653
base:   6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
patch link:    https://lore.kernel.org/r/2-v3-a1777ea76519%2B370f-iommpt_map_direct_jgg%40nvidia.com
patch subject: [PATCH v3 2/2] iommupt: Avoid rewalking during map
:::::: branch date: 5 hours ago
:::::: commit date: 5 hours ago
config: i386-randconfig-141-20260228 (https://download.01.org/0day-ci/archive/20260228/202602280840.ok5l06Wt-lkp@intel.com/config)
compiler: gcc-13 (Debian 13.3.0-16) 13.3.0
smatch version: v0.5.0-8994-gd50c5a4c

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Dan Carpenter <error27@gmail.com>
| Closes: https://lore.kernel.org/r/202602280840.ok5l06Wt-lkp@intel.com/

smatch warnings:
drivers/iommu/generic_pt/fmt/../iommu_pt.h:897 x86_64_map_range() warn: impossible condition '(paddr > 18446744073709551615) => (0-u32max > u64max)'
drivers/iommu/generic_pt/fmt/../iommu_pt.h:897 vtdss_map_range() warn: impossible condition '(paddr > 18446744073709551615) => (0-u32max > u64max)'
drivers/iommu/generic_pt/fmt/../iommu_pt.h:897 amdv1_map_range() warn: impossible condition '(paddr > 18446744073709551615) => (0-u32max > u64max)'

vim +897 drivers/iommu/generic_pt/fmt/../iommu_pt.h

dcd6a011a8d523 Jason Gunthorpe 2025-11-04  874  
c4359bc14c2605 Jason Gunthorpe 2026-02-27  875  static int NS(map_range)(struct pt_iommu *iommu_table, dma_addr_t iova,
c4359bc14c2605 Jason Gunthorpe 2026-02-27  876  			 phys_addr_t paddr, dma_addr_t len, unsigned int prot,
c4359bc14c2605 Jason Gunthorpe 2026-02-27  877  			 gfp_t gfp, size_t *mapped)
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  878  {
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  879  	pt_vaddr_t pgsize_bitmap = iommu_table->domain.pgsize_bitmap;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  880  	struct pt_common *common = common_from_iommu(iommu_table);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  881  	struct iommu_iotlb_gather iotlb_gather;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  882  	struct pt_iommu_map_args map = {
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  883  		.iotlb_gather = &iotlb_gather,
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  884  		.oa = paddr,
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  885  	};
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  886  	bool single_page = false;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  887  	struct pt_range range;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  888  	int ret;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  889  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  890  	iommu_iotlb_gather_init(&iotlb_gather);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  891  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  892  	if (WARN_ON(!(prot & (IOMMU_READ | IOMMU_WRITE))))
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  893  		return -EINVAL;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  894  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  895  	/* Check the paddr doesn't exceed what the table can store */
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  896  	if ((sizeof(pt_oaddr_t) < sizeof(paddr) &&
dcd6a011a8d523 Jason Gunthorpe 2025-11-04 @897  	     (pt_vaddr_t)paddr > PT_VADDR_MAX) ||
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  898  	    (common->max_oasz_lg2 != PT_VADDR_MAX_LG2 &&
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  899  	     oalog2_div(paddr, common->max_oasz_lg2)))
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  900  		return -ERANGE;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  901  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  902  	ret = pt_iommu_set_prot(common, &map.attrs, prot);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  903  	if (ret)
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  904  		return ret;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  905  	map.attrs.gfp = gfp;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  906  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  907  	ret = make_range_no_check(common, &range, iova, len);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  908  	if (ret)
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  909  		return ret;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  910  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  911  	/* Calculate target page size and level for the leaves */
c4359bc14c2605 Jason Gunthorpe 2026-02-27  912  	if (pt_has_system_page_size(common) && len == PAGE_SIZE) {
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  913  		PT_WARN_ON(!(pgsize_bitmap & PAGE_SIZE));
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  914  		if (log2_mod(iova | paddr, PAGE_SHIFT))
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  915  			return -ENXIO;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  916  		map.leaf_pgsize_lg2 = PAGE_SHIFT;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  917  		map.leaf_level = 0;
c4359bc14c2605 Jason Gunthorpe 2026-02-27  918  		map.num_leaves = 1;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  919  		single_page = true;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  920  	} else {
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  921  		map.leaf_pgsize_lg2 = pt_compute_best_pgsize(
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  922  			pgsize_bitmap, range.va, range.last_va, paddr);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  923  		if (!map.leaf_pgsize_lg2)
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  924  			return -ENXIO;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  925  		map.leaf_level =
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  926  			pt_pgsz_lg2_to_level(common, map.leaf_pgsize_lg2);
c4359bc14c2605 Jason Gunthorpe 2026-02-27  927  		map.num_leaves = pt_pgsz_count(pgsize_bitmap, range.va,
c4359bc14c2605 Jason Gunthorpe 2026-02-27  928  					       range.last_va, paddr,
c4359bc14c2605 Jason Gunthorpe 2026-02-27  929  					       map.leaf_pgsize_lg2);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  930  	}
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  931  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  932  	ret = check_map_range(iommu_table, &range, &map);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  933  	if (ret)
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  934  		return ret;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  935  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  936  	PT_WARN_ON(map.leaf_level > range.top_level);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  937  
efa03dab7ce4ed Jason Gunthorpe 2025-10-23  938  	ret = do_map(&range, common, single_page, &map);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  939  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  940  	/*
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  941  	 * Table levels were freed and replaced with large items, flush any walk
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  942  	 * cache that may refer to the freed levels.
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  943  	 */
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  944  	if (!iommu_pages_list_empty(&iotlb_gather.freelist))
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  945  		iommu_iotlb_sync(&iommu_table->domain, &iotlb_gather);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  946  
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  947  	/* Bytes successfully mapped */
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  948  	PT_WARN_ON(!ret && map.oa - paddr != len);
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  949  	*mapped += map.oa - paddr;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  950  	return ret;
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  951  }
dcd6a011a8d523 Jason Gunthorpe 2025-11-04  952  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
