From: kernel test robot <lkp@intel.com>
To: Balbir Singh <balbirs@nvidia.com>, linux-mm@kvack.org
Cc: oe-kbuild-all@lists.linux.dev, linux-kernel@vger.kernel.org,
"Balbir Singh" <balbirs@nvidia.com>,
"Karol Herbst" <kherbst@redhat.com>,
"Lyude Paul" <lyude@redhat.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Shuah Khan" <skhan@linuxfoundation.org>,
"David Hildenbrand" <david@redhat.com>,
"Barry Song" <baohua@kernel.org>,
"Baolin Wang" <baolin.wang@linux.alibaba.com>,
"Ryan Roberts" <ryan.roberts@arm.com>,
"Matthew Wilcox" <willy@infradead.org>,
"Peter Xu" <peterx@redhat.com>, "Zi Yan" <ziy@nvidia.com>,
"Kefeng Wang" <wangkefeng.wang@huawei.com>,
"Jane Chu" <jane.chu@oracle.com>,
"Alistair Popple" <apopple@nvidia.com>,
"Donet Tom" <donettom@linux.ibm.com>,
"Mika Penttilä" <mpenttil@redhat.com>,
"Matthew Brost" <matthew.brost@intel.com>,
"Francois Dugast" <francois.dugast@intel.com>,
"Ralph Campbell" <rcampbell@nvidia.com>
Subject: Re: [v2 02/11] mm/thp: zone_device awareness in THP handling code
Date: Thu, 31 Jul 2025 04:05:24 +0800
Message-ID: <202507310343.ZipoyitU-lkp@intel.com>
In-Reply-To: <20250730092139.3890844-3-balbirs@nvidia.com>
Hi Balbir,
kernel test robot noticed the following build warnings:
[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on next-20250730]
[cannot apply to akpm-mm/mm-nonmm-unstable shuah-kselftest/next shuah-kselftest/fixes linus/master v6.16]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Balbir-Singh/mm-zone_device-support-large-zone-device-private-folios/20250730-172600
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20250730092139.3890844-3-balbirs%40nvidia.com
patch subject: [v2 02/11] mm/thp: zone_device awareness in THP handling code
config: i386-buildonly-randconfig-001-20250731 (https://download.01.org/0day-ci/archive/20250731/202507310343.ZipoyitU-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250731/202507310343.ZipoyitU-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version
of the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507310343.ZipoyitU-lkp@intel.com/
All warnings (new ones prefixed by >>):
mm/rmap.c: In function 'try_to_migrate_one':
>> mm/rmap.c:2330:39: warning: unused variable 'pfn' [-Wunused-variable]
2330 | unsigned long pfn;
| ^~~
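Note: the inner 'pfn' declared at mm/rmap.c:2330 is only used inside the
CONFIG_ARCH_ENABLE_THP_MIGRATION block that follows it, so a config
without that option (such as this i386 randconfig) sees it as unused.
It also shadows the function-scope 'pfn' declared at line 2291, which
the PTE-level code later in the function relies on.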
vim +/pfn +2330 mm/rmap.c
2273
2274 /*
2275 * @arg: enum ttu_flags will be passed to this argument.
2276 *
2277 * If TTU_SPLIT_HUGE_PMD is specified any PMD mappings will be split into PTEs
2278 * containing migration entries.
2279 */
2280 static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
2281 unsigned long address, void *arg)
2282 {
2283 struct mm_struct *mm = vma->vm_mm;
2284 DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address,
2285 PVMW_THP_DEVICE_PRIVATE);
2286 bool anon_exclusive, writable, ret = true;
2287 pte_t pteval;
2288 struct page *subpage;
2289 struct mmu_notifier_range range;
2290 enum ttu_flags flags = (enum ttu_flags)(long)arg;
2291 unsigned long pfn;
2292 unsigned long hsz = 0;
2293
2294 /*
2295 * When racing against e.g. zap_pte_range() on another cpu,
2296 * in between its ptep_get_and_clear_full() and folio_remove_rmap_*(),
2297 * try_to_migrate() may return before page_mapped() has become false,
2298 * if page table locking is skipped: use TTU_SYNC to wait for that.
2299 */
2300 if (flags & TTU_SYNC)
2301 pvmw.flags = PVMW_SYNC;
2302
2303 /*
2304 * For THP, we have to assume the worst case, i.e. pmd, for invalidation.
2305 * For hugetlb, it could be much worse if we need to do pud
2306 * invalidation in the case of pmd sharing.
2307 *
2308 * Note that the page cannot be freed in this function, as the
2309 * caller of try_to_unmap() must hold a reference on the page.
2310 */
2311 range.end = vma_address_end(&pvmw);
2312 mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
2313 address, range.end);
2314 if (folio_test_hugetlb(folio)) {
2315 /*
2316 * If sharing is possible, start and end will be adjusted
2317 * accordingly.
2318 */
2319 adjust_range_if_pmd_sharing_possible(vma, &range.start,
2320 &range.end);
2321
2322 /* We need the huge page size for set_huge_pte_at() */
2323 hsz = huge_page_size(hstate_vma(vma));
2324 }
2325 mmu_notifier_invalidate_range_start(&range);
2326
2327 while (page_vma_mapped_walk(&pvmw)) {
2328 /* PMD-mapped THP migration entry */
2329 if (!pvmw.pte) {
> 2330 unsigned long pfn;
2331
2332 if (flags & TTU_SPLIT_HUGE_PMD) {
2333 split_huge_pmd_locked(vma, pvmw.address,
2334 pvmw.pmd, true);
2335 ret = false;
2336 page_vma_mapped_walk_done(&pvmw);
2337 break;
2338 }
2339 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
2340 /*
2341 * Zone device private folios do not work well with
2342 * pmd_pfn() on some architectures due to pte
2343 * inversion.
2344 */
2345 if (is_pmd_device_private_entry(*pvmw.pmd)) {
2346 swp_entry_t entry = pmd_to_swp_entry(*pvmw.pmd);
2347
2348 pfn = swp_offset_pfn(entry);
2349 } else {
2350 pfn = pmd_pfn(*pvmw.pmd);
2351 }
2352
2353 subpage = folio_page(folio, pfn - folio_pfn(folio));
2354
2355 VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
2356 !folio_test_pmd_mappable(folio), folio);
2357
2358 if (set_pmd_migration_entry(&pvmw, subpage)) {
2359 ret = false;
2360 page_vma_mapped_walk_done(&pvmw);
2361 break;
2362 }
2363 continue;
2364 #endif
2365 }
2366
2367 /* Unexpected PMD-mapped THP? */
2368 VM_BUG_ON_FOLIO(!pvmw.pte, folio);
2369
2370 /*
2371 * Handle PFN swap PTEs, such as device-exclusive ones, that
2372 * actually map pages.
2373 */
2374 pteval = ptep_get(pvmw.pte);
2375 if (likely(pte_present(pteval))) {
2376 pfn = pte_pfn(pteval);
2377 } else {
2378 pfn = swp_offset_pfn(pte_to_swp_entry(pteval));
2379 VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
2380 }
2381
2382 subpage = folio_page(folio, pfn - folio_pfn(folio));
2383 address = pvmw.address;
2384 anon_exclusive = folio_test_anon(folio) &&
2385 PageAnonExclusive(subpage);
2386
2387 if (folio_test_hugetlb(folio)) {
2388 bool anon = folio_test_anon(folio);
2389
2390 /*
2391 * huge_pmd_unshare may unmap an entire PMD page.
2392 * There is no way of knowing exactly which PMDs may
2393 * be cached for this mm, so we must flush them all.
2394 * start/end were already adjusted above to cover this
2395 * range.
2396 */
2397 flush_cache_range(vma, range.start, range.end);
2398
2399 /*
2400 * To call huge_pmd_unshare, i_mmap_rwsem must be
2401 * held in write mode. Caller needs to explicitly
2402 * do this outside rmap routines.
2403 *
2404 * We also must hold hugetlb vma_lock in write mode.
2405 * Lock order dictates acquiring vma_lock BEFORE
2406 * i_mmap_rwsem. We can only try lock here and
2407 * fail if unsuccessful.
2408 */
2409 if (!anon) {
2410 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
2411 if (!hugetlb_vma_trylock_write(vma)) {
2412 page_vma_mapped_walk_done(&pvmw);
2413 ret = false;
2414 break;
2415 }
2416 if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
2417 hugetlb_vma_unlock_write(vma);
2418 flush_tlb_range(vma,
2419 range.start, range.end);
2420
2421 /*
2422 * The ref count of the PMD page was
2423 * dropped which is part of the way map
2424 * counting is done for shared PMDs.
2425 * Return 'true' here. When there is
2426 * no other sharing, huge_pmd_unshare
2427 * returns false and we will unmap the
2428 * actual page and drop map count
2429 * to zero.
2430 */
2431 page_vma_mapped_walk_done(&pvmw);
2432 break;
2433 }
2434 hugetlb_vma_unlock_write(vma);
2435 }
2436 /* Nuke the hugetlb page table entry */
2437 pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
2438 if (pte_dirty(pteval))
2439 folio_mark_dirty(folio);
2440 writable = pte_write(pteval);
2441 } else if (likely(pte_present(pteval))) {
2442 flush_cache_page(vma, address, pfn);
2443 /* Nuke the page table entry. */
2444 if (should_defer_flush(mm, flags)) {
2445 /*
2446 * We clear the PTE but do not flush so potentially
2447 * a remote CPU could still be writing to the folio.
2448 * If the entry was previously clean then the
2449 * architecture must guarantee that a clear->dirty
2450 * transition on a cached TLB entry is written through
2451 * and traps if the PTE is unmapped.
2452 */
2453 pteval = ptep_get_and_clear(mm, address, pvmw.pte);
2454
2455 set_tlb_ubc_flush_pending(mm, pteval, address, address + PAGE_SIZE);
2456 } else {
2457 pteval = ptep_clear_flush(vma, address, pvmw.pte);
2458 }
2459 if (pte_dirty(pteval))
2460 folio_mark_dirty(folio);
2461 writable = pte_write(pteval);
2462 } else {
2463 pte_clear(mm, address, pvmw.pte);
2464 writable = is_writable_device_private_entry(pte_to_swp_entry(pteval));
2465 }
2466
2467 VM_WARN_ON_FOLIO(writable && folio_test_anon(folio) &&
2468 !anon_exclusive, folio);
2469
2470 /* Update high watermark before we lower rss */
2471 update_hiwater_rss(mm);
2472
2473 if (PageHWPoison(subpage)) {
2474 VM_WARN_ON_FOLIO(folio_is_device_private(folio), folio);
2475
2476 pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
2477 if (folio_test_hugetlb(folio)) {
2478 hugetlb_count_sub(folio_nr_pages(folio), mm);
2479 set_huge_pte_at(mm, address, pvmw.pte, pteval,
2480 hsz);
2481 } else {
2482 dec_mm_counter(mm, mm_counter(folio));
2483 set_pte_at(mm, address, pvmw.pte, pteval);
2484 }
2485 } else if (likely(pte_present(pteval)) && pte_unused(pteval) &&
2486 !userfaultfd_armed(vma)) {
2487 /*
2488 * The guest indicated that the page content is of no
2489 * interest anymore. Simply discard the pte, vmscan
2490 * will take care of the rest.
2491 * A future reference will then fault in a new zero
2492 * page. When userfaultfd is active, we must not drop
2493 * this page though, as its main user (postcopy
2494 * migration) will not expect userfaults on already
2495 * copied pages.
2496 */
2497 dec_mm_counter(mm, mm_counter(folio));
2498 } else {
2499 swp_entry_t entry;
2500 pte_t swp_pte;
2501
2502 /*
2503 * arch_unmap_one() is expected to be a NOP on
2504 * architectures where we could have PFN swap PTEs,
2505 * so we'll not check/care.
2506 */
2507 if (arch_unmap_one(mm, vma, address, pteval) < 0) {
2508 if (folio_test_hugetlb(folio))
2509 set_huge_pte_at(mm, address, pvmw.pte,
2510 pteval, hsz);
2511 else
2512 set_pte_at(mm, address, pvmw.pte, pteval);
2513 ret = false;
2514 page_vma_mapped_walk_done(&pvmw);
2515 break;
2516 }
2517
2518 /* See folio_try_share_anon_rmap_pte(): clear PTE first. */
2519 if (folio_test_hugetlb(folio)) {
2520 if (anon_exclusive &&
2521 hugetlb_try_share_anon_rmap(folio)) {
2522 set_huge_pte_at(mm, address, pvmw.pte,
2523 pteval, hsz);
2524 ret = false;
2525 page_vma_mapped_walk_done(&pvmw);
2526 break;
2527 }
2528 } else if (anon_exclusive &&
2529 folio_try_share_anon_rmap_pte(folio, subpage)) {
2530 set_pte_at(mm, address, pvmw.pte, pteval);
2531 ret = false;
2532 page_vma_mapped_walk_done(&pvmw);
2533 break;
2534 }
2535
2536 /*
2537 * Store the pfn of the page in a special migration
2538 * pte. do_swap_page() will wait until the migration
2539 * pte is removed and then restart fault handling.
2540 */
2541 if (writable)
2542 entry = make_writable_migration_entry(
2543 page_to_pfn(subpage));
2544 else if (anon_exclusive)
2545 entry = make_readable_exclusive_migration_entry(
2546 page_to_pfn(subpage));
2547 else
2548 entry = make_readable_migration_entry(
2549 page_to_pfn(subpage));
2550 if (likely(pte_present(pteval))) {
2551 if (pte_young(pteval))
2552 entry = make_migration_entry_young(entry);
2553 if (pte_dirty(pteval))
2554 entry = make_migration_entry_dirty(entry);
2555 swp_pte = swp_entry_to_pte(entry);
2556 if (pte_soft_dirty(pteval))
2557 swp_pte = pte_swp_mksoft_dirty(swp_pte);
2558 if (pte_uffd_wp(pteval))
2559 swp_pte = pte_swp_mkuffd_wp(swp_pte);
2560 } else {
2561 swp_pte = swp_entry_to_pte(entry);
2562 if (pte_swp_soft_dirty(pteval))
2563 swp_pte = pte_swp_mksoft_dirty(swp_pte);
2564 if (pte_swp_uffd_wp(pteval))
2565 swp_pte = pte_swp_mkuffd_wp(swp_pte);
2566 }
2567 if (folio_test_hugetlb(folio))
2568 set_huge_pte_at(mm, address, pvmw.pte, swp_pte,
2569 hsz);
2570 else
2571 set_pte_at(mm, address, pvmw.pte, swp_pte);
2572 trace_set_migration_pte(address, pte_val(swp_pte),
2573 folio_order(folio));
2574 /*
2575 * No need to invalidate here; it will synchronize
2576 * against the special swap migration pte.
2577 */
2578 }
2579
2580 if (unlikely(folio_test_hugetlb(folio)))
2581 hugetlb_remove_rmap(folio);
2582 else
2583 folio_remove_rmap_pte(folio, subpage, vma);
2584 if (vma->vm_flags & VM_LOCKED)
2585 mlock_drain_local();
2586 folio_put(folio);
2587 }
2588
2589 mmu_notifier_invalidate_range_end(&range);
2590
2591 return ret;
2592 }
2593
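One minimal way to address the warning (an untested sketch against the
quoted code above, not necessarily the fix that will land in v3) is to
drop the inner declaration at line 2330 and reuse the function-scope
'pfn' from line 2291, which removes the unused variable and the
shadowing in one step:

	while (page_vma_mapped_walk(&pvmw)) {
		/* PMD-mapped THP migration entry */
		if (!pvmw.pte) {
			/*
			 * No block-local 'pfn' here: the function-scope
			 * variable is reused, so builds without
			 * CONFIG_ARCH_ENABLE_THP_MIGRATION no longer see
			 * an unused (and shadowing) declaration.
			 */
			if (flags & TTU_SPLIT_HUGE_PMD) {
				split_huge_pmd_locked(vma, pvmw.address,
						      pvmw.pmd, true);
				ret = false;
				page_vma_mapped_walk_done(&pvmw);
				break;
			}
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
			/*
			 * Zone device private folios do not work well with
			 * pmd_pfn() on some architectures due to pte
			 * inversion.
			 */
			if (is_pmd_device_private_entry(*pvmw.pmd))
				pfn = swp_offset_pfn(pmd_to_swp_entry(*pvmw.pmd));
			else
				pfn = pmd_pfn(*pvmw.pmd);

			subpage = folio_page(folio, pfn - folio_pfn(folio));
			/* ... rest of the #ifdef block unchanged ... */
#endif
		}

Alternatively, the declaration could simply move inside the #ifdef
block; either variant should make this W=1 randconfig build clean.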
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki