linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
From: Balbir Singh <balbirs@nvidia.com>
To: "Mika Penttilä" <mpenttil@redhat.com>, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	"Karol Herbst" <kherbst@redhat.com>,
	"Lyude Paul" <lyude@redhat.com>,
	"Danilo Krummrich" <dakr@kernel.org>,
	"David Airlie" <airlied@gmail.com>,
	"Simona Vetter" <simona@ffwll.ch>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Shuah Khan" <shuah@kernel.org>,
	"David Hildenbrand" <david@redhat.com>,
	"Barry Song" <baohua@kernel.org>,
	"Baolin Wang" <baolin.wang@linux.alibaba.com>,
	"Ryan Roberts" <ryan.roberts@arm.com>,
	"Matthew Wilcox" <willy@infradead.org>,
	"Peter Xu" <peterx@redhat.com>, "Zi Yan" <ziy@nvidia.com>,
	"Kefeng Wang" <wangkefeng.wang@huawei.com>,
	"Jane Chu" <jane.chu@oracle.com>,
	"Alistair Popple" <apopple@nvidia.com>,
	"Donet Tom" <donettom@linux.ibm.com>
Subject: Re: [v1 resend 03/12] mm/thp: zone_device awareness in THP handling code
Date: Tue, 8 Jul 2025 14:20:04 +1000	[thread overview]
Message-ID: <b84846bd-801f-42b6-b1d4-3d784ddbcd1f@nvidia.com> (raw)
In-Reply-To: <fd86a9f9-66b4-4994-908d-af4c6637442e@redhat.com>

On 7/7/25 13:49, Mika Penttilä wrote:
> 
> On 7/4/25 02:35, Balbir Singh wrote:
>> Make THP handling code in the mm subsystem for THP pages
>> aware of zone device pages. Although the code is
>> designed to be generic when it comes to handling splitting
>> of pages, the code is designed to work for THP page sizes
>> corresponding to HPAGE_PMD_NR.
>>
>> Modify page_vma_mapped_walk() to return true when a zone
>> device huge entry is present, enabling try_to_migrate()
>> and other code migration paths to appropriately process the
>> entry
>>
>> pmd_pfn() does not work well with zone device entries, use
>> pfn_pmd_entry_to_swap() for checking and comparison as for
>> zone device entries.
>>
>> try_to_map_to_unused_zeropage() does not apply to zone device
>> entries, zone device entries are ignored in the call.
>>
>> Cc: Karol Herbst <kherbst@redhat.com>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: "Jérôme Glisse" <jglisse@redhat.com>
>> Cc: Shuah Khan <shuah@kernel.org>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Peter Xu <peterx@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
>> Cc: Jane Chu <jane.chu@oracle.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Donet Tom <donettom@linux.ibm.com>
>>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> ---
>>  mm/huge_memory.c     | 153 +++++++++++++++++++++++++++++++------------
>>  mm/migrate.c         |   2 +
>>  mm/page_vma_mapped.c |  10 +++
>>  mm/pgtable-generic.c |   6 ++
>>  mm/rmap.c            |  19 +++++-
>>  5 files changed, 146 insertions(+), 44 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index ce130225a8e5..e6e390d0308f 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1711,7 +1711,8 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>  	if (unlikely(is_swap_pmd(pmd))) {
>>  		swp_entry_t entry = pmd_to_swp_entry(pmd);
>>  
>> -		VM_BUG_ON(!is_pmd_migration_entry(pmd));
>> +		VM_BUG_ON(!is_pmd_migration_entry(pmd) &&
>> +				!is_device_private_entry(entry));
>>  		if (!is_readable_migration_entry(entry)) {
>>  			entry = make_readable_migration_entry(
>>  							swp_offset(entry));
>> @@ -2222,10 +2223,17 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>  		} else if (thp_migration_supported()) {
>>  			swp_entry_t entry;
>>  
>> -			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>>  			entry = pmd_to_swp_entry(orig_pmd);
>>  			folio = pfn_swap_entry_folio(entry);
>>  			flush_needed = 0;
>> +
>> +			VM_BUG_ON(!is_pmd_migration_entry(*pmd) &&
>> +					!folio_is_device_private(folio));
>> +
>> +			if (folio_is_device_private(folio)) {
>> +				folio_remove_rmap_pmd(folio, folio_page(folio, 0), vma);
>> +				WARN_ON_ONCE(folio_mapcount(folio) < 0);
>> +			}
>>  		} else
>>  			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
>>  
>> @@ -2247,6 +2255,15 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>  				folio_mark_accessed(folio);
>>  		}
>>  
>> +		/*
>> +		 * Do a folio put on zone device private pages after
>> +		 * changes to mm_counter, because the folio_put() will
>> +		 * clean folio->mapping and the folio_test_anon() check
>> +		 * will not be usable.
>> +		 */
>> +		if (folio_is_device_private(folio))
>> +			folio_put(folio);
>> +
>>  		spin_unlock(ptl);
>>  		if (flush_needed)
>>  			tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
>> @@ -2375,7 +2392,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>  		struct folio *folio = pfn_swap_entry_folio(entry);
>>  		pmd_t newpmd;
>>  
>> -		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
>> +		VM_BUG_ON(!is_pmd_migration_entry(*pmd) &&
>> +			  !folio_is_device_private(folio));
>>  		if (is_writable_migration_entry(entry)) {
>>  			/*
>>  			 * A protection check is difficult so
>> @@ -2388,9 +2406,11 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>  			newpmd = swp_entry_to_pmd(entry);
>>  			if (pmd_swp_soft_dirty(*pmd))
>>  				newpmd = pmd_swp_mksoft_dirty(newpmd);
>> -		} else {
>> +		} else if (is_writable_device_private_entry(entry)) {
>> +			newpmd = swp_entry_to_pmd(entry);
>> +			entry = make_device_exclusive_entry(swp_offset(entry));
>> +		} else
>>  			newpmd = *pmd;
>> -		}
>>  
>>  		if (uffd_wp)
>>  			newpmd = pmd_swp_mkuffd_wp(newpmd);
>> @@ -2842,16 +2862,20 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>  	struct page *page;
>>  	pgtable_t pgtable;
>>  	pmd_t old_pmd, _pmd;
>> -	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
>> -	bool anon_exclusive = false, dirty = false;
>> +	bool young, write, soft_dirty, uffd_wp = false;
>> +	bool anon_exclusive = false, dirty = false, present = false;
>>  	unsigned long addr;
>>  	pte_t *pte;
>>  	int i;
>> +	swp_entry_t swp_entry;
>>  
>>  	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
>>  	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
>>  	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
>> -	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
>> +
>> +	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd)
>> +			&& !(is_swap_pmd(*pmd) &&
>> +			is_device_private_entry(pmd_to_swp_entry(*pmd))));
>>  
>>  	count_vm_event(THP_SPLIT_PMD);
>>  
>> @@ -2899,20 +2923,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>  		return __split_huge_zero_page_pmd(vma, haddr, pmd);
>>  	}
>>  
>> -	pmd_migration = is_pmd_migration_entry(*pmd);
>> -	if (unlikely(pmd_migration)) {
>> -		swp_entry_t entry;
>>  
>> +	present = pmd_present(*pmd);
>> +	if (unlikely(!present)) {
>> +		swp_entry = pmd_to_swp_entry(*pmd);
>>  		old_pmd = *pmd;
>> -		entry = pmd_to_swp_entry(old_pmd);
>> -		page = pfn_swap_entry_to_page(entry);
>> -		write = is_writable_migration_entry(entry);
>> +
>> +		folio = pfn_swap_entry_folio(swp_entry);
>> +		VM_BUG_ON(!is_migration_entry(swp_entry) &&
>> +				!is_device_private_entry(swp_entry));
>> +		page = pfn_swap_entry_to_page(swp_entry);
>> +		write = is_writable_migration_entry(swp_entry);
>> +
>>  		if (PageAnon(page))
>> -			anon_exclusive = is_readable_exclusive_migration_entry(entry);
>> -		young = is_migration_entry_young(entry);
>> -		dirty = is_migration_entry_dirty(entry);
>> +			anon_exclusive =
>> +				is_readable_exclusive_migration_entry(swp_entry);
>>  		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>>  		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>> +		young = is_migration_entry_young(swp_entry);
>> +		dirty = is_migration_entry_dirty(swp_entry);
>>  	} else {
> 
> This is where folio_try_share_anon_rmap_pmd() is skipped for device private pages, to which I referred in
> https://lore.kernel.org/linux-mm/f1e26e18-83db-4c0e-b8d8-0af8ffa8a206@redhat.com/
> 

Does it matter for device private pages/folios? It does not affect the freeze value.

Balbir Singh



  reply	other threads:[~2025-07-08  4:20 UTC|newest]

Thread overview: 99+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2025-07-03 23:34 [v1 resend 00/12] THP support for zone device page migration Balbir Singh
2025-07-03 23:35 ` [v1 resend 01/12] mm/zone_device: support large zone device private folios Balbir Singh
2025-07-07  5:28   ` Alistair Popple
2025-07-08  6:47     ` Balbir Singh
2025-07-03 23:35 ` [v1 resend 02/12] mm/migrate_device: flags for selecting device private THP pages Balbir Singh
2025-07-07  5:31   ` Alistair Popple
2025-07-08  7:31     ` Balbir Singh
2025-07-19 20:06       ` Matthew Brost
2025-07-19 20:16         ` Matthew Brost
2025-07-18  3:15   ` Matthew Brost
2025-07-03 23:35 ` [v1 resend 03/12] mm/thp: zone_device awareness in THP handling code Balbir Singh
2025-07-04  4:46   ` Mika Penttilä
2025-07-06  1:21     ` Balbir Singh
2025-07-04 11:10   ` Mika Penttilä
2025-07-05  0:14     ` Balbir Singh
2025-07-07  6:09       ` Alistair Popple
2025-07-08  7:40         ` Balbir Singh
2025-07-07  3:49   ` Mika Penttilä
2025-07-08  4:20     ` Balbir Singh [this message]
2025-07-08  4:30       ` Mika Penttilä
2025-07-07  6:07   ` Alistair Popple
2025-07-08  4:59     ` Balbir Singh
2025-07-22  4:42   ` Matthew Brost
2025-07-03 23:35 ` [v1 resend 04/12] mm/migrate_device: THP migration of zone device pages Balbir Singh
2025-07-04 15:35   ` kernel test robot
2025-07-18  6:59   ` Matthew Brost
2025-07-18  7:04     ` Balbir Singh
2025-07-18  7:21       ` Matthew Brost
2025-07-18  8:22         ` Matthew Brost
2025-07-22  4:54           ` Matthew Brost
2025-07-19  2:10   ` Matthew Brost
2025-07-03 23:35 ` [v1 resend 05/12] mm/memory/fault: add support for zone device THP fault handling Balbir Singh
2025-07-17 19:34   ` Matthew Brost
2025-07-03 23:35 ` [v1 resend 06/12] lib/test_hmm: test cases and support for zone device private THP Balbir Singh
2025-07-03 23:35 ` [v1 resend 07/12] mm/memremap: add folio_split support Balbir Singh
2025-07-04 11:14   ` Mika Penttilä
2025-07-06  1:24     ` Balbir Singh
2025-07-03 23:35 ` [v1 resend 08/12] mm/thp: add split during migration support Balbir Singh
2025-07-04  5:17   ` Mika Penttilä
2025-07-04  6:43     ` Mika Penttilä
2025-07-05  0:26       ` Balbir Singh
2025-07-05  3:17         ` Mika Penttilä
2025-07-07  2:35           ` Balbir Singh
2025-07-07  3:29             ` Mika Penttilä
2025-07-08  7:37               ` Balbir Singh
2025-07-04 11:24   ` Zi Yan
2025-07-05  0:58     ` Balbir Singh
2025-07-05  1:55       ` Zi Yan
2025-07-06  1:15         ` Balbir Singh
2025-07-06  1:34           ` Zi Yan
2025-07-06  1:47             ` Balbir Singh
2025-07-06  2:34               ` Zi Yan
2025-07-06  3:03                 ` Zi Yan
2025-07-07  2:29                   ` Balbir Singh
2025-07-07  2:45                     ` Zi Yan
2025-07-08  3:31                       ` Balbir Singh
2025-07-08  7:43                       ` Balbir Singh
2025-07-16  5:34               ` Matthew Brost
2025-07-16 11:19                 ` Zi Yan
2025-07-16 16:24                   ` Matthew Brost
2025-07-16 21:53                     ` Balbir Singh
2025-07-17 22:24                       ` Matthew Brost
2025-07-17 23:04                         ` Zi Yan
2025-07-18  0:41                           ` Matthew Brost
2025-07-18  1:25                             ` Zi Yan
2025-07-18  3:33                               ` Matthew Brost
2025-07-18 15:06                                 ` Zi Yan
2025-07-23  0:00                                   ` Matthew Brost
2025-07-03 23:35 ` [v1 resend 09/12] lib/test_hmm: add test case for split pages Balbir Singh
2025-07-03 23:35 ` [v1 resend 10/12] selftests/mm/hmm-tests: new tests for zone device THP migration Balbir Singh
2025-07-03 23:35 ` [v1 resend 11/12] gpu/drm/nouveau: add THP migration support Balbir Singh
2025-07-03 23:35 ` [v1 resend 12/12] selftests/mm/hmm-tests: new throughput tests including THP Balbir Singh
2025-07-04 16:16 ` [v1 resend 00/12] THP support for zone device page migration Zi Yan
2025-07-04 23:56   ` Balbir Singh
2025-07-08 14:53 ` David Hildenbrand
2025-07-08 22:43   ` Balbir Singh
2025-07-17 23:40 ` Matthew Brost
2025-07-18  3:57   ` Balbir Singh
2025-07-18  4:57     ` Matthew Brost
2025-07-21 23:48       ` Balbir Singh
2025-07-22  0:07         ` Matthew Brost
2025-07-22  0:51           ` Balbir Singh
2025-07-19  0:53     ` Matthew Brost
2025-07-21 11:42     ` Francois Dugast
2025-07-21 23:34       ` Balbir Singh
2025-07-22  0:01         ` Matthew Brost
2025-07-22 19:34         ` [PATCH] mm/hmm: Do not fault in device private pages owned by the caller Francois Dugast
2025-07-22 20:07           ` Andrew Morton
2025-07-23 15:34             ` Francois Dugast
2025-07-23 18:05               ` Matthew Brost
2025-07-24  0:25           ` Balbir Singh
2025-07-24  5:02             ` Matthew Brost
2025-07-24  5:46               ` Mika Penttilä
2025-07-24  5:57                 ` Matthew Brost
2025-07-24  6:04                   ` Mika Penttilä
2025-07-24  6:47                     ` Leon Romanovsky
2025-07-28 13:34               ` Jason Gunthorpe
2025-08-08  0:21           ` Matthew Brost
2025-08-08  9:43             ` Francois Dugast

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=b84846bd-801f-42b6-b1d4-3d784ddbcd1f@nvidia.com \
    --to=balbirs@nvidia.com \
    --cc=airlied@gmail.com \
    --cc=akpm@linux-foundation.org \
    --cc=apopple@nvidia.com \
    --cc=baohua@kernel.org \
    --cc=baolin.wang@linux.alibaba.com \
    --cc=dakr@kernel.org \
    --cc=david@redhat.com \
    --cc=donettom@linux.ibm.com \
    --cc=jane.chu@oracle.com \
    --cc=jglisse@redhat.com \
    --cc=kherbst@redhat.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=lyude@redhat.com \
    --cc=mpenttil@redhat.com \
    --cc=peterx@redhat.com \
    --cc=ryan.roberts@arm.com \
    --cc=shuah@kernel.org \
    --cc=simona@ffwll.ch \
    --cc=wangkefeng.wang@huawei.com \
    --cc=willy@infradead.org \
    --cc=ziy@nvidia.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).