From: Mika Penttilä <mpenttil@redhat.com>
Date: Fri, 4 Jul 2025 07:46:52 +0300
Subject: Re: [v1 resend 03/12] mm/thp: zone_device awareness in THP handling code
To: Balbir Singh <balbirs@nvidia.com>, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, Karol Herbst,
 Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter, Jérôme Glisse,
 Shuah Khan, David Hildenbrand, Barry Song, Baolin Wang, Ryan Roberts,
 Matthew Wilcox, Peter Xu, Zi Yan, Kefeng Wang, Jane Chu, Alistair Popple,
 Donet Tom
References: <20250703233511.2028395-1-balbirs@nvidia.com> <20250703233511.2028395-4-balbirs@nvidia.com>
In-Reply-To: <20250703233511.2028395-4-balbirs@nvidia.com>

On 7/4/25 02:35, Balbir Singh wrote:
> Make THP handling code in the mm subsystem for THP pages
> aware of zone device pages. Although the code is
> designed to be generic when it comes to handling splitting
> of pages, the code is designed to work for THP page sizes
> corresponding to HPAGE_PMD_NR.
>
> Modify page_vma_mapped_walk() to return true when a zone
> device huge entry is present, enabling try_to_migrate()
> and other code migration paths to appropriately process the
> entry
>
> pmd_pfn() does not work well with zone device entries, use
> pfn_pmd_entry_to_swap() for checking and comparison as for
> zone device entries.
>
> try_to_map_to_unused_zeropage() does not apply to zone device
> entries, zone device entries are ignored in the call.
>
> Cc: Karol Herbst
> Cc: Lyude Paul
> Cc: Danilo Krummrich
> Cc: David Airlie
> Cc: Simona Vetter
> Cc: "Jérôme Glisse"
> Cc: Shuah Khan
> Cc: David Hildenbrand
> Cc: Barry Song
> Cc: Baolin Wang
> Cc: Ryan Roberts
> Cc: Matthew Wilcox
> Cc: Peter Xu
> Cc: Zi Yan
> Cc: Kefeng Wang
> Cc: Jane Chu
> Cc: Alistair Popple
> Cc: Donet Tom
>
> Signed-off-by: Balbir Singh
> ---
>  mm/huge_memory.c     | 153 +++++++++++++++++++++++++++++++------------
>  mm/migrate.c         |   2 +
>  mm/page_vma_mapped.c |  10 +++
>  mm/pgtable-generic.c |   6 ++
>  mm/rmap.c            |  19 +++++-
>  5 files changed, 146 insertions(+), 44 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ce130225a8e5..e6e390d0308f 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1711,7 +1711,8 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>      if (unlikely(is_swap_pmd(pmd))) {
>          swp_entry_t entry = pmd_to_swp_entry(pmd);
>
> -        VM_BUG_ON(!is_pmd_migration_entry(pmd));
> +        VM_BUG_ON(!is_pmd_migration_entry(pmd) &&
> +              !is_device_private_entry(entry));
>          if (!is_readable_migration_entry(entry)) {
>              entry = make_readable_migration_entry(
>                              swp_offset(entry));
> @@ -2222,10 +2223,17 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>      } else if (thp_migration_supported()) {
>          swp_entry_t entry;
>
> -        VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>          entry = pmd_to_swp_entry(orig_pmd);
>          folio = pfn_swap_entry_folio(entry);
>          flush_needed = 0;
> +
> +        VM_BUG_ON(!is_pmd_migration_entry(*pmd) &&
> +              !folio_is_device_private(folio));
> +
> +        if (folio_is_device_private(folio)) {
> +            folio_remove_rmap_pmd(folio, folio_page(folio, 0), vma);
> +            WARN_ON_ONCE(folio_mapcount(folio) < 0);
> +        }
>      } else
>          WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
>
> @@ -2247,6 +2255,15 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>          folio_mark_accessed(folio);
>      }
>
> +    /*
> +     * Do a folio put on zone device private pages after
> +     * changes to mm_counter, because the folio_put() will
> +     * clean folio->mapping and the folio_test_anon() check
> +     * will not be usable.
> +     */
> +    if (folio_is_device_private(folio))
> +        folio_put(folio);
> +
>      spin_unlock(ptl);
>      if (flush_needed)
>          tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
> @@ -2375,7 +2392,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>          struct folio *folio = pfn_swap_entry_folio(entry);
>          pmd_t newpmd;
>
> -        VM_BUG_ON(!is_pmd_migration_entry(*pmd));
> +        VM_BUG_ON(!is_pmd_migration_entry(*pmd) &&
> +              !folio_is_device_private(folio));
>          if (is_writable_migration_entry(entry)) {
>              /*
>               * A protection check is difficult so
> @@ -2388,9 +2406,11 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>              newpmd = swp_entry_to_pmd(entry);
>              if (pmd_swp_soft_dirty(*pmd))
>                  newpmd = pmd_swp_mksoft_dirty(newpmd);
> -        } else {
> +        } else if (is_writable_device_private_entry(entry)) {
> +            newpmd = swp_entry_to_pmd(entry);
> +            entry = make_device_exclusive_entry(swp_offset(entry));
> +        } else
>              newpmd = *pmd;
> -        }
>
>          if (uffd_wp)
>              newpmd = pmd_swp_mkuffd_wp(newpmd);
> @@ -2842,16 +2862,20 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>      struct page *page;
>      pgtable_t pgtable;
>      pmd_t old_pmd, _pmd;
> -    bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
> -    bool anon_exclusive = false, dirty = false;
> +    bool young, write, soft_dirty, uffd_wp = false;
> +    bool anon_exclusive = false, dirty = false, present = false;
>      unsigned long addr;
>      pte_t *pte;
>      int i;
> +    swp_entry_t swp_entry;
>
>      VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
>      VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
>      VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
> -    VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
> +
> +    VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd)
> +          && !(is_swap_pmd(*pmd) &&
> +           is_device_private_entry(pmd_to_swp_entry(*pmd))));
>
>      count_vm_event(THP_SPLIT_PMD);
>
> @@ -2899,20 +2923,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>          return __split_huge_zero_page_pmd(vma, haddr, pmd);
>      }
>
> -    pmd_migration = is_pmd_migration_entry(*pmd);
> -    if (unlikely(pmd_migration)) {
> -        swp_entry_t entry;
>
> +    present = pmd_present(*pmd);
> +    if (unlikely(!present)) {
> +        swp_entry = pmd_to_swp_entry(*pmd);
>          old_pmd = *pmd;
> -        entry = pmd_to_swp_entry(old_pmd);
> -        page = pfn_swap_entry_to_page(entry);
> -        write = is_writable_migration_entry(entry);
> +
> +        folio = pfn_swap_entry_folio(swp_entry);
> +        VM_BUG_ON(!is_migration_entry(swp_entry) &&
> +              !is_device_private_entry(swp_entry));
> +        page = pfn_swap_entry_to_page(swp_entry);
> +        write = is_writable_migration_entry(swp_entry);

Shouldn't write include is_writable_device_private_entry() also?
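
For instance something like this (an untested sketch, just to make the
suggestion concrete, reusing the is_writable_device_private_entry() helper
the patch already uses elsewhere):

    write = is_writable_migration_entry(swp_entry) ||
            is_writable_device_private_entry(swp_entry);

Otherwise a writable device private PMD looks like it gets split into
readable device private PTE entries in the loop further down.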

> +
>          if (PageAnon(page))
> -            anon_exclusive = is_readable_exclusive_migration_entry(entry);
> -        young = is_migration_entry_young(entry);
> -        dirty = is_migration_entry_dirty(entry);
> +            anon_exclusive =
> +                is_readable_exclusive_migration_entry(swp_entry);
>          soft_dirty = pmd_swp_soft_dirty(old_pmd);
>          uffd_wp = pmd_swp_uffd_wp(old_pmd);
> +        young = is_migration_entry_young(swp_entry);
> +        dirty = is_migration_entry_dirty(swp_entry);
>      } else {
>          /*
>           * Up to this point the pmd is present and huge and userland has
> @@ -2996,30 +3025,45 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>       * Note that NUMA hinting access restrictions are not transferred to
>       * avoid any possibility of altering permissions across VMAs.
>       */
> -    if (freeze || pmd_migration) {
> +    if (freeze || !present) {
>          for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
>              pte_t entry;
> -            swp_entry_t swp_entry;
> -
> -            if (write)
> -                swp_entry = make_writable_migration_entry(
> -                            page_to_pfn(page + i));
> -            else if (anon_exclusive)
> -                swp_entry = make_readable_exclusive_migration_entry(
> -                            page_to_pfn(page + i));
> -            else
> -                swp_entry = make_readable_migration_entry(
> -                            page_to_pfn(page + i));
> -            if (young)
> -                swp_entry = make_migration_entry_young(swp_entry);
> -            if (dirty)
> -                swp_entry = make_migration_entry_dirty(swp_entry);
> -            entry = swp_entry_to_pte(swp_entry);
> -            if (soft_dirty)
> -                entry = pte_swp_mksoft_dirty(entry);
> -            if (uffd_wp)
> -                entry = pte_swp_mkuffd_wp(entry);
> -
> +            if (freeze || is_migration_entry(swp_entry)) {
> +                if (write)
> +                    swp_entry = make_writable_migration_entry(
> +                                page_to_pfn(page + i));
> +                else if (anon_exclusive)
> +                    swp_entry = make_readable_exclusive_migration_entry(
> +                                page_to_pfn(page + i));
> +                else
> +                    swp_entry = make_readable_migration_entry(
> +                                page_to_pfn(page + i));
> +                if (young)
> +                    swp_entry = make_migration_entry_young(swp_entry);
> +                if (dirty)
> +                    swp_entry = make_migration_entry_dirty(swp_entry);
> +                entry = swp_entry_to_pte(swp_entry);
> +                if (soft_dirty)
> +                    entry = pte_swp_mksoft_dirty(entry);
> +                if (uffd_wp)
> +                    entry = pte_swp_mkuffd_wp(entry);
> +            } else {
> +                VM_BUG_ON(!is_device_private_entry(swp_entry));
> +                if (write)
> +                    swp_entry = make_writable_device_private_entry(
> +                                page_to_pfn(page + i));
> +                else if (anon_exclusive)
> +                    swp_entry = make_device_exclusive_entry(
> +                                page_to_pfn(page + i));
> +                else
> +                    swp_entry = make_readable_device_private_entry(
> +                                page_to_pfn(page + i));
> +                entry = swp_entry_to_pte(swp_entry);
> +                if (soft_dirty)
> +                    entry = pte_swp_mksoft_dirty(entry);
> +                if (uffd_wp)
> +                    entry = pte_swp_mkuffd_wp(entry);
> +            }
>              VM_WARN_ON(!pte_none(ptep_get(pte + i)));
>              set_pte_at(mm, addr, pte + i, entry);
>          }
> @@ -3046,7 +3090,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>      }
>      pte_unmap(pte);
>
> -    if (!pmd_migration)
> +    if (present)
>          folio_remove_rmap_pmd(folio, page, vma);
>      if (freeze)
>          put_page(page);
> @@ -3058,8 +3102,11 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>                 pmd_t *pmd, bool freeze)
>  {
> +
>      VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
> -    if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd))
> +    if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd) ||
> +        (is_swap_pmd(*pmd) &&
> +         is_device_private_entry(pmd_to_swp_entry(*pmd))))
>          __split_huge_pmd_locked(vma, pmd, address, freeze);
>  }
>
> @@ -3238,6 +3285,9 @@ static void lru_add_split_folio(struct folio *folio, struct folio *new_folio,
>      VM_BUG_ON_FOLIO(folio_test_lru(new_folio), folio);
>      lockdep_assert_held(&lruvec->lru_lock);
>
> +    if (folio_is_device_private(folio))
> +        return;
> +
>      if (list) {
>          /* page reclaim is reclaiming a huge page */
>          VM_WARN_ON(folio_test_lru(folio));
> @@ -3252,6 +3302,7 @@ static void lru_add_split_folio(struct folio *folio, struct folio *new_folio,
>          list_add_tail(&new_folio->lru, &folio->lru);
>          folio_set_lru(new_folio);
>      }
> +
>  }
>
>  /* Racy check whether the huge page can be split */
> @@ -3543,6 +3594,10 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>                  ((mapping || swap_cache) ?
>                   folio_nr_pages(release) : 0));
>
> +        if (folio_is_device_private(release))
> +            percpu_ref_get_many(&release->pgmap->ref,
> +                    (1 << new_order) - 1);
> +
>          lru_add_split_folio(origin_folio, release, lruvec,
>                  list);
>
> @@ -4596,7 +4651,10 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>          return 0;
>
>      flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
> -    pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
> +    if (!folio_is_device_private(folio))
> +        pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
> +    else
> +        pmdval = pmdp_huge_clear_flush(vma, address, pvmw->pmd);
>
>      /* See folio_try_share_anon_rmap_pmd(): invalidate PMD first. */
>      anon_exclusive = folio_test_anon(folio) && PageAnonExclusive(page);
> @@ -4646,6 +4704,17 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
>      entry = pmd_to_swp_entry(*pvmw->pmd);
>      folio_get(folio);
>      pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
> +
> +    if (unlikely(folio_is_device_private(folio))) {
> +        if (pmd_write(pmde))
> +            entry = make_writable_device_private_entry(
> +                        page_to_pfn(new));
> +        else
> +            entry = make_readable_device_private_entry(
> +                        page_to_pfn(new));
> +        pmde = swp_entry_to_pmd(entry);
> +    }
> +
>      if (pmd_swp_soft_dirty(*pvmw->pmd))
>          pmde = pmd_mksoft_dirty(pmde);
>      if (is_writable_migration_entry(entry))
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 767f503f0875..0b6ecf559b22 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -200,6 +200,8 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
>
>      if (PageCompound(page))
>          return false;
> +    if (folio_is_device_private(folio))
> +        return false;
>      VM_BUG_ON_PAGE(!PageAnon(page), page);
>      VM_BUG_ON_PAGE(!PageLocked(page), page);
>      VM_BUG_ON_PAGE(pte_present(ptep_get(pvmw->pte)), page);
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index e981a1a292d2..ff8254e52de5 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -277,6 +277,16 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>               * cannot return prematurely, while zap_huge_pmd() has
>               * cleared *pmd but not decremented compound_mapcount().
>               */
> +            swp_entry_t entry;
> +
> +            if (!thp_migration_supported())
> +                return not_found(pvmw);
> +            entry = pmd_to_swp_entry(pmde);
> +            if (is_device_private_entry(entry)) {
> +                pvmw->ptl = pmd_lock(mm, pvmw->pmd);
> +                return true;
> +            }
> +
>              if ((pvmw->flags & PVMW_SYNC) &&
>                  thp_vma_suitable_order(vma, pvmw->address,
>                              PMD_ORDER) &&
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index 567e2d084071..604e8206a2ec 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -292,6 +292,12 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
>          *pmdvalp = pmdval;
>      if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
>          goto nomap;
> +    if (is_swap_pmd(pmdval)) {
> +        swp_entry_t entry = pmd_to_swp_entry(pmdval);
> +
> +        if (is_device_private_entry(entry))
> +            goto nomap;
> +    }
>      if (unlikely(pmd_trans_huge(pmdval)))
>          goto nomap;
>      if (unlikely(pmd_bad(pmdval))) {
> diff --git a/mm/rmap.c b/mm/rmap.c
> index bd83724d14b6..da1e5b03e1fe 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2336,8 +2336,23 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>              break;
>          }
>  #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> -        subpage = folio_page(folio,
> -                pmd_pfn(*pvmw.pmd) - folio_pfn(folio));
> +        /*
> +         * Zone device private folios do not work well with
> +         * pmd_pfn() on some architectures due to pte
> +         * inversion.
> +         */
> +        if (folio_is_device_private(folio)) {
> +            swp_entry_t entry = pmd_to_swp_entry(*pvmw.pmd);
> +            unsigned long pfn = swp_offset_pfn(entry);
> +
> +            subpage = folio_page(folio, pfn
> +                        - folio_pfn(folio));
> +        } else {
> +            subpage = folio_page(folio,
> +                    pmd_pfn(*pvmw.pmd)
> +                    - folio_pfn(folio));
> +        }
> +
>          VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
>                  !folio_test_pmd_mappable(folio), folio);
>