From mboxrd@z Thu Jan 1 00:00:00 1970
From: SeongJae Park
To: Lorenzo Stoakes
Cc: SeongJae Park, Andrew Morton, Janosch Frank, Claudio Imbrenda, David Hildenbrand, Alexander Gordeev, Gerald Schaefer, Heiko Carstens, Vasily Gorbik, Sven Schnelle, Peter Xu, Alexander Viro, Christian Brauner, Jan Kara, Arnd Bergmann, Zi Yan, Baolin Wang, "Liam R. Howlett", Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Muchun Song, Oscar Salvador, Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price, Ying Huang, Alistair Popple, Axel Rasmussen, Yuanchu Xie, Wei Xu, Kemeng Shi, Kairui Song, Nhat Pham, Baoquan He, Chris Li, Matthew Wilcox, Jason Gunthorpe, Leon Romanovsky, Xu Xin, Chengming Zhou, Jann Horn, Miaohe Lin, Naoya Horiguchi, Pedro Falcato, Pasha Tatashin, Rik van Riel, Harry Yoo, Hugh Dickins, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, damon@lists.linux.dev
Subject: Re: [PATCH v2 10/16] mm: replace pmd_to_swp_entry() with softleaf_from_pmd()
Date: Sat, 8 Nov 2025 09:18:08 -0800
Message-ID: <20251108171808.77373-1-sj@kernel.org>
In-Reply-To: <093e438c240272d081548222900a5c3b205e9a5d.1762621568.git.lorenzo.stoakes@oracle.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On Sat, 8 Nov 2025 17:08:24 +0000 Lorenzo Stoakes wrote:

> Introduce softleaf_from_pmd() to do the equivalent operation for PMDs that
> softleaf_from_pte() fulfils, and cascade changes through code base
> accordingly, introducing helpers as necessary.
>
> We are then able to eliminate pmd_to_swp_entry(), is_pmd_migration_entry(),
> is_pmd_device_private_entry() and is_pmd_non_present_folio_entry().
>
> This further establishes the use of leaf operations throughout the code
> base and further establishes the foundations for eliminating is_swap_pmd().
>
> No functional change intended.
>
> Signed-off-by: Lorenzo Stoakes
> ---
>  fs/proc/task_mmu.c      |  27 +++--
>  include/linux/leafops.h | 220 ++++++++++++++++++++++++++++++++++++++++
>  include/linux/migrate.h |   2 +-
>  include/linux/swapops.h | 100 ------------------
>  mm/damon/ops-common.c   |   6 +-
>  mm/filemap.c            |   6 +-
>  mm/hmm.c                |  16 +--
>  mm/huge_memory.c        |  98 +++++++++---------
>  mm/khugepaged.c         |   4 +-
>  mm/madvise.c            |   2 +-
>  mm/memory.c             |   4 +-
>  mm/mempolicy.c          |   4 +-
>  mm/migrate.c            |  20 ++--
>  mm/migrate_device.c     |  14 +--
>  mm/page_table_check.c   |  16 +--
>  mm/page_vma_mapped.c    |  15 +--
>  mm/pagewalk.c           |   8 +-
>  mm/rmap.c               |   4 +-
>  18 files changed, 343 insertions(+), 223 deletions(-)

[...]

> diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
> index 971df8a16ba4..a218d9922234 100644
> --- a/mm/damon/ops-common.c
> +++ b/mm/damon/ops-common.c
> @@ -11,7 +11,7 @@
>  #include
>  #include
>  #include
> -#include
> +#include
>
>  #include "../internal.h"
>  #include "ops-common.h"
> @@ -51,7 +51,7 @@ void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr
>  	if (likely(pte_present(pteval)))
>  		pfn = pte_pfn(pteval);
>  	else
> -		pfn = swp_offset_pfn(pte_to_swp_entry(pteval));
> +		pfn = softleaf_to_pfn(softleaf_from_pte(pteval));
>
>  	folio = damon_get_folio(pfn);
>  	if (!folio)
> @@ -83,7 +83,7 @@ void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr
>  	if (likely(pmd_present(pmdval)))
>  		pfn = pmd_pfn(pmdval);
>  	else
> -		pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
> +		pfn = softleaf_to_pfn(softleaf_from_pmd(pmdval));
>
>  	folio = damon_get_folio(pfn);
>  	if (!folio)

I'll try to find time to review the whole series.
But, for now, for this DAMON part of the change,

Reviewed-by: SeongJae Park

Thanks,
SJ

[...]