From mboxrd@z Thu Jan 1 00:00:00 1970
From: SeongJae Park
To: Bijan Tabatabai
Cc: SeongJae Park, damon@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, akpm@linux-foundation.org, corbet@lwn.net, joshua.hahnjy@gmail.com, bijantabatab@micron.com, venkataravis@micron.com, emirakhur@micron.com, ajayjoshi@micron.com, vtavarespetr@micron.com, Ravi Shankar Jonnalagadda
Subject: Re: [RFC PATCH v3 09/13] mm/damon/vaddr: Add vaddr versions of migrate_{hot,cold}
Date: Wed, 2 Jul 2025 16:51:38 -0700
Message-Id: <20250702235138.56720-1-sj@kernel.org>
In-Reply-To: <20250702201337.5780-10-bijan311@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
On Wed, 2 Jul 2025 15:13:32 -0500 Bijan Tabatabai wrote:

> From: Bijan Tabatabai
>
> migrate_{hot,cold} are paddr schemes that are used to migrate hot/cold
> data to a specified node. However, these schemes are only available when
> doing physical address monitoring. This patch adds an implementation of
> them for virtual address monitoring as well.
>
> Co-developed-by: Ravi Shankar Jonnalagadda
> Signed-off-by: Ravi Shankar Jonnalagadda
> Signed-off-by: Bijan Tabatabai
> ---
>  mm/damon/vaddr.c | 102 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 102 insertions(+)
>
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index 46554e49a478..5cdfdc47c5ff 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -15,6 +15,7 @@
>  #include
>  #include
>
> +#include "../internal.h"
>  #include "ops-common.h"
>
>  #ifdef CONFIG_DAMON_VADDR_KUNIT_TEST
> @@ -610,6 +611,65 @@ static unsigned int damon_va_check_accesses(struct damon_ctx *ctx)
>  	return max_nr_accesses;
>  }
>
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static int damos_va_migrate_pmd_entry(pmd_t *pmd, unsigned long addr,
> +		unsigned long next, struct mm_walk *walk)

I'd suggest putting the CONFIG_TRANSPARENT_HUGEPAGE check into the body of this function and handling both pmd and pte here, consistent with damon_young_pmd_entry().
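Something like below, totally untested and only for clarifying the idea (names and details may be off; with this, .pte_entry could be dropped from the walk_ops, same as damon_young_ops):

```c
static int damos_va_migrate_pmd_entry(pmd_t *pmd, unsigned long addr,
		unsigned long next, struct mm_walk *walk)
{
	struct list_head *migration_list = walk->private;
	struct folio *folio;
	spinlock_t *ptl;
	pte_t *start_pte, *pte, ptent;

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	if (pmd_trans_huge(pmdp_get(pmd))) {
		pmd_t pmde;

		ptl = pmd_lock(walk->mm, pmd);
		pmde = pmdp_get(pmd);
		if (!pmd_present(pmde) || !pmd_trans_huge(pmde)) {
			spin_unlock(ptl);
			return 0;
		}

		folio = damon_get_folio(pmd_pfn(pmde));
		if (folio) {
			if (folio_isolate_lru(folio))
				list_add(&folio->lru, migration_list);
			folio_put(folio);
		}
		spin_unlock(ptl);
		return 0;
	}
#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */

	/* pte-mapped part; essentially damos_va_migrate_pte_entry() inlined */
	start_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	if (!start_pte)
		return 0;
	for (; addr < next; addr += PAGE_SIZE, pte++) {
		ptent = ptep_get(pte);
		if (pte_none(ptent) || !pte_present(ptent))
			continue;

		folio = damon_get_folio(pte_pfn(ptent));
		if (!folio)
			continue;

		if (folio_isolate_lru(folio))
			list_add(&folio->lru, migration_list);
		folio_put(folio);
	}
	pte_unmap_unlock(start_pte, ptl);
	return 0;
}
```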
> +{
> +	struct list_head *migration_list = walk->private;
> +	struct folio *folio;
> +	spinlock_t *ptl;
> +	pmd_t pmde;
> +
> +	ptl = pmd_lock(walk->mm, pmd);
> +	pmde = pmdp_get(pmd);
> +
> +	if (!pmd_present(pmde) || !pmd_trans_huge(pmde))
> +		goto unlock;
> +
> +	folio = damon_get_folio(pmd_pfn(pmde));
> +	if (!folio)
> +		goto unlock;
> +
> +	if (!folio_isolate_lru(folio))
> +		goto put_folio;
> +
> +	list_add(&folio->lru, migration_list);
> +
> +put_folio:
> +	folio_put(folio);
> +unlock:
> +	spin_unlock(ptl);
> +	return 0;
> +}
> +#else
> +#define damos_va_migrate_pmd_entry NULL
> +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> +
> +static int damos_va_migrate_pte_entry(pte_t *pte, unsigned long addr,
> +		unsigned long enxt, struct mm_walk *walk)

Nit. s/enxt/next/ ?

> +{
> +	struct list_head *migration_list = walk->private;
> +	struct folio *folio;
> +	pte_t ptent;
> +
> +	ptent = ptep_get(pte);
> +	if (pte_none(*pte) || !pte_present(*pte))
> +		return 0;

Shouldn't we use the cached pte value (ptent) instead of *pte? Also, I'd suggest merging this into damos_va_migrate_pmd_entry(), consistent with damon_young_pmd_entry().
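To be clear about the first point, I mean something like below, so that the pte value is read only once:

```c
	ptent = ptep_get(pte);
	if (pte_none(ptent) || !pte_present(ptent))
		return 0;
```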
> +
> +	folio = damon_get_folio(pte_pfn(ptent));
> +	if (!folio)
> +		return 0;
> +
> +	if (!folio_isolate_lru(folio))
> +		goto out;
> +
> +	list_add(&folio->lru, migration_list);
> +
> +out:
> +	folio_put(folio);
> +	return 0;
> +}
> +
>  /*
>   * Functions for the target validity check and cleanup
>   */
> @@ -653,6 +713,41 @@ static unsigned long damos_madvise(struct damon_target *target,
>  }
>  #endif /* CONFIG_ADVISE_SYSCALLS */
>
> +static unsigned long damos_va_migrate(struct damon_target *target,
> +		struct damon_region *r, struct damos *s,
> +		unsigned long *sz_filter_passed)
> +{
> +	LIST_HEAD(folio_list);
> +	struct task_struct *task;
> +	struct mm_struct *mm;
> +	unsigned long applied = 0;
> +	struct mm_walk_ops walk_ops = {
> +		.pmd_entry = damos_va_migrate_pmd_entry,
> +		.pte_entry = damos_va_migrate_pte_entry,
> +		.walk_lock = PGWALK_RDLOCK,
> +	};
> +
> +	task = damon_get_task_struct(target);
> +	if (!task)
> +		return 0;
> +
> +	mm = damon_get_mm(target);
> +	if (!mm)
> +		goto put_task;
> +
> +	mmap_read_lock(mm);
> +	walk_page_range(mm, r->ar.start, r->ar.end, &walk_ops, &folio_list);
> +	mmap_read_unlock(mm);
> +	mmput(mm);
> +
> +	applied = damon_migrate_pages(&folio_list, s->target_nid);
> +	cond_resched();
> +
> +put_task:
> +	put_task_struct(task);

It seems task is not actually used, so can this variable and the related code in this function be removed? Or am I missing something?
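In other words, I imagine the function could be simplified like below (again, totally untested, just for showing what I mean):

```c
static unsigned long damos_va_migrate(struct damon_target *target,
		struct damon_region *r, struct damos *s,
		unsigned long *sz_filter_passed)
{
	LIST_HEAD(folio_list);
	struct mm_struct *mm;
	unsigned long applied = 0;
	struct mm_walk_ops walk_ops = {
		.pmd_entry = damos_va_migrate_pmd_entry,
		.pte_entry = damos_va_migrate_pte_entry,
		.walk_lock = PGWALK_RDLOCK,
	};

	/* damon_get_mm() pins the mm on its own; no task reference needed */
	mm = damon_get_mm(target);
	if (!mm)
		return 0;

	mmap_read_lock(mm);
	walk_page_range(mm, r->ar.start, r->ar.end, &walk_ops, &folio_list);
	mmap_read_unlock(mm);
	mmput(mm);

	applied = damon_migrate_pages(&folio_list, s->target_nid);
	cond_resched();

	return applied * PAGE_SIZE;
}
```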
> +	return applied * PAGE_SIZE;
> +}
> +
>  static unsigned long damon_va_apply_scheme(struct damon_ctx *ctx,
>  		struct damon_target *t, struct damon_region *r,
>  		struct damos *scheme, unsigned long *sz_filter_passed)
> @@ -675,6 +770,9 @@ static unsigned long damon_va_apply_scheme(struct damon_ctx *ctx,
>  	case DAMOS_NOHUGEPAGE:
>  		madv_action = MADV_NOHUGEPAGE;
>  		break;
> +	case DAMOS_MIGRATE_HOT:
> +	case DAMOS_MIGRATE_COLD:
> +		return damos_va_migrate(t, r, scheme, sz_filter_passed);
>  	case DAMOS_STAT:
>  		return 0;
>  	default:
> @@ -695,6 +793,10 @@ static int damon_va_scheme_score(struct damon_ctx *context,
>  	switch (scheme->action) {
>  	case DAMOS_PAGEOUT:
>  		return damon_cold_score(context, r, scheme);
> +	case DAMOS_MIGRATE_HOT:
> +		return damon_hot_score(context, r, scheme);
> +	case DAMOS_MIGRATE_COLD:
> +		return damon_cold_score(context, r, scheme);
>  	default:
>  		break;
>  	}
> --
> 2.43.5


Thanks,
SJ