From mboxrd@z Thu Jan 1 00:00:00 1970
From: Lance Yang <refault0@gmail.com>
Date: Fri, 27 Jun 2025 21:40:53 +0800
Subject: Re: [PATCH v1 1/4] mm: convert FPB_IGNORE_* into FPB_HONOR_*
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
 "Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn,
 Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Zi Yan, Matthew Brost,
 Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price, Ying Huang,
 Alistair Popple, Pedro Falcato, Rik van Riel, Harry Yoo
In-Reply-To: <20250627115510.3273675-2-david@redhat.com>
References: <20250627115510.3273675-1-david@redhat.com> <20250627115510.3273675-2-david@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, Jun 27, 2025 at 7:55 PM David Hildenbrand <david@redhat.com> wrote:
>
> Honoring these PTE bits is the exception, so let's invert the meaning.
>
> With this change, most callers don't have to pass any flags.
>
> No functional change intended.
>

LGTM. Feel free to add:

Reviewed-by: Lance Yang

Thanks,
Lance

> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/internal.h  | 16 ++++++++--------
>  mm/madvise.c   |  3 +--
>  mm/memory.c    | 11 +++++------
>  mm/mempolicy.c |  4 +---
>  mm/mlock.c     |  3 +--
>  mm/mremap.c    |  3 +--
>  mm/rmap.c      |  3 +--
>  7 files changed, 18 insertions(+), 25 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index e84217e27778d..9690c75063881 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -202,17 +202,17 @@ static inline void vma_close(struct vm_area_struct *vma)
>  /* Flags for folio_pte_batch(). */
>  typedef int __bitwise fpb_t;
>
> -/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
> -#define FPB_IGNORE_DIRTY		((__force fpb_t)BIT(0))
> +/* Compare PTEs honoring the dirty bit. */
> +#define FPB_HONOR_DIRTY			((__force fpb_t)BIT(0))
>
> -/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
> -#define FPB_IGNORE_SOFT_DIRTY		((__force fpb_t)BIT(1))
> +/* Compare PTEs honoring the soft-dirty bit. */
> +#define FPB_HONOR_SOFT_DIRTY		((__force fpb_t)BIT(1))
>
>  static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>  {
> -	if (flags & FPB_IGNORE_DIRTY)
> +	if (!(flags & FPB_HONOR_DIRTY))
> 		pte = pte_mkclean(pte);
> -	if (likely(flags & FPB_IGNORE_SOFT_DIRTY))
> +	if (likely(!(flags & FPB_HONOR_SOFT_DIRTY)))
> 		pte = pte_clear_soft_dirty(pte);
> 	return pte_wrprotect(pte_mkold(pte));
>  }
> @@ -236,8 +236,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>   * pages of the same large folio.
>   *
>   * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN,
> - * the accessed bit, writable bit, dirty bit (with FPB_IGNORE_DIRTY) and
> - * soft-dirty bit (with FPB_IGNORE_SOFT_DIRTY).
> + * the accessed bit, writable bit, dirty bit (unless FPB_HONOR_DIRTY is set) and
> + * soft-dirty bit (unless FPB_HONOR_SOFT_DIRTY is set).
>   *
>   * start_ptep must map any page of the folio. max_nr must be at least one and
>   * must be limited by the caller so scanning cannot exceed a single page table.
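A tiny illustration for anyone skimming the thread -- not part of the
patch, just a userspace sketch where pte_t, the helpers, and the bit
values are made-up stand-ins for the kernel's -- of how the inverted
flags behave when normalizing PTEs before a batch comparison:

/* Illustrative stand-ins only -- not the kernel's real types/values. */
#include <stdio.h>

typedef unsigned long pte_t;
typedef int fpb_t;

#define FPB_HONOR_DIRTY         (1 << 0)
#define FPB_HONOR_SOFT_DIRTY    (1 << 1)

#define PTE_DIRTY               (1 << 8)  /* pretend hardware dirty bit */
#define PTE_SOFT_DIRTY          (1 << 9)  /* pretend soft-dirty bit */

static pte_t pte_mkclean(pte_t pte)          { return pte & ~(pte_t)PTE_DIRTY; }
static pte_t pte_clear_soft_dirty(pte_t pte) { return pte & ~(pte_t)PTE_SOFT_DIRTY; }

/* Same shape as __pte_batch_clear_ignored() after this patch: bits are
 * cleared (ignored) by default, and only kept for comparison when the
 * caller explicitly passes FPB_HONOR_*. */
static pte_t normalize(pte_t pte, fpb_t flags)
{
        if (!(flags & FPB_HONOR_DIRTY))
                pte = pte_mkclean(pte);
        if (!(flags & FPB_HONOR_SOFT_DIRTY))
                pte = pte_clear_soft_dirty(pte);
        return pte;
}

int main(void)
{
        pte_t dirty = PTE_DIRTY, clean = 0;

        /* flags == 0 (the common case now): dirty bit ignored -> equal */
        printf("%d\n", normalize(dirty, 0) == normalize(clean, 0));   /* 1 */

        /* FPB_HONOR_DIRTY: dirty bit participates -> PTEs differ */
        printf("%d\n", normalize(dirty, FPB_HONOR_DIRTY) ==
                       normalize(clean, FPB_HONOR_DIRTY));            /* 0 */
        return 0;
}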
> diff --git a/mm/madvise.c b/mm/madvise.c
> index e61e32b2cd91f..661bb743d2216 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -347,10 +347,9 @@ static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
> 			pte_t pte, bool *any_young,
> 			bool *any_dirty)
>  {
> -	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> 	int max_nr = (end - addr) / PAGE_SIZE;
>
> -	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
> +	return folio_pte_batch(folio, addr, ptep, pte, max_nr, 0, NULL,
> 			any_young, any_dirty);
>  }
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 0f9b32a20e5b7..ab2d6c1425691 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -990,10 +990,10 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
> 	 * by keeping the batching logic separate.
> 	 */
> 	if (unlikely(!*prealloc && folio_test_large(folio) && max_nr != 1)) {
> -		if (src_vma->vm_flags & VM_SHARED)
> -			flags |= FPB_IGNORE_DIRTY;
> -		if (!vma_soft_dirty_enabled(src_vma))
> -			flags |= FPB_IGNORE_SOFT_DIRTY;
> +		if (!(src_vma->vm_flags & VM_SHARED))
> +			flags |= FPB_HONOR_DIRTY;
> +		if (vma_soft_dirty_enabled(src_vma))
> +			flags |= FPB_HONOR_SOFT_DIRTY;
>
> 		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
> 				&any_writable, NULL, NULL);
> @@ -1535,7 +1535,6 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
> 		struct zap_details *details, int *rss, bool *force_flush,
> 		bool *force_break, bool *any_skipped)
>  {
> -	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> 	struct mm_struct *mm = tlb->mm;
> 	struct folio *folio;
> 	struct page *page;
> @@ -1565,7 +1564,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
> 	 * by keeping the batching logic separate.
> 	 */
> 	if (unlikely(folio_test_large(folio) && max_nr != 1)) {
> -		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
> +		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, 0,
> 				NULL, NULL, NULL);
>
> 		zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
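And one more hedged sketch -- likewise not from the patch; vm_shared and
soft_dirty_enabled are plain-boolean stand-ins for the real VMA checks --
distilling the copy_present_ptes() hunk above, which is now the one call
site that still computes non-zero flags:

/* Stand-in values, matching the sketch further up. */
typedef int fpb_t;
#define FPB_HONOR_DIRTY         (1 << 0)
#define FPB_HONOR_SOFT_DIRTY    (1 << 1)

/* Hypothetical distillation of the fork-path flag selection: private
 * (non-shared) mappings must keep the dirty bit intact across the batch,
 * and VMAs with soft-dirty tracking enabled must keep the soft-dirty
 * bit; every other caller can simply pass flags == 0. */
static fpb_t fork_batch_flags(int vm_shared, int soft_dirty_enabled)
{
        fpb_t flags = 0;

        if (!vm_shared)
                flags |= FPB_HONOR_DIRTY;
        if (soft_dirty_enabled)
                flags |= FPB_HONOR_SOFT_DIRTY;
        return flags;
}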
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 1ff7b2174eb77..2a25eedc3b1c0 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -675,7 +675,6 @@ static void queue_folios_pmd(pmd_t *pmd, struct mm_walk *walk)
>  static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
> 			unsigned long end, struct mm_walk *walk)
>  {
> -	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> 	struct vm_area_struct *vma = walk->vma;
> 	struct folio *folio;
> 	struct queue_pages *qp = walk->private;
> @@ -713,8 +712,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
> 			continue;
> 		if (folio_test_large(folio) && max_nr != 1)
> 			nr = folio_pte_batch(folio, addr, pte, ptent,
> -					max_nr, fpb_flags,
> -					NULL, NULL, NULL);
> +					max_nr, 0, NULL, NULL, NULL);
> 		/*
> 		 * vm_normal_folio() filters out zero pages, but there might
> 		 * still be reserved folios to skip, perhaps in a VDSO.
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 3cb72b579ffd3..2238cdc5eb1c1 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -307,14 +307,13 @@ void munlock_folio(struct folio *folio)
>  static inline unsigned int folio_mlock_step(struct folio *folio,
> 		pte_t *pte, unsigned long addr, unsigned long end)
>  {
> -	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> 	unsigned int count = (end - addr) >> PAGE_SHIFT;
> 	pte_t ptent = ptep_get(pte);
>
> 	if (!folio_test_large(folio))
> 		return 1;
>
> -	return folio_pte_batch(folio, addr, pte, ptent, count, fpb_flags, NULL,
> +	return folio_pte_batch(folio, addr, pte, ptent, count, 0, NULL,
> 			NULL, NULL);
>  }
>
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 36585041c760d..d4d3ffc931502 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -173,7 +173,6 @@ static pte_t move_soft_dirty_pte(pte_t pte)
>  static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
> 		pte_t *ptep, pte_t pte, int max_nr)
>  {
> -	const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> 	struct folio *folio;
>
> 	if (max_nr == 1)
> 		return 1;
> @@ -183,7 +182,7 @@ static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr
> 	if (!folio || !folio_test_large(folio))
> 		return 1;
>
> -	return folio_pte_batch(folio, addr, ptep, pte, max_nr, flags, NULL,
> +	return folio_pte_batch(folio, addr, ptep, pte, max_nr, 0, NULL,
> 			NULL, NULL);
>  }
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 3b74bb19c11dd..a29d7d29c7283 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1849,7 +1849,6 @@ void folio_remove_rmap_pud(struct folio *folio, struct page *page,
>  static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
> 		struct folio *folio, pte_t *ptep)
>  {
> -	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> 	int max_nr = folio_nr_pages(folio);
> 	pte_t pte = ptep_get(ptep);
>
> @@ -1860,7 +1859,7 @@ static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
> 	if (pte_pfn(pte) != folio_pfn(folio))
> 		return false;
>
> -	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
> +	return folio_pte_batch(folio, addr, ptep, pte, max_nr, 0, NULL,
> 			NULL, NULL) == max_nr;
>  }
>
> --
> 2.49.0
>
>