From: "David Hildenbrand (Arm)" <david@kernel.org>
To: David Carlier <devnexen@gmail.com>,
Barry Song <21cnbao@gmail.com>, Kairui Song <kasong@tencent.com>,
Chris Li <chrisl@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Kemeng Shi <shikemeng@huaweicloud.com>,
Nhat Pham <nphamcs@gmail.com>, Baoquan He <bhe@redhat.com>,
Youngjun Park <youngjun.park@lge.com>,
NeilBrown <neil@brown.name>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3] mm/page_io: use sio->len for PSWPIN accounting in sio_read_complete()
Date: Wed, 1 Apr 2026 10:32:30 +0200 [thread overview]
Message-ID: <a8a8533c-992a-4ac5-a387-3cde8b6e40b5@kernel.org> (raw)
In-Reply-To: <20260401074753.238053-1-devnexen@gmail.com>
On 4/1/26 09:47, David Carlier wrote:
> sio_read_complete() uses sio->pages to account global PSWPIN vm events,
> but sio->pages tracks the number of bvec entries (folios), not base
> pages.
>
> While large folios cannot currently reach this path (SWP_FS_OPS and
> SWP_SYNCHRONOUS_IO are mutually exclusive, and mTHP swap-in allocation
> is gated on SWP_SYNCHRONOUS_IO), the accounting is semantically
> inconsistent with the per-memcg path which correctly uses
> folio_nr_pages().
>
> Use sio->len >> PAGE_SHIFT instead, which gives the correct base page
> count since sio->len is accumulated via folio_size(folio).
>
> Signed-off-by: David Carlier <devnexen@gmail.com>
Next time, please don't send new revisions as replies to previous
revisions :)
> ---
> mm/page_io.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_io.c b/mm/page_io.c
> index 63b262f4c5a9..1389cd57ca88 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -497,7 +497,7 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
>  			folio_mark_uptodate(folio);
>  			folio_unlock(folio);
>  		}
> -		count_vm_events(PSWPIN, sio->pages);
> +		count_vm_events(PSWPIN, sio->len >> PAGE_SHIFT);
>  	} else {
>  		for (p = 0; p < sio->pages; p++) {
>  			struct folio *folio = page_folio(sio->bvec[p].bv_page);
sio->len always covers full pages as processed in swap_read_folio_fs(),
so there should not be any difference.
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
--
Cheers,
David
Thread overview: 14+ messages
2026-03-30 7:12 [PATCH v2] mm/page_io: fix PSWPIN undercount for large folios in sio_read_complete() David Carlier
2026-03-30 22:20 ` Andrew Morton
2026-03-31 22:33 ` Barry Song
2026-04-01 7:10 ` David CARLIER
2026-04-01 7:30 ` David Hildenbrand (Arm)
2026-04-01 20:22 ` Barry Song
2026-04-01 7:47 ` [PATCH v3] mm/page_io: use sio->len for PSWPIN accounting " David Carlier
2026-04-01 8:32 ` David Hildenbrand (Arm) [this message]
2026-04-01 22:58 ` Andrew Morton
2026-04-01 23:43 ` Barry Song
2026-04-02 3:07 ` Chris Li
2026-04-02 4:11 ` Matthew Wilcox
2026-04-02 5:51 ` Barry Song
2026-04-02 6:01 ` David CARLIER