From: Matthew Wilcox <willy@infradead.org>
To: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
ying.huang@intel.com, david@redhat.com, Zi Yan <ziy@nvidia.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
hughd@google.com
Subject: Re: [PATCH 0/6] mm: convert numa balancing functions to use a folio
Date: Mon, 18 Sep 2023 13:57:04 +0100
Message-ID: <ZQhJIDXO1m5XFYH4@casper.infradead.org>
In-Reply-To: <20230918103213.4166210-1-wangkefeng.wang@huawei.com>
On Mon, Sep 18, 2023 at 06:32:07PM +0800, Kefeng Wang wrote:
> do_numa_page() only handles non-compound pages, and only PMD-mapped THP
> is handled in do_huge_pmd_numa_page(). Large, PTE-mapped folios will be
> supported soon, so let's convert more NUMA balancing functions to take a
> folio in preparation for that; no functional change intended for now.
>
> Kefeng Wang (6):
>   sched/numa, mm: make numa migrate functions to take a folio
>   mm: mempolicy: make mpol_misplaced() to take a folio
>   mm: memory: make numa_migrate_prep() to take a folio
>   mm: memory: use a folio in do_numa_page()
>   mm: memory: add vm_normal_pmd_folio()
>   mm: huge_memory: use a folio in do_huge_pmd_numa_page()
This all seems OK. It's kind of hard to review, though, because you change
the same line multiple times. I think it works out better to go top-down
instead of bottom-up: start with do_numa_page() and pass &folio->page to
numa_migrate_prep(), then do vm_normal_pmd_folio() followed by
do_huge_pmd_numa_page(); fourth would have been numa_migrate_prep(), and
so on. I don't want to ask you to redo the entire series, but please keep
this in mind for future patch series.
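As a rough sketch of that top-down intermediate step (illustrative only:
most of the real do_numa_page() is elided, and the page-based
numa_migrate_prep() prototype is assumed to be unchanged at this point in
the series), the caller can switch to a folio first and keep handing the
unconverted helper a page:

/*
 * Sketch (context: mm/memory.c): do_numa_page() already operates on a
 * folio, while numa_migrate_prep() still takes a struct page, so
 * &folio->page is passed until a later patch converts the helper too.
 */
static vm_fault_t do_numa_page(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct folio *folio;
	struct page *page;
	int target_nid, flags = 0;

	page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
	if (!page)
		return 0;
	folio = page_folio(page);

	/* ... pte/protection handling elided ... */

	target_nid = numa_migrate_prep(&folio->page, vma, vmf->address,
				       folio_nid(folio), &flags);

	/* ... migration and fallback handling elided ... */
	return 0;
}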
Also, it's nice to do things like remove the unnecessary 'extern' from
function declarations when you change them from page to folio. And
please try to stick to 80 columns; I know it's not always easy/possible.
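For example, a hypothetical before/after (assuming the mm-internal
prototype looks roughly like this):

/* Before: page-based prototype carrying a redundant 'extern'. */
extern int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
			     unsigned long addr, int page_nid, int *flags);

/* After: folio-based prototype, dropping 'extern' while touching the line. */
int numa_migrate_prep(struct folio *folio, struct vm_area_struct *vma,
		      unsigned long addr, int nid, int *flags);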
Thread overview: 13+ messages
2023-09-18 10:32 [PATCH 0/6] mm: convert numa balancing functions to use a folio Kefeng Wang
2023-09-18 10:32 ` [PATCH 1/6] sched/numa, mm: make numa migrate functions to take " Kefeng Wang
2023-09-20 3:05 ` Huang, Ying
2023-09-20 7:57 ` Kefeng Wang
2023-09-18 10:32 ` [PATCH 2/6] mm: mempolicy: make mpol_misplaced() " Kefeng Wang
2023-09-18 10:32 ` [PATCH 3/6] mm: memory: make numa_migrate_prep() " Kefeng Wang
2023-09-18 10:32 ` [PATCH 4/6] mm: memory: use a folio in do_numa_page() Kefeng Wang
2023-09-18 10:32 ` [PATCH 5/6] mm: memory: add vm_normal_pmd_folio() Kefeng Wang
2023-09-20 3:12 ` Huang, Ying
2023-09-20 8:07 ` Kefeng Wang
2023-09-18 10:32 ` [PATCH 6/6] mm: huge_memory: use a folio in do_huge_pmd_numa_page() Kefeng Wang
2023-09-18 12:57 ` Matthew Wilcox [this message]
2023-09-18 23:59 ` [PATCH 0/6] mm: convert numa balancing functions to use a folio Kefeng Wang