From: "Zi Yan" <ziy@nvidia.com>
To: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>,
<david@fromorbit.com>, <willy@infradead.org>,
<chandan.babu@oracle.com>, <djwong@kernel.org>,
<brauner@kernel.org>, <akpm@linux-foundation.org>
Cc: <yang@os.amperecomputing.com>, <linux-kernel@vger.kernel.org>,
<linux-mm@kvack.org>, <john.g.garry@oracle.com>,
<linux-fsdevel@vger.kernel.org>, <hare@suse.de>,
<p.raghav@samsung.com>, <mcgrof@kernel.org>,
<gost.dev@samsung.com>, <cl@os.amperecomputing.com>,
<linux-xfs@vger.kernel.org>, <hch@lst.de>
Subject: Re: [PATCH v9 04/10] mm: split a folio in minimum folio order chunks
Date: Mon, 08 Jul 2024 09:53:03 -0400
Message-ID: <D2K7HHAVJDR9.8PR2HQZ00FXA@nvidia.com>
In-Reply-To: <20240704112320.82104-5-kernel@pankajraghav.com>
On Thu Jul 4, 2024 at 7:23 AM EDT, Pankaj Raghav (Samsung) wrote:
> From: Luis Chamberlain <mcgrof@kernel.org>
>
> split_folio() and split_folio_to_list() assume order 0. To support
> minorder for non-anonymous folios, we must expand these to check the
> folio mapping order and use that.
>
> Set new_order to be at least minimum folio order if it is set in
> split_huge_page_to_list() so that we can maintain minimum folio order
> requirement in the page cache.
>
> Update the debugfs write files used for testing to ensure the order
> is respected as well. We simply enforce the min order when a file
> mapping is used.
>
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> ---
> include/linux/huge_mm.h | 14 ++++++++---
> mm/huge_memory.c | 55 ++++++++++++++++++++++++++++++++++++++---
> 2 files changed, 61 insertions(+), 8 deletions(-)
<snip>
>
> @@ -3265,6 +3277,21 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> return ret;
> }
>
> +int split_folio_to_list(struct folio *folio, struct list_head *list)
> +{
> +	unsigned int min_order = 0;
> +
> +	if (!folio_test_anon(folio)) {
> +		if (!folio->mapping && folio_test_pmd_mappable(folio)) {
> +			count_vm_event(THP_SPLIT_PAGE_FAILED);
> +			return -EBUSY;
> +		}
This should be:

	if (!folio->mapping) {
		if (folio_test_pmd_mappable(folio))
			count_vm_event(THP_SPLIT_PAGE_FAILED);
		return -EBUSY;
	}

Otherwise, a non-PMD-mappable folio with no mapping will fall through
and cause a NULL pointer dereference in mapping_min_folio_order().
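Putting that together with the rest of the hunk, the helper would then
read roughly as follows (untested, just to illustrate the intended
control flow; all helpers are the ones already used in the patch):

	int split_folio_to_list(struct folio *folio, struct list_head *list)
	{
		unsigned int min_order = 0;

		if (!folio_test_anon(folio)) {
			/*
			 * folio->mapping can be NULL here (e.g. the folio was
			 * truncated), so bail out before it is dereferenced in
			 * mapping_min_folio_order() below.
			 */
			if (!folio->mapping) {
				if (folio_test_pmd_mappable(folio))
					count_vm_event(THP_SPLIT_PAGE_FAILED);
				return -EBUSY;
			}
			min_order = mapping_min_folio_order(folio->mapping);
		}

		return split_huge_page_to_list_to_order(&folio->page, list,
							min_order);
	}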
> +		min_order = mapping_min_folio_order(folio->mapping);
> +	}
> +
> +	return split_huge_page_to_list_to_order(&folio->page, list, min_order);
> +}
> +
> void __folio_undo_large_rmappable(struct folio *folio)
> {
> struct deferred_split *ds_queue;
--
Best Regards,
Yan, Zi
Thread overview: 23+ messages
2024-07-04 11:23 [PATCH v9 00/10] enable bs > ps in XFS Pankaj Raghav (Samsung)
2024-07-04 11:23 ` [PATCH v9 01/10] fs: Allow fine-grained control of folio sizes Pankaj Raghav (Samsung)
2024-07-04 11:23 ` [PATCH v9 02/10] filemap: allocate mapping_min_order folios in the page cache Pankaj Raghav (Samsung)
2024-07-04 11:23 ` [PATCH v9 03/10] readahead: allocate folios with mapping_min_order in readahead Pankaj Raghav (Samsung)
2024-07-04 11:23 ` [PATCH v9 04/10] mm: split a folio in minimum folio order chunks Pankaj Raghav (Samsung)
2024-07-08 13:53 ` Zi Yan [this message]
2024-07-09 11:04 ` Pankaj Raghav (Samsung)
2024-07-09 12:49 ` Zi Yan
2024-07-04 11:23 ` [PATCH v9 05/10] filemap: cap PTE range to be created to allowed zero fill in folio_map_range() Pankaj Raghav (Samsung)
2024-07-04 11:23 ` [PATCH v9 06/10] iomap: fix iomap_dio_zero() for fs bs > system page size Pankaj Raghav (Samsung)
2024-07-04 15:37 ` Hannes Reinecke
2024-07-04 22:13 ` Dave Chinner
2024-07-05 6:14 ` Hannes Reinecke
2024-07-05 14:19 ` Pankaj Raghav (Samsung)
2024-07-08 17:56 ` Darrick J. Wong
2024-07-08 22:18 ` Dave Chinner
2024-07-04 11:23 ` [PATCH v9 07/10] xfs: use kvmalloc for xattr buffers Pankaj Raghav (Samsung)
2024-07-04 11:23 ` [PATCH v9 08/10] xfs: expose block size in stat Pankaj Raghav (Samsung)
2024-07-04 11:23 ` [PATCH v9 09/10] xfs: make the calculation generic in xfs_sb_validate_fsb_count() Pankaj Raghav (Samsung)
2024-07-04 11:23 ` [PATCH v9 10/10] xfs: enable block size larger than page size support Pankaj Raghav (Samsung)
2024-07-08 22:12 ` [PATCH v9 00/10] enable bs > ps in XFS Luis Chamberlain
2024-07-08 22:27 ` Matthew Wilcox
2024-07-08 22:40 ` Stephen Rothwell