From: Gavin Shan <gshan@redhat.com>
To: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
david@redhat.com, djwong@kernel.org, willy@infradead.org,
akpm@linux-foundation.org, hughd@google.com,
torvalds@linux-foundation.org, zhenyzha@redhat.com,
shan.gavin@gmail.com
Subject: [PATCH 2/4] mm/filemap: Skip to allocate PMD-sized folios if needed
Date: Tue, 25 Jun 2024 19:06:44 +1000 [thread overview]
Message-ID: <20240625090646.1194644-3-gshan@redhat.com> (raw)
In-Reply-To: <20240625090646.1194644-1-gshan@redhat.com>
On ARM64, HPAGE_PMD_ORDER is 13 when the base page size is 64KB. A
PMD-sized page cache entry of that order can't be supported by the
xarray, as the following warning indicates.
------------[ cut here ]------------
WARNING: CPU: 35 PID: 7484 at lib/xarray.c:1025 xas_split_alloc+0xf8/0x128
Modules linked in: nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib \
nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct \
nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 \
ip_set rfkill nf_tables nfnetlink vfat fat virtio_balloon drm \
fuse xfs libcrc32c crct10dif_ce ghash_ce sha2_ce sha256_arm64 \
sha1_ce virtio_net net_failover virtio_console virtio_blk failover \
dimlib virtio_mmio
CPU: 35 PID: 7484 Comm: test Kdump: loaded Tainted: G W 6.10.0-rc5-gavin+ #9
Hardware name: QEMU KVM Virtual Machine, BIOS edk2-20240524-1.el9 05/24/2024
pstate: 83400005 (Nzcv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
pc : xas_split_alloc+0xf8/0x128
lr : split_huge_page_to_list_to_order+0x1c4/0x720
sp : ffff800087a4f6c0
x29: ffff800087a4f6c0 x28: ffff800087a4f720 x27: 000000001fffffff
x26: 0000000000000c40 x25: 000000000000000d x24: ffff00010625b858
x23: ffff800087a4f720 x22: ffffffdfc0780000 x21: 0000000000000000
x20: 0000000000000000 x19: ffffffdfc0780000 x18: 000000001ff40000
x17: 00000000ffffffff x16: 0000018000000000 x15: 51ec004000000000
x14: 0000e00000000000 x13: 0000000000002000 x12: 0000000000000020
x11: 51ec000000000000 x10: 51ece1c0ffff8000 x9 : ffffbeb961a44d28
x8 : 0000000000000003 x7 : ffffffdfc0456420 x6 : ffff0000e1aa6eb8
x5 : 20bf08b4fe778fca x4 : ffffffdfc0456420 x3 : 0000000000000c40
x2 : 000000000000000d x1 : 000000000000000c x0 : 0000000000000000
Call trace:
xas_split_alloc+0xf8/0x128
split_huge_page_to_list_to_order+0x1c4/0x720
truncate_inode_partial_folio+0xdc/0x160
truncate_inode_pages_range+0x1b4/0x4a8
truncate_pagecache_range+0x84/0xa0
xfs_flush_unmap_range+0x70/0x90 [xfs]
xfs_file_fallocate+0xfc/0x4d8 [xfs]
vfs_fallocate+0x124/0x2e8
ksys_fallocate+0x4c/0xa0
__arm64_sys_fallocate+0x24/0x38
invoke_syscall.constprop.0+0x7c/0xd8
do_el0_svc+0xb4/0xd0
el0_svc+0x44/0x1d8
el0t_64_sync_handler+0x134/0x150
el0t_64_sync+0x17c/0x180
Fix it by skipping the allocation of PMD-sized page cache when its
order (HPAGE_PMD_ORDER) is larger than MAX_PAGECACHE_ORDER. In that
case, we fall back to the regular readahead path, where the readahead
window is determined by the BDI's sysfs file (read_ahead_kb).
Fixes: 4687fdbb805a ("mm/filemap: Support VM_HUGEPAGE for file mappings")
Cc: stable@kernel.org # v5.18+
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
mm/filemap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 876cc64aadd7..b306861d9d36 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3124,7 +3124,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	/* Use the readahead code, even if readahead is disabled */
-	if (vm_flags & VM_HUGEPAGE) {
+	if ((vm_flags & VM_HUGEPAGE) && HPAGE_PMD_ORDER <= MAX_PAGECACHE_ORDER) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 		ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
 		ra->size = HPAGE_PMD_NR;
--
2.45.1
Thread overview: 20+ messages
2024-06-25 9:06 [PATCH 0/4] mm/filemap: Limit page cache size to that supported by xarray Gavin Shan
2024-06-25 9:06 ` [PATCH 1/4] mm/filemap: Make MAX_PAGECACHE_ORDER acceptable to xarray Gavin Shan
2024-06-25 18:43 ` David Hildenbrand
2024-06-25 9:06 ` Gavin Shan [this message]
2024-06-25 18:44 ` [PATCH 2/4] mm/filemap: Skip to allocate PMD-sized folios if needed David Hildenbrand
2024-06-25 9:06 ` [PATCH 3/4] mm/readahead: Limit page cache size in page_cache_ra_order() Gavin Shan
2024-06-25 18:45 ` David Hildenbrand
2024-06-26 0:48 ` Gavin Shan
2024-06-25 9:06 ` [PATCH 4/4] mm/shmem: Disable PMD-sized page cache if needed Gavin Shan
2024-06-25 18:50 ` David Hildenbrand
2024-06-26 8:24 ` Ryan Roberts
2024-06-25 18:37 ` [PATCH 0/4] mm/filemap: Limit page cache size to that supported by xarray Andrew Morton
2024-06-25 18:51 ` David Hildenbrand
2024-06-25 18:58 ` Andrew Morton
2024-06-25 19:05 ` David Hildenbrand
2024-06-26 0:37 ` Gavin Shan
2024-06-26 20:38 ` Andrew Morton
2024-06-26 23:05 ` Gavin Shan
2024-06-26 20:54 ` Matthew Wilcox
2024-06-26 23:48 ` Gavin Shan