public inbox for linux-kernel@vger.kernel.org
From: jane.chu@oracle.com
To: Greg KH <gregkh@linuxfoundation.org>
Cc: tj@kernel.org, linux-kernel@vger.kernel.org, jane.chu@oracle.com
Subject: Re: [PATCH] fs/kernfs: raise sb->maxbytes to MAX_LFS_FILESIZE
Date: Mon, 24 Nov 2025 09:06:21 -0800	[thread overview]
Message-ID: <cf484bab-597a-482a-bcc1-20ebcc979573@oracle.com> (raw)
In-Reply-To: <2025112410-carnivore-anemia-e6eb@gregkh>

Hi, Greg,

On 11/24/2025 8:17 AM, Greg KH wrote:
> On Tue, Nov 11, 2025 at 01:26:06PM -0700, Jane Chu wrote:
>> On an ARM64 A1 system, it's possible to have physical memory span
>> up to the 64T boundary, like below
>>
>> $ lsmem -b -r -n -o range,size
>> 0x0000000080000000-0x00000000bfffffff 1073741824
>> 0x0000080000000000-0x000008007fffffff 2147483648
>> 0x00000800c0000000-0x0000087fffffffff 546534588416
>> 0x0000400000000000-0x00004000bfffffff 3221225472
>> 0x0000400100000000-0x0000407fffffffff 545460846592
>>
>> So it's time to extend /sys/kernel/mm/page_idle/bitmap to be able
>> to account for >2G number of pages, by raising the kernfs file size
>> limit.
> 
> Wait, we are having sysfs files that are bigger than >2G?  Which files
> exactly?

This file: /sys/kernel/mm/page_idle/bitmap, which tracks idle pages,
1 bit per page.

Because of the memory layout above, even though the system has well
under 64 TiB of memory, we still need to be able to seek beyond the
2 GiB offset in the /sys/kernel/mm/page_idle/bitmap file.
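
As a rough back-of-the-envelope check (assuming 4 KiB pages, one bit
per page frame):

   highest PFN at the 64 TiB boundary:  64 TiB / 4 KiB = 2^34
   byte offset of that PFN's bit:       2^34 / 8       = 2^31 = 2 GiB

so reads covering the top of that range land right at the old 2 GiB
limit.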

without fix:
--------------
At the 2 GiB offset:
$ sudo dd if=/sys/kernel/mm/page_idle/bitmap of=/dev/null bs=8 \
      skip=$((2*1024*1024*1024/8)) count=1
dd: /sys/kernel/mm/page_idle/bitmap: cannot skip: Invalid argument  <--
0+0 records in
0+0 records out
0 bytes copied, 0.00017564 s, 0.0 kB/s  <--

with fix:
------------
At the 2 GiB offset:
$ sudo dd if=/sys/kernel/mm/page_idle/bitmap of=/dev/null bs=8 \
      skip=$((2*1024*1024*1024/8)) count=1
dd: /sys/kernel/mm/page_idle/bitmap: cannot skip to specified offset  <-- harmless, ignore
1+0 records in
1+0 records out
8 bytes copied, 0.000165122 s, 48.4 kB/s  <--
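
More generally, the 8-byte word covering an arbitrary PFN can be read
the same way; a minimal sketch, with a purely illustrative PFN value
and again assuming 4 KiB pages:

$ pfn=0x400000000                  # hypothetical PFN at the 64 TiB boundary
$ sudo dd if=/sys/kernel/mm/page_idle/bitmap of=/dev/null bs=8 \
      skip=$(( pfn / 64 )) count=1     # 1 bit per page, so 64 pages per 8-byte word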

thanks,
-jane

> 
>> Signed-off-by: Jane Chu <jane.chu@oracle.com>
>> ---
>>   fs/kernfs/mount.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
>> index 76eaf64b9d9e..3ac52e141766 100644
>> --- a/fs/kernfs/mount.c
>> +++ b/fs/kernfs/mount.c
>> @@ -298,6 +298,7 @@ static int kernfs_fill_super(struct super_block *sb, struct kernfs_fs_context *k
>>   	if (info->root->flags & KERNFS_ROOT_SUPPORT_EXPORTOP)
>>   		sb->s_export_op = &kernfs_export_ops;
>>   	sb->s_time_gran = 1;
>> +	sb->s_maxbytes  = MAX_LFS_FILESIZE;
> 
> What is the default setting for s_maxbytes today?
> 
> thanks,
> 
> greg k-h


Thread overview: 5+ messages
2025-11-11 20:26 [PATCH] fs/kernfs: raise sb->maxbytes to MAX_LFS_FILESIZE Jane Chu
2025-11-24 16:17 ` Greg KH
2025-11-24 17:06   ` jane.chu [this message]
2025-11-24 17:27     ` Greg KH
2025-11-24 17:54       ` jane.chu
