public inbox for linux-xfs@vger.kernel.org
From: "Nirjhar Roy (IBM)" <nirjhar.roy.lists@gmail.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: linux-xfs@vger.kernel.org, ritesh.list@gmail.com,
	ojaswin@linux.ibm.com, hch@infradead.org
Subject: Re: [PATCH v1] xfs: Fix rgcount/rgsize value reported in XFS_IOC_FSGEOMETRY ioctl
Date: Tue, 16 Dec 2025 13:20:19 +0530	[thread overview]
Message-ID: <a653845a-6ae4-4506-a449-95b348e15e70@gmail.com> (raw)
In-Reply-To: <20251209155939.GV89472@frogsfrogsfrogs>


On 12/9/25 21:29, Darrick J. Wong wrote:
> On Tue, Dec 09, 2025 at 03:53:34PM +0530, Nirjhar Roy (IBM) wrote:
>> On 12/9/25 12:20, Darrick J. Wong wrote:
>>> On Tue, Dec 09, 2025 at 11:05:21AM +0530, Nirjhar Roy (IBM) wrote:
>>>> On 12/8/25 23:10, Darrick J. Wong wrote:
>>>>> On Mon, Dec 08, 2025 at 08:46:11PM +0530, Nirjhar Roy (IBM) wrote:
>>>>>> With mkfs.xfs -m dir=0, i.e., with XFS_SB_FEAT_INCOMPAT_METADIR
>>>>>> disabled, the number of realtime groups should be reported as 1 and
>>>>>> its size should be equal to the total number of realtime
>>>>>> extents, since the entire realtime filesystem has only 1
>>>>>> realtime group.
>>>>> No.  This (pre-metadir realtime having one group encompassing the entire
>>>>> rt volume) is an implementation detail, not a property of the filesystem
>>>>> geometry.
>>>>>
>>>>> Or put another way: a metadir rt filesystem with one rtgroup that covers
>>>>> the entire rt device is different from a pre-metadir rt filesystem.
>>>>> xfs_info should present that distinction to userspace, particularly
>>>>> since xfs_scrub cares about that difference.
>>>> Okay, got it. A quick question:
>>>>
>>>> A metadir rt filesystem will have 1 bitmap/summary file per rt AG, right?
>>> Per rtgroup, but yes.
>>>
>>>> If yes, then shouldn't functions like xfs_rtx_to_rbmblock(mp, rtx)
>>>> return the block offset within the bitmap file of the rt AG where
>>>> rtx belongs?
>>> xfs_rtx_to_rbmblock is an unfortunate function.  Given an extent number
>>> within an rtgroup, it tells you the corresponding block number within
>>> that rtgroup's bitmap file.  Yes, that's confusing because xfs_rtxnum_t
>>> historically denotes an extent number anywhere on the rt volume.
>>>
>>> IOWs, it *should* be an xfs_rgxnum_t (group extent number), which could
>> So the current XFS code with metadir enabled, calls xfs_rtx_to_rbmblock() in
>> such a way that the extent number passed to the function is relative to the
>> AG and not an absolute extent number, am I right?
> Correct.

Okay. A couple of questions regarding realtime fs growth:

Q1. Looking at the loop in "xfs_growfs_rt()"

     for (rgno = old_rgcount; rgno < new_rgcount; rgno++) {
         xfs_rtbxlen_t    rextents = div_u64(in->newblocks, in->extsize);

         error = xfs_rtgroup_alloc(mp, rgno, new_rgcount, rextents);
         if (error)
             goto out_unlock;

         error = xfs_growfs_rtg(mp, rgno, in->newblocks, delta_rtb,
                        in->extsize);
         if (error) {
             struct xfs_rtgroup    *rtg;

             rtg = xfs_rtgroup_grab(mp, rgno);
             if (!WARN_ON_ONCE(!rtg)) {
                 xfs_rtunmount_rtg(rtg);
                 xfs_rtgroup_rele(rtg);
                 xfs_rtgroup_free(mp, rgno);
             }
             break;
         }

     }

So are we supporting partial growth success? I.e., if we try to grow 
from 4 rtgs to 8 rtgs and fail (let's say) after the 6th rtg, the new 
realtime fs size will be 6 rtgs, right? I am asking because I don't 
see a rollback of the previously allocated rtgs in the second error 
path, i.e., after the call to xfs_growfs_rtg(mp, rgno, in->newblocks, 
delta_rtb, in->extsize);


Q2. In the function xfs_rtcopy_summary(), shouldn't the last line be 
"return error" instead of "return 0"? If yes, then I will send a patch 
fixing this.

--NR


>
> --D
>
>>> be a u32 quantity EXCEPT there's a stupid corner case: pre-metadir rt
>>> volumes being treated as if they have one huge group.
>>>
>>> It's theoretically possible for the "single" rtgroup of a pre-metadir rt
>>> volume to have more than 2^32 blocks.  You're unlikely to find one in
>>> practice because (a) old kernels screw up some of the computations and
>>> explode, and (b) lack of sharding means the performance is terrible.
>>>
>>> However, we don't want to create copy-pasted twins of the functions so
>>> we took a dumb shortcut and made xfs_rtx_to_rbmblock take xfs_rtxnum_t.
>>> Were someone to make a Rust XFS, they really ought to define separate
>>> types for each distinct geometry usage, and define From traits to go
>>> from one to the other.  Then our typesafety nightmare will be over. ;)
>> Okay, got it.
>>>> Right now, looking at the definition of
>>>> xfs_rtx_to_rbmblock() it looks like it calculates the offset as if there is
>>>> only 1 global bitmap file?
>>> Right.
>> Okay, thank you so much for the explanation.
>>
>> --NR
>>
>>> --D
>>>
>>>> --NR
>>>>
>>>>> --D
>>>>>
>>>>>> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
>>>>>> ---
>>>>>>     fs/xfs/libxfs/xfs_sb.c | 8 +++-----
>>>>>>     1 file changed, 3 insertions(+), 5 deletions(-)
>>>>>>
>>>>>> diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
>>>>>> index cdd16dd805d7..989553e7ec02 100644
>>>>>> --- a/fs/xfs/libxfs/xfs_sb.c
>>>>>> +++ b/fs/xfs/libxfs/xfs_sb.c
>>>>>> @@ -875,7 +875,7 @@ __xfs_sb_from_disk(
>>>>>>     	} else {
>>>>>>     		to->sb_metadirino = NULLFSINO;
>>>>>>     		to->sb_rgcount = 1;
>>>>>> -		to->sb_rgextents = 0;
>>>>>> +		to->sb_rgextents = to->sb_rextents;
>>>>>>     	}
>>>>>>     	if (to->sb_features_incompat & XFS_SB_FEAT_INCOMPAT_ZONED) {
>>>>>> @@ -1586,10 +1586,8 @@ xfs_fs_geometry(
>>>>>>     	geo->version = XFS_FSOP_GEOM_VERSION_V5;
>>>>>> -	if (xfs_has_rtgroups(mp)) {
>>>>>> -		geo->rgcount = sbp->sb_rgcount;
>>>>>> -		geo->rgextents = sbp->sb_rgextents;
>>>>>> -	}
>>>>>> +	geo->rgcount = sbp->sb_rgcount;
>>>>>> +	geo->rgextents = sbp->sb_rgextents;
>>>>>>     	if (xfs_has_zoned(mp)) {
>>>>>>     		geo->rtstart = sbp->sb_rtstart;
>>>>>>     		geo->rtreserved = sbp->sb_rtreserved;
>>>>>> -- 
>>>>>> 2.43.5
>>>>>>
>>>>>>
>>>> -- 
>>>> Nirjhar Roy
>>>> Linux Kernel Developer
>>>> IBM, Bangalore
>>>>
>> -- 
>> Nirjhar Roy
>> Linux Kernel Developer
>> IBM, Bangalore
>>
>>
-- 
Nirjhar Roy
Linux Kernel Developer
IBM, Bangalore


Thread overview: 9+ messages
2025-12-08 15:16 [PATCH v1] xfs: Fix rgcount/rgsize value reported in XFS_IOC_FSGEOMETRY ioctl Nirjhar Roy (IBM)
2025-12-08 17:40 ` Darrick J. Wong
2025-12-09  5:35   ` Nirjhar Roy (IBM)
2025-12-09  6:50     ` Darrick J. Wong
2025-12-09 10:23       ` Nirjhar Roy (IBM)
2025-12-09 15:59         ` Darrick J. Wong
2025-12-16  7:50           ` Nirjhar Roy (IBM) [this message]
2025-12-16 15:49             ` Darrick J. Wong
2025-12-17  5:42               ` Nirjhar Roy (IBM)
