* [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure
@ 2025-11-19 7:38 Mehdi Ben Hadj Khelifa
2025-11-19 14:14 ` Christian Brauner
2025-11-19 19:58 ` Viacheslav Dubeyko
0 siblings, 2 replies; 28+ messages in thread
From: Mehdi Ben Hadj Khelifa @ 2025-11-19 7:38 UTC (permalink / raw)
To: slava, glaubitz, frank.li, brauner, jack, viro
Cc: linux-fsdevel, linux-kernel, skhan, david.hunter.linux, khalid,
linux-kernel-mentees, Mehdi Ben Hadj Khelifa,
syzbot+ad45f827c88778ff7df6
The regression introduced by commit aca740cecbe5 ("fs: open block device
after superblock creation") allows setup_bdev_super() to fail after a new
superblock has been allocated by sget_fc(), but before hfs_fill_super()
takes ownership of the filesystem-specific s_fs_info data.
In that case, hfs_put_super() and the failure paths of hfs_fill_super()
are never reached, leaving the HFS mdb structures attached to s->s_fs_info
unreleased. The default kill_block_super() teardown also does not free
HFS-specific resources, resulting in a memory leak on early mount failure.
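To make the window concrete, the relevant mount path looks roughly like
this (a paraphrase of get_tree_bdev() in fs/super.c, trimmed and not
verbatim):

	sb = sget_fc(fc, test_bdev_super_fc, set_bdev_super_fc);
	/* sb->s_fs_info now holds the hfs_sb_info allocated in
	 * hfs_init_fs_context(); fc->s_fs_info has been cleared. */
	if (!sb->s_root) {
		err = setup_bdev_super(sb, fc->sb_flags, fc);
		if (!err)
			err = fill_super(sb, fc); /* hfs_fill_super() */
		if (err) {
			/* If setup_bdev_super() failed, hfs_fill_super()
			 * never ran, so its "bail" cleanup cannot run
			 * either; ->kill_sb() == kill_block_super() does
			 * not free sb->s_fs_info, which leaks here. */
			deactivate_locked_super(sb);
			return err;
		}
	}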
Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
hfs_put_super() and the hfs_fill_super() failure path into a dedicated
hfs_kill_sb() implementation. This ensures that both normal unmount and
early teardown paths (including setup_bdev_super() failure) correctly
release HFS metadata.
This also preserves the intended layering: generic_shutdown_super()
handles VFS-side cleanup, while HFS filesystem state is fully destroyed
afterwards.
Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
---
ChangeLog:
Changes from v1:
-Changed the patch direction to focus on HFS-specific changes, as
suggested by Al Viro
Link: https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/
Note: This patch may need more testing. So far I have only run the kernel
selftests (no regressions), checked the dmesg output (no regressions), run
the reproducer (no bug triggered), and had syzbot test the patch.
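For reference, one hypothetical way to exercise the failing path without
syzbot is to make setup_bdev_super() fail by pointing a second mount at a
block device that is already claimed exclusively, then scanning with
kmemleak (assumes CONFIG_DEBUG_KMEMLEAK; the actual syzbot reproducer may
differ):

mount -t ext4 /dev/loop0 /mnt/a  # loop0 is now claimed exclusively
mount -t hfs /dev/loop0 /mnt/b   # sget_fc() allocates a new sb, then
                                 # setup_bdev_super() fails with -EBUSY
echo scan > /sys/kernel/debug/kmemleak
cat /sys/kernel/debug/kmemleak   # the orphaned s_fs_info shows up here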
fs/hfs/super.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/fs/hfs/super.c b/fs/hfs/super.c
index 47f50fa555a4..06e1c25e47dc 100644
--- a/fs/hfs/super.c
+++ b/fs/hfs/super.c
@@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
 {
 	cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
 	hfs_mdb_close(sb);
-	/* release the MDB's resources */
-	hfs_mdb_put(sb);
 }
 
 static void flush_mdb(struct work_struct *work)
@@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
 bail_no_root:
 	pr_err("get root inode failed\n");
 bail:
-	hfs_mdb_put(sb);
 	return res;
 }
 
@@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
 	return 0;
 }
 
+static void hfs_kill_sb(struct super_block *sb)
+{
+	generic_shutdown_super(sb);
+	hfs_mdb_put(sb);
+	if (sb->s_bdev) {
+		sync_blockdev(sb->s_bdev);
+		bdev_fput(sb->s_bdev_file);
+	}
+
+}
+
 static struct file_system_type hfs_fs_type = {
 	.owner = THIS_MODULE,
 	.name = "hfs",
-	.kill_sb = kill_block_super,
+	.kill_sb = hfs_kill_sb,
 	.fs_flags = FS_REQUIRES_DEV,
 	.init_fs_context = hfs_init_fs_context,
 };
--
2.52.0
^ permalink raw reply related [flat|nested] 28+ messages in thread

* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure
2025-11-19 7:38 [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure Mehdi Ben Hadj Khelifa
@ 2025-11-19 14:14 ` Christian Brauner
2025-11-19 15:27 ` Mehdi Ben Hadj Khelifa
2025-11-19 19:58 ` Viacheslav Dubeyko
1 sibling, 1 reply; 28+ messages in thread
From: Christian Brauner @ 2025-11-19 14:14 UTC (permalink / raw)
To: Mehdi Ben Hadj Khelifa
Cc: slava, glaubitz, frank.li, jack, viro, linux-fsdevel, linux-kernel,
skhan, david.hunter.linux, khalid, linux-kernel-mentees,
syzbot+ad45f827c88778ff7df6

On Wed, Nov 19, 2025 at 08:38:20AM +0100, Mehdi Ben Hadj Khelifa wrote:
> The regression introduced by commit aca740cecbe5 ("fs: open block device
> after superblock creation") allows setup_bdev_super() to fail after a new
> superblock has been allocated by sget_fc(), but before hfs_fill_super()
> takes ownership of the filesystem-specific s_fs_info data.
>
> In that case, hfs_put_super() and the failure paths of hfs_fill_super()
> are never reached, leaving the HFS mdb structures attached to s->s_fs_info
> unreleased. The default kill_block_super() teardown also does not free
> HFS-specific resources, resulting in a memory leak on early mount failure.
>
> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
> hfs_put_super() and the hfs_fill_super() failure path into a dedicated
> hfs_kill_sb() implementation. This ensures that both normal unmount and
> early teardown paths (including setup_bdev_super() failure) correctly
> release HFS metadata.
>
> This also preserves the intended layering: generic_shutdown_super()
> handles VFS-side cleanup, while HFS filesystem state is fully destroyed
> afterwards.
>
> Fixes: aca740cecbe5 ("fs: open block device after superblock creation")

I don't think that's correct.

The bug was introduced when hfs was converted to the new mount api, as
that was the point where the sb->s_fs_info allocation was moved from
fill_super() to init_fs_context(), in ffcd06b6d13b ("hfs: convert hfs to
use the new mount api"), which came way after that commit.

I also think this isn't the best way to do it. There's no need to
open-code kill_block_super() at all.

That whole hfs_mdb_get() calling hfs_mdb_put() is completely backwards,
and the cleanup labels make no sense - they predate anything you did, of
course. It should not call hfs_mdb_put(). Its only caller is
fill_super(), which already cleans everything up. So really
hfs_kill_super() should just free the allocation, and that freeing
should be moved out of hfs_mdb_put().

And that solution is already something I mentioned in my earlier review.

Let me test a patch.

^ permalink raw reply [flat|nested] 28+ messages in thread
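For illustration, the shape Christian describes would be roughly the
following (a sketch only; the hfs_kill_super() name and exact placement
are assumptions here, and his actual patch is linked later in the
thread):

static void hfs_kill_super(struct super_block *sb)
{
	/* Generic VFS and block-device teardown, as before. */
	kill_block_super(sb);
	/*
	 * Free the hfs_sb_info allocated in hfs_init_fs_context(). This
	 * runs on every teardown path, including the one where
	 * setup_bdev_super() failed and hfs_fill_super() never ran, so
	 * the container no longer leaks. The mdb contents themselves are
	 * still released by fill_super()'s error paths or put_super().
	 */
	kfree(sb->s_fs_info);
	sb->s_fs_info = NULL;
}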
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure
2025-11-19 14:14 ` Christian Brauner
@ 2025-11-19 15:27 ` Mehdi Ben Hadj Khelifa
0 siblings, 0 replies; 28+ messages in thread
From: Mehdi Ben Hadj Khelifa @ 2025-11-19 15:27 UTC (permalink / raw)
To: Christian Brauner
Cc: slava, glaubitz, frank.li, jack, viro, linux-fsdevel, linux-kernel,
skhan, david.hunter.linux, khalid, linux-kernel-mentees,
syzbot+ad45f827c88778ff7df6

On 11/19/25 3:14 PM, Christian Brauner wrote:
> On Wed, Nov 19, 2025 at 08:38:20AM +0100, Mehdi Ben Hadj Khelifa wrote:
>> The regression introduced by commit aca740cecbe5 ("fs: open block device
>> after superblock creation") allows setup_bdev_super() to fail after a new
>> superblock has been allocated by sget_fc(), but before hfs_fill_super()
>> takes ownership of the filesystem-specific s_fs_info data.
>>
>> In that case, hfs_put_super() and the failure paths of hfs_fill_super()
>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info
>> unreleased. The default kill_block_super() teardown also does not free
>> HFS-specific resources, resulting in a memory leak on early mount failure.
>>
>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated
>> hfs_kill_sb() implementation. This ensures that both normal unmount and
>> early teardown paths (including setup_bdev_super() failure) correctly
>> release HFS metadata.
>>
>> This also preserves the intended layering: generic_shutdown_super()
>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed
>> afterwards.
>>
>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
>
> I don't think that's correct.
>
> The bug was introduced when hfs was converted to the new mount api, as
> that was the point where the sb->s_fs_info allocation was moved from
> fill_super() to init_fs_context(), in ffcd06b6d13b ("hfs: convert hfs to
> use the new mount api"), which came way after that commit.

Ah, that is definitely the cause then: the allocation comes from
init_fs_context(), and the leaking error path never reaches
fill_super(), which is where the old code allocated the s_fs_info
structure that is being leaked. So that is indeed where the bug was
introduced, as you mentioned. Thanks for pointing that out!

> I also think this isn't the best way to do it. There's no need to
> open-code kill_block_super() at all.

I did think of calling kill_block_super() in hfs_kill_sb() instead of
open-coding it, but I went with what Al Viro suggested...

> That whole hfs_mdb_get() calling hfs_mdb_put() is completely backwards,
> and the cleanup labels make no sense - they predate anything you did, of
> course. It should not call hfs_mdb_put(). Its only caller is
> fill_super(), which already cleans everything up. So really
> hfs_kill_super() should just free the allocation, and that freeing
> should be moved out of hfs_mdb_put().

I also thought of such a solution, to make the deallocation of the
s_fs_info memory clearer and to separate it from unloading/freeing its
contents.

> And that solution is already something I mentioned in my earlier review.

I thought you had suggested the same thing as Al Viro, in the second
point of your earlier review: "or add a wrapper around
kill_block_super() for hfs and free it after ->kill_sb() has run."

> Let me test a patch.

I just checked your patch, and it seems to be what I was thinking of,
except for the hfs_mdb_get() parts, which I didn't know about. But since
hfs_kill_super() is now implemented to free s_fs_info itself instead of
just deferring to kill_block_super(), it should fix the issue.

I downloaded your patch and tested it by running the local reproducer,
booting the kernel, and running the selftests before and after, with no
regressions seen. Does that warrant a Tested-by tag?

Thanks for your insights, Christian! Tell me if I should send a
follow-up for my patch.

Best Regards,
Mehdi Ben Hadj Khelifa

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-19 7:38 [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure Mehdi Ben Hadj Khelifa 2025-11-19 14:14 ` Christian Brauner @ 2025-11-19 19:58 ` Viacheslav Dubeyko 2025-11-19 22:21 ` Mehdi Ben Hadj Khelifa ` (2 more replies) 1 sibling, 3 replies; 28+ messages in thread From: Viacheslav Dubeyko @ 2025-11-19 19:58 UTC (permalink / raw) To: glaubitz@physik.fu-berlin.de, frank.li@vivo.com, slava@dubeyko.com, brauner@kernel.org, mehdi.benhadjkhelifa@gmail.com, viro@zeniv.linux.org.uk, jack@suse.cz Cc: khalid@kernel.org, linux-kernel-mentees@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, skhan@linuxfoundation.org, david.hunter.linux@gmail.com, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: > The regression introduced by commit aca740cecbe5 ("fs: open block device > after superblock creation") allows setup_bdev_super() to fail after a new > superblock has been allocated by sget_fc(), but before hfs_fill_super() > takes ownership of the filesystem-specific s_fs_info data. > > In that case, hfs_put_super() and the failure paths of hfs_fill_super() > are never reached, leaving the HFS mdb structures attached to s->s_fs_info > unreleased.The default kill_block_super() teardown also does not free > HFS-specific resources, resulting in a memory leak on early mount failure. > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from > hfs_put_super() and the hfs_fill_super() failure path into a dedicated > hfs_kill_sb() implementation. This ensures that both normal unmount and > early teardown paths (including setup_bdev_super() failure) correctly > release HFS metadata. > > This also preserves the intended layering: generic_shutdown_super() > handles VFS-side cleanup, while HFS filesystem state is fully destroyed > afterwards. > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation") > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > Suggested-by: Al Viro <viro@zeniv.linux.org.uk> > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> > --- > ChangeLog: > > Changes from v1: > > -Changed the patch direction to focus on hfs changes specifically as > suggested by al viro > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ > > Note:This patch might need some more testing as I only did run selftests > with no regression, check dmesg output for no regression, run reproducer > with no bug and test it with syzbot as well. Have you run xfstests for the patch? Unfortunately, we have multiple xfstests failures for HFS now. And you can check the list of known issues here [1]. The main point of such run of xfstests is to check that maybe some issue(s) could be fixed by the patch. And, more important that you don't introduce new issues. 
;) > > fs/hfs/super.c | 16 ++++++++++++---- > 1 file changed, 12 insertions(+), 4 deletions(-) > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c > index 47f50fa555a4..06e1c25e47dc 100644 > --- a/fs/hfs/super.c > +++ b/fs/hfs/super.c > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) > { > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); > hfs_mdb_close(sb); > - /* release the MDB's resources */ > - hfs_mdb_put(sb); > } > > static void flush_mdb(struct work_struct *work) > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) > bail_no_root: > pr_err("get root inode failed\n"); > bail: > - hfs_mdb_put(sb); > return res; > } > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) > return 0; > } > > +static void hfs_kill_sb(struct super_block *sb) > +{ > + generic_shutdown_super(sb); > + hfs_mdb_put(sb); > + if (sb->s_bdev) { > + sync_blockdev(sb->s_bdev); > + bdev_fput(sb->s_bdev_file); > + } > + > +} > + > static struct file_system_type hfs_fs_type = { > .owner = THIS_MODULE, > .name = "hfs", > - .kill_sb = kill_block_super, It looks like we have the same issue for the case of HFS+ [2]. Could you please double check that HFS+ should be fixed too? Thanks, Slava. > + .kill_sb = hfs_kill_sb, > .fs_flags = FS_REQUIRES_DEV, > .init_fs_context = hfs_init_fs_context, > }; [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694 ^ permalink raw reply [flat|nested] 28+ messages in thread
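For anyone reproducing Slava's setup, a minimal xfstests local.config
for an HFS run might look like this (loop-device numbers and mount
points are illustrative; note the FSTYP line, whose absence comes up
later in the thread):

export TEST_DEV=/dev/loop0
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/loop1
export SCRATCH_MNT=/mnt/scratch
export FSTYP=hfs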
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-19 19:58 ` Viacheslav Dubeyko @ 2025-11-19 22:21 ` Mehdi Ben Hadj Khelifa 2025-11-19 22:24 ` Mehdi Ben Hadj Khelifa 2025-11-21 19:44 ` Mehdi Ben Hadj Khelifa 2025-11-26 13:48 ` Christian Brauner 2 siblings, 1 reply; 28+ messages in thread From: Mehdi Ben Hadj Khelifa @ 2025-11-19 22:21 UTC (permalink / raw) To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de, frank.li@vivo.com, slava@dubeyko.com, brauner@kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz Cc: khalid@kernel.org, linux-kernel-mentees@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, skhan@linuxfoundation.org, david.hunter.linux@gmail.com, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >> The regression introduced by commit aca740cecbe5 ("fs: open block device >> after superblock creation") allows setup_bdev_super() to fail after a new >> superblock has been allocated by sget_fc(), but before hfs_fill_super() >> takes ownership of the filesystem-specific s_fs_info data. >> >> In that case, hfs_put_super() and the failure paths of hfs_fill_super() >> are never reached, leaving the HFS mdb structures attached to s->s_fs_info >> unreleased.The default kill_block_super() teardown also does not free >> HFS-specific resources, resulting in a memory leak on early mount failure. >> >> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from >> hfs_put_super() and the hfs_fill_super() failure path into a dedicated >> hfs_kill_sb() implementation. This ensures that both normal unmount and >> early teardown paths (including setup_bdev_super() failure) correctly >> release HFS metadata. >> >> This also preserves the intended layering: generic_shutdown_super() >> handles VFS-side cleanup, while HFS filesystem state is fully destroyed >> afterwards. >> >> Fixes: aca740cecbe5 ("fs: open block device after superblock creation") >> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 >> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >> Suggested-by: Al Viro <viro@zeniv.linux.org.uk> >> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> >> --- >> ChangeLog: >> >> Changes from v1: >> >> -Changed the patch direction to focus on hfs changes specifically as >> suggested by al viro >> >> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ >> >> Note:This patch might need some more testing as I only did run selftests >> with no regression, check dmesg output for no regression, run reproducer >> with no bug and test it with syzbot as well. > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests > failures for HFS now. And you can check the list of known issues here [1]. The > main point of such run of xfstests is to check that maybe some issue(s) could be > fixed by the patch. And, more important that you don't introduce new issues. ;) > I did not know of such tests. I will try to run them for both my patch and christian's patch[1] and report the results. 
>>
>> fs/hfs/super.c | 16 ++++++++++++----
>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>
>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c
>> index 47f50fa555a4..06e1c25e47dc 100644
>> --- a/fs/hfs/super.c
>> +++ b/fs/hfs/super.c
>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
>> {
>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
>> hfs_mdb_close(sb);
>> - /* release the MDB's resources */
>> - hfs_mdb_put(sb);
>> }
>>
>> static void flush_mdb(struct work_struct *work)
>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
>> bail_no_root:
>> pr_err("get root inode failed\n");
>> bail:
>> - hfs_mdb_put(sb);
>> return res;
>> }
>>
>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
>> return 0;
>> }
>>
>> +static void hfs_kill_sb(struct super_block *sb)
>> +{
>> + generic_shutdown_super(sb);
>> + hfs_mdb_put(sb);
>> + if (sb->s_bdev) {
>> + sync_blockdev(sb->s_bdev);
>> + bdev_fput(sb->s_bdev_file);
>> + }
>> +
>> +}
>> +
>> static struct file_system_type hfs_fs_type = {
>> .owner = THIS_MODULE,
>> .name = "hfs",
>> - .kill_sb = kill_block_super,
>
> It looks like we have the same issue for the case of HFS+ [2]. Could you please
> double check that HFS+ should be fixed too?
>
Yes, I will check it tomorrow, in addition to running xfstests, and
report my findings in response to this email. But I'm not sure whether
my solution would be the intended fix, or whether a solution similar to
what Christian did is preferred for HFS+ instead. We'll discuss it when
I send a response.
> Thanks,
> Slava.
>
Thank you for your insights, Slava!

Best Regards,
Mehdi Ben Hadj Khelifa
>> + .kill_sb = hfs_kill_sb,
>> .fs_flags = FS_REQUIRES_DEV,
>> .init_fs_context = hfs_init_fs_context,
>> };
>
> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-19 22:21 ` Mehdi Ben Hadj Khelifa @ 2025-11-19 22:24 ` Mehdi Ben Hadj Khelifa 0 siblings, 0 replies; 28+ messages in thread From: Mehdi Ben Hadj Khelifa @ 2025-11-19 22:24 UTC (permalink / raw) To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de, frank.li@vivo.com, slava@dubeyko.com, brauner@kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz Cc: khalid@kernel.org, linux-kernel-mentees@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, skhan@linuxfoundation.org, david.hunter.linux@gmail.com, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com On 11/19/25 11:21 PM, Mehdi Ben Hadj Khelifa wrote: > On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: >> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >>> The regression introduced by commit aca740cecbe5 ("fs: open block device >>> after superblock creation") allows setup_bdev_super() to fail after a >>> new >>> superblock has been allocated by sget_fc(), but before hfs_fill_super() >>> takes ownership of the filesystem-specific s_fs_info data. >>> >>> In that case, hfs_put_super() and the failure paths of hfs_fill_super() >>> are never reached, leaving the HFS mdb structures attached to s- >>> >s_fs_info >>> unreleased.The default kill_block_super() teardown also does not free >>> HFS-specific resources, resulting in a memory leak on early mount >>> failure. >>> >>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from >>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated >>> hfs_kill_sb() implementation. This ensures that both normal unmount and >>> early teardown paths (including setup_bdev_super() failure) correctly >>> release HFS metadata. >>> >>> This also preserves the intended layering: generic_shutdown_super() >>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed >>> afterwards. >>> >>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation") >>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 >>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk> >>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> >>> --- >>> ChangeLog: >>> >>> Changes from v1: >>> >>> -Changed the patch direction to focus on hfs changes specifically as >>> suggested by al viro >>> >>> Link:https://lore.kernel.org/all/20251114165255.101361-1- >>> mehdi.benhadjkhelifa@gmail.com/ >>> >>> Note:This patch might need some more testing as I only did run selftests >>> with no regression, check dmesg output for no regression, run reproducer >>> with no bug and test it with syzbot as well. >> >> Have you run xfstests for the patch? Unfortunately, we have multiple >> xfstests >> failures for HFS now. And you can check the list of known issues here >> [1]. The >> main point of such run of xfstests is to check that maybe some >> issue(s) could be >> fixed by the patch. And, more important that you don't introduce new >> issues. ;) >> > I did not know of such tests. I will try to run them for both my patch > and christian's patch[1] and report the results. Forgot to reference the mentioned link "[1]". Sorry for the noise. 
[1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27 >>> >>> fs/hfs/super.c | 16 ++++++++++++---- >>> 1 file changed, 12 insertions(+), 4 deletions(-) >>> >>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c >>> index 47f50fa555a4..06e1c25e47dc 100644 >>> --- a/fs/hfs/super.c >>> +++ b/fs/hfs/super.c >>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) >>> { >>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); >>> hfs_mdb_close(sb); >>> - /* release the MDB's resources */ >>> - hfs_mdb_put(sb); >>> } >>> static void flush_mdb(struct work_struct *work) >>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, >>> struct fs_context *fc) >>> bail_no_root: >>> pr_err("get root inode failed\n"); >>> bail: >>> - hfs_mdb_put(sb); >>> return res; >>> } >>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct >>> fs_context *fc) >>> return 0; >>> } >>> +static void hfs_kill_sb(struct super_block *sb) >>> +{ >>> + generic_shutdown_super(sb); >>> + hfs_mdb_put(sb); >>> + if (sb->s_bdev) { >>> + sync_blockdev(sb->s_bdev); >>> + bdev_fput(sb->s_bdev_file); >>> + } >>> + >>> +} >>> + >>> static struct file_system_type hfs_fs_type = { >>> .owner = THIS_MODULE, >>> .name = "hfs", >>> - .kill_sb = kill_block_super, >> >> It looks like we have the same issue for the case of HFS+ [2]. Could >> you please >> double check that HFS+ should be fixed too? >> > Yes, I will check it tomorrow in addition to running xfstests and report > my findings in response to this email. But I'm not sure if my solution > would be the attended fix or a similar solution to what christian did is > preferred instead for HFS+. We'll discuss it when I send a response. >> Thanks, >> Slava. >> > Thank you for your insights Slava! > > Best Regards, > Mehdi Ben Hadj Khelifa >>> + .kill_sb = hfs_kill_sb, >>> .fs_flags = FS_REQUIRES_DEV, >>> .init_fs_context = hfs_init_fs_context, >>> }; >> >> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues >> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/ >> super.c#L694 > ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-19 19:58 ` Viacheslav Dubeyko 2025-11-19 22:21 ` Mehdi Ben Hadj Khelifa @ 2025-11-21 19:44 ` Mehdi Ben Hadj Khelifa 2025-11-21 21:15 ` Viacheslav Dubeyko 2025-11-26 13:48 ` Christian Brauner 2 siblings, 1 reply; 28+ messages in thread From: Mehdi Ben Hadj Khelifa @ 2025-11-21 19:44 UTC (permalink / raw) To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de, frank.li@vivo.com, slava@dubeyko.com, brauner@kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz Cc: khalid@kernel.org, linux-kernel-mentees@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, skhan@linuxfoundation.org, david.hunter.linux@gmail.com, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >> The regression introduced by commit aca740cecbe5 ("fs: open block device >> after superblock creation") allows setup_bdev_super() to fail after a new >> superblock has been allocated by sget_fc(), but before hfs_fill_super() >> takes ownership of the filesystem-specific s_fs_info data. >> >> In that case, hfs_put_super() and the failure paths of hfs_fill_super() >> are never reached, leaving the HFS mdb structures attached to s->s_fs_info >> unreleased.The default kill_block_super() teardown also does not free >> HFS-specific resources, resulting in a memory leak on early mount failure. >> >> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from >> hfs_put_super() and the hfs_fill_super() failure path into a dedicated >> hfs_kill_sb() implementation. This ensures that both normal unmount and >> early teardown paths (including setup_bdev_super() failure) correctly >> release HFS metadata. >> >> This also preserves the intended layering: generic_shutdown_super() >> handles VFS-side cleanup, while HFS filesystem state is fully destroyed >> afterwards. >> >> Fixes: aca740cecbe5 ("fs: open block device after superblock creation") >> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 >> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >> Suggested-by: Al Viro <viro@zeniv.linux.org.uk> >> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> >> --- >> ChangeLog: >> >> Changes from v1: >> >> -Changed the patch direction to focus on hfs changes specifically as >> suggested by al viro >> >> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ >> >> Note:This patch might need some more testing as I only did run selftests >> with no regression, check dmesg output for no regression, run reproducer >> with no bug and test it with syzbot as well. > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests > failures for HFS now. And you can check the list of known issues here [1]. The > main point of such run of xfstests is to check that maybe some issue(s) could be > fixed by the patch. And, more important that you don't introduce new issues. ;) > I have tried to run the xfstests with a kernel built with my patch and also without my patch for TEST and SCRATCH devices and in both cases my system crashes in running the generic/631 test.Still unsure of the cause. 
For more context, I'm running the tests on the 6.18-rc5 version of the
kernel, and the device and environment setup is as follows:

For device creation and mounting (also tried it with dd, with the same
results):
fallocate -l 10G test.img
fallocate -l 10G scratch.img
sudo mkfs.hfs test.img
sudo losetup /dev/loop0 ./test.img
sudo losetup /dev/loop1 ./scratch.img
sudo mkdir -p /mnt/test /mnt/scratch
sudo mount /dev/loop0 /mnt/test

For environment setup (local.config):
export TEST_DEV=/dev/loop0
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/loop1
export SCRATCH_MNT=/mnt/scratch

Ran the tests using: sudo ./check -g auto

If more context is needed to pin down the point of failure, or if I have
made a mistake during setup, I'm happy to receive your comments, since
this is my first time trying to run xfstests.

>>
>> fs/hfs/super.c | 16 ++++++++++++----
>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>
>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c
>> index 47f50fa555a4..06e1c25e47dc 100644
>> --- a/fs/hfs/super.c
>> +++ b/fs/hfs/super.c
>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
>> {
>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
>> hfs_mdb_close(sb);
>> - /* release the MDB's resources */
>> - hfs_mdb_put(sb);
>> }
>>
>> static void flush_mdb(struct work_struct *work)
>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
>> bail_no_root:
>> pr_err("get root inode failed\n");
>> bail:
>> - hfs_mdb_put(sb);
>> return res;
>> }
>>
>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
>> return 0;
>> }
>>
>> +static void hfs_kill_sb(struct super_block *sb)
>> +{
>> + generic_shutdown_super(sb);
>> + hfs_mdb_put(sb);
>> + if (sb->s_bdev) {
>> + sync_blockdev(sb->s_bdev);
>> + bdev_fput(sb->s_bdev_file);
>> + }
>> +
>> +}
>> +
>> static struct file_system_type hfs_fs_type = {
>> .owner = THIS_MODULE,
>> .name = "hfs",
>> - .kill_sb = kill_block_super,
>
> It looks like we have the same issue for the case of HFS+ [2]. Could you please
> double check that HFS+ should be fixed too?
>
I have checked the same error path, and it seems that hfsplus_sb_info is
not freed there (I could provide the exact call stack that leads to the
leak), although I didn't create or run any reproducers for this
particular filesystem type.
If you would like a patch for this issue, would something like what is
shown below be acceptable?

+static void hfsplus_kill_super(struct super_block *sb)
+{
+ struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
+
+ kill_block_super(sb);
+ kfree(sbi);
+}
+
static struct file_system_type hfsplus_fs_type = {
.owner = THIS_MODULE,
.name = "hfsplus",
- .kill_sb = kill_block_super,
+ .kill_sb = hfsplus_kill_super,
.fs_flags = FS_REQUIRES_DEV,
.init_fs_context = hfsplus_init_fs_context,
};

Please let me know if there is something to add, remove, or adjust -
assuming you are willing to accept such a patch, of course.
> Thanks,
> Slava.
>
Best Regards,
Mehdi Ben Hadj Khelifa
>> + .kill_sb = hfs_kill_sb,
>> .fs_flags = FS_REQUIRES_DEV,
>> .init_fs_context = hfs_init_fs_context,
>> };
>
> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694

^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-21 19:44 ` Mehdi Ben Hadj Khelifa @ 2025-11-21 21:15 ` Viacheslav Dubeyko 2025-11-21 22:48 ` Mehdi Ben Hadj Khelifa 0 siblings, 1 reply; 28+ messages in thread From: Viacheslav Dubeyko @ 2025-11-21 21:15 UTC (permalink / raw) To: jack@suse.cz, glaubitz@physik.fu-berlin.de, frank.li@vivo.com, slava@dubeyko.com, brauner@kernel.org, mehdi.benhadjkhelifa@gmail.com, viro@zeniv.linux.org.uk Cc: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com, linux-kernel-mentees@lists.linuxfoundation.org, khalid@kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, david.hunter.linux@gmail.com, skhan@linuxfoundation.org On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote: > On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: > > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: > > > The regression introduced by commit aca740cecbe5 ("fs: open block device > > > after superblock creation") allows setup_bdev_super() to fail after a new > > > superblock has been allocated by sget_fc(), but before hfs_fill_super() > > > takes ownership of the filesystem-specific s_fs_info data. > > > > > > In that case, hfs_put_super() and the failure paths of hfs_fill_super() > > > are never reached, leaving the HFS mdb structures attached to s->s_fs_info > > > unreleased.The default kill_block_super() teardown also does not free > > > HFS-specific resources, resulting in a memory leak on early mount failure. > > > > > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from > > > hfs_put_super() and the hfs_fill_super() failure path into a dedicated > > > hfs_kill_sb() implementation. This ensures that both normal unmount and > > > early teardown paths (including setup_bdev_super() failure) correctly > > > release HFS metadata. > > > > > > This also preserves the intended layering: generic_shutdown_super() > > > handles VFS-side cleanup, while HFS filesystem state is fully destroyed > > > afterwards. > > > > > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation") > > > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 > > > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > Suggested-by: Al Viro <viro@zeniv.linux.org.uk> > > > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> > > > --- > > > ChangeLog: > > > > > > Changes from v1: > > > > > > -Changed the patch direction to focus on hfs changes specifically as > > > suggested by al viro > > > > > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ > > > > > > Note:This patch might need some more testing as I only did run selftests > > > with no regression, check dmesg output for no regression, run reproducer > > > with no bug and test it with syzbot as well. > > > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests > > failures for HFS now. And you can check the list of known issues here [1]. The > > main point of such run of xfstests is to check that maybe some issue(s) could be > > fixed by the patch. And, more important that you don't introduce new issues. ;) > > > I have tried to run the xfstests with a kernel built with my patch and > also without my patch for TEST and SCRATCH devices and in both cases my > system crashes in running the generic/631 test.Still unsure of the > cause. 
For more context, I'm running the tests on the 6.18-rc5 version > of the kernel and the devices and the environment setup is as follows: > > For device creation and mounting(also tried it with dd and had same > results): > fallocate -l 10G test.img > fallocate -l 10G scratch.img > sudo mkfs.hfs test.img > sudo losetup /dev/loop0 ./test.img > sudo losetup /dev/loop1 ./scratch.img > sudo mkdir -p /mnt/test /mnt/scratch > sudo mount /dev/loop0 /mnt/test > > For environment setup(local.config): > export TEST_DEV=/dev/loop0 > export TEST_DIR=/mnt/test > export SCRATCH_DEV=/dev/loop1 > export SCRATCH_MNT=/mnt/scratch This is my configuration: export TEST_DEV=/dev/loop50 export TEST_DIR=/mnt/test export SCRATCH_DEV=/dev/loop51 export SCRATCH_MNT=/mnt/scratch export FSTYP=hfs Probably, you've missed FSTYP. Did you tried to run other file system at first (for example, ext4) to be sure that everything is good? > > Ran the tests using:sudo ./check -g auto > You are brave guy. :) Currently, I am trying to fix the issues for quick group: sudo ./check -g quick > If more context is needed to know the point of failure or if I have made > a mistake during setup I'm happy to receive your comments since this is > my first time trying to run xfstests. > I don't see the crash on my side. sudo ./check generic/631 FSTYP -- hfs PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025 MKFS_OPTIONS -- /dev/loop51 MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch generic/631 [not run] attr namespace trusted not supported by this filesystem type: hfs Ran: generic/631 Not run: generic/631 Passed all 1 tests This test simply is not running for HFS case. I see that HFS+ is failing for generic/631, but I don't see the crash. I am running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something dramatically. My guess that, maybe, xfstests suite is trying to run some other file system but not HFS. > > > > > > fs/hfs/super.c | 16 ++++++++++++---- > > > 1 file changed, 12 insertions(+), 4 deletions(-) > > > > > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c > > > index 47f50fa555a4..06e1c25e47dc 100644 > > > --- a/fs/hfs/super.c > > > +++ b/fs/hfs/super.c > > > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) > > > { > > > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); > > > hfs_mdb_close(sb); > > > - /* release the MDB's resources */ > > > - hfs_mdb_put(sb); > > > } > > > > > > static void flush_mdb(struct work_struct *work) > > > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) > > > bail_no_root: > > > pr_err("get root inode failed\n"); > > > bail: > > > - hfs_mdb_put(sb); > > > return res; > > > } > > > > > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) > > > return 0; > > > } > > > > > > +static void hfs_kill_sb(struct super_block *sb) > > > +{ > > > + generic_shutdown_super(sb); > > > + hfs_mdb_put(sb); > > > + if (sb->s_bdev) { > > > + sync_blockdev(sb->s_bdev); > > > + bdev_fput(sb->s_bdev_file); > > > + } > > > + > > > +} > > > + > > > static struct file_system_type hfs_fs_type = { > > > .owner = THIS_MODULE, > > > .name = "hfs", > > > - .kill_sb = kill_block_super, I've realized that if we are trying to solve the issue with pure call of kill_block_super() for the case of HFS/HFS+, then we could have the same trouble for other file systems. 
It make sense to check that we do not have likewise trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix, jfs, squashfs, freevxfs, befs. > > > > It looks like we have the same issue for the case of HFS+ [2]. Could you please > > double check that HFS+ should be fixed too? > > > I have checked the same error path and it seems that hfsplus_sb_info is > not freed in that path(I could provide the exact call stack which would > cause such a memory leak) although I didn't create or run any > reproducers for this particular filesystem type. > If you would like a patch for this issue, would something like what is > shown below be acceptable? : > > +static void hfsplus_kill_super(struct super_block *sb) > +{ > + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb); > + > + kill_block_super(sb); > + kfree(sbi); > +} > + > static struct file_system_type hfsplus_fs_type = { > .owner = THIS_MODULE, > .name = "hfsplus", > - .kill_sb = kill_block_super, > + .kill_sb = hfsplus_kill_super, > .fs_flags = FS_REQUIRES_DEV, > .init_fs_context = hfsplus_init_fs_context, > }; > > If there is something to add, remove or adjust. Please let me know in > the case of you willing accepting such a patch of course. We call hfs_mdb_put() for the case of HFS: void hfs_mdb_put(struct super_block *sb) { if (!HFS_SB(sb)) return; /* free the B-trees */ hfs_btree_close(HFS_SB(sb)->ext_tree); hfs_btree_close(HFS_SB(sb)->cat_tree); /* free the buffers holding the primary and alternate MDBs */ brelse(HFS_SB(sb)->mdb_bh); brelse(HFS_SB(sb)->alt_mdb_bh); unload_nls(HFS_SB(sb)->nls_io); unload_nls(HFS_SB(sb)->nls_disk); kfree(HFS_SB(sb)->bitmap); kfree(HFS_SB(sb)); sb->s_fs_info = NULL; } So, we need likewise course of actions for HFS+ because we have multiple pointers in superblock too: struct hfsplus_sb_info { void *s_vhdr_buf; struct hfsplus_vh *s_vhdr; void *s_backup_vhdr_buf; struct hfsplus_vh *s_backup_vhdr; struct hfs_btree *ext_tree; struct hfs_btree *cat_tree; struct hfs_btree *attr_tree; atomic_t attr_tree_state; struct inode *alloc_file; struct inode *hidden_dir; struct nls_table *nls; /* Runtime variables */ u32 blockoffset; u32 min_io_size; sector_t part_start; sector_t sect_count; int fs_shift; /* immutable data from the volume header */ u32 alloc_blksz; int alloc_blksz_shift; u32 total_blocks; u32 data_clump_blocks, rsrc_clump_blocks; /* mutable data from the volume header, protected by alloc_mutex */ u32 free_blocks; struct mutex alloc_mutex; /* mutable data from the volume header, protected by vh_mutex */ u32 next_cnid; u32 file_count; u32 folder_count; struct mutex vh_mutex; /* Config options */ u32 creator; u32 type; umode_t umask; kuid_t uid; kgid_t gid; int part, session; unsigned long flags; int work_queued; /* non-zero delayed work is queued */ struct delayed_work sync_work; /* FS sync delayed work */ spinlock_t work_lock; /* protects sync_work and work_queued */ struct rcu_head rcu; }; Thanks, Slava. > > > > + .kill_sb = hfs_kill_sb, > > > .fs_flags = FS_REQUIRES_DEV, > > > .init_fs_context = hfs_init_fs_context, > > > }; > > > > [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues > > [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694 ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-21 21:15 ` Viacheslav Dubeyko @ 2025-11-21 22:48 ` Mehdi Ben Hadj Khelifa 2025-11-21 22:04 ` Viacheslav Dubeyko 0 siblings, 1 reply; 28+ messages in thread From: Mehdi Ben Hadj Khelifa @ 2025-11-21 22:48 UTC (permalink / raw) To: Viacheslav Dubeyko, jack@suse.cz, glaubitz@physik.fu-berlin.de, frank.li@vivo.com, slava@dubeyko.com, brauner@kernel.org, viro@zeniv.linux.org.uk Cc: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com, linux-kernel-mentees@lists.linuxfoundation.org, khalid@kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, david.hunter.linux@gmail.com, skhan@linuxfoundation.org On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote: > On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote: >> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: >>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >>>> The regression introduced by commit aca740cecbe5 ("fs: open block device >>>> after superblock creation") allows setup_bdev_super() to fail after a new >>>> superblock has been allocated by sget_fc(), but before hfs_fill_super() >>>> takes ownership of the filesystem-specific s_fs_info data. >>>> >>>> In that case, hfs_put_super() and the failure paths of hfs_fill_super() >>>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info >>>> unreleased.The default kill_block_super() teardown also does not free >>>> HFS-specific resources, resulting in a memory leak on early mount failure. >>>> >>>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from >>>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated >>>> hfs_kill_sb() implementation. This ensures that both normal unmount and >>>> early teardown paths (including setup_bdev_super() failure) correctly >>>> release HFS metadata. >>>> >>>> This also preserves the intended layering: generic_shutdown_super() >>>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed >>>> afterwards. >>>> >>>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation") >>>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 >>>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk> >>>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> >>>> --- >>>> ChangeLog: >>>> >>>> Changes from v1: >>>> >>>> -Changed the patch direction to focus on hfs changes specifically as >>>> suggested by al viro >>>> >>>> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ >>>> >>>> Note:This patch might need some more testing as I only did run selftests >>>> with no regression, check dmesg output for no regression, run reproducer >>>> with no bug and test it with syzbot as well. >>> >>> Have you run xfstests for the patch? Unfortunately, we have multiple xfstests >>> failures for HFS now. And you can check the list of known issues here [1]. The >>> main point of such run of xfstests is to check that maybe some issue(s) could be >>> fixed by the patch. And, more important that you don't introduce new issues. ;) >>> >> I have tried to run the xfstests with a kernel built with my patch and >> also without my patch for TEST and SCRATCH devices and in both cases my >> system crashes in running the generic/631 test.Still unsure of the >> cause. 
For more context, I'm running the tests on the 6.18-rc5 version >> of the kernel and the devices and the environment setup is as follows: >> >> For device creation and mounting(also tried it with dd and had same >> results): >> fallocate -l 10G test.img >> fallocate -l 10G scratch.img >> sudo mkfs.hfs test.img >> sudo losetup /dev/loop0 ./test.img >> sudo losetup /dev/loop1 ./scratch.img >> sudo mkdir -p /mnt/test /mnt/scratch >> sudo mount /dev/loop0 /mnt/test >> >> For environment setup(local.config): >> export TEST_DEV=/dev/loop0 >> export TEST_DIR=/mnt/test >> export SCRATCH_DEV=/dev/loop1 >> export SCRATCH_MNT=/mnt/scratch > > This is my configuration: > > export TEST_DEV=/dev/loop50 > export TEST_DIR=/mnt/test > export SCRATCH_DEV=/dev/loop51 > export SCRATCH_MNT=/mnt/scratch > > export FSTYP=hfs > Ah, Missed that option. I will try with that in my next testing. > Probably, you've missed FSTYP. Did you tried to run other file system at first > (for example, ext4) to be sure that everything is good? > No, I barely squeezed in time today to the testing for the HFS so I didn't do any preliminary testing but I will check that too my next run before trying to test HFS. >> >> Ran the tests using:sudo ./check -g auto >> > > You are brave guy. :) Currently, I am trying to fix the issues for quick group: > > sudo ./check -g quick > I thought I needed to do a more exhaustive testing so I went with auto. I will try to experiment with quick my next round of testing. Thanks for the heads up! >> If more context is needed to know the point of failure or if I have made >> a mistake during setup I'm happy to receive your comments since this is >> my first time trying to run xfstests. >> > > I don't see the crash on my side. > > sudo ./check generic/631 > FSTYP -- hfs > PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP > PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025 > MKFS_OPTIONS -- /dev/loop51 > MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch > > generic/631 [not run] attr namespace trusted not supported by this > filesystem type: hfs > Ran: generic/631 > Not run: generic/631 > Passed all 1 tests > > This test simply is not running for HFS case. > > I see that HFS+ is failing for generic/631, but I don't see the crash. I am > running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something > dramatically. > > My guess that, maybe, xfstests suite is trying to run some other file system but > not HFS. > I'm assuming that it's running HFSPLUS testing foir me because I just realised that the package that I downloaded to do mkfs.hfs is just a symlink to mkfs.hfsplus. Also I didn't find a package(in arch) for mkfs.hfs in my quick little search now. All refer to mkfs.hfsplus as if mkfs.hfs is deprecated somehow. I will probably build it from source if available with fsck.hfs... Eitherway, even if i was testing for HFSPLUS i don't think that a fail on generic/631 would crash my system multiple times with different kernels. I would have to test with ext4 before and play around more to find out why that happened.. 
>>>> >>>> fs/hfs/super.c | 16 ++++++++++++---- >>>> 1 file changed, 12 insertions(+), 4 deletions(-) >>>> >>>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c >>>> index 47f50fa555a4..06e1c25e47dc 100644 >>>> --- a/fs/hfs/super.c >>>> +++ b/fs/hfs/super.c >>>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) >>>> { >>>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); >>>> hfs_mdb_close(sb); >>>> - /* release the MDB's resources */ >>>> - hfs_mdb_put(sb); >>>> } >>>> >>>> static void flush_mdb(struct work_struct *work) >>>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) >>>> bail_no_root: >>>> pr_err("get root inode failed\n"); >>>> bail: >>>> - hfs_mdb_put(sb); >>>> return res; >>>> } >>>> >>>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) >>>> return 0; >>>> } >>>> >>>> +static void hfs_kill_sb(struct super_block *sb) >>>> +{ >>>> + generic_shutdown_super(sb); >>>> + hfs_mdb_put(sb); >>>> + if (sb->s_bdev) { >>>> + sync_blockdev(sb->s_bdev); >>>> + bdev_fput(sb->s_bdev_file); >>>> + } >>>> + >>>> +} >>>> + >>>> static struct file_system_type hfs_fs_type = { >>>> .owner = THIS_MODULE, >>>> .name = "hfs", >>>> - .kill_sb = kill_block_super, > > I've realized that if we are trying to solve the issue with pure call of > kill_block_super() for the case of HFS/HFS+, then we could have the same trouble > for other file systems. It make sense to check that we do not have likewise > trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix, > jfs, squashfs, freevxfs, befs. While I was doing my original fix for hfs, I did notice that too. Many other filesystems(not all) don't have a "custom" super block destroyer and they just refer to the generic kill_block_super() function which might lead to the same problem as HFS and HFS+. That would more digging too. I will see what I can do next when we finish HFS and potentially HFS+ first. > >>> >>> It looks like we have the same issue for the case of HFS+ [2]. Could you please >>> double check that HFS+ should be fixed too? >>> >> I have checked the same error path and it seems that hfsplus_sb_info is >> not freed in that path(I could provide the exact call stack which would >> cause such a memory leak) although I didn't create or run any >> reproducers for this particular filesystem type. >> If you would like a patch for this issue, would something like what is >> shown below be acceptable? : >> >> +static void hfsplus_kill_super(struct super_block *sb) >> +{ >> + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb); >> + >> + kill_block_super(sb); >> + kfree(sbi); >> +} >> + >> static struct file_system_type hfsplus_fs_type = { >> .owner = THIS_MODULE, >> .name = "hfsplus", >> - .kill_sb = kill_block_super, >> + .kill_sb = hfsplus_kill_super, >> .fs_flags = FS_REQUIRES_DEV, >> .init_fs_context = hfsplus_init_fs_context, >> }; >> >> If there is something to add, remove or adjust. Please let me know in >> the case of you willing accepting such a patch of course. 
> > We call hfs_mdb_put() for the case of HFS: > > void hfs_mdb_put(struct super_block *sb) > { > if (!HFS_SB(sb)) > return; > /* free the B-trees */ > hfs_btree_close(HFS_SB(sb)->ext_tree); > hfs_btree_close(HFS_SB(sb)->cat_tree); > > /* free the buffers holding the primary and alternate MDBs */ > brelse(HFS_SB(sb)->mdb_bh); > brelse(HFS_SB(sb)->alt_mdb_bh); > > unload_nls(HFS_SB(sb)->nls_io); > unload_nls(HFS_SB(sb)->nls_disk); > > kfree(HFS_SB(sb)->bitmap); > kfree(HFS_SB(sb)); > sb->s_fs_info = NULL; > } > > So, we need likewise course of actions for HFS+ because we have multiple > pointers in superblock too: > IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in christian's patch because fill_super() (for the each specific filesystem) is responsible for cleaning up the superblock in case of failure and you can reference christian's patch[1] which he explained the reasoning for here[2].And in the error path the we are trying to fix, fill_super() isn't even called yet. So such pointers shouldn't be pointing to anything allocated yet hence only freeing the pointer to the sb_info here is sufficient I think. [1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27 [2]:https://lore.kernel.org/all/20251119-delfin-bioladen-6bf291941d4f@brauner/ > struct hfsplus_sb_info { > void *s_vhdr_buf; > struct hfsplus_vh *s_vhdr; > void *s_backup_vhdr_buf; > struct hfsplus_vh *s_backup_vhdr; > struct hfs_btree *ext_tree; > struct hfs_btree *cat_tree; > struct hfs_btree *attr_tree; > atomic_t attr_tree_state; > struct inode *alloc_file; > struct inode *hidden_dir; > struct nls_table *nls; > > /* Runtime variables */ > u32 blockoffset; > u32 min_io_size; > sector_t part_start; > sector_t sect_count; > int fs_shift; > > /* immutable data from the volume header */ > u32 alloc_blksz; > int alloc_blksz_shift; > u32 total_blocks; > u32 data_clump_blocks, rsrc_clump_blocks; > > /* mutable data from the volume header, protected by alloc_mutex */ > u32 free_blocks; > struct mutex alloc_mutex; > > /* mutable data from the volume header, protected by vh_mutex */ > u32 next_cnid; > u32 file_count; > u32 folder_count; > struct mutex vh_mutex; > > /* Config options */ > u32 creator; > u32 type; > > umode_t umask; > kuid_t uid; > kgid_t gid; > > int part, session; > unsigned long flags; > > int work_queued; /* non-zero delayed work is queued */ > struct delayed_work sync_work; /* FS sync delayed work */ > spinlock_t work_lock; /* protects sync_work and work_queued */ > struct rcu_head rcu; > }; > > Thanks, > Slava. > Best Regards, Mehdi Ben Hadj Khelifa >> >>>> + .kill_sb = hfs_kill_sb, >>>> .fs_flags = FS_REQUIRES_DEV, >>>> .init_fs_context = hfs_init_fs_context, >>>> }; >>> >>> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues >>> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694 ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-21 22:48 ` Mehdi Ben Hadj Khelifa @ 2025-11-21 22:04 ` Viacheslav Dubeyko 2025-11-21 23:16 ` Mehdi Ben Hadj Khelifa 0 siblings, 1 reply; 28+ messages in thread From: Viacheslav Dubeyko @ 2025-11-21 22:04 UTC (permalink / raw) To: jack@suse.cz, glaubitz@physik.fu-berlin.de, slava@dubeyko.com, frank.li@vivo.com, mehdi.benhadjkhelifa@gmail.com, brauner@kernel.org, viro@zeniv.linux.org.uk Cc: linux-kernel-mentees@lists.linuxfoundation.org, david.hunter.linux@gmail.com, khalid@kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com, skhan@linuxfoundation.org On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote: > On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote: > > On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote: > > > On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: > > > > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: > > > > > The regression introduced by commit aca740cecbe5 ("fs: open block device > > > > > after superblock creation") allows setup_bdev_super() to fail after a new > > > > > superblock has been allocated by sget_fc(), but before hfs_fill_super() > > > > > takes ownership of the filesystem-specific s_fs_info data. > > > > > > > > > > In that case, hfs_put_super() and the failure paths of hfs_fill_super() > > > > > are never reached, leaving the HFS mdb structures attached to s->s_fs_info > > > > > unreleased.The default kill_block_super() teardown also does not free > > > > > HFS-specific resources, resulting in a memory leak on early mount failure. > > > > > > > > > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from > > > > > hfs_put_super() and the hfs_fill_super() failure path into a dedicated > > > > > hfs_kill_sb() implementation. This ensures that both normal unmount and > > > > > early teardown paths (including setup_bdev_super() failure) correctly > > > > > release HFS metadata. > > > > > > > > > > This also preserves the intended layering: generic_shutdown_super() > > > > > handles VFS-side cleanup, while HFS filesystem state is fully destroyed > > > > > afterwards. > > > > > > > > > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation") > > > > > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > > > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 > > > > > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > > > Suggested-by: Al Viro <viro@zeniv.linux.org.uk> > > > > > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> > > > > > --- > > > > > ChangeLog: > > > > > > > > > > Changes from v1: > > > > > > > > > > -Changed the patch direction to focus on hfs changes specifically as > > > > > suggested by al viro > > > > > > > > > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ > > > > > > > > > > Note:This patch might need some more testing as I only did run selftests > > > > > with no regression, check dmesg output for no regression, run reproducer > > > > > with no bug and test it with syzbot as well. > > > > > > > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests > > > > failures for HFS now. And you can check the list of known issues here [1]. The > > > > main point of such run of xfstests is to check that maybe some issue(s) could be > > > > fixed by the patch. 
And, more important that you don't introduce new issues. ;) > > > > > > > I have tried to run the xfstests with a kernel built with my patch and > > > also without my patch for TEST and SCRATCH devices and in both cases my > > > system crashes in running the generic/631 test.Still unsure of the > > > cause. For more context, I'm running the tests on the 6.18-rc5 version > > > of the kernel and the devices and the environment setup is as follows: > > > > > > For device creation and mounting(also tried it with dd and had same > > > results): > > > fallocate -l 10G test.img > > > fallocate -l 10G scratch.img > > > sudo mkfs.hfs test.img > > > sudo losetup /dev/loop0 ./test.img > > > sudo losetup /dev/loop1 ./scratch.img > > > sudo mkdir -p /mnt/test /mnt/scratch > > > sudo mount /dev/loop0 /mnt/test > > > > > > For environment setup(local.config): > > > export TEST_DEV=/dev/loop0 > > > export TEST_DIR=/mnt/test > > > export SCRATCH_DEV=/dev/loop1 > > > export SCRATCH_MNT=/mnt/scratch > > > > This is my configuration: > > > > export TEST_DEV=/dev/loop50 > > export TEST_DIR=/mnt/test > > export SCRATCH_DEV=/dev/loop51 > > export SCRATCH_MNT=/mnt/scratch > > > > export FSTYP=hfs > > > Ah, Missed that option. I will try with that in my next testing. > > Probably, you've missed FSTYP. Did you tried to run other file system at first > > (for example, ext4) to be sure that everything is good? > > > No, I barely squeezed in time today to the testing for the HFS so I > didn't do any preliminary testing but I will check that too my next run > before trying to test HFS. > > > > > > Ran the tests using:sudo ./check -g auto > > > > > > > You are brave guy. :) Currently, I am trying to fix the issues for quick group: > > > > sudo ./check -g quick > > > I thought I needed to do a more exhaustive testing so I went with auto. > I will try to experiment with quick my next round of testing. Thanks for > the heads up! > > > If more context is needed to know the point of failure or if I have made > > > a mistake during setup I'm happy to receive your comments since this is > > > my first time trying to run xfstests. > > > > > > > I don't see the crash on my side. > > > > sudo ./check generic/631 > > FSTYP -- hfs > > PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP > > PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025 > > MKFS_OPTIONS -- /dev/loop51 > > MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch > > > > generic/631 [not run] attr namespace trusted not supported by this > > filesystem type: hfs > > Ran: generic/631 > > Not run: generic/631 > > Passed all 1 tests > > > > This test simply is not running for HFS case. > > > > I see that HFS+ is failing for generic/631, but I don't see the crash. I am > > running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something > > dramatically. > > > > My guess that, maybe, xfstests suite is trying to run some other file system but > > not HFS. > > > I'm assuming that it's running HFSPLUS testing foir me because I just > realised that the package that I downloaded to do mkfs.hfs is just a > symlink to mkfs.hfsplus. Also I didn't find a package(in arch) for > mkfs.hfs in my quick little search now. All refer to mkfs.hfsplus as if > mkfs.hfs is deprecated somehow. I will probably build it from source if > available with fsck.hfs... Eitherway, even if i was testing for HFSPLUS > i don't think that a fail on generic/631 would crash my system multiple > times with different kernels. 
> I would have to test with ext4 before and
> play around more to find out why that happened..

mkfs.hfs is a symlink to mkfs.hfsplus, and the same goes for fsck. The
mkfs.hfsplus tool can create an HFS volume by using this option:

-h create an HFS format filesystem (HFS Plus is the default)

I don't have any special package installed for HFS on my side.

Thanks,
Slava.

> > > > >
> > > > > fs/hfs/super.c | 16 ++++++++++++----
> > > > > 1 file changed, 12 insertions(+), 4 deletions(-)
> > > > >
> > > > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c
> > > > > index 47f50fa555a4..06e1c25e47dc 100644
> > > > > --- a/fs/hfs/super.c
> > > > > +++ b/fs/hfs/super.c
> > > > > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
> > > > > {
> > > > > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
> > > > > hfs_mdb_close(sb);
> > > > > - /* release the MDB's resources */
> > > > > - hfs_mdb_put(sb);
> > > > > }
> > > > >
> > > > > static void flush_mdb(struct work_struct *work)
> > > > > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
> > > > > bail_no_root:
> > > > > pr_err("get root inode failed\n");
> > > > > bail:
> > > > > - hfs_mdb_put(sb);
> > > > > return res;
> > > > > }
> > > > >
> > > > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
> > > > > return 0;
> > > > > }
> > > > >
> > > > > +static void hfs_kill_sb(struct super_block *sb)
> > > > > +{
> > > > > + generic_shutdown_super(sb);
> > > > > + hfs_mdb_put(sb);
> > > > > + if (sb->s_bdev) {
> > > > > + sync_blockdev(sb->s_bdev);
> > > > > + bdev_fput(sb->s_bdev_file);
> > > > > + }
> > > > > +
> > > > > +}
> > > > > +
> > > > > static struct file_system_type hfs_fs_type = {
> > > > > .owner = THIS_MODULE,
> > > > > .name = "hfs",
> > > > > - .kill_sb = kill_block_super,
> >
> > I've realized that if we are trying to solve the issue with pure call of
> > kill_block_super() for the case of HFS/HFS+, then we could have the same trouble
> > for other file systems. It make sense to check that we do not have likewise
> > trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix,
> > jfs, squashfs, freevxfs, befs.
> While I was doing my original fix for hfs, I did notice that too. Many
> other filesystems(not all) don't have a "custom" super block destroyer
> and they just refer to the generic kill_block_super() function which
> might lead to the same problem as HFS and HFS+. That would more digging
> too. I will see what I can do next when we finish HFS and potentially
> HFS+ first.
>
> > > >
> > > > It looks like we have the same issue for the case of HFS+ [2]. Could you please
> > > > double check that HFS+ should be fixed too?
> >
> > > I have checked the same error path and it seems that hfsplus_sb_info is
> > > not freed in that path(I could provide the exact call stack which would
> > > cause such a memory leak) although I didn't create or run any
> > > reproducers for this particular filesystem type.
> > > If you would like a patch for this issue, would something like what is
> > > shown below be acceptable?
: > > > > > > +static void hfsplus_kill_super(struct super_block *sb) > > > +{ > > > + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb); > > > + > > > + kill_block_super(sb); > > > + kfree(sbi); > > > +} > > > + > > > static struct file_system_type hfsplus_fs_type = { > > > .owner = THIS_MODULE, > > > .name = "hfsplus", > > > - .kill_sb = kill_block_super, > > > + .kill_sb = hfsplus_kill_super, > > > .fs_flags = FS_REQUIRES_DEV, > > > .init_fs_context = hfsplus_init_fs_context, > > > }; > > > > > > If there is something to add, remove or adjust. Please let me know in > > > the case of you willing accepting such a patch of course. > > > > We call hfs_mdb_put() for the case of HFS: > > > > void hfs_mdb_put(struct super_block *sb) > > { > > if (!HFS_SB(sb)) > > return; > > /* free the B-trees */ > > hfs_btree_close(HFS_SB(sb)->ext_tree); > > hfs_btree_close(HFS_SB(sb)->cat_tree); > > > > /* free the buffers holding the primary and alternate MDBs */ > > brelse(HFS_SB(sb)->mdb_bh); > > brelse(HFS_SB(sb)->alt_mdb_bh); > > > > unload_nls(HFS_SB(sb)->nls_io); > > unload_nls(HFS_SB(sb)->nls_disk); > > > > kfree(HFS_SB(sb)->bitmap); > > kfree(HFS_SB(sb)); > > sb->s_fs_info = NULL; > > } > > > > So, we need likewise course of actions for HFS+ because we have multiple > > pointers in superblock too: > > > IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in > christian's patch because fill_super() (for the each specific > filesystem) is responsible for cleaning up the superblock in case of > failure and you can reference christian's patch[1] which he explained > the reasoning for here[2].And in the error path the we are trying to > fix, fill_super() isn't even called yet. So such pointers shouldn't be > pointing to anything allocated yet hence only freeing the pointer to the > sb_info here is sufficient I think. > [1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27 > [2]:https://lore.kernel.org/all/20251119-delfin-bioladen-6bf291941d4f@brauner/ > > struct hfsplus_sb_info { > > void *s_vhdr_buf; > > struct hfsplus_vh *s_vhdr; > > void *s_backup_vhdr_buf; > > struct hfsplus_vh *s_backup_vhdr; > > struct hfs_btree *ext_tree; > > struct hfs_btree *cat_tree; > > struct hfs_btree *attr_tree; > > atomic_t attr_tree_state; > > struct inode *alloc_file; > > struct inode *hidden_dir; > > struct nls_table *nls; > > > > /* Runtime variables */ > > u32 blockoffset; > > u32 min_io_size; > > sector_t part_start; > > sector_t sect_count; > > int fs_shift; > > > > /* immutable data from the volume header */ > > u32 alloc_blksz; > > int alloc_blksz_shift; > > u32 total_blocks; > > u32 data_clump_blocks, rsrc_clump_blocks; > > > > /* mutable data from the volume header, protected by alloc_mutex */ > > u32 free_blocks; > > struct mutex alloc_mutex; > > > > /* mutable data from the volume header, protected by vh_mutex */ > > u32 next_cnid; > > u32 file_count; > > u32 folder_count; > > struct mutex vh_mutex; > > > > /* Config options */ > > u32 creator; > > u32 type; > > > > umode_t umask; > > kuid_t uid; > > kgid_t gid; > > > > int part, session; > > unsigned long flags; > > > > int work_queued; /* non-zero delayed work is queued */ > > struct delayed_work sync_work; /* FS sync delayed work */ > > spinlock_t work_lock; /* protects sync_work and work_queued */ > > struct rcu_head rcu; > > }; > > > > > > Thanks, > > Slava. 
> > > Best Regards, > Mehdi Ben Hadj Khelifa > > > > > > > > > + .kill_sb = hfs_kill_sb, > > > > > .fs_flags = FS_REQUIRES_DEV, > > > > > .init_fs_context = hfs_init_fs_context, > > > > > }; > > > > > > > > [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues > > > > [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694 ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-21 22:04 ` Viacheslav Dubeyko @ 2025-11-21 23:16 ` Mehdi Ben Hadj Khelifa 2025-11-21 22:28 ` Viacheslav Dubeyko 0 siblings, 1 reply; 28+ messages in thread From: Mehdi Ben Hadj Khelifa @ 2025-11-21 23:16 UTC (permalink / raw) To: Viacheslav Dubeyko, jack@suse.cz, glaubitz@physik.fu-berlin.de, slava@dubeyko.com, frank.li@vivo.com, brauner@kernel.org, viro@zeniv.linux.org.uk Cc: linux-kernel-mentees@lists.linuxfoundation.org, david.hunter.linux@gmail.com, khalid@kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com, skhan@linuxfoundation.org On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote: > On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote: >> On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote: >>> On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote: >>>> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: >>>>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>> The regression introduced by commit aca740cecbe5 ("fs: open block device >>>>>> after superblock creation") allows setup_bdev_super() to fail after a new >>>>>> superblock has been allocated by sget_fc(), but before hfs_fill_super() >>>>>> takes ownership of the filesystem-specific s_fs_info data. >>>>>> >>>>>> In that case, hfs_put_super() and the failure paths of hfs_fill_super() >>>>>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info >>>>>> unreleased.The default kill_block_super() teardown also does not free >>>>>> HFS-specific resources, resulting in a memory leak on early mount failure. >>>>>> >>>>>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from >>>>>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated >>>>>> hfs_kill_sb() implementation. This ensures that both normal unmount and >>>>>> early teardown paths (including setup_bdev_super() failure) correctly >>>>>> release HFS metadata. >>>>>> >>>>>> This also preserves the intended layering: generic_shutdown_super() >>>>>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed >>>>>> afterwards. >>>>>> >>>>>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation") >>>>>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>>>>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 >>>>>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>>>>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk> >>>>>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> >>>>>> --- >>>>>> ChangeLog: >>>>>> >>>>>> Changes from v1: >>>>>> >>>>>> -Changed the patch direction to focus on hfs changes specifically as >>>>>> suggested by al viro >>>>>> >>>>>> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ >>>>>> >>>>>> Note:This patch might need some more testing as I only did run selftests >>>>>> with no regression, check dmesg output for no regression, run reproducer >>>>>> with no bug and test it with syzbot as well. >>>>> >>>>> Have you run xfstests for the patch? Unfortunately, we have multiple xfstests >>>>> failures for HFS now. And you can check the list of known issues here [1]. The >>>>> main point of such run of xfstests is to check that maybe some issue(s) could be >>>>> fixed by the patch. And, more important that you don't introduce new issues. 
;) >>>>> >>>> I have tried to run the xfstests with a kernel built with my patch and >>>> also without my patch for TEST and SCRATCH devices and in both cases my >>>> system crashes in running the generic/631 test.Still unsure of the >>>> cause. For more context, I'm running the tests on the 6.18-rc5 version >>>> of the kernel and the devices and the environment setup is as follows: >>>> >>>> For device creation and mounting(also tried it with dd and had same >>>> results): >>>> fallocate -l 10G test.img >>>> fallocate -l 10G scratch.img >>>> sudo mkfs.hfs test.img >>>> sudo losetup /dev/loop0 ./test.img >>>> sudo losetup /dev/loop1 ./scratch.img >>>> sudo mkdir -p /mnt/test /mnt/scratch >>>> sudo mount /dev/loop0 /mnt/test >>>> >>>> For environment setup(local.config): >>>> export TEST_DEV=/dev/loop0 >>>> export TEST_DIR=/mnt/test >>>> export SCRATCH_DEV=/dev/loop1 >>>> export SCRATCH_MNT=/mnt/scratch >>> >>> This is my configuration: >>> >>> export TEST_DEV=/dev/loop50 >>> export TEST_DIR=/mnt/test >>> export SCRATCH_DEV=/dev/loop51 >>> export SCRATCH_MNT=/mnt/scratch >>> >>> export FSTYP=hfs >>> >> Ah, Missed that option. I will try with that in my next testing. >>> Probably, you've missed FSTYP. Did you tried to run other file system at first >>> (for example, ext4) to be sure that everything is good? >>> >> No, I barely squeezed in time today to the testing for the HFS so I >> didn't do any preliminary testing but I will check that too my next run >> before trying to test HFS. >>>> >>>> Ran the tests using:sudo ./check -g auto >>>> >>> >>> You are brave guy. :) Currently, I am trying to fix the issues for quick group: >>> >>> sudo ./check -g quick >>> >> I thought I needed to do a more exhaustive testing so I went with auto. >> I will try to experiment with quick my next round of testing. Thanks for >> the heads up! >>>> If more context is needed to know the point of failure or if I have made >>>> a mistake during setup I'm happy to receive your comments since this is >>>> my first time trying to run xfstests. >>>> >>> >>> I don't see the crash on my side. >>> >>> sudo ./check generic/631 >>> FSTYP -- hfs >>> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP >>> PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025 >>> MKFS_OPTIONS -- /dev/loop51 >>> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch >>> >>> generic/631 [not run] attr namespace trusted not supported by this >>> filesystem type: hfs >>> Ran: generic/631 >>> Not run: generic/631 >>> Passed all 1 tests >>> >>> This test simply is not running for HFS case. >>> >>> I see that HFS+ is failing for generic/631, but I don't see the crash. I am >>> running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something >>> dramatically. >>> >>> My guess that, maybe, xfstests suite is trying to run some other file system but >>> not HFS. >>> >> I'm assuming that it's running HFSPLUS testing foir me because I just >> realised that the package that I downloaded to do mkfs.hfs is just a >> symlink to mkfs.hfsplus. Also I didn't find a package(in arch) for >> mkfs.hfs in my quick little search now. All refer to mkfs.hfsplus as if >> mkfs.hfs is deprecated somehow. I will probably build it from source if >> available with fsck.hfs... Eitherway, even if i was testing for HFSPLUS >> i don't think that a fail on generic/631 would crash my system multiple >> times with different kernels. I would have to test with ext4 before and >> play around more to find out why that happened.. 
> > The mkfs.hfs is symlink on mkfs.hfsplus and the same for fsck. The mkfs.hfsplus > can create HFS volume by using this option: > > -h create an HFS format filesystem (HFS Plus is the default) > > I don't have any special package installed for HFS on my side. > In my case, -h option in mkfs.hfsplus doesn't create hfs format filesystem. I checked kernel docs and found this[1] which refers to a package called hfsutils which has hformat as a binary for creating HFS filesystems. I just got it and used it successfully. I will be rerunning all tests soon. [1]:https://docs.kernel.org/filesystems/hfs.html > Thanks, > Slava. > Also did you check my other comments on the code part of your last reply? Just making sure. Thanks. Best Regards, Mehdi Ben Hadj Khelifa >>>>>> >>>>>> fs/hfs/super.c | 16 ++++++++++++---- >>>>>> 1 file changed, 12 insertions(+), 4 deletions(-) >>>>>> >>>>>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c >>>>>> index 47f50fa555a4..06e1c25e47dc 100644 >>>>>> --- a/fs/hfs/super.c >>>>>> +++ b/fs/hfs/super.c >>>>>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) >>>>>> { >>>>>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); >>>>>> hfs_mdb_close(sb); >>>>>> - /* release the MDB's resources */ >>>>>> - hfs_mdb_put(sb); >>>>>> } >>>>>> >>>>>> static void flush_mdb(struct work_struct *work) >>>>>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) >>>>>> bail_no_root: >>>>>> pr_err("get root inode failed\n"); >>>>>> bail: >>>>>> - hfs_mdb_put(sb); >>>>>> return res; >>>>>> } >>>>>> >>>>>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) >>>>>> return 0; >>>>>> } >>>>>> >>>>>> +static void hfs_kill_sb(struct super_block *sb) >>>>>> +{ >>>>>> + generic_shutdown_super(sb); >>>>>> + hfs_mdb_put(sb); >>>>>> + if (sb->s_bdev) { >>>>>> + sync_blockdev(sb->s_bdev); >>>>>> + bdev_fput(sb->s_bdev_file); >>>>>> + } >>>>>> + >>>>>> +} >>>>>> + >>>>>> static struct file_system_type hfs_fs_type = { >>>>>> .owner = THIS_MODULE, >>>>>> .name = "hfs", >>>>>> - .kill_sb = kill_block_super, >>> >>> I've realized that if we are trying to solve the issue with pure call of >>> kill_block_super() for the case of HFS/HFS+, then we could have the same trouble >>> for other file systems. It make sense to check that we do not have likewise >>> trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix, >>> jfs, squashfs, freevxfs, befs. >> While I was doing my original fix for hfs, I did notice that too. Many >> other filesystems(not all) don't have a "custom" super block destroyer >> and they just refer to the generic kill_block_super() function which >> might lead to the same problem as HFS and HFS+. That would more digging >> too. I will see what I can do next when we finish HFS and potentially >> HFS+ first. >>> >>>>> >>>>> It looks like we have the same issue for the case of HFS+ [2]. Could you please >>>>> double check that HFS+ should be fixed too? >>>>> >>>> I have checked the same error path and it seems that hfsplus_sb_info is >>>> not freed in that path(I could provide the exact call stack which would >>>> cause such a memory leak) although I didn't create or run any >>>> reproducers for this particular filesystem type. >>>> If you would like a patch for this issue, would something like what is >>>> shown below be acceptable? 
: >>>> >>>> +static void hfsplus_kill_super(struct super_block *sb) >>>> +{ >>>> + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb); >>>> + >>>> + kill_block_super(sb); >>>> + kfree(sbi); >>>> +} >>>> + >>>> static struct file_system_type hfsplus_fs_type = { >>>> .owner = THIS_MODULE, >>>> .name = "hfsplus", >>>> - .kill_sb = kill_block_super, >>>> + .kill_sb = hfsplus_kill_super, >>>> .fs_flags = FS_REQUIRES_DEV, >>>> .init_fs_context = hfsplus_init_fs_context, >>>> }; >>>> >>>> If there is something to add, remove or adjust. Please let me know in >>>> the case of you willing accepting such a patch of course. >>> >>> We call hfs_mdb_put() for the case of HFS: >>> >>> void hfs_mdb_put(struct super_block *sb) >>> { >>> if (!HFS_SB(sb)) >>> return; >>> /* free the B-trees */ >>> hfs_btree_close(HFS_SB(sb)->ext_tree); >>> hfs_btree_close(HFS_SB(sb)->cat_tree); >>> >>> /* free the buffers holding the primary and alternate MDBs */ >>> brelse(HFS_SB(sb)->mdb_bh); >>> brelse(HFS_SB(sb)->alt_mdb_bh); >>> >>> unload_nls(HFS_SB(sb)->nls_io); >>> unload_nls(HFS_SB(sb)->nls_disk); >>> >>> kfree(HFS_SB(sb)->bitmap); >>> kfree(HFS_SB(sb)); >>> sb->s_fs_info = NULL; >>> } >>> >>> So, we need likewise course of actions for HFS+ because we have multiple >>> pointers in superblock too: >>> >> IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in >> christian's patch because fill_super() (for the each specific >> filesystem) is responsible for cleaning up the superblock in case of >> failure and you can reference christian's patch[1] which he explained >> the reasoning for here[2].And in the error path the we are trying to >> fix, fill_super() isn't even called yet. So such pointers shouldn't be >> pointing to anything allocated yet hence only freeing the pointer to the >> sb_info here is sufficient I think. >> [1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27 >> [2]:https://lore.kernel.org/all/20251119-delfin-bioladen-6bf291941d4f@brauner/ >>> struct hfsplus_sb_info { >>> void *s_vhdr_buf; >>> struct hfsplus_vh *s_vhdr; >>> void *s_backup_vhdr_buf; >>> struct hfsplus_vh *s_backup_vhdr; >>> struct hfs_btree *ext_tree; >>> struct hfs_btree *cat_tree; >>> struct hfs_btree *attr_tree; >>> atomic_t attr_tree_state; >>> struct inode *alloc_file; >>> struct inode *hidden_dir; >>> struct nls_table *nls; >>> >>> /* Runtime variables */ >>> u32 blockoffset; >>> u32 min_io_size; >>> sector_t part_start; >>> sector_t sect_count; >>> int fs_shift; >>> >>> /* immutable data from the volume header */ >>> u32 alloc_blksz; >>> int alloc_blksz_shift; >>> u32 total_blocks; >>> u32 data_clump_blocks, rsrc_clump_blocks; >>> >>> /* mutable data from the volume header, protected by alloc_mutex */ >>> u32 free_blocks; >>> struct mutex alloc_mutex; >>> >>> /* mutable data from the volume header, protected by vh_mutex */ >>> u32 next_cnid; >>> u32 file_count; >>> u32 folder_count; >>> struct mutex vh_mutex; >>> >>> /* Config options */ >>> u32 creator; >>> u32 type; >>> >>> umode_t umask; >>> kuid_t uid; >>> kgid_t gid; >>> >>> int part, session; >>> unsigned long flags; >>> >>> int work_queued; /* non-zero delayed work is queued */ >>> struct delayed_work sync_work; /* FS sync delayed work */ >>> spinlock_t work_lock; /* protects sync_work and work_queued */ >>> struct rcu_head rcu; >>> }; >>> >> >> >>> Thanks, >>> Slava. 
>>> >> Best Regards, >> Mehdi Ben Hadj Khelifa >> >>>> >>>>>> + .kill_sb = hfs_kill_sb, >>>>>> .fs_flags = FS_REQUIRES_DEV, >>>>>> .init_fs_context = hfs_init_fs_context, >>>>>> }; >>>>> >>>>> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues >>>>> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694 ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-21 23:16 ` Mehdi Ben Hadj Khelifa @ 2025-11-21 22:28 ` Viacheslav Dubeyko 2025-11-21 23:36 ` Mehdi Ben Hadj Khelifa 0 siblings, 1 reply; 28+ messages in thread From: Viacheslav Dubeyko @ 2025-11-21 22:28 UTC (permalink / raw) To: jack@suse.cz, glaubitz@physik.fu-berlin.de, slava@dubeyko.com, frank.li@vivo.com, mehdi.benhadjkhelifa@gmail.com, brauner@kernel.org, viro@zeniv.linux.org.uk Cc: linux-kernel-mentees@lists.linuxfoundation.org, skhan@linuxfoundation.org, david.hunter.linux@gmail.com, khalid@kernel.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com On Sat, 2025-11-22 at 00:16 +0100, Mehdi Ben Hadj Khelifa wrote: > On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote: > > On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote: > > > On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote: > > > > On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote: > > > > > On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: > > > > > > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: > > > > > > > The regression introduced by commit aca740cecbe5 ("fs: open block device > > > > > > > after superblock creation") allows setup_bdev_super() to fail after a new > > > > > > > superblock has been allocated by sget_fc(), but before hfs_fill_super() > > > > > > > takes ownership of the filesystem-specific s_fs_info data. > > > > > > > > > > > > > > In that case, hfs_put_super() and the failure paths of hfs_fill_super() > > > > > > > are never reached, leaving the HFS mdb structures attached to s->s_fs_info > > > > > > > unreleased.The default kill_block_super() teardown also does not free > > > > > > > HFS-specific resources, resulting in a memory leak on early mount failure. > > > > > > > > > > > > > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from > > > > > > > hfs_put_super() and the hfs_fill_super() failure path into a dedicated > > > > > > > hfs_kill_sb() implementation. This ensures that both normal unmount and > > > > > > > early teardown paths (including setup_bdev_super() failure) correctly > > > > > > > release HFS metadata. > > > > > > > > > > > > > > This also preserves the intended layering: generic_shutdown_super() > > > > > > > handles VFS-side cleanup, while HFS filesystem state is fully destroyed > > > > > > > afterwards. > > > > > > > > > > > > > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation") > > > > > > > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > > > > > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 > > > > > > > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > > > > > Suggested-by: Al Viro <viro@zeniv.linux.org.uk> > > > > > > > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> > > > > > > > --- > > > > > > > ChangeLog: > > > > > > > > > > > > > > Changes from v1: > > > > > > > > > > > > > > -Changed the patch direction to focus on hfs changes specifically as > > > > > > > suggested by al viro > > > > > > > > > > > > > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ > > > > > > > > > > > > > > Note:This patch might need some more testing as I only did run selftests > > > > > > > with no regression, check dmesg output for no regression, run reproducer > > > > > > > with no bug and test it with syzbot as well. 
> > > > > > > > > > > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests > > > > > > failures for HFS now. And you can check the list of known issues here [1]. The > > > > > > main point of such run of xfstests is to check that maybe some issue(s) could be > > > > > > fixed by the patch. And, more important that you don't introduce new issues. ;) > > > > > > > > > > > I have tried to run the xfstests with a kernel built with my patch and > > > > > also without my patch for TEST and SCRATCH devices and in both cases my > > > > > system crashes in running the generic/631 test.Still unsure of the > > > > > cause. For more context, I'm running the tests on the 6.18-rc5 version > > > > > of the kernel and the devices and the environment setup is as follows: > > > > > > > > > > For device creation and mounting(also tried it with dd and had same > > > > > results): > > > > > fallocate -l 10G test.img > > > > > fallocate -l 10G scratch.img > > > > > sudo mkfs.hfs test.img > > > > > sudo losetup /dev/loop0 ./test.img > > > > > sudo losetup /dev/loop1 ./scratch.img > > > > > sudo mkdir -p /mnt/test /mnt/scratch > > > > > sudo mount /dev/loop0 /mnt/test > > > > > > > > > > For environment setup(local.config): > > > > > export TEST_DEV=/dev/loop0 > > > > > export TEST_DIR=/mnt/test > > > > > export SCRATCH_DEV=/dev/loop1 > > > > > export SCRATCH_MNT=/mnt/scratch > > > > > > > > This is my configuration: > > > > > > > > export TEST_DEV=/dev/loop50 > > > > export TEST_DIR=/mnt/test > > > > export SCRATCH_DEV=/dev/loop51 > > > > export SCRATCH_MNT=/mnt/scratch > > > > > > > > export FSTYP=hfs > > > > > > > Ah, Missed that option. I will try with that in my next testing. > > > > Probably, you've missed FSTYP. Did you tried to run other file system at first > > > > (for example, ext4) to be sure that everything is good? > > > > > > > No, I barely squeezed in time today to the testing for the HFS so I > > > didn't do any preliminary testing but I will check that too my next run > > > before trying to test HFS. > > > > > > > > > > Ran the tests using:sudo ./check -g auto > > > > > > > > > > > > > You are brave guy. :) Currently, I am trying to fix the issues for quick group: > > > > > > > > sudo ./check -g quick > > > > > > > I thought I needed to do a more exhaustive testing so I went with auto. > > > I will try to experiment with quick my next round of testing. Thanks for > > > the heads up! > > > > > If more context is needed to know the point of failure or if I have made > > > > > a mistake during setup I'm happy to receive your comments since this is > > > > > my first time trying to run xfstests. > > > > > > > > > > > > > I don't see the crash on my side. > > > > > > > > sudo ./check generic/631 > > > > FSTYP -- hfs > > > > PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP > > > > PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025 > > > > MKFS_OPTIONS -- /dev/loop51 > > > > MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch > > > > > > > > generic/631 [not run] attr namespace trusted not supported by this > > > > filesystem type: hfs > > > > Ran: generic/631 > > > > Not run: generic/631 > > > > Passed all 1 tests > > > > > > > > This test simply is not running for HFS case. > > > > > > > > I see that HFS+ is failing for generic/631, but I don't see the crash. I am > > > > running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something > > > > dramatically. 
> > > > > > > > My guess that, maybe, xfstests suite is trying to run some other file system but > > > > not HFS. > > > > > > > I'm assuming that it's running HFSPLUS testing foir me because I just > > > realised that the package that I downloaded to do mkfs.hfs is just a > > > symlink to mkfs.hfsplus. Also I didn't find a package(in arch) for > > > mkfs.hfs in my quick little search now. All refer to mkfs.hfsplus as if > > > mkfs.hfs is deprecated somehow. I will probably build it from source if > > > available with fsck.hfs... Eitherway, even if i was testing for HFSPLUS > > > i don't think that a fail on generic/631 would crash my system multiple > > > times with different kernels. I would have to test with ext4 before and > > > play around more to find out why that happened.. > > > > The mkfs.hfs is symlink on mkfs.hfsplus and the same for fsck. The mkfs.hfsplus > > can create HFS volume by using this option: > > > > -h create an HFS format filesystem (HFS Plus is the default) > > > > I don't have any special package installed for HFS on my side. > > > In my case, -h option in mkfs.hfsplus doesn't create hfs format > filesystem. I checked kernel docs and found this[1] which refers to a > package called hfsutils which has hformat as a binary for creating HFS > filesystems. I just got it and used it successfully. I will be rerunning > all tests soon. > [1]:https://docs.kernel.org/filesystems/hfs.html > > Thanks, > > Slava. > > > Also did you check my other comments on the code part of your last > reply? Just making sure. Thanks. > > Best Regards, > Mehdi Ben Hadj Khelifa > > > > > > > > > > > > > > fs/hfs/super.c | 16 ++++++++++++---- > > > > > > > 1 file changed, 12 insertions(+), 4 deletions(-) > > > > > > > > > > > > > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c > > > > > > > index 47f50fa555a4..06e1c25e47dc 100644 > > > > > > > --- a/fs/hfs/super.c > > > > > > > +++ b/fs/hfs/super.c > > > > > > > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) > > > > > > > { > > > > > > > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); > > > > > > > hfs_mdb_close(sb); > > > > > > > - /* release the MDB's resources */ > > > > > > > - hfs_mdb_put(sb); > > > > > > > } > > > > > > > > > > > > > > static void flush_mdb(struct work_struct *work) > > > > > > > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) > > > > > > > bail_no_root: > > > > > > > pr_err("get root inode failed\n"); > > > > > > > bail: > > > > > > > - hfs_mdb_put(sb); > > > > > > > return res; > > > > > > > } > > > > > > > > > > > > > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) > > > > > > > return 0; > > > > > > > } > > > > > > > > > > > > > > +static void hfs_kill_sb(struct super_block *sb) > > > > > > > +{ > > > > > > > + generic_shutdown_super(sb); > > > > > > > + hfs_mdb_put(sb); > > > > > > > + if (sb->s_bdev) { > > > > > > > + sync_blockdev(sb->s_bdev); > > > > > > > + bdev_fput(sb->s_bdev_file); > > > > > > > + } > > > > > > > + > > > > > > > +} > > > > > > > + > > > > > > > static struct file_system_type hfs_fs_type = { > > > > > > > .owner = THIS_MODULE, > > > > > > > .name = "hfs", > > > > > > > - .kill_sb = kill_block_super, > > > > > > > > I've realized that if we are trying to solve the issue with pure call of > > > > kill_block_super() for the case of HFS/HFS+, then we could have the same trouble > > > > for other file systems. 
> > > > It make sense to check that we do not have likewise
> > > > trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix,
> > > > jfs, squashfs, freevxfs, befs.
> > > While I was doing my original fix for hfs, I did notice that too. Many
> > > other filesystems(not all) don't have a "custom" super block destroyer
> > > and they just refer to the generic kill_block_super() function which
> > > might lead to the same problem as HFS and HFS+. That would more digging
> > > too. I will see what I can do next when we finish HFS and potentially
> > > HFS+ first.
> > > > > >
> > > > > > It looks like we have the same issue for the case of HFS+ [2]. Could you please
> > > > > > double check that HFS+ should be fixed too?
> > > > > >
> > > > > I have checked the same error path and it seems that hfsplus_sb_info is
> > > > > not freed in that path(I could provide the exact call stack which would
> > > > > cause such a memory leak) although I didn't create or run any
> > > > > reproducers for this particular filesystem type.
> > > > > If you would like a patch for this issue, would something like what is
> > > > > shown below be acceptable? :
> > > > >
> > > > > +static void hfsplus_kill_super(struct super_block *sb)
> > > > > +{
> > > > > + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
> > > > > +
> > > > > + kill_block_super(sb);
> > > > > + kfree(sbi);
> > > > > +}
> > > > > +
> > > > > static struct file_system_type hfsplus_fs_type = {
> > > > > .owner = THIS_MODULE,
> > > > > .name = "hfsplus",
> > > > > - .kill_sb = kill_block_super,
> > > > > + .kill_sb = hfsplus_kill_super,
> > > > > .fs_flags = FS_REQUIRES_DEV,
> > > > > .init_fs_context = hfsplus_init_fs_context,
> > > > > };
> > > > >
> > > > > If there is something to add, remove or adjust. Please let me know in
> > > > > the case of you willing accepting such a patch of course.
> > > >
> > > > We call hfs_mdb_put() for the case of HFS:
> > > >
> > > > void hfs_mdb_put(struct super_block *sb)
> > > > {
> > > > if (!HFS_SB(sb))
> > > > return;
> > > > /* free the B-trees */
> > > > hfs_btree_close(HFS_SB(sb)->ext_tree);
> > > > hfs_btree_close(HFS_SB(sb)->cat_tree);
> > > >
> > > > /* free the buffers holding the primary and alternate MDBs */
> > > > brelse(HFS_SB(sb)->mdb_bh);
> > > > brelse(HFS_SB(sb)->alt_mdb_bh);
> > > >
> > > > unload_nls(HFS_SB(sb)->nls_io);
> > > > unload_nls(HFS_SB(sb)->nls_disk);
> > > >
> > > > kfree(HFS_SB(sb)->bitmap);
> > > > kfree(HFS_SB(sb));
> > > > sb->s_fs_info = NULL;
> > > > }
> > > >
> > > > So, we need likewise course of actions for HFS+ because we have multiple
> > > > pointers in superblock too:
> > > >
> > > IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in
> > > christian's patch because fill_super() (for the each specific
> > > filesystem) is responsible for cleaning up the superblock in case of
> > > failure and you can reference christian's patch[1] which he explained
> > > the reasoning for here[2].And in the error path the we are trying to
> > > fix, fill_super() isn't even called yet. So such pointers shouldn't be
> > > pointing to anything allocated yet hence only freeing the pointer to the
> > > sb_info here is sufficient I think.

I was confused because your code with hfs_mdb_put() was still in this
email. So, yes, hfs_fill_super()/hfsplus_fill_super() try to free the
memory in the case of failure. It means that if something isn't freed,
the issue is in these methods. So, I don't see what else needs to be
added here.
Some file systems do sb->s_fs_info = NULL. But absence of this statement is not critical, from my point of view. Thanks, Slava. > > > [1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27 > > > [2]:https://lore.kernel.org/all/20251119-delfin-bioladen-6bf291941d4f@brauner/ > > > > struct hfsplus_sb_info { > > > > void *s_vhdr_buf; > > > > struct hfsplus_vh *s_vhdr; > > > > void *s_backup_vhdr_buf; > > > > struct hfsplus_vh *s_backup_vhdr; > > > > struct hfs_btree *ext_tree; > > > > struct hfs_btree *cat_tree; > > > > struct hfs_btree *attr_tree; > > > > atomic_t attr_tree_state; > > > > struct inode *alloc_file; > > > > struct inode *hidden_dir; > > > > struct nls_table *nls; > > > > > > > > /* Runtime variables */ > > > > u32 blockoffset; > > > > u32 min_io_size; > > > > sector_t part_start; > > > > sector_t sect_count; > > > > int fs_shift; > > > > > > > > /* immutable data from the volume header */ > > > > u32 alloc_blksz; > > > > int alloc_blksz_shift; > > > > u32 total_blocks; > > > > u32 data_clump_blocks, rsrc_clump_blocks; > > > > > > > > /* mutable data from the volume header, protected by alloc_mutex */ > > > > u32 free_blocks; > > > > struct mutex alloc_mutex; > > > > > > > > /* mutable data from the volume header, protected by vh_mutex */ > > > > u32 next_cnid; > > > > u32 file_count; > > > > u32 folder_count; > > > > struct mutex vh_mutex; > > > > > > > > /* Config options */ > > > > u32 creator; > > > > u32 type; > > > > > > > > umode_t umask; > > > > kuid_t uid; > > > > kgid_t gid; > > > > > > > > int part, session; > > > > unsigned long flags; > > > > > > > > int work_queued; /* non-zero delayed work is queued */ > > > > struct delayed_work sync_work; /* FS sync delayed work */ > > > > spinlock_t work_lock; /* protects sync_work and work_queued */ > > > > struct rcu_head rcu; > > > > }; > > > > > > > > > > > > > > Thanks, > > > > Slava. > > > > > > > Best Regards, > > > Mehdi Ben Hadj Khelifa > > > > > > > > > > > > > > > + .kill_sb = hfs_kill_sb, > > > > > > > .fs_flags = FS_REQUIRES_DEV, > > > > > > > .init_fs_context = hfs_init_fs_context, > > > > > > > }; > > > > > > > > > > > > [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues > > > > > > [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694 ^ permalink raw reply [flat|nested] 28+ messages in thread
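Putting the two messages above together, the hfsplus variant under
discussion would look roughly like the sketch below. It mirrors the
proposal quoted earlier in the thread and only adds the optional clearing
of sb->s_fs_info that Slava mentions; it is a sketch, not a merged patch:

static void hfsplus_kill_super(struct super_block *sb)
{
        struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);

        /* Generic teardown; also syncs and releases the block device. */
        kill_block_super(sb);
        kfree(sbi);
        /* Optional, per the note above; not strictly required. */
        sb->s_fs_info = NULL;
}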
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-21 22:28 ` Viacheslav Dubeyko @ 2025-11-21 23:36 ` Mehdi Ben Hadj Khelifa 2025-11-21 23:01 ` Viacheslav Dubeyko 0 siblings, 1 reply; 28+ messages in thread From: Mehdi Ben Hadj Khelifa @ 2025-11-21 23:36 UTC (permalink / raw) To: Viacheslav Dubeyko, jack@suse.cz, glaubitz@physik.fu-berlin.de, slava@dubeyko.com, frank.li@vivo.com, brauner@kernel.org, viro@zeniv.linux.org.uk Cc: linux-kernel-mentees@lists.linuxfoundation.org, skhan@linuxfoundation.org, david.hunter.linux@gmail.com, khalid@kernel.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com On 11/21/25 11:28 PM, Viacheslav Dubeyko wrote: > On Sat, 2025-11-22 at 00:16 +0100, Mehdi Ben Hadj Khelifa wrote: >> On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote: >>> On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote: >>>> On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote: >>>>> On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: >>>>>>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>>>> The regression introduced by commit aca740cecbe5 ("fs: open block device >>>>>>>> after superblock creation") allows setup_bdev_super() to fail after a new >>>>>>>> superblock has been allocated by sget_fc(), but before hfs_fill_super() >>>>>>>> takes ownership of the filesystem-specific s_fs_info data. >>>>>>>> >>>>>>>> In that case, hfs_put_super() and the failure paths of hfs_fill_super() >>>>>>>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info >>>>>>>> unreleased.The default kill_block_super() teardown also does not free >>>>>>>> HFS-specific resources, resulting in a memory leak on early mount failure. >>>>>>>> >>>>>>>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from >>>>>>>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated >>>>>>>> hfs_kill_sb() implementation. This ensures that both normal unmount and >>>>>>>> early teardown paths (including setup_bdev_super() failure) correctly >>>>>>>> release HFS metadata. >>>>>>>> >>>>>>>> This also preserves the intended layering: generic_shutdown_super() >>>>>>>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed >>>>>>>> afterwards. >>>>>>>> >>>>>>>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation") >>>>>>>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>>>>>>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 >>>>>>>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>>>>>>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk> >>>>>>>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> >>>>>>>> --- >>>>>>>> ChangeLog: >>>>>>>> >>>>>>>> Changes from v1: >>>>>>>> >>>>>>>> -Changed the patch direction to focus on hfs changes specifically as >>>>>>>> suggested by al viro >>>>>>>> >>>>>>>> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ >>>>>>>> >>>>>>>> Note:This patch might need some more testing as I only did run selftests >>>>>>>> with no regression, check dmesg output for no regression, run reproducer >>>>>>>> with no bug and test it with syzbot as well. >>>>>>> >>>>>>> Have you run xfstests for the patch? Unfortunately, we have multiple xfstests >>>>>>> failures for HFS now. And you can check the list of known issues here [1]. 
The >>>>>>> main point of such run of xfstests is to check that maybe some issue(s) could be >>>>>>> fixed by the patch. And, more important that you don't introduce new issues. ;) >>>>>>> >>>>>> I have tried to run the xfstests with a kernel built with my patch and >>>>>> also without my patch for TEST and SCRATCH devices and in both cases my >>>>>> system crashes in running the generic/631 test.Still unsure of the >>>>>> cause. For more context, I'm running the tests on the 6.18-rc5 version >>>>>> of the kernel and the devices and the environment setup is as follows: >>>>>> >>>>>> For device creation and mounting(also tried it with dd and had same >>>>>> results): >>>>>> fallocate -l 10G test.img >>>>>> fallocate -l 10G scratch.img >>>>>> sudo mkfs.hfs test.img >>>>>> sudo losetup /dev/loop0 ./test.img >>>>>> sudo losetup /dev/loop1 ./scratch.img >>>>>> sudo mkdir -p /mnt/test /mnt/scratch >>>>>> sudo mount /dev/loop0 /mnt/test >>>>>> >>>>>> For environment setup(local.config): >>>>>> export TEST_DEV=/dev/loop0 >>>>>> export TEST_DIR=/mnt/test >>>>>> export SCRATCH_DEV=/dev/loop1 >>>>>> export SCRATCH_MNT=/mnt/scratch >>>>> >>>>> This is my configuration: >>>>> >>>>> export TEST_DEV=/dev/loop50 >>>>> export TEST_DIR=/mnt/test >>>>> export SCRATCH_DEV=/dev/loop51 >>>>> export SCRATCH_MNT=/mnt/scratch >>>>> >>>>> export FSTYP=hfs >>>>> >>>> Ah, Missed that option. I will try with that in my next testing. >>>>> Probably, you've missed FSTYP. Did you tried to run other file system at first >>>>> (for example, ext4) to be sure that everything is good? >>>>> >>>> No, I barely squeezed in time today to the testing for the HFS so I >>>> didn't do any preliminary testing but I will check that too my next run >>>> before trying to test HFS. >>>>>> >>>>>> Ran the tests using:sudo ./check -g auto >>>>>> >>>>> >>>>> You are brave guy. :) Currently, I am trying to fix the issues for quick group: >>>>> >>>>> sudo ./check -g quick >>>>> >>>> I thought I needed to do a more exhaustive testing so I went with auto. >>>> I will try to experiment with quick my next round of testing. Thanks for >>>> the heads up! >>>>>> If more context is needed to know the point of failure or if I have made >>>>>> a mistake during setup I'm happy to receive your comments since this is >>>>>> my first time trying to run xfstests. >>>>>> >>>>> >>>>> I don't see the crash on my side. >>>>> >>>>> sudo ./check generic/631 >>>>> FSTYP -- hfs >>>>> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP >>>>> PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025 >>>>> MKFS_OPTIONS -- /dev/loop51 >>>>> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch >>>>> >>>>> generic/631 [not run] attr namespace trusted not supported by this >>>>> filesystem type: hfs >>>>> Ran: generic/631 >>>>> Not run: generic/631 >>>>> Passed all 1 tests >>>>> >>>>> This test simply is not running for HFS case. >>>>> >>>>> I see that HFS+ is failing for generic/631, but I don't see the crash. I am >>>>> running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something >>>>> dramatically. >>>>> >>>>> My guess that, maybe, xfstests suite is trying to run some other file system but >>>>> not HFS. >>>>> >>>> I'm assuming that it's running HFSPLUS testing foir me because I just >>>> realised that the package that I downloaded to do mkfs.hfs is just a >>>> symlink to mkfs.hfsplus. Also I didn't find a package(in arch) for >>>> mkfs.hfs in my quick little search now. All refer to mkfs.hfsplus as if >>>> mkfs.hfs is deprecated somehow. 
I will probably build it from source if >>>> available with fsck.hfs... Eitherway, even if i was testing for HFSPLUS >>>> i don't think that a fail on generic/631 would crash my system multiple >>>> times with different kernels. I would have to test with ext4 before and >>>> play around more to find out why that happened.. >>> >>> The mkfs.hfs is symlink on mkfs.hfsplus and the same for fsck. The mkfs.hfsplus >>> can create HFS volume by using this option: >>> >>> -h create an HFS format filesystem (HFS Plus is the default) >>> >>> I don't have any special package installed for HFS on my side. >>> >> In my case, -h option in mkfs.hfsplus doesn't create hfs format >> filesystem. I checked kernel docs and found this[1] which refers to a >> package called hfsutils which has hformat as a binary for creating HFS >> filesystems. I just got it and used it successfully. I will be rerunning >> all tests soon. >> [1]:https://docs.kernel.org/filesystems/hfs.html >>> Thanks, >>> Slava. >>> >> Also did you check my other comments on the code part of your last >> reply? Just making sure. Thanks. >> >> Best Regards, >> Mehdi Ben Hadj Khelifa >>>>>>>> >>>>>>>> fs/hfs/super.c | 16 ++++++++++++---- >>>>>>>> 1 file changed, 12 insertions(+), 4 deletions(-) >>>>>>>> >>>>>>>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c >>>>>>>> index 47f50fa555a4..06e1c25e47dc 100644 >>>>>>>> --- a/fs/hfs/super.c >>>>>>>> +++ b/fs/hfs/super.c >>>>>>>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) >>>>>>>> { >>>>>>>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); >>>>>>>> hfs_mdb_close(sb); >>>>>>>> - /* release the MDB's resources */ >>>>>>>> - hfs_mdb_put(sb); >>>>>>>> } >>>>>>>> >>>>>>>> static void flush_mdb(struct work_struct *work) >>>>>>>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) >>>>>>>> bail_no_root: >>>>>>>> pr_err("get root inode failed\n"); >>>>>>>> bail: >>>>>>>> - hfs_mdb_put(sb); >>>>>>>> return res; >>>>>>>> } >>>>>>>> >>>>>>>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) >>>>>>>> return 0; >>>>>>>> } >>>>>>>> >>>>>>>> +static void hfs_kill_sb(struct super_block *sb) >>>>>>>> +{ >>>>>>>> + generic_shutdown_super(sb); >>>>>>>> + hfs_mdb_put(sb); >>>>>>>> + if (sb->s_bdev) { >>>>>>>> + sync_blockdev(sb->s_bdev); >>>>>>>> + bdev_fput(sb->s_bdev_file); >>>>>>>> + } >>>>>>>> + >>>>>>>> +} >>>>>>>> + >>>>>>>> static struct file_system_type hfs_fs_type = { >>>>>>>> .owner = THIS_MODULE, >>>>>>>> .name = "hfs", >>>>>>>> - .kill_sb = kill_block_super, >>>>> >>>>> I've realized that if we are trying to solve the issue with pure call of >>>>> kill_block_super() for the case of HFS/HFS+, then we could have the same trouble >>>>> for other file systems. It make sense to check that we do not have likewise >>>>> trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix, >>>>> jfs, squashfs, freevxfs, befs. >>>> While I was doing my original fix for hfs, I did notice that too. Many >>>> other filesystems(not all) don't have a "custom" super block destroyer >>>> and they just refer to the generic kill_block_super() function which >>>> might lead to the same problem as HFS and HFS+. That would more digging >>>> too. I will see what I can do next when we finish HFS and potentially >>>> HFS+ first. >>>>> >>>>>>> >>>>>>> It looks like we have the same issue for the case of HFS+ [2]. Could you please >>>>>>> double check that HFS+ should be fixed too? 
>>>>>>>
>>>>>> I have checked the same error path and it seems that hfsplus_sb_info is
>>>>>> not freed in that path(I could provide the exact call stack which would
>>>>>> cause such a memory leak) although I didn't create or run any
>>>>>> reproducers for this particular filesystem type.
>>>>>> If you would like a patch for this issue, would something like what is
>>>>>> shown below be acceptable? :
>>>>>>
>>>>>> +static void hfsplus_kill_super(struct super_block *sb)
>>>>>> +{
>>>>>> + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
>>>>>> +
>>>>>> + kill_block_super(sb);
>>>>>> + kfree(sbi);
>>>>>> +}
>>>>>> +
>>>>>> static struct file_system_type hfsplus_fs_type = {
>>>>>> .owner = THIS_MODULE,
>>>>>> .name = "hfsplus",
>>>>>> - .kill_sb = kill_block_super,
>>>>>> + .kill_sb = hfsplus_kill_super,
>>>>>> .fs_flags = FS_REQUIRES_DEV,
>>>>>> .init_fs_context = hfsplus_init_fs_context,
>>>>>> };
>>>>>>
>>>>>> If there is something to add, remove or adjust. Please let me know in
>>>>>> the case of you willing accepting such a patch of course.
>>>>>
>>>>> We call hfs_mdb_put() for the case of HFS:
>>>>>
>>>>> void hfs_mdb_put(struct super_block *sb)
>>>>> {
>>>>> if (!HFS_SB(sb))
>>>>> return;
>>>>> /* free the B-trees */
>>>>> hfs_btree_close(HFS_SB(sb)->ext_tree);
>>>>> hfs_btree_close(HFS_SB(sb)->cat_tree);
>>>>>
>>>>> /* free the buffers holding the primary and alternate MDBs */
>>>>> brelse(HFS_SB(sb)->mdb_bh);
>>>>> brelse(HFS_SB(sb)->alt_mdb_bh);
>>>>>
>>>>> unload_nls(HFS_SB(sb)->nls_io);
>>>>> unload_nls(HFS_SB(sb)->nls_disk);
>>>>>
>>>>> kfree(HFS_SB(sb)->bitmap);
>>>>> kfree(HFS_SB(sb));
>>>>> sb->s_fs_info = NULL;
>>>>> }
>>>>>
>>>>> So, we need likewise course of actions for HFS+ because we have multiple
>>>>> pointers in superblock too:
>>>>>
>>>> IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in
>>>> christian's patch because fill_super() (for the each specific
>>>> filesystem) is responsible for cleaning up the superblock in case of
>>>> failure and you can reference christian's patch[1] which he explained
>>>> the reasoning for here[2].And in the error path the we are trying to
>>>> fix, fill_super() isn't even called yet. So such pointers shouldn't be
>>>> pointing to anything allocated yet hence only freeing the pointer to the
>>>> sb_info here is sufficient I think.
>
> I was confused that your code with hfs_mdb_put() is still in this email. So,
> yes, hfs_fill_super()/hfsplus_fill_super() try to free the memory in the case of
> failure. It means that if something wasn't been freed, then it will be issue in
> these methods. Then, I don't see what should else need to be added here. Some
> file systems do sb->s_fs_info = NULL. But absence of this statement is not
> critical, from my point of view.
>
Thanks for the input. I will send the patch mentioned above after
testing it, and also after finishing my testing of the hfs patch.

>
> Thanks,
> Slava.
> Best Regards, Mehdi Ben Hadj Khelifa >>>> [1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27 >>>> [2]:https://lore.kernel.org/all/20251119-delfin-bioladen-6bf291941d4f@brauner/ >>>>> struct hfsplus_sb_info { >>>>> void *s_vhdr_buf; >>>>> struct hfsplus_vh *s_vhdr; >>>>> void *s_backup_vhdr_buf; >>>>> struct hfsplus_vh *s_backup_vhdr; >>>>> struct hfs_btree *ext_tree; >>>>> struct hfs_btree *cat_tree; >>>>> struct hfs_btree *attr_tree; >>>>> atomic_t attr_tree_state; >>>>> struct inode *alloc_file; >>>>> struct inode *hidden_dir; >>>>> struct nls_table *nls; >>>>> >>>>> /* Runtime variables */ >>>>> u32 blockoffset; >>>>> u32 min_io_size; >>>>> sector_t part_start; >>>>> sector_t sect_count; >>>>> int fs_shift; >>>>> >>>>> /* immutable data from the volume header */ >>>>> u32 alloc_blksz; >>>>> int alloc_blksz_shift; >>>>> u32 total_blocks; >>>>> u32 data_clump_blocks, rsrc_clump_blocks; >>>>> >>>>> /* mutable data from the volume header, protected by alloc_mutex */ >>>>> u32 free_blocks; >>>>> struct mutex alloc_mutex; >>>>> >>>>> /* mutable data from the volume header, protected by vh_mutex */ >>>>> u32 next_cnid; >>>>> u32 file_count; >>>>> u32 folder_count; >>>>> struct mutex vh_mutex; >>>>> >>>>> /* Config options */ >>>>> u32 creator; >>>>> u32 type; >>>>> >>>>> umode_t umask; >>>>> kuid_t uid; >>>>> kgid_t gid; >>>>> >>>>> int part, session; >>>>> unsigned long flags; >>>>> >>>>> int work_queued; /* non-zero delayed work is queued */ >>>>> struct delayed_work sync_work; /* FS sync delayed work */ >>>>> spinlock_t work_lock; /* protects sync_work and work_queued */ >>>>> struct rcu_head rcu; >>>>> }; >>>>> >>>> >>>> >>>>> Thanks, >>>>> Slava. >>>>> >>>> Best Regards, >>>> Mehdi Ben Hadj Khelifa >>>> >>>>>> >>>>>>>> + .kill_sb = hfs_kill_sb, >>>>>>>> .fs_flags = FS_REQUIRES_DEV, >>>>>>>> .init_fs_context = hfs_init_fs_context, >>>>>>>> }; >>>>>>> >>>>>>> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues >>>>>>> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694 ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure
2025-11-21 23:36 ` Mehdi Ben Hadj Khelifa
@ 2025-11-21 23:01 ` Viacheslav Dubeyko
2025-11-25 22:14 ` Mehdi Ben Hadj Khelifa
0 siblings, 1 reply; 28+ messages in thread
From: Viacheslav Dubeyko @ 2025-11-21 23:01 UTC (permalink / raw)
To: jack@suse.cz, glaubitz@physik.fu-berlin.de, slava@dubeyko.com,
frank.li@vivo.com, mehdi.benhadjkhelifa@gmail.com, brauner@kernel.org,
viro@zeniv.linux.org.uk
Cc: linux-kernel-mentees@lists.linuxfoundation.org,
linux-fsdevel@vger.kernel.org, david.hunter.linux@gmail.com,
skhan@linuxfoundation.org, linux-kernel@vger.kernel.org,
khalid@kernel.org, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com

On Sat, 2025-11-22 at 00:36 +0100, Mehdi Ben Hadj Khelifa wrote:
> On 11/21/25 11:28 PM, Viacheslav Dubeyko wrote:
> > On Sat, 2025-11-22 at 00:16 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote:
> > > > On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > > > On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote:
> > > > > > On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > > > > > On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote:
> > > > > > > > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > > > > > > >

<skipped>

> > > > > >
> > > > > IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in
> > > > > christian's patch because fill_super() (for the each specific
> > > > > filesystem) is responsible for cleaning up the superblock in case of
> > > > > failure and you can reference christian's patch[1] which he explained
> > > > > the reasoning for here[2].And in the error path the we are trying to
> > > > > fix, fill_super() isn't even called yet. So such pointers shouldn't be
> > > > > pointing to anything allocated yet hence only freeing the pointer to the
> > > > > sb_info here is sufficient I think.
> >
> > I was confused that your code with hfs_mdb_put() is still in this email. So,
> > yes, hfs_fill_super()/hfsplus_fill_super() try to free the memory in the case of
> > failure. It means that if something wasn't been freed, then it will be issue in
> > these methods. Then, I don't see what should else need to be added here. Some
> > file systems do sb->s_fs_info = NULL. But absence of this statement is not
> > critical, from my point of view.
> >
> Thanks for the input. I will be sending the same mentionned patch after
> doing testing for it and also after finishing my testing for the hfs
> patch too.
>

I am guessing... Should we consider introducing some xfstest, self-test, or
unit-test to detect this issue across the whole family of Linux file
systems?

Thanks,
Slava.

^ permalink raw reply [flat|nested] 28+ messages in thread
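One way such a test could force this failure path is sketched below. This
is only an illustration of the idea, not an existing xfstest; it assumes an
HFS image is already attached to /dev/loop0, that /mnt/test exists, and
that it runs as root on a kernel with kmemleak enabled. Holding the block
device open with O_EXCL from userspace makes the kernel's exclusive open of
the device during mount (bdev_file_open_by_dev() via setup_bdev_super())
fail with EBUSY before fill_super() ever runs; the leak can then be looked
for in /sys/kernel/debug/kmemleak.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
        /* Exclusive open: blocks the mount-time claim of the device. */
        int fd = open("/dev/loop0", O_RDONLY | O_EXCL);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* Expected to fail with EBUSY while we hold the exclusive open. */
        if (mount("/dev/loop0", "/mnt/test", "hfs", 0, NULL) < 0)
                perror("mount (expected EBUSY)");
        close(fd);
        return 0;
}

Without the fix, each failed mount like this would leak the sb_info
allocation; with the fix, kmemleak should stay quiet.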
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-21 23:01 ` Viacheslav Dubeyko @ 2025-11-25 22:14 ` Mehdi Ben Hadj Khelifa 2025-11-25 22:36 ` Viacheslav Dubeyko 2025-11-27 17:45 ` David Hunter 0 siblings, 2 replies; 28+ messages in thread From: Mehdi Ben Hadj Khelifa @ 2025-11-25 22:14 UTC (permalink / raw) To: Viacheslav Dubeyko, jack@suse.cz, glaubitz@physik.fu-berlin.de, slava@dubeyko.com, frank.li@vivo.com, brauner@kernel.org, viro@zeniv.linux.org.uk Cc: linux-kernel-mentees@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org, david.hunter.linux@gmail.com, skhan@linuxfoundation.org, linux-kernel@vger.kernel.org, khalid@kernel.org, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com On 11/22/25 12:01 AM, Viacheslav Dubeyko wrote: > On Sat, 2025-11-22 at 00:36 +0100, Mehdi Ben Hadj Khelifa wrote: >> On 11/21/25 11:28 PM, Viacheslav Dubeyko wrote: >>> On Sat, 2025-11-22 at 00:16 +0100, Mehdi Ben Hadj Khelifa wrote: >>>> On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote: >>>>> On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>> On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote: >>>>>>> On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>>>> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: >>>>>>>>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>>>>>> > > <skipped> > >>>>>>> >>>>>> IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in >>>>>> christian's patch because fill_super() (for the each specific >>>>>> filesystem) is responsible for cleaning up the superblock in case of >>>>>> failure and you can reference christian's patch[1] which he explained >>>>>> the reasoning for here[2].And in the error path the we are trying to >>>>>> fix, fill_super() isn't even called yet. So such pointers shouldn't be >>>>>> pointing to anything allocated yet hence only freeing the pointer to the >>>>>> sb_info here is sufficient I think. >>> >>> I was confused that your code with hfs_mdb_put() is still in this email. So, >>> yes, hfs_fill_super()/hfsplus_fill_super() try to free the memory in the case of >>> failure. It means that if something wasn't been freed, then it will be issue in >>> these methods. Then, I don't see what should else need to be added here. Some >>> file systems do sb->s_fs_info = NULL. But absence of this statement is not >>> critical, from my point of view. >>> >> Thanks for the input. I will be sending the same mentionned patch after >> doing testing for it and also after finishing my testing for the hfs >> patch too. >>> > > I am guessing... Should we consider to introduce some xfstest, self-test, or > unit-test to detect this issue in all Linux's file systems family? > Yes, it isn't that hard either; IIUC you just need to fail the bdev_file_open_by_dev() function somehow to trigger this error path. > Thanks, > Slava. So I wanted to update you on my testing for the hfs patch and the hfsplus patch. For the testing I used both my desktop PC and my laptop PC running the same configurations and the same Linux distribution, to have more accurate testing. There are three variants that I used for testing: a stable kernel, the 6.18-rc7 kernel with no patch, and the 6.18-rc7 kernel with the hfs or hfsplus patch. Firstly, I couldn't run the hfs tests due to mkfs.hfs being unavailable in my search for it. They all point to mkfs.hfsplus, and you pointed out that mkfs.hfsplus can create HFS filesystems with the -h flag, but in my case it doesn't.
I pointed out last time that I found a tool that does create HFS filesystems (it's called hformat), but the xfstests require the availability of mkfs.hfs and fsck.hfs in order to run. More help on this is needed before I can run the hfs tests. I also tested ext4, as you suggested, as a baseline to compare against. Here is my summary of testing: For the stable kernel 6.17.8: On desktop: ext4 tests ran successfully. hfsplus tests crash the PC around test generic/631. On laptop: ext4 and hfsplus tests ran successfully. For the 6.18-rc7 kernel: On desktop: ext4 tests ran successfully, with the same results as the stable kernel. hfsplus crashes on testing startup, for launching any test. On laptop: ext4 tests ran successfully, with the same results as the stable kernel. hfsplus crashes on testing startup, for launching any test. For the patched 6.18-rc7 kernel: same results on both the desktop and laptop PCs as with the unpatched 6.18-rc7 kernel. It should be noted that I have tried many different setups regarding the devices and their creation for the 6.18-rc7 kernel, and none of them worked. I still can't deduce what is causing the issue. If the tests work for you, my only assumption is that some dependency of xfstests is not met on my side, even though I made sure to cover them all, especially given the repeatedly failed testing... What could be the issue here on my end, if you have any idea? Also, should I send you the hfsplus patch from one of my earlier replies[1] for you to test too and maybe add it to hfsplus? Best Regards, Mehdi Ben Hadj Khelifa [1]: https://lore.kernel.org/all/3ad2e91e-2c7f-488b-a119-51d62a6e95b8@gmail.com/ ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-25 22:14 ` Mehdi Ben Hadj Khelifa @ 2025-11-25 22:36 ` Viacheslav Dubeyko 2025-11-27 17:45 ` David Hunter 1 sibling, 0 replies; 28+ messages in thread From: Viacheslav Dubeyko @ 2025-11-25 22:36 UTC (permalink / raw) To: jack@suse.cz, glaubitz@physik.fu-berlin.de, slava@dubeyko.com, frank.li@vivo.com, mehdi.benhadjkhelifa@gmail.com, brauner@kernel.org, viro@zeniv.linux.org.uk Cc: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com, linux-kernel-mentees@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org, david.hunter.linux@gmail.com, skhan@linuxfoundation.org, khalid@kernel.org, linux-kernel@vger.kernel.org On Tue, 2025-11-25 at 23:14 +0100, Mehdi Ben Hadj Khelifa wrote: > On 11/22/25 12:01 AM, Viacheslav Dubeyko wrote: > > On Sat, 2025-11-22 at 00:36 +0100, Mehdi Ben Hadj Khelifa wrote: > > > On 11/21/25 11:28 PM, Viacheslav Dubeyko wrote: > > > > On Sat, 2025-11-22 at 00:16 +0100, Mehdi Ben Hadj Khelifa wrote: > > > > > On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote: > > > > > > On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote: > > > > > > > On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote: > > > > > > > > On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote: > > > > > > > > > On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: > > > > > > > > > > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: > > > > > > > > > > > > > > > <skipped> > > > > > > > > > > > > > > > > > IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in > > > > > > > christian's patch because fill_super() (for the each specific > > > > > > > filesystem) is responsible for cleaning up the superblock in case of > > > > > > > failure and you can reference christian's patch[1] which he explained > > > > > > > the reasoning for here[2].And in the error path the we are trying to > > > > > > > fix, fill_super() isn't even called yet. So such pointers shouldn't be > > > > > > > pointing to anything allocated yet hence only freeing the pointer to the > > > > > > > sb_info here is sufficient I think. > > > > > > > > I was confused that your code with hfs_mdb_put() is still in this email. So, > > > > yes, hfs_fill_super()/hfsplus_fill_super() try to free the memory in the case of > > > > failure. It means that if something wasn't been freed, then it will be issue in > > > > these methods. Then, I don't see what should else need to be added here. Some > > > > file systems do sb->s_fs_info = NULL. But absence of this statement is not > > > > critical, from my point of view. > > > > > > > Thanks for the input. I will be sending the same mentionned patch after > > > doing testing for it and also after finishing my testing for the hfs > > > patch too. > > > > > > > > I am guessing... Should we consider to introduce some xfstest, self-test, or > > unit-test to detect this issue in all Linux's file systems family? > > > Yes, It isn't that hard either IIUC you just need to fail the > bdev_file_open_by_dev() function somehow to trigger this error path.. > > Thanks, > > Slava. > > So I wanted to update you on my testing for the hfs patch and the > hfsplus patch. For the testing I used both my desktop pc and my laptop > pc running the same configuraitons and the same linux distribution to > have more accurate testing. There are three variants that I used for > testing : A stable kernel, 6.18-rc7 kernel with no patch, 6.18-rc7 > kernel with hfs or hfsplus patch. 
> > Firstly, I couldn't run the hfs tests due to mkfs.hfs being unavailable > in my search for it. they all point to mkfs.hfsplus and you pointed out > that mkfs.hfsplus can create hfs filesystems with the -h flag but in my > case it doesn't. I pointed out last time that I found a tool to create > HFS filesystems which it does (it's called hformat) but the xfstests > require the availability of mkfs.hfs and fsck.hfs for them to run. More > help on this is needed for me to run hfs tests. I also tested ext4 as > you have suggested as a base to compare to. Here is my summary of testing: > > For Stable kernel 6.17.8: > > On desktop: > ext4 tests ran successfully. > hfsplus tests crash the pc around generic 631 test. > > On Laptop: > ext4 and hfsplus tests ran successfully. > > For 6.18-rc7 kernel: > > On desktop: > ext4 tests ran successfully same results as the stable kernel. > hfsplus crashes on testing startup.For launching any test. > > On Laptop: > ext4 tests ran successfully same results as the stable kernel. > hfsplus crashes on testing startup.For launcing any test. > > > For the patched 6.18-rc7 kernel. > > Same results for both desktop and laptop pcs as in the 6.18-rc7 kernel. > > > Should be noted that I have tried many different setups regarding the > devices and their creation for the 6.18-rc7 kernel and none of them > worked.Still I can't deduce what is causing the issue.If they work for > you, my only assumption is that some dependency of xfstests is not met > on my part even though I made sure that I do cover them all especially > with repeatedly failed testing... > > What could be the issue here on my end if you have any idea? Which particular crash do you have? Could you please share the call trace? Let me build the latest kernel and run xfstests again on my side. > > Also should I send you the hfsplus patch in one of my earlier replies[1] > for you to test too and maybe add it to hfsplus? > > Let's clarify at first what's going on with xfstests on your side. Thanks, Slava. ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-25 22:14 ` Mehdi Ben Hadj Khelifa 2025-11-25 22:36 ` Viacheslav Dubeyko @ 2025-11-27 17:45 ` David Hunter 2025-11-29 13:06 ` Mehdi Ben Hadj Khelifa 1 sibling, 1 reply; 28+ messages in thread From: David Hunter @ 2025-11-27 17:45 UTC (permalink / raw) To: Mehdi Ben Hadj Khelifa, Viacheslav Dubeyko, jack@suse.cz, glaubitz@physik.fu-berlin.de, slava@dubeyko.com, frank.li@vivo.com, brauner@kernel.org, viro@zeniv.linux.org.uk Cc: linux-kernel-mentees@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org, skhan@linuxfoundation.org, linux-kernel@vger.kernel.org, khalid@kernel.org, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com On 11/25/25 17:14, Mehdi Ben Hadj Khelifa wrote: > On 11/22/25 12:01 AM, Viacheslav Dubeyko wrote: >> On Sat, 2025-11-22 at 00:36 +0100, Mehdi Ben Hadj Khelifa wrote: >>> On 11/21/25 11:28 PM, Viacheslav Dubeyko wrote: >>>> On Sat, 2025-11-22 at 00:16 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>> On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote: >>>>>> On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>>> On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote: >>>>>>>> On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>>>>> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: >>>>>>>>>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>>>>>>> >> >> <skipped> >> >>>>>>>> >>>>>>> IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in >>>>>>> christian's patch because fill_super() (for the each specific >>>>>>> filesystem) is responsible for cleaning up the superblock in case of >>>>>>> failure and you can reference christian's patch[1] which he >>>>>>> explained >>>>>>> the reasoning for here[2].And in the error path the we are trying to >>>>>>> fix, fill_super() isn't even called yet. So such pointers >>>>>>> shouldn't be >>>>>>> pointing to anything allocated yet hence only freeing the pointer >>>>>>> to the >>>>>>> sb_info here is sufficient I think. >>>> >>>> I was confused that your code with hfs_mdb_put() is still in this >>>> email. So, >>>> yes, hfs_fill_super()/hfsplus_fill_super() try to free the memory in >>>> the case of >>>> failure. It means that if something wasn't been freed, then it will >>>> be issue in >>>> these methods. Then, I don't see what should else need to be added >>>> here. Some >>>> file systems do sb->s_fs_info = NULL. But absence of this statement >>>> is not >>>> critical, from my point of view. >>>> >>> Thanks for the input. I will be sending the same mentionned patch after >>> doing testing for it and also after finishing my testing for the hfs >>> patch too. >>>> >> >> I am guessing... Should we consider to introduce some xfstest, self- >> test, or >> unit-test to detect this issue in all Linux's file systems family? >> > Yes, It isn't that hard either IIUC you just need to fail the > bdev_file_open_by_dev() function somehow to trigger this error path.. >> Thanks, >> Slava. > > So I wanted to update you on my testing for the hfs patch and the > hfsplus patch. For the testing I used both my desktop pc and my laptop > pc running the same configuraitons and the same linux distribution to > have more accurate testing. There are three variants that I used for > testing : A stable kernel, 6.18-rc7 kernel with no patch, 6.18-rc7 > kernel with hfs or hfsplus patch. > > Firstly, I couldn't run the hfs tests due to mkfs.hfs being unavailable > in my search for it. 
they all point to mkfs.hfsplus and you pointed out > that mkfs.hfsplus can create hfs filesystems with the -h flag but in my > case it doesn't. I pointed out last time that I found a tool to create > HFS filesystems which it does (it's called hformat) but the xfstests > require the availability of mkfs.hfs and fsck.hfs for them to run. More > help on this is needed for me to run hfs tests. I also tested ext4 as > you have suggested as a base to compare to. Here is my summary of testing: > > For Stable kernel 6.17.8: > > On desktop: > ext4 tests ran successfully. > hfsplus tests crash the pc around generic 631 test. > > On Laptop: > ext4 and hfsplus tests ran successfully. > > For 6.18-rc7 kernel: > > On desktop: > ext4 tests ran successfully same results as the stable kernel. > hfsplus crashes on testing startup.For launching any test. > > On Laptop: > ext4 tests ran successfully same results as the stable kernel. > hfsplus crashes on testing startup.For launcing any test. > > > For the patched 6.18-rc7 kernel. > > Same results for both desktop and laptop pcs as in the 6.18-rc7 kernel. > > > Should be noted that I have tried many different setups regarding the > devices and their creation for the 6.18-rc7 kernel and none of them > worked.Still I can't deduce what is causing the issue.If they work for > you, my only assumption is that some dependency of xfstests is not met > on my part even though I made sure that I do cover them all especially > with repeatedly failed testing... > > What could be the issue here on my end if you have any idea? > > Also should I send you the hfsplus patch in one of my earlier replies[1] > for you to test too and maybe add it to hfsplus? > > Best Regards, > Mehdi Ben Hadj Khelifa > > [1]:https://lore.kernel.org/all/3ad2e91e-2c7f-488b- > a119-51d62a6e95b8@gmail.com/ > > > > > > Hey everyone, I am helping Shuah with the Linux Kernel Mentorship Program. I wanted to report that many mentees have also had problems with xfstests recently. I am tracking what happens here, so I can help future developers, but I just wanted to let everyone know that others are also having issues with xfstests. Also, Mehdi, by any chance have you used any of the configuration targets like "localmodconfig". I am wondering if something in your configuration file is missing. Thanks, David Hunter ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-27 17:45 ` David Hunter @ 2025-11-29 13:06 ` Mehdi Ben Hadj Khelifa 0 siblings, 0 replies; 28+ messages in thread From: Mehdi Ben Hadj Khelifa @ 2025-11-29 13:06 UTC (permalink / raw) To: David Hunter, Viacheslav Dubeyko, jack@suse.cz, glaubitz@physik.fu-berlin.de, slava@dubeyko.com, frank.li@vivo.com, brauner@kernel.org, viro@zeniv.linux.org.uk Cc: linux-kernel-mentees@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org, skhan@linuxfoundation.org, linux-kernel@vger.kernel.org, khalid@kernel.org, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com On 11/27/25 6:45 PM, David Hunter wrote: > On 11/25/25 17:14, Mehdi Ben Hadj Khelifa wrote: >> On 11/22/25 12:01 AM, Viacheslav Dubeyko wrote: >>> On Sat, 2025-11-22 at 00:36 +0100, Mehdi Ben Hadj Khelifa wrote: >>>> On 11/21/25 11:28 PM, Viacheslav Dubeyko wrote: >>>>> On Sat, 2025-11-22 at 00:16 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>> On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote: >>>>>>> On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>>>> On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote: >>>>>>>>> On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>>>>>> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote: >>>>>>>>>>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>>>>>>>> >>> >>> <skipped> >>> >>>>>>>>> >>>>>>>> IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in >>>>>>>> christian's patch because fill_super() (for the each specific >>>>>>>> filesystem) is responsible for cleaning up the superblock in case of >>>>>>>> failure and you can reference christian's patch[1] which he >>>>>>>> explained >>>>>>>> the reasoning for here[2].And in the error path the we are trying to >>>>>>>> fix, fill_super() isn't even called yet. So such pointers >>>>>>>> shouldn't be >>>>>>>> pointing to anything allocated yet hence only freeing the pointer >>>>>>>> to the >>>>>>>> sb_info here is sufficient I think. >>>>> >>>>> I was confused that your code with hfs_mdb_put() is still in this >>>>> email. So, >>>>> yes, hfs_fill_super()/hfsplus_fill_super() try to free the memory in >>>>> the case of >>>>> failure. It means that if something wasn't been freed, then it will >>>>> be issue in >>>>> these methods. Then, I don't see what should else need to be added >>>>> here. Some >>>>> file systems do sb->s_fs_info = NULL. But absence of this statement >>>>> is not >>>>> critical, from my point of view. >>>>> >>>> Thanks for the input. I will be sending the same mentionned patch after >>>> doing testing for it and also after finishing my testing for the hfs >>>> patch too. >>>>> >>> >>> I am guessing... Should we consider to introduce some xfstest, self- >>> test, or >>> unit-test to detect this issue in all Linux's file systems family? >>> >> Yes, It isn't that hard either IIUC you just need to fail the >> bdev_file_open_by_dev() function somehow to trigger this error path.. >>> Thanks, >>> Slava. >> >> So I wanted to update you on my testing for the hfs patch and the >> hfsplus patch. For the testing I used both my desktop pc and my laptop >> pc running the same configuraitons and the same linux distribution to >> have more accurate testing. There are three variants that I used for >> testing : A stable kernel, 6.18-rc7 kernel with no patch, 6.18-rc7 >> kernel with hfs or hfsplus patch. >> >> Firstly, I couldn't run the hfs tests due to mkfs.hfs being unavailable >> in my search for it. 
they all point to mkfs.hfsplus and you pointed out >> that mkfs.hfsplus can create hfs filesystems with the -h flag but in my >> case it doesn't. I pointed out last time that I found a tool to create >> HFS filesystems which it does (it's called hformat) but the xfstests >> require the availability of mkfs.hfs and fsck.hfs for them to run. More >> help on this is needed for me to run hfs tests. I also tested ext4 as >> you have suggested as a base to compare to. Here is my summary of testing: >> >> For Stable kernel 6.17.8: >> >> On desktop: >> ext4 tests ran successfully. >> hfsplus tests crash the pc around generic 631 test. >> >> On Laptop: >> ext4 and hfsplus tests ran successfully. >> >> For 6.18-rc7 kernel: >> >> On desktop: >> ext4 tests ran successfully same results as the stable kernel. >> hfsplus crashes on testing startup.For launching any test. >> >> On Laptop: >> ext4 tests ran successfully same results as the stable kernel. >> hfsplus crashes on testing startup.For launcing any test. >> >> >> For the patched 6.18-rc7 kernel. >> >> Same results for both desktop and laptop pcs as in the 6.18-rc7 kernel. >> >> >> Should be noted that I have tried many different setups regarding the >> devices and their creation for the 6.18-rc7 kernel and none of them >> worked.Still I can't deduce what is causing the issue.If they work for >> you, my only assumption is that some dependency of xfstests is not met >> on my part even though I made sure that I do cover them all especially >> with repeatedly failed testing... >> >> What could be the issue here on my end if you have any idea? >> >> Also should I send you the hfsplus patch in one of my earlier replies[1] >> for you to test too and maybe add it to hfsplus? >> >> Best Regards, >> Mehdi Ben Hadj Khelifa >> >> [1]:https://lore.kernel.org/all/3ad2e91e-2c7f-488b- >> a119-51d62a6e95b8@gmail.com/ >> >> >> >> >> >> > > Hey everyone, > > I am helping Shuah with the Linux Kernel Mentorship Program. I wanted to > report that many mentees have also had problems with xfstests recently. > > I am tracking what happens here, so I can help future developers, but I > just wanted to let everyone know that others are also having issues with > xfstests. > > Also, Mehdi, by any chance have you used any of the configuration > targets like "localmodconfig". I am wondering if something in your > configuration file is missing. > No, I have not. Thanks for the suggestion. I have been messing with a lot of syzbot configurations, but IIRC I always made sure to run make mrproper and to copy my in-use .config file before building for my PC. I still want to reproduce the same crash I hit before, to see what caused the problem, but my assumption for the time being is that localmodconfig would fix it. I will keep you updated on the matter too, David. Thanks. > Thanks, > David Hunter Best Regards, Mehdi Ben Hadj Khelifa ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-19 19:58 ` Viacheslav Dubeyko 2025-11-19 22:21 ` Mehdi Ben Hadj Khelifa 2025-11-21 19:44 ` Mehdi Ben Hadj Khelifa @ 2025-11-26 13:48 ` Christian Brauner 2025-11-26 16:06 ` Mehdi Ben Hadj Khelifa 2 siblings, 1 reply; 28+ messages in thread From: Christian Brauner @ 2025-11-26 13:48 UTC (permalink / raw) To: Viacheslav Dubeyko Cc: glaubitz@physik.fu-berlin.de, frank.li@vivo.com, slava@dubeyko.com, mehdi.benhadjkhelifa@gmail.com, viro@zeniv.linux.org.uk, jack@suse.cz, khalid@kernel.org, linux-kernel-mentees@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, skhan@linuxfoundation.org, david.hunter.linux@gmail.com, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com [-- Attachment #1: Type: text/plain, Size: 4017 bytes --] On Wed, Nov 19, 2025 at 07:58:21PM +0000, Viacheslav Dubeyko wrote: > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: > > The regression introduced by commit aca740cecbe5 ("fs: open block device > > after superblock creation") allows setup_bdev_super() to fail after a new > > superblock has been allocated by sget_fc(), but before hfs_fill_super() > > takes ownership of the filesystem-specific s_fs_info data. > > > > In that case, hfs_put_super() and the failure paths of hfs_fill_super() > > are never reached, leaving the HFS mdb structures attached to s->s_fs_info > > unreleased.The default kill_block_super() teardown also does not free > > HFS-specific resources, resulting in a memory leak on early mount failure. > > > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from > > hfs_put_super() and the hfs_fill_super() failure path into a dedicated > > hfs_kill_sb() implementation. This ensures that both normal unmount and > > early teardown paths (including setup_bdev_super() failure) correctly > > release HFS metadata. > > > > This also preserves the intended layering: generic_shutdown_super() > > handles VFS-side cleanup, while HFS filesystem state is fully destroyed > > afterwards. > > > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation") > > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 > > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > Suggested-by: Al Viro <viro@zeniv.linux.org.uk> > > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> > > --- > > ChangeLog: > > > > Changes from v1: > > > > -Changed the patch direction to focus on hfs changes specifically as > > suggested by al viro > > > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ > > > > Note:This patch might need some more testing as I only did run selftests > > with no regression, check dmesg output for no regression, run reproducer > > with no bug and test it with syzbot as well. > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests > failures for HFS now. And you can check the list of known issues here [1]. The > main point of such run of xfstests is to check that maybe some issue(s) could be > fixed by the patch. And, more important that you don't introduce new issues. 
;) > > > > fs/hfs/super.c | 16 ++++++++++++---- > > 1 file changed, 12 insertions(+), 4 deletions(-) > > > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c > > index 47f50fa555a4..06e1c25e47dc 100644 > > --- a/fs/hfs/super.c > > +++ b/fs/hfs/super.c > > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) > > { > > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); > > hfs_mdb_close(sb); > > - /* release the MDB's resources */ > > - hfs_mdb_put(sb); > > } > > > > static void flush_mdb(struct work_struct *work) > > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) > > bail_no_root: > > pr_err("get root inode failed\n"); > > bail: > > - hfs_mdb_put(sb); > > return res; > > } > > > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) > > return 0; > > } > > > > +static void hfs_kill_sb(struct super_block *sb) > > +{ > > + generic_shutdown_super(sb); > > + hfs_mdb_put(sb); > > + if (sb->s_bdev) { > > + sync_blockdev(sb->s_bdev); > > + bdev_fput(sb->s_bdev_file); > > + } > > + > > +} > > + > > static struct file_system_type hfs_fs_type = { > > .owner = THIS_MODULE, > > .name = "hfs", > > - .kill_sb = kill_block_super, > > It looks like we have the same issue for the case of HFS+ [2]. Could you please > double check that HFS+ should be fixed too? There's no need to open-code this unless I'm missing something. All you need is the following two patches - untested. Both issues were introduced by the conversion to the new mount api.

[-- Attachment #2: 0001-hfs-ensure-sb-s_fs_info-is-always-cleaned-up.patch --]
[-- Type: text/x-diff, Size: 4593 bytes --]

From 058747cefb26196f3c192c76c631051581b29b27 Mon Sep 17 00:00:00 2001
From: Christian Brauner <brauner@kernel.org>
Date: Wed, 19 Nov 2025 14:30:51 +0100
Subject: [PATCH 1/2] hfs: ensure sb->s_fs_info is always cleaned up

When hfs was converted to the new mount api a bug was introduced by
changing the allocation pattern of sb->s_fs_info. If setup_bdev_super()
fails after a new superblock has been allocated by sget_fc(), but
before hfs_fill_super() takes ownership of the filesystem-specific
s_fs_info data, it was leaked.

Fix this by freeing sb->s_fs_info in hfs_kill_super().

Fixes: ffcd06b6d13b ("hfs: convert hfs to use the new mount api")
Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
 fs/hfs/mdb.c   | 35 ++++++++++++++---------------------
 fs/hfs/super.c | 10 +++++++++-
 2 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
index 53f3fae60217..f28cd24dee84 100644
--- a/fs/hfs/mdb.c
+++ b/fs/hfs/mdb.c
@@ -92,7 +92,7 @@ int hfs_mdb_get(struct super_block *sb)
 		/* See if this is an HFS filesystem */
 		bh = sb_bread512(sb, part_start + HFS_MDB_BLK, mdb);
 		if (!bh)
-			goto out;
+			return -EIO;
 
 		if (mdb->drSigWord == cpu_to_be16(HFS_SUPER_MAGIC))
 			break;
@@ -102,13 +102,14 @@ int hfs_mdb_get(struct super_block *sb)
 		 * (should do this only for cdrom/loop though)
 		 */
 		if (hfs_part_find(sb, &part_start, &part_size))
-			goto out;
+			return -EIO;
 	}
 
 	HFS_SB(sb)->alloc_blksz = size = be32_to_cpu(mdb->drAlBlkSiz);
 	if (!size || (size & (HFS_SECTOR_SIZE - 1))) {
 		pr_err("bad allocation block size %d\n", size);
-		goto out_bh;
+		brelse(bh);
+		return -EIO;
 	}
 
 	size = min(HFS_SB(sb)->alloc_blksz, (u32)PAGE_SIZE);
@@ -125,14 +126,16 @@ int hfs_mdb_get(struct super_block *sb)
 	brelse(bh);
 	if (!sb_set_blocksize(sb, size)) {
 		pr_err("unable to set blocksize to %u\n", size);
-		goto out;
+		return -EIO;
 	}
 
 	bh = sb_bread512(sb, part_start + HFS_MDB_BLK, mdb);
 	if (!bh)
-		goto out;
-	if (mdb->drSigWord != cpu_to_be16(HFS_SUPER_MAGIC))
-		goto out_bh;
+		return -EIO;
+	if (mdb->drSigWord != cpu_to_be16(HFS_SUPER_MAGIC)) {
+		brelse(bh);
+		return -EIO;
+	}
 
 	HFS_SB(sb)->mdb_bh = bh;
 	HFS_SB(sb)->mdb = mdb;
@@ -174,7 +177,7 @@ int hfs_mdb_get(struct super_block *sb)
 
 	HFS_SB(sb)->bitmap = kzalloc(8192, GFP_KERNEL);
 	if (!HFS_SB(sb)->bitmap)
-		goto out;
+		return -EIO;
 
 	/* read in the bitmap */
 	block = be16_to_cpu(mdb->drVBMSt) + part_start;
@@ -185,7 +188,7 @@ int hfs_mdb_get(struct super_block *sb)
 		bh = sb_bread(sb, off >> sb->s_blocksize_bits);
 		if (!bh) {
 			pr_err("unable to read volume bitmap\n");
-			goto out;
+			return -EIO;
 		}
 		off2 = off & (sb->s_blocksize - 1);
 		len = min((int)sb->s_blocksize - off2, size);
@@ -199,12 +202,12 @@ int hfs_mdb_get(struct super_block *sb)
 	HFS_SB(sb)->ext_tree = hfs_btree_open(sb, HFS_EXT_CNID, hfs_ext_keycmp);
 	if (!HFS_SB(sb)->ext_tree) {
 		pr_err("unable to open extent tree\n");
-		goto out;
+		return -EIO;
 	}
 	HFS_SB(sb)->cat_tree = hfs_btree_open(sb, HFS_CAT_CNID, hfs_cat_keycmp);
 	if (!HFS_SB(sb)->cat_tree) {
 		pr_err("unable to open catalog tree\n");
-		goto out;
+		return -EIO;
 	}
 
 	attrib = mdb->drAtrb;
@@ -229,12 +232,6 @@ int hfs_mdb_get(struct super_block *sb)
 	}
 
 	return 0;
-
-out_bh:
-	brelse(bh);
-out:
-	hfs_mdb_put(sb);
-	return -EIO;
 }
 
 /*
@@ -359,8 +356,6 @@ void hfs_mdb_close(struct super_block *sb)
  * Release the resources associated with the in-core MDB.
  */
 void hfs_mdb_put(struct super_block *sb)
 {
-	if (!HFS_SB(sb))
-		return;
 	/* free the B-trees */
 	hfs_btree_close(HFS_SB(sb)->ext_tree);
 	hfs_btree_close(HFS_SB(sb)->cat_tree);
@@ -373,6 +368,4 @@ void hfs_mdb_put(struct super_block *sb)
 
 	unload_nls(HFS_SB(sb)->nls_disk);
 
 	kfree(HFS_SB(sb)->bitmap);
-	kfree(HFS_SB(sb));
-	sb->s_fs_info = NULL;
 }
diff --git a/fs/hfs/super.c b/fs/hfs/super.c
index 47f50fa555a4..df289cbdd4e8 100644
--- a/fs/hfs/super.c
+++ b/fs/hfs/super.c
@@ -431,10 +431,18 @@ static int hfs_init_fs_context(struct fs_context *fc)
 	return 0;
 }
 
+static void hfs_kill_super(struct super_block *sb)
+{
+	struct hfs_sb_info *hsb = HFS_SB(sb);
+
+	kill_block_super(sb);
+	kfree(hsb);
+}
+
 static struct file_system_type hfs_fs_type = {
 	.owner = THIS_MODULE,
 	.name = "hfs",
-	.kill_sb = kill_block_super,
+	.kill_sb = hfs_kill_super,
 	.fs_flags = FS_REQUIRES_DEV,
 	.init_fs_context = hfs_init_fs_context,
 };
-- 
2.47.3

[-- Attachment #3: 0002-hfsplus-ensure-sb-s_fs_info-is-always-cleaned-up.patch --]
[-- Type: text/x-diff, Size: 2053 bytes --]

From 54b8d07c3e1a22dc07ecb03415edd96524d14642 Mon Sep 17 00:00:00 2001
From: Christian Brauner <brauner@kernel.org>
Date: Wed, 26 Nov 2025 14:43:24 +0100
Subject: [PATCH 2/2] hfsplus: ensure sb->s_fs_info is always cleaned up

When hfsplus was converted to the new mount api a bug was introduced by
changing the allocation pattern of sb->s_fs_info. If setup_bdev_super()
fails after a new superblock has been allocated by sget_fc(), but
before hfsplus_fill_super() takes ownership of the filesystem-specific
s_fs_info data, it was leaked.

Fix this by freeing sb->s_fs_info in hfsplus_kill_super().

Fixes: 432f7c78cb00 ("hfsplus: convert hfsplus to use the new mount api")
Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
 fs/hfsplus/super.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
index 16bc4abc67e0..8734520f6419 100644
--- a/fs/hfsplus/super.c
+++ b/fs/hfsplus/super.c
@@ -328,8 +328,6 @@ static void hfsplus_put_super(struct super_block *sb)
 	hfs_btree_close(sbi->ext_tree);
 	kfree(sbi->s_vhdr_buf);
 	kfree(sbi->s_backup_vhdr_buf);
-	call_rcu(&sbi->rcu, delayed_free);
-
 	hfs_dbg("finished\n");
 }
 
@@ -629,7 +627,6 @@ static int hfsplus_fill_super(struct super_block *sb, struct fs_context *fc)
 out_unload_nls:
 	unload_nls(sbi->nls);
 	unload_nls(nls);
-	kfree(sbi);
 	return err;
 }
 
@@ -688,10 +685,18 @@ static int hfsplus_init_fs_context(struct fs_context *fc)
 	return 0;
 }
 
+static void hfsplus_kill_super(struct super_block *sb)
+{
+	struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
+
+	kill_block_super(sb);
+	call_rcu(&sbi->rcu, delayed_free);
+}
+
 static struct file_system_type hfsplus_fs_type = {
 	.owner = THIS_MODULE,
 	.name = "hfsplus",
-	.kill_sb = kill_block_super,
+	.kill_sb = hfsplus_kill_super,
 	.fs_flags = FS_REQUIRES_DEV,
 	.init_fs_context = hfsplus_init_fs_context,
 };
-- 
2.47.3

^ permalink raw reply related [flat|nested] 28+ messages in thread
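The "no need to open-code this" point is easier to see next to the body of kill_block_super() itself, which already performs everything the open-coded hfs_kill_sb() from v2 did. Roughly, again from my reading of fs/super.c (exact details depend on the kernel version):

/*
 * Approximate body of kill_block_super() in recent kernels, for
 * comparison with the open-coded hfs_kill_sb() quoted earlier in
 * the thread; not a verbatim copy.
 */
void kill_block_super(struct super_block *sb)
{
        struct block_device *bdev = sb->s_bdev;

        generic_shutdown_super(sb);
        if (bdev) {
                sync_blockdev(bdev);
                bdev_fput(sb->s_bdev_file);
        }
}

Wrapping it, as the two patches above do, keeps the generic teardown in one place and leaves the wrapper with a single job: releasing the filesystem-private allocation (kfree() for hfs, call_rcu() for hfsplus).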
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-26 13:48 ` Christian Brauner @ 2025-11-26 16:06 ` Mehdi Ben Hadj Khelifa 2025-11-26 22:30 ` Viacheslav Dubeyko 0 siblings, 1 reply; 28+ messages in thread From: Mehdi Ben Hadj Khelifa @ 2025-11-26 16:06 UTC (permalink / raw) To: Christian Brauner, Viacheslav Dubeyko Cc: glaubitz@physik.fu-berlin.de, frank.li@vivo.com, slava@dubeyko.com, viro@zeniv.linux.org.uk, jack@suse.cz, khalid@kernel.org, linux-kernel-mentees@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, skhan@linuxfoundation.org, david.hunter.linux@gmail.com, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com On 11/26/25 2:48 PM, Christian Brauner wrote: > On Wed, Nov 19, 2025 at 07:58:21PM +0000, Viacheslav Dubeyko wrote: >> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >>> The regression introduced by commit aca740cecbe5 ("fs: open block device >>> after superblock creation") allows setup_bdev_super() to fail after a new >>> superblock has been allocated by sget_fc(), but before hfs_fill_super() >>> takes ownership of the filesystem-specific s_fs_info data. >>> >>> In that case, hfs_put_super() and the failure paths of hfs_fill_super() >>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info >>> unreleased.The default kill_block_super() teardown also does not free >>> HFS-specific resources, resulting in a memory leak on early mount failure. >>> >>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from >>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated >>> hfs_kill_sb() implementation. This ensures that both normal unmount and >>> early teardown paths (including setup_bdev_super() failure) correctly >>> release HFS metadata. >>> >>> This also preserves the intended layering: generic_shutdown_super() >>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed >>> afterwards. >>> >>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation") >>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 >>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk> >>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> >>> --- >>> ChangeLog: >>> >>> Changes from v1: >>> >>> -Changed the patch direction to focus on hfs changes specifically as >>> suggested by al viro >>> >>> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ >>> >>> Note:This patch might need some more testing as I only did run selftests >>> with no regression, check dmesg output for no regression, run reproducer >>> with no bug and test it with syzbot as well. >> >> Have you run xfstests for the patch? Unfortunately, we have multiple xfstests >> failures for HFS now. And you can check the list of known issues here [1]. The >> main point of such run of xfstests is to check that maybe some issue(s) could be >> fixed by the patch. And, more important that you don't introduce new issues. 
;) >> >>>> fs/hfs/super.c | 16 ++++++++++++---- >>>> 1 file changed, 12 insertions(+), 4 deletions(-) >>>> >>>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c >>>> index 47f50fa555a4..06e1c25e47dc 100644 >>>> --- a/fs/hfs/super.c >>>> +++ b/fs/hfs/super.c >>>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) >>>> { >>>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); >>>> hfs_mdb_close(sb); >>>> - /* release the MDB's resources */ >>>> - hfs_mdb_put(sb); >>>> } >>>> >>>> static void flush_mdb(struct work_struct *work) >>>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) >>>> bail_no_root: >>>> pr_err("get root inode failed\n"); >>>> bail: >>>> - hfs_mdb_put(sb); >>>> return res; >>>> } >>>> >>>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) >>>> return 0; >>>> } >>>> >>>> +static void hfs_kill_sb(struct super_block *sb) >>>> +{ >>>> + generic_shutdown_super(sb); >>>> + hfs_mdb_put(sb); >>>> + if (sb->s_bdev) { >>>> + sync_blockdev(sb->s_bdev); >>>> + bdev_fput(sb->s_bdev_file); >>>> + } >>>> + >>>> +} >>>> + >>>> static struct file_system_type hfs_fs_type = { >>>> .owner = THIS_MODULE, >>>> .name = "hfs", >>>> - .kill_sb = kill_block_super, >>> >>> It looks like we have the same issue for the case of HFS+ [2]. Could you please >>> double check that HFS+ should be fixed too? >> >> There's no need to open-code this unless I'm missing something. All you >> need is the following two patches - untested. Both issues were >> introduced by the conversion to the new mount api. Yes, I don't think open-coding is needed here IIUC; also, as I mentioned before, I went by Al Viro's suggestion in previous replies, which is my main reason for doing it that way in the first place. Also, Slava and I are working on testing the mentioned patches. Should I send them from my side to the maintainers and mailing lists once testing has been done? Best Regards, Mehdi Ben Hadj Khelifa ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-26 16:06 ` Mehdi Ben Hadj Khelifa @ 2025-11-26 22:30 ` Viacheslav Dubeyko 2025-11-27 8:59 ` Christian Brauner 0 siblings, 1 reply; 28+ messages in thread From: Viacheslav Dubeyko @ 2025-11-26 22:30 UTC (permalink / raw) To: brauner@kernel.org, mehdi.benhadjkhelifa@gmail.com Cc: jack@suse.cz, khalid@kernel.org, frank.li@vivo.com, linux-fsdevel@vger.kernel.org, slava@dubeyko.com, david.hunter.linux@gmail.com, linux-kernel@vger.kernel.org, linux-kernel-mentees@lists.linuxfoundation.org, glaubitz@physik.fu-berlin.de, viro@zeniv.linux.org.uk, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com, skhan@linuxfoundation.org On Wed, 2025-11-26 at 17:06 +0100, Mehdi Ben Hadj Khelifa wrote: > On 11/26/25 2:48 PM, Christian Brauner wrote: > > On Wed, Nov 19, 2025 at 07:58:21PM +0000, Viacheslav Dubeyko wrote: > > > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: > > > > The regression introduced by commit aca740cecbe5 ("fs: open block device > > > > after superblock creation") allows setup_bdev_super() to fail after a new > > > > superblock has been allocated by sget_fc(), but before hfs_fill_super() > > > > takes ownership of the filesystem-specific s_fs_info data. > > > > > > > > In that case, hfs_put_super() and the failure paths of hfs_fill_super() > > > > are never reached, leaving the HFS mdb structures attached to s->s_fs_info > > > > unreleased.The default kill_block_super() teardown also does not free > > > > HFS-specific resources, resulting in a memory leak on early mount failure. > > > > > > > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from > > > > hfs_put_super() and the hfs_fill_super() failure path into a dedicated > > > > hfs_kill_sb() implementation. This ensures that both normal unmount and > > > > early teardown paths (including setup_bdev_super() failure) correctly > > > > release HFS metadata. > > > > > > > > This also preserves the intended layering: generic_shutdown_super() > > > > handles VFS-side cleanup, while HFS filesystem state is fully destroyed > > > > afterwards. > > > > > > > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation") > > > > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 > > > > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > > Suggested-by: Al Viro <viro@zeniv.linux.org.uk> > > > > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> > > > > --- > > > > ChangeLog: > > > > > > > > Changes from v1: > > > > > > > > -Changed the patch direction to focus on hfs changes specifically as > > > > suggested by al viro > > > > > > > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ > > > > > > > > Note:This patch might need some more testing as I only did run selftests > > > > with no regression, check dmesg output for no regression, run reproducer > > > > with no bug and test it with syzbot as well. > > > > > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests > > > failures for HFS now. And you can check the list of known issues here [1]. The > > > main point of such run of xfstests is to check that maybe some issue(s) could be > > > fixed by the patch. And, more important that you don't introduce new issues. 
;) > > > > > > > > > > > fs/hfs/super.c | 16 ++++++++++++---- > > > > 1 file changed, 12 insertions(+), 4 deletions(-) > > > > > > > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c > > > > index 47f50fa555a4..06e1c25e47dc 100644 > > > > --- a/fs/hfs/super.c > > > > +++ b/fs/hfs/super.c > > > > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) > > > > { > > > > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); > > > > hfs_mdb_close(sb); > > > > - /* release the MDB's resources */ > > > > - hfs_mdb_put(sb); > > > > } > > > > > > > > static void flush_mdb(struct work_struct *work) > > > > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) > > > > bail_no_root: > > > > pr_err("get root inode failed\n"); > > > > bail: > > > > - hfs_mdb_put(sb); > > > > return res; > > > > } > > > > > > > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) > > > > return 0; > > > > } > > > > > > > > +static void hfs_kill_sb(struct super_block *sb) > > > > +{ > > > > + generic_shutdown_super(sb); > > > > + hfs_mdb_put(sb); > > > > + if (sb->s_bdev) { > > > > + sync_blockdev(sb->s_bdev); > > > > + bdev_fput(sb->s_bdev_file); > > > > + } > > > > + > > > > +} > > > > + > > > > static struct file_system_type hfs_fs_type = { > > > > .owner = THIS_MODULE, > > > > .name = "hfs", > > > > - .kill_sb = kill_block_super, > > > > > > It looks like we have the same issue for the case of HFS+ [2]. Could you please > > > double check that HFS+ should be fixed too? > > > > There's no need to open-code this unless I'm missing something. All you > > need is the following two patches - untested. Both issues were > > introduced by the conversion to the new mount api. > Yes, I don't think open-code is needed here IIUC, also as I mentionned > before I went by the suggestion of Al Viro in previous replies that's my > main reason for doing it that way in the first place. > > Also me and Slava are working on testing the mentionned patches, Should > I sent them from my part to the maintainers and mailing lists once > testing has been done? > > I have run the xfstests on the latest kernel. Everything works as expected: sudo ./check -g auto FSTYP -- hfsplus PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7 #97 SMP PREEMPT_DYNAMIC Tue Nov 25 15:12:42 PST 2025 MKFS_OPTIONS -- /dev/loop51 MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch generic/001 22s ... 53s generic/002 17s ... 43s <skipped> Failures: generic/003 generic/013 generic/020 generic/034 generic/037 generic/039 generic/040 generic/041 generic/056 generic/057 generic/062 generic/065 generic/066 generic/069 generic/070 generic/073 generic/074 generic/079 generic/091 generic/097 generic/101 generic/104 generic/106 generic/107 generic/113 generic/127 generic/241 generic/258 generic/263 generic/285 generic/321 generic/322 generic/335 generic/336 generic/337 generic/339 generic/341 generic/342 generic/343 generic/348 generic/363 generic/376 generic/377 generic/405 generic/412 generic/418 generic/464 generic/471 generic/475 generic/479 generic/480 generic/481 generic/489 generic/490 generic/498 generic/502 generic/510 generic/523 generic/525 generic/526 generic/527 generic/533 generic/534 generic/535 generic/547 generic/551 generic/552 generic/557 generic/563 generic/564 generic/617 generic/631 generic/637 generic/640 generic/642 generic/647 generic/650 generic/690 generic/728 generic/729 generic/760 generic/764 generic/771 generic/776 Failed 84 of 767 tests Currently, failures are expected. 
But I don't see any serious crash, and especially not on every single test. So, I can apply the two patches that Christian shared and test them on my side. I had the impression that Christian had already taken the patch for HFS into his tree. Am I wrong here? I can take both patches into the HFS/HFS+ tree. Let me run xfstests with the patches applied first. Thanks, Slava. ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-26 22:30 ` Viacheslav Dubeyko @ 2025-11-27 8:59 ` Christian Brauner 2025-11-27 20:19 ` Viacheslav Dubeyko 0 siblings, 1 reply; 28+ messages in thread From: Christian Brauner @ 2025-11-27 8:59 UTC (permalink / raw) To: Viacheslav Dubeyko Cc: mehdi.benhadjkhelifa@gmail.com, jack@suse.cz, khalid@kernel.org, frank.li@vivo.com, linux-fsdevel@vger.kernel.org, slava@dubeyko.com, david.hunter.linux@gmail.com, linux-kernel@vger.kernel.org, linux-kernel-mentees@lists.linuxfoundation.org, glaubitz@physik.fu-berlin.de, viro@zeniv.linux.org.uk, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com, skhan@linuxfoundation.org On Wed, Nov 26, 2025 at 10:30:30PM +0000, Viacheslav Dubeyko wrote: > On Wed, 2025-11-26 at 17:06 +0100, Mehdi Ben Hadj Khelifa wrote: > > On 11/26/25 2:48 PM, Christian Brauner wrote: > > > On Wed, Nov 19, 2025 at 07:58:21PM +0000, Viacheslav Dubeyko wrote: > > > > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: > > > > > The regression introduced by commit aca740cecbe5 ("fs: open block device > > > > > after superblock creation") allows setup_bdev_super() to fail after a new > > > > > superblock has been allocated by sget_fc(), but before hfs_fill_super() > > > > > takes ownership of the filesystem-specific s_fs_info data. > > > > > > > > > > In that case, hfs_put_super() and the failure paths of hfs_fill_super() > > > > > are never reached, leaving the HFS mdb structures attached to s->s_fs_info > > > > > unreleased.The default kill_block_super() teardown also does not free > > > > > HFS-specific resources, resulting in a memory leak on early mount failure. > > > > > > > > > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from > > > > > hfs_put_super() and the hfs_fill_super() failure path into a dedicated > > > > > hfs_kill_sb() implementation. This ensures that both normal unmount and > > > > > early teardown paths (including setup_bdev_super() failure) correctly > > > > > release HFS metadata. > > > > > > > > > > This also preserves the intended layering: generic_shutdown_super() > > > > > handles VFS-side cleanup, while HFS filesystem state is fully destroyed > > > > > afterwards. > > > > > > > > > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation") > > > > > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > > > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 > > > > > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > > > Suggested-by: Al Viro <viro@zeniv.linux.org.uk> > > > > > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> > > > > > --- > > > > > ChangeLog: > > > > > > > > > > Changes from v1: > > > > > > > > > > -Changed the patch direction to focus on hfs changes specifically as > > > > > suggested by al viro > > > > > > > > > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ > > > > > > > > > > Note:This patch might need some more testing as I only did run selftests > > > > > with no regression, check dmesg output for no regression, run reproducer > > > > > with no bug and test it with syzbot as well. > > > > > > > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests > > > > failures for HFS now. And you can check the list of known issues here [1]. The > > > > main point of such run of xfstests is to check that maybe some issue(s) could be > > > > fixed by the patch. 
And, more important that you don't introduce new issues. ;) > > > > > > > > > > > > > > fs/hfs/super.c | 16 ++++++++++++---- > > > > > 1 file changed, 12 insertions(+), 4 deletions(-) > > > > > > > > > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c > > > > > index 47f50fa555a4..06e1c25e47dc 100644 > > > > > --- a/fs/hfs/super.c > > > > > +++ b/fs/hfs/super.c > > > > > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) > > > > > { > > > > > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); > > > > > hfs_mdb_close(sb); > > > > > - /* release the MDB's resources */ > > > > > - hfs_mdb_put(sb); > > > > > } > > > > > > > > > > static void flush_mdb(struct work_struct *work) > > > > > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) > > > > > bail_no_root: > > > > > pr_err("get root inode failed\n"); > > > > > bail: > > > > > - hfs_mdb_put(sb); > > > > > return res; > > > > > } > > > > > > > > > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) > > > > > return 0; > > > > > } > > > > > > > > > > +static void hfs_kill_sb(struct super_block *sb) > > > > > +{ > > > > > + generic_shutdown_super(sb); > > > > > + hfs_mdb_put(sb); > > > > > + if (sb->s_bdev) { > > > > > + sync_blockdev(sb->s_bdev); > > > > > + bdev_fput(sb->s_bdev_file); > > > > > + } > > > > > + > > > > > +} > > > > > + > > > > > static struct file_system_type hfs_fs_type = { > > > > > .owner = THIS_MODULE, > > > > > .name = "hfs", > > > > > - .kill_sb = kill_block_super, > > > > > > > > It looks like we have the same issue for the case of HFS+ [2]. Could you please > > > > double check that HFS+ should be fixed too? > > > > > > There's no need to open-code this unless I'm missing something. All you > > > need is the following two patches - untested. Both issues were > > > introduced by the conversion to the new mount api. > > Yes, I don't think open-code is needed here IIUC, also as I mentionned > > before I went by the suggestion of Al Viro in previous replies that's my > > main reason for doing it that way in the first place. > > > > Also me and Slava are working on testing the mentionned patches, Should > > I sent them from my part to the maintainers and mailing lists once > > testing has been done? > > > > > > I have run the xfstests on the latest kernel. Everything works as expected: > > sudo ./check -g auto > FSTYP -- hfsplus > PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7 #97 SMP > PREEMPT_DYNAMIC Tue Nov 25 15:12:42 PST 2025 > MKFS_OPTIONS -- /dev/loop51 > MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch > > generic/001 22s ... 53s > generic/002 17s ... 
43s > > <skipped> > > Failures: generic/003 generic/013 generic/020 generic/034 generic/037 > generic/039 generic/040 generic/041 generic/056 generic/057 generic/062 > generic/065 generic/066 generic/069 generic/070 generic/073 generic/074 > generic/079 generic/091 generic/097 generic/101 generic/104 generic/106 > generic/107 generic/113 generic/127 generic/241 generic/258 generic/263 > generic/285 generic/321 generic/322 generic/335 generic/336 generic/337 > generic/339 generic/341 generic/342 generic/343 generic/348 generic/363 > generic/376 generic/377 generic/405 generic/412 generic/418 generic/464 > generic/471 generic/475 generic/479 generic/480 generic/481 generic/489 > generic/490 generic/498 generic/502 generic/510 generic/523 generic/525 > generic/526 generic/527 generic/533 generic/534 generic/535 generic/547 > generic/551 generic/552 generic/557 generic/563 generic/564 generic/617 > generic/631 generic/637 generic/640 generic/642 generic/647 generic/650 > generic/690 generic/728 generic/729 generic/760 generic/764 generic/771 > generic/776 > Failed 84 of 767 tests > > Currently, failures are expected. But I don't see any serious crash, especially, > on every single test. > > So, I can apply two patches that Christian shared and test it on my side. > > I had impression that Christian has taken the patch for HFS already in his tree. > Am I wrong here? I can take both patches in HFS/HFS+ tree. Let me run xfstests > with applied patches at first. Feel free to take them. ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-27 8:59 ` Christian Brauner @ 2025-11-27 20:19 ` Viacheslav Dubeyko 2025-11-29 12:48 ` Mehdi Ben Hadj Khelifa 0 siblings, 1 reply; 28+ messages in thread From: Viacheslav Dubeyko @ 2025-11-27 20:19 UTC (permalink / raw) To: brauner@kernel.org Cc: jack@suse.cz, khalid@kernel.org, frank.li@vivo.com, mehdi.benhadjkhelifa@gmail.com, slava@dubeyko.com, david.hunter.linux@gmail.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kernel-mentees@lists.linuxfoundation.org, glaubitz@physik.fu-berlin.de, viro@zeniv.linux.org.uk, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com, skhan@linuxfoundation.org On Thu, 2025-11-27 at 09:59 +0100, Christian Brauner wrote: > On Wed, Nov 26, 2025 at 10:30:30PM +0000, Viacheslav Dubeyko wrote: > > On Wed, 2025-11-26 at 17:06 +0100, Mehdi Ben Hadj Khelifa wrote: > > > On 11/26/25 2:48 PM, Christian Brauner wrote: > > > > On Wed, Nov 19, 2025 at 07:58:21PM +0000, Viacheslav Dubeyko wrote: > > > > > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: > > > > > > The regression introduced by commit aca740cecbe5 ("fs: open block device > > > > > > after superblock creation") allows setup_bdev_super() to fail after a new > > > > > > superblock has been allocated by sget_fc(), but before hfs_fill_super() > > > > > > takes ownership of the filesystem-specific s_fs_info data. > > > > > > > > > > > > In that case, hfs_put_super() and the failure paths of hfs_fill_super() > > > > > > are never reached, leaving the HFS mdb structures attached to s->s_fs_info > > > > > > unreleased.The default kill_block_super() teardown also does not free > > > > > > HFS-specific resources, resulting in a memory leak on early mount failure. > > > > > > > > > > > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from > > > > > > hfs_put_super() and the hfs_fill_super() failure path into a dedicated > > > > > > hfs_kill_sb() implementation. This ensures that both normal unmount and > > > > > > early teardown paths (including setup_bdev_super() failure) correctly > > > > > > release HFS metadata. > > > > > > > > > > > > This also preserves the intended layering: generic_shutdown_super() > > > > > > handles VFS-side cleanup, while HFS filesystem state is fully destroyed > > > > > > afterwards. > > > > > > > > > > > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation") > > > > > > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > > > > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 > > > > > > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com > > > > > > Suggested-by: Al Viro <viro@zeniv.linux.org.uk> > > > > > > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> > > > > > > --- > > > > > > ChangeLog: > > > > > > > > > > > > Changes from v1: > > > > > > > > > > > > -Changed the patch direction to focus on hfs changes specifically as > > > > > > suggested by al viro > > > > > > > > > > > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ > > > > > > > > > > > > Note:This patch might need some more testing as I only did run selftests > > > > > > with no regression, check dmesg output for no regression, run reproducer > > > > > > with no bug and test it with syzbot as well. > > > > > > > > > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests > > > > > failures for HFS now. 
And you can check the list of known issues here [1]. The > > > > > main point of such run of xfstests is to check that maybe some issue(s) could be > > > > > fixed by the patch. And, more important that you don't introduce new issues. ;) > > > > > > > > > > > > > > > > > fs/hfs/super.c | 16 ++++++++++++---- > > > > > > 1 file changed, 12 insertions(+), 4 deletions(-) > > > > > > > > > > > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c > > > > > > index 47f50fa555a4..06e1c25e47dc 100644 > > > > > > --- a/fs/hfs/super.c > > > > > > +++ b/fs/hfs/super.c > > > > > > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) > > > > > > { > > > > > > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); > > > > > > hfs_mdb_close(sb); > > > > > > - /* release the MDB's resources */ > > > > > > - hfs_mdb_put(sb); > > > > > > } > > > > > > > > > > > > static void flush_mdb(struct work_struct *work) > > > > > > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) > > > > > > bail_no_root: > > > > > > pr_err("get root inode failed\n"); > > > > > > bail: > > > > > > - hfs_mdb_put(sb); > > > > > > return res; > > > > > > } > > > > > > > > > > > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) > > > > > > return 0; > > > > > > } > > > > > > > > > > > > +static void hfs_kill_sb(struct super_block *sb) > > > > > > +{ > > > > > > + generic_shutdown_super(sb); > > > > > > + hfs_mdb_put(sb); > > > > > > + if (sb->s_bdev) { > > > > > > + sync_blockdev(sb->s_bdev); > > > > > > + bdev_fput(sb->s_bdev_file); > > > > > > + } > > > > > > + > > > > > > +} > > > > > > + > > > > > > static struct file_system_type hfs_fs_type = { > > > > > > .owner = THIS_MODULE, > > > > > > .name = "hfs", > > > > > > - .kill_sb = kill_block_super, > > > > > > > > > > It looks like we have the same issue for the case of HFS+ [2]. Could you please > > > > > double check that HFS+ should be fixed too? > > > > > > > > There's no need to open-code this unless I'm missing something. All you > > > > need is the following two patches - untested. Both issues were > > > > introduced by the conversion to the new mount api. > > > Yes, I don't think open-code is needed here IIUC, also as I mentionned > > > before I went by the suggestion of Al Viro in previous replies that's my > > > main reason for doing it that way in the first place. > > > > > > Also me and Slava are working on testing the mentionned patches, Should > > > I sent them from my part to the maintainers and mailing lists once > > > testing has been done? > > > > > > > > > > I have run the xfstests on the latest kernel. Everything works as expected: > > > > sudo ./check -g auto > > FSTYP -- hfsplus > > PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7 #97 SMP > > PREEMPT_DYNAMIC Tue Nov 25 15:12:42 PST 2025 > > MKFS_OPTIONS -- /dev/loop51 > > MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch > > > > generic/001 22s ... 53s > > generic/002 17s ... 
43s
> >
> > <skipped>
> >
> > Failures: generic/003 generic/013 generic/020 generic/034 generic/037
> > generic/039 generic/040 generic/041 generic/056 generic/057 generic/062
> > generic/065 generic/066 generic/069 generic/070 generic/073 generic/074
> > generic/079 generic/091 generic/097 generic/101 generic/104 generic/106
> > generic/107 generic/113 generic/127 generic/241 generic/258 generic/263
> > generic/285 generic/321 generic/322 generic/335 generic/336 generic/337
> > generic/339 generic/341 generic/342 generic/343 generic/348 generic/363
> > generic/376 generic/377 generic/405 generic/412 generic/418 generic/464
> > generic/471 generic/475 generic/479 generic/480 generic/481 generic/489
> > generic/490 generic/498 generic/502 generic/510 generic/523 generic/525
> > generic/526 generic/527 generic/533 generic/534 generic/535 generic/547
> > generic/551 generic/552 generic/557 generic/563 generic/564 generic/617
> > generic/631 generic/637 generic/640 generic/642 generic/647 generic/650
> > generic/690 generic/728 generic/729 generic/760 generic/764 generic/771
> > generic/776
> > Failed 84 of 767 tests
> >
> > Currently, failures are expected. But I don't see any serious crash, especially,
> > on every single test.
> >
> > So, I can apply two patches that Christian shared and test it on my side.
> >
> > I had impression that Christian has taken the patch for HFS already in his tree.
> > Am I wrong here? I can take both patches in HFS/HFS+ tree. Let me run xfstests
> > with applied patches at first.
>
> Feel free to take them.

Sounds good!

So, I have xfstests run results:

HFS without patch:

sudo ./check -g auto
FSTYP -- hfs
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch

<skipped>

Failed 140 of 766 tests

HFS with patch:

sudo ./check -g auto
FSTYP -- hfs
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch

<skipped>

Failed 139 of 766 tests

HFS+ without patch:

sudo ./check -g auto
FSTYP -- hfsplus
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7 #97 SMP
PREEMPT_DYNAMIC Tue Nov 25 15:12:42 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch

<skipped>

Failed 84 of 767 tests

HFS+ with patch:

sudo ./check -g
FSTYP -- hfsplus
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch

<skipped>

Failed 81 of 767 tests

As far as I can see, the situation is improving with the patches. I can say that
patches have been tested and I am ready to pick up the patches into HFS/HFS+
tree.

Mehdi, should I expect the formal patches from you? Or should I take the patches
as they are?

Thanks,
Slava.

^ permalink raw reply	[flat|nested] 28+ messages in thread
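The MKFS_OPTIONS and MOUNT_OPTIONS lines in these reports come from the
xfstests local.config file. A minimal configuration for loop-device runs like
the ones above might look as follows; the test device and test mount point are
illustrative assumptions, with only /dev/loop51 and /mnt/scratch taken from
the logs:

# local.config -- minimal xfstests setup for an hfsplus run
export FSTYP=hfsplus
export TEST_DEV=/dev/loop50      # assumed test device
export TEST_DIR=/mnt/test        # assumed test mount point
export SCRATCH_DEV=/dev/loop51   # scratch device, as in the logs
export SCRATCH_MNT=/mnt/scratch  # scratch mount point, as in the logs

Switching FSTYP to hfs and re-running ./check -g auto reproduces the HFS rows
of the comparison above.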
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure 2025-11-27 20:19 ` Viacheslav Dubeyko @ 2025-11-29 12:48 ` Mehdi Ben Hadj Khelifa 2025-12-01 19:24 ` Viacheslav Dubeyko 0 siblings, 1 reply; 28+ messages in thread From: Mehdi Ben Hadj Khelifa @ 2025-11-29 12:48 UTC (permalink / raw) To: Viacheslav Dubeyko, brauner@kernel.org Cc: jack@suse.cz, khalid@kernel.org, frank.li@vivo.com, slava@dubeyko.com, david.hunter.linux@gmail.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kernel-mentees@lists.linuxfoundation.org, glaubitz@physik.fu-berlin.de, viro@zeniv.linux.org.uk, syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com, skhan@linuxfoundation.org On 11/27/25 9:19 PM, Viacheslav Dubeyko wrote: > On Thu, 2025-11-27 at 09:59 +0100, Christian Brauner wrote: >> On Wed, Nov 26, 2025 at 10:30:30PM +0000, Viacheslav Dubeyko wrote: >>> On Wed, 2025-11-26 at 17:06 +0100, Mehdi Ben Hadj Khelifa wrote: >>>> On 11/26/25 2:48 PM, Christian Brauner wrote: >>>>> On Wed, Nov 19, 2025 at 07:58:21PM +0000, Viacheslav Dubeyko wrote: >>>>>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote: >>>>>>> The regression introduced by commit aca740cecbe5 ("fs: open block device >>>>>>> after superblock creation") allows setup_bdev_super() to fail after a new >>>>>>> superblock has been allocated by sget_fc(), but before hfs_fill_super() >>>>>>> takes ownership of the filesystem-specific s_fs_info data. >>>>>>> >>>>>>> In that case, hfs_put_super() and the failure paths of hfs_fill_super() >>>>>>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info >>>>>>> unreleased.The default kill_block_super() teardown also does not free >>>>>>> HFS-specific resources, resulting in a memory leak on early mount failure. >>>>>>> >>>>>>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from >>>>>>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated >>>>>>> hfs_kill_sb() implementation. This ensures that both normal unmount and >>>>>>> early teardown paths (including setup_bdev_super() failure) correctly >>>>>>> release HFS metadata. >>>>>>> >>>>>>> This also preserves the intended layering: generic_shutdown_super() >>>>>>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed >>>>>>> afterwards. >>>>>>> >>>>>>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation") >>>>>>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>>>>>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6 >>>>>>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com >>>>>>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk> >>>>>>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com> >>>>>>> --- >>>>>>> ChangeLog: >>>>>>> >>>>>>> Changes from v1: >>>>>>> >>>>>>> -Changed the patch direction to focus on hfs changes specifically as >>>>>>> suggested by al viro >>>>>>> >>>>>>> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/ >>>>>>> >>>>>>> Note:This patch might need some more testing as I only did run selftests >>>>>>> with no regression, check dmesg output for no regression, run reproducer >>>>>>> with no bug and test it with syzbot as well. >>>>>> >>>>>> Have you run xfstests for the patch? Unfortunately, we have multiple xfstests >>>>>> failures for HFS now. And you can check the list of known issues here [1]. 
The >>>>>> main point of such run of xfstests is to check that maybe some issue(s) could be >>>>>> fixed by the patch. And, more important that you don't introduce new issues. ;) >>>>>> >>>>>>> >>>>>>> fs/hfs/super.c | 16 ++++++++++++---- >>>>>>> 1 file changed, 12 insertions(+), 4 deletions(-) >>>>>>> >>>>>>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c >>>>>>> index 47f50fa555a4..06e1c25e47dc 100644 >>>>>>> --- a/fs/hfs/super.c >>>>>>> +++ b/fs/hfs/super.c >>>>>>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb) >>>>>>> { >>>>>>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work); >>>>>>> hfs_mdb_close(sb); >>>>>>> - /* release the MDB's resources */ >>>>>>> - hfs_mdb_put(sb); >>>>>>> } >>>>>>> >>>>>>> static void flush_mdb(struct work_struct *work) >>>>>>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc) >>>>>>> bail_no_root: >>>>>>> pr_err("get root inode failed\n"); >>>>>>> bail: >>>>>>> - hfs_mdb_put(sb); >>>>>>> return res; >>>>>>> } >>>>>>> >>>>>>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc) >>>>>>> return 0; >>>>>>> } >>>>>>> >>>>>>> +static void hfs_kill_sb(struct super_block *sb) >>>>>>> +{ >>>>>>> + generic_shutdown_super(sb); >>>>>>> + hfs_mdb_put(sb); >>>>>>> + if (sb->s_bdev) { >>>>>>> + sync_blockdev(sb->s_bdev); >>>>>>> + bdev_fput(sb->s_bdev_file); >>>>>>> + } >>>>>>> + >>>>>>> +} >>>>>>> + >>>>>>> static struct file_system_type hfs_fs_type = { >>>>>>> .owner = THIS_MODULE, >>>>>>> .name = "hfs", >>>>>>> - .kill_sb = kill_block_super, >>>>>> >>>>>> It looks like we have the same issue for the case of HFS+ [2]. Could you please >>>>>> double check that HFS+ should be fixed too? >>>>> >>>>> There's no need to open-code this unless I'm missing something. All you >>>>> need is the following two patches - untested. Both issues were >>>>> introduced by the conversion to the new mount api. >>>> Yes, I don't think open-code is needed here IIUC, also as I mentionned >>>> before I went by the suggestion of Al Viro in previous replies that's my >>>> main reason for doing it that way in the first place. >>>> >>>> Also me and Slava are working on testing the mentionned patches, Should >>>> I sent them from my part to the maintainers and mailing lists once >>>> testing has been done? >>>> >>>> >>> >>> I have run the xfstests on the latest kernel. Everything works as expected: >>> >>> sudo ./check -g auto >>> FSTYP -- hfsplus >>> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7 #97 SMP >>> PREEMPT_DYNAMIC Tue Nov 25 15:12:42 PST 2025 >>> MKFS_OPTIONS -- /dev/loop51 >>> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch >>> >>> generic/001 22s ... 53s >>> generic/002 17s ... 
43s
>>>
>>> <skipped>
>>>
>>> Failures: generic/003 generic/013 generic/020 generic/034 generic/037
>>> generic/039 generic/040 generic/041 generic/056 generic/057 generic/062
>>> generic/065 generic/066 generic/069 generic/070 generic/073 generic/074
>>> generic/079 generic/091 generic/097 generic/101 generic/104 generic/106
>>> generic/107 generic/113 generic/127 generic/241 generic/258 generic/263
>>> generic/285 generic/321 generic/322 generic/335 generic/336 generic/337
>>> generic/339 generic/341 generic/342 generic/343 generic/348 generic/363
>>> generic/376 generic/377 generic/405 generic/412 generic/418 generic/464
>>> generic/471 generic/475 generic/479 generic/480 generic/481 generic/489
>>> generic/490 generic/498 generic/502 generic/510 generic/523 generic/525
>>> generic/526 generic/527 generic/533 generic/534 generic/535 generic/547
>>> generic/551 generic/552 generic/557 generic/563 generic/564 generic/617
>>> generic/631 generic/637 generic/640 generic/642 generic/647 generic/650
>>> generic/690 generic/728 generic/729 generic/760 generic/764 generic/771
>>> generic/776
>>> Failed 84 of 767 tests
>>>
>>> Currently, failures are expected. But I don't see any serious crash, especially,
>>> on every single test.
>>>
>>> So, I can apply two patches that Christian shared and test it on my side.
>>>
>>> I had impression that Christian has taken the patch for HFS already in his tree.
>>> Am I wrong here? I can take both patches in HFS/HFS+ tree. Let me run xfstests
>>> with applied patches at first.
>>
>> Feel free to take them.
>
> Sounds good!
>
> So, I have xfstests run results:
>
> HFS without patch:
>
> sudo ./check -g auto
> FSTYP -- hfs
> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
> PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
> MKFS_OPTIONS -- /dev/loop51
> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
>
> <skipped>
>
> Failed 140 of 766 tests
>
> HFS with patch:
>
> sudo ./check -g auto
> FSTYP -- hfs
> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
> PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
> MKFS_OPTIONS -- /dev/loop51
> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
>
> <skipped>
>
> Failed 139 of 766 tests
>
> HFS+ without patch:
>
> sudo ./check -g auto
> FSTYP -- hfsplus
> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7 #97 SMP
> PREEMPT_DYNAMIC Tue Nov 25 15:12:42 PST 2025
> MKFS_OPTIONS -- /dev/loop51
> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
>
> <skipped>
>
> Failed 84 of 767 tests
>
> HFS+ with patch:
>
> sudo ./check -g
> FSTYP -- hfsplus
> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
> PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
> MKFS_OPTIONS -- /dev/loop51
> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
>
> <skipped>
>
> Failed 81 of 767 tests
>
> As far as I can see, the situation is improving with the patches. I can say that
> patches have been tested and I am ready to pick up the patches into HFS/HFS+
> tree.
>
> Mehdi, should I expect the formal patches from you? Or should I take the patches
> as they are?
>

I can send them from my part. Should I add my Signed-off-by tag appended
at the end?

Also, I want to apologize for the delayed/missing reply about the
xfstests crashes on my part. I went back to testing three days ago
and the runs started showing different results again, and then I
broke my finger, which has slowed my progress considerably. I'm
still working on reproducing the same crashes I saw before, when
any test would trigger them. Quick runs didn't crash; only the auto
group did, around generic/631 on my desktop and around generic/642
on my laptop, on both the unpatched and patched kernels. I'll
update you once I can get predictable behavior and a call stack for
the crash, but expect slow progress from my side for the reason
mentioned above.

> Thanks,
> Slava.

Best Regards,
Mehdi Ben Hadj Khelifa

^ permalink raw reply	[flat|nested] 28+ messages in thread
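When a full auto run dies around one particular test, as described above,
xfstests can rerun just that test, which makes it much easier to catch the
oops; generic/631 is taken from the report above:

# rerun only the suspect test, then inspect the kernel log
sudo ./check generic/631
dmesg | tail -n 100

Per-test output and diffs against the golden output land under
results/generic/ in the xfstests tree.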
* RE: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure
  2025-11-29 12:48             ` Mehdi Ben Hadj Khelifa
@ 2025-12-01 19:24               ` Viacheslav Dubeyko
  2025-12-01 21:19                 ` Mehdi Ben Hadj Khelifa
  0 siblings, 1 reply; 28+ messages in thread
From: Viacheslav Dubeyko @ 2025-12-01 19:24 UTC (permalink / raw)
  To: brauner@kernel.org, mehdi.benhadjkhelifa@gmail.com
  Cc: jack@suse.cz, khalid@kernel.org, frank.li@vivo.com,
	linux-fsdevel@vger.kernel.org, slava@dubeyko.com,
	david.hunter.linux@gmail.com, linux-kernel@vger.kernel.org,
	linux-kernel-mentees@lists.linuxfoundation.org,
	skhan@linuxfoundation.org, glaubitz@physik.fu-berlin.de,
	viro@zeniv.linux.org.uk,
	syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com

On Sat, 2025-11-29 at 13:48 +0100, Mehdi Ben Hadj Khelifa wrote:
> On 11/27/25 9:19 PM, Viacheslav Dubeyko wrote:
> >
> > <skipped>
> >
> > As far as I can see, the situation is improving with the patches. I can say that
> > patches have been tested and I am ready to pick up the patches into HFS/HFS+
> > tree.
> >
> > Mehdi, should I expect the formal patches from you? Or should I take the patches
> > as they are?
> >
>
> I can send them from my part. Should I add my Signed-off-by tag appended
> at the end?
>

If you are OK with the current commit message, then I can simply add your
Signed-off-by tag on my side. If you would like to polish the commit message
somehow, then I can wait for the patches from you. So, what is your decision?

>
> Also, I want to apologize for the delayed/missing reply about the
> xfstests crashes on my part. I went back to testing three days ago
> and the runs started showing different results again, and then I
> broke my finger, which has slowed my progress considerably. I'm
> still working on reproducing the same crashes I saw before, when
> any test would trigger them. Quick runs didn't crash; only the auto
> group did, around generic/631 on my desktop and around generic/642
> on my laptop, on both the unpatched and patched kernels. I'll
> update you once I can get predictable behavior and a call stack for
> the crash, but expect slow progress from my side for the reason
> mentioned above.
>

No problem. Take your time.

Thanks,
Slava.

^ permalink raw reply	[flat|nested] 28+ messages in thread
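For the "add your Signed-off-by on my side" option, the maintainer-side
mechanics are a single git invocation; the mbox filename here is illustrative:

# -s appends the applier's Signed-off-by trailer when committing
git am -s v2-fs-hfs-fix-s_fs_info-leak.mbox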
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure
  2025-12-01 19:24               ` Viacheslav Dubeyko
@ 2025-12-01 21:19                 ` Mehdi Ben Hadj Khelifa
  2025-12-01 20:37                   ` Viacheslav Dubeyko
  0 siblings, 1 reply; 28+ messages in thread
From: Mehdi Ben Hadj Khelifa @ 2025-12-01 21:19 UTC (permalink / raw)
  To: Viacheslav Dubeyko, brauner@kernel.org
  Cc: jack@suse.cz, khalid@kernel.org, frank.li@vivo.com,
	linux-fsdevel@vger.kernel.org, slava@dubeyko.com,
	david.hunter.linux@gmail.com, linux-kernel@vger.kernel.org,
	linux-kernel-mentees@lists.linuxfoundation.org,
	skhan@linuxfoundation.org, glaubitz@physik.fu-berlin.de,
	viro@zeniv.linux.org.uk,
	syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com

On 12/1/25 8:24 PM, Viacheslav Dubeyko wrote:
> On Sat, 2025-11-29 at 13:48 +0100, Mehdi Ben Hadj Khelifa wrote:
>> On 11/27/25 9:19 PM, Viacheslav Dubeyko wrote:
>>>
>>> <skipped>
>>>
>>> As far as I can see, the situation is improving with the patches. I can say that
>>> patches have been tested and I am ready to pick up the patches into HFS/HFS+
>>> tree.
>>>
>>> Mehdi, should I expect the formal patches from you? Or should I take the patches
>>> as they are?
>>>
>>
>> I can send them from my part. Should I add my Signed-off-by tag appended
>> at the end?
>>
>
> If you are OK with the current commit message, then I can simply add your
> Signed-off-by tag on my side. If you would like to polish the commit message
> somehow, then I can wait for the patches from you. So, what is your decision?
>
I would like to send the patches from my part as a v3, mainly so that it's
clearer on the mailing list what has happened, and maybe add a cover
letter to suggest that other filesystems could be affected too. If that
is not preferred, it's okay if you just add my Signed-off-by tag. The
commit message seems descriptive enough to me as it is.

Also, I wanted to ask two questions here:

1. Is adding a Cc to stable recommended here, so that these fixes get
backported into older stable kernels?

2. Is it normal to have the Reported-by and Fixes tags on the hfsplus
patch even though the reported bug is for HFS? I guess it falls under
the same discovered HFS bug, so it references that?

>> Also, I want to apologize for the delayed/missing reply about the
>> xfstests crashes on my part. I went back to testing three days ago
>> and the runs started showing different results again, and then I
>> broke my finger, which has slowed my progress considerably. I'm
>> still working on reproducing the same crashes I saw before, when
>> any test would trigger them. Quick runs didn't crash; only the auto
>> group did, around generic/631 on my desktop and around generic/642
>> on my laptop, on both the unpatched and patched kernels. I'll
>> update you once I can get predictable behavior and a call stack for
>> the crash, but expect slow progress from my side for the reason
>> mentioned above.
>>
>
> No problem. Take your time.
>
Thanks!

> Thanks,
> Slava.
>
Best Regards,
Mehdi Ben Hadj Khelifa

^ permalink raw reply	[flat|nested] 28+ messages in thread
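Producing the v3 series with a cover letter, as proposed above, is standard
git-format-patch usage; the assumption below is that the two fixes sit at the
tip of the branch:

# -v3 marks the re-roll, --cover-letter generates patch 0/2
git format-patch -v3 --cover-letter -2 HEAD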
* RE: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure
  2025-12-01 21:19                 ` Mehdi Ben Hadj Khelifa
@ 2025-12-01 20:37                   ` Viacheslav Dubeyko
  2025-12-01 21:57                     ` Mehdi Ben Hadj Khelifa
  0 siblings, 1 reply; 28+ messages in thread
From: Viacheslav Dubeyko @ 2025-12-01 20:37 UTC (permalink / raw)
  To: brauner@kernel.org, mehdi.benhadjkhelifa@gmail.com
  Cc: jack@suse.cz, khalid@kernel.org, frank.li@vivo.com,
	linux-fsdevel@vger.kernel.org, slava@dubeyko.com,
	david.hunter.linux@gmail.com, linux-kernel@vger.kernel.org,
	linux-kernel-mentees@lists.linuxfoundation.org,
	skhan@linuxfoundation.org, glaubitz@physik.fu-berlin.de,
	syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com,
	viro@zeniv.linux.org.uk

On Mon, 2025-12-01 at 22:19 +0100, Mehdi Ben Hadj Khelifa wrote:
> On 12/1/25 8:24 PM, Viacheslav Dubeyko wrote:
> > On Sat, 2025-11-29 at 13:48 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > On 11/27/25 9:19 PM, Viacheslav Dubeyko wrote:
> > > >
> > > > <skipped>
> > > >
> > > > As far as I can see, the situation is improving with the patches. I can say that
> > > > patches have been tested and I am ready to pick up the patches into HFS/HFS+
> > > > tree.
> > > >
> > > > Mehdi, should I expect the formal patches from you? Or should I take the patches
> > > > as they are?
> > > >
> > >
> > > I can send them from my part. Should I add my Signed-off-by tag appended
> > > at the end?
> > >
> >
> > If you are OK with the current commit message, then I can simply add your
> > Signed-off-by tag on my side. If you would like to polish the commit message
> > somehow, then I can wait for the patches from you. So, what is your decision?
> >
> I would like to send the patches from my part as a v3, mainly so that it's
> clearer on the mailing list what has happened, and maybe add a cover
> letter to suggest that other filesystems could be affected too. If that
> is not preferred, it's okay if you just add my Signed-off-by tag. The
> commit message seems descriptive enough to me as it is.
>

OK. Sounds good.

> Also, I wanted to ask two questions here:
>
> 1. Is adding a Cc to stable recommended here, so that these fixes get
> backported into older stable kernels?
>

I think it's good to have it.

> 2. Is it normal to have the Reported-by and Fixes tags on the hfsplus
> patch even though the reported bug is for HFS? I guess it falls under
> the same discovered HFS bug, so it references that?

So, we don't have a syzbot report for the HFS+ case. However, you can consider
me the reporter of the potential issue for HFS+. And I assume that the Fixes
tag should be different for the HFS+ case. Potentially, we could omit the
Fixes tag for HFS+ if you don't know what should be used as the Fixes tag
here.

Thanks,
Slava.

^ permalink raw reply	[flat|nested] 28+ messages in thread
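Following this guidance, the trailer block of the HFS+ patch might end up
looking like the sketch below. The Fixes line is a placeholder: the real hash
and subject of the hfsplus mount-API conversion commit would have to be looked
up, since only the HFS conversion commit is named in this thread:

Fixes: <hfsplus mount-api conversion commit>
Cc: stable@vger.kernel.org
Reported-by: Viacheslav Dubeyko <slava@dubeyko.com>
Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>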
* Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure
  2025-12-01 20:37                   ` Viacheslav Dubeyko
@ 2025-12-01 21:57                     ` Mehdi Ben Hadj Khelifa
  0 siblings, 0 replies; 28+ messages in thread
From: Mehdi Ben Hadj Khelifa @ 2025-12-01 21:57 UTC (permalink / raw)
  To: Viacheslav Dubeyko, brauner@kernel.org
  Cc: jack@suse.cz, khalid@kernel.org, frank.li@vivo.com,
	linux-fsdevel@vger.kernel.org, slava@dubeyko.com,
	david.hunter.linux@gmail.com, linux-kernel@vger.kernel.org,
	linux-kernel-mentees@lists.linuxfoundation.org,
	skhan@linuxfoundation.org, glaubitz@physik.fu-berlin.de,
	syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com,
	viro@zeniv.linux.org.uk

On 12/1/25 9:37 PM, Viacheslav Dubeyko wrote:
> On Mon, 2025-12-01 at 22:19 +0100, Mehdi Ben Hadj Khelifa wrote:
>> On 12/1/25 8:24 PM, Viacheslav Dubeyko wrote:
>>> On Sat, 2025-11-29 at 13:48 +0100, Mehdi Ben Hadj Khelifa wrote:
>>>> On 11/27/25 9:19 PM, Viacheslav Dubeyko wrote:
>>>>>
>>>>> <skipped>
>>>>>
>>>>> As far as I can see, the situation is improving with the patches. I can say that
>>>>> patches have been tested and I am ready to pick up the patches into HFS/HFS+
>>>>> tree.
>>>>>
>>>>> Mehdi, should I expect the formal patches from you? Or should I take the patches
>>>>> as they are?
>>>>>
>>>>
>>>> I can send them from my part. Should I add my Signed-off-by tag appended
>>>> at the end?
>>>>
>>>
>>> If you are OK with the current commit message, then I can simply add your
>>> Signed-off-by tag on my side. If you would like to polish the commit message
>>> somehow, then I can wait for the patches from you. So, what is your decision?
>>>
>> I would like to send the patches from my part as a v3, mainly so that it's
>> clearer on the mailing list what has happened, and maybe add a cover
>> letter to suggest that other filesystems could be affected too. If that
>> is not preferred, it's okay if you just add my Signed-off-by tag. The
>> commit message seems descriptive enough to me as it is.
>>
>
> OK. Sounds good.
>
>> Also, I wanted to ask two questions here:
>>
>> 1. Is adding a Cc to stable recommended here, so that these fixes get
>> backported into older stable kernels?
>>
>
> I think it's good to have it.
>
Okay, I will add it to both patches.

>> 2. Is it normal to have the Reported-by and Fixes tags on the hfsplus
>> patch even though the reported bug is for HFS? I guess it falls under
>> the same discovered HFS bug, so it references that?
>
> So, we don't have a syzbot report for the HFS+ case. However, you can consider
> me the reporter of the potential issue for HFS+. And I assume that the Fixes
> tag should be different for the HFS+ case. Potentially, we could omit the
> Fixes tag for HFS+ if you don't know what should be used as the Fixes tag
> here.
>
I will make sure to adjust the patches' descriptions as you suggested and
I will be sending them tonight or early tomorrow morning.

> Thanks,
> Slava.
>
Best Regards,
Mehdi Ben Hadj Khelifa

^ permalink raw reply	[flat|nested] 28+ messages in thread
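Before the v3 goes out, the usual pre-flight checks from the kernel tree
apply; the patch filenames below are illustrative:

# style and trailer sanity check
./scripts/checkpatch.pl --strict v3-000*.patch
# confirm the recipient list is still right
./scripts/get_maintainer.pl v3-0001-hfs-fix-s_fs_info-leak.patch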
Thread overview: 28+ messages

2025-11-19  7:38 [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure Mehdi Ben Hadj Khelifa
2025-11-19 14:14 ` Christian Brauner
2025-11-19 15:27 ` Mehdi Ben Hadj Khelifa
2025-11-19 19:58 ` Viacheslav Dubeyko
2025-11-19 22:21 ` Mehdi Ben Hadj Khelifa
2025-11-19 22:24 ` Mehdi Ben Hadj Khelifa
2025-11-21 19:44 ` Mehdi Ben Hadj Khelifa
2025-11-21 21:15 ` Viacheslav Dubeyko
2025-11-21 22:48 ` Mehdi Ben Hadj Khelifa
2025-11-21 22:04 ` Viacheslav Dubeyko
2025-11-21 23:16 ` Mehdi Ben Hadj Khelifa
2025-11-21 22:28 ` Viacheslav Dubeyko
2025-11-21 23:36 ` Mehdi Ben Hadj Khelifa
2025-11-21 23:01 ` Viacheslav Dubeyko
2025-11-25 22:14 ` Mehdi Ben Hadj Khelifa
2025-11-25 22:36 ` Viacheslav Dubeyko
2025-11-27 17:45 ` David Hunter
2025-11-29 13:06 ` Mehdi Ben Hadj Khelifa
2025-11-26 13:48 ` Christian Brauner
2025-11-26 16:06 ` Mehdi Ben Hadj Khelifa
2025-11-26 22:30 ` Viacheslav Dubeyko
2025-11-27  8:59 ` Christian Brauner
2025-11-27 20:19 ` Viacheslav Dubeyko
2025-11-29 12:48 ` Mehdi Ben Hadj Khelifa
2025-12-01 19:24 ` Viacheslav Dubeyko
2025-12-01 21:19 ` Mehdi Ben Hadj Khelifa
2025-12-01 20:37 ` Viacheslav Dubeyko
2025-12-01 21:57 ` Mehdi Ben Hadj Khelifa