* [PATCH] hfs: evaluate the upper 32bits for detecting overflow
@ 2026-02-11 10:29 Tetsuo Handa
2026-02-11 22:36 ` Viacheslav Dubeyko
0 siblings, 1 reply; 17+ messages in thread
From: Tetsuo Handa @ 2026-02-11 10:29 UTC (permalink / raw)
To: Jori Koolstra, Viacheslav Dubeyko, John Paul Adrian Glaubitz,
Yangtao Li
Cc: linux-fsdevel
Commit b226804532a8 ("hfs: Replace BUG_ON with error handling for CNID
count checks") replaced BUG_ON() with an "atomic64_inc_return() => check
for overflow => atomic64_dec() if overflowed" pattern. That approach works
because the 64-bit signed variable is initialized from a 32-bit unsigned
variable, ensuring that the initial value is in the [0, U32_MAX] range.
However, if HFS_SB(sb)->file_count is smaller than the number of file
inodes that actually exist due to filesystem corruption, calling
atomic64_dec(&HFS_SB(sb)->file_count) from hfs_delete_inode() can make
HFS_SB(sb)->file_count negative.
As a result, the "atomic64_read(&sbi->file_count) > U32_MAX" comparison in
is_hfs_cnid_counts_valid() fails to detect overflow when
HFS_SB(sb)->file_count is negative, because it is a comparison between a
signed 64-bit value and an unsigned 32-bit value. Evaluate the upper 32
bits of the 64-bit variable to detect overflow.
Fixes: b226804532a8 ("hfs: Replace BUG_ON with error handling for CNID count checks")
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
Only compile tested.
fs/hfs/inode.c | 6 +++---
fs/hfs/mdb.c | 6 +++---
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
index 878535db64d6..7b5a4686aa79 100644
--- a/fs/hfs/inode.c
+++ b/fs/hfs/inode.c
@@ -199,7 +199,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
spin_lock_init(&HFS_I(inode)->open_dir_lock);
hfs_cat_build_key(sb, (btree_key *)&HFS_I(inode)->cat_key, dir->i_ino, name);
next_id = atomic64_inc_return(&HFS_SB(sb)->next_id);
- if (next_id > U32_MAX) {
+ if (next_id >> 32) {
atomic64_dec(&HFS_SB(sb)->next_id);
pr_err("cannot create new inode: next CNID exceeds limit\n");
goto out_discard;
@@ -217,7 +217,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
if (S_ISDIR(mode)) {
inode->i_size = 2;
folder_count = atomic64_inc_return(&HFS_SB(sb)->folder_count);
- if (folder_count> U32_MAX) {
+ if (folder_count >> 32) {
atomic64_dec(&HFS_SB(sb)->folder_count);
pr_err("cannot create new inode: folder count exceeds limit\n");
goto out_discard;
@@ -231,7 +231,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
} else if (S_ISREG(mode)) {
HFS_I(inode)->clump_blocks = HFS_SB(sb)->clumpablks;
file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
- if (file_count > U32_MAX) {
+ if (file_count >> 32) {
atomic64_dec(&HFS_SB(sb)->file_count);
pr_err("cannot create new inode: file count exceeds limit\n");
goto out_discard;
diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
index a97cea35ca2e..68d3c0714057 100644
--- a/fs/hfs/mdb.c
+++ b/fs/hfs/mdb.c
@@ -69,15 +69,15 @@ bool is_hfs_cnid_counts_valid(struct super_block *sb)
struct hfs_sb_info *sbi = HFS_SB(sb);
bool corrupted = false;
- if (unlikely(atomic64_read(&sbi->next_id) > U32_MAX)) {
+ if (unlikely(atomic64_read(&sbi->next_id) >> 32)) {
pr_warn("next CNID exceeds limit\n");
corrupted = true;
}
- if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
+ if (unlikely(atomic64_read(&sbi->file_count) >> 32)) {
pr_warn("file count exceeds limit\n");
corrupted = true;
}
- if (unlikely(atomic64_read(&sbi->folder_count) > U32_MAX)) {
+ if (unlikely(atomic64_read(&sbi->folder_count) >> 32)) {
pr_warn("folder count exceeds limit\n");
corrupted = true;
}
--
2.53.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH] hfs: evaluate the upper 32bits for detecting overflow
2026-02-11 10:29 [PATCH] hfs: evaluate the upper 32bits for detecting overflow Tetsuo Handa
@ 2026-02-11 22:36 ` Viacheslav Dubeyko
2026-02-12 13:27 ` Tetsuo Handa
0 siblings, 1 reply; 17+ messages in thread
From: Viacheslav Dubeyko @ 2026-02-11 22:36 UTC (permalink / raw)
To: glaubitz@physik.fu-berlin.de, frank.li@vivo.com,
penguin-kernel@I-love.SAKURA.ne.jp, slava@dubeyko.com,
jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On Wed, 2026-02-11 at 19:29 +0900, Tetsuo Handa wrote:
> Commit b226804532a8 ("hfs: Replace BUG_ON with error handling for CNID
> count checks") replaced BUG_ON() with "atomic64_inc_return() => check for
> overflow => atomic64_dec() if overflowed" pattern. That approach works
> because the 64bits signed variable is initialized using a 32bits unsigned
> variable, making sure that the initial value is in [0, U32_MAX] range.
>
> However, if HFS_SB(sb)->file_count is smaller than number of file inodes
> that actually exists due to filesystem corruption, calling
> atomic64_dec(&HFS_SB(sb)->file_count) from hfs_delete_inode() can make
> HFS_SB(sb)->file_count < 0.
>
> As a result, "atomic64_read(&sbi->file_count) > U32_MAX" comparison in
> is_hfs_cnid_counts_valid() fails to detect overflow when
> HFS_SB(sb)->file_count < 0, for this is a comparison between signed
> 64bits and unsigned 32bits. Evaluate the upper 32bits of the 64bits
> variable for detecting overflow.
>
> Fixes: b226804532a8 ("hfs: Replace BUG_ON with error handling for CNID count checks")
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> ---
> Only compile tested.
Maybe, it makes sense to run some tests? :)
>
> fs/hfs/inode.c | 6 +++---
> fs/hfs/mdb.c | 6 +++---
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
> index 878535db64d6..7b5a4686aa79 100644
> --- a/fs/hfs/inode.c
> +++ b/fs/hfs/inode.c
> @@ -199,7 +199,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> spin_lock_init(&HFS_I(inode)->open_dir_lock);
> hfs_cat_build_key(sb, (btree_key *)&HFS_I(inode)->cat_key, dir->i_ino, name);
> next_id = atomic64_inc_return(&HFS_SB(sb)->next_id);
> - if (next_id > U32_MAX) {
> + if (next_id >> 32) {
What about upper_32_bits()?
> atomic64_dec(&HFS_SB(sb)->next_id);
> pr_err("cannot create new inode: next CNID exceeds limit\n");
> goto out_discard;
> @@ -217,7 +217,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> if (S_ISDIR(mode)) {
> inode->i_size = 2;
> folder_count = atomic64_inc_return(&HFS_SB(sb)->folder_count);
> - if (folder_count> U32_MAX) {
> + if (folder_count >> 32) {
Ditto.
> atomic64_dec(&HFS_SB(sb)->folder_count);
> pr_err("cannot create new inode: folder count exceeds limit\n");
> goto out_discard;
> @@ -231,7 +231,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> } else if (S_ISREG(mode)) {
> HFS_I(inode)->clump_blocks = HFS_SB(sb)->clumpablks;
> file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
> - if (file_count > U32_MAX) {
> + if (file_count >> 32) {
Ditto.
> atomic64_dec(&HFS_SB(sb)->file_count);
> pr_err("cannot create new inode: file count exceeds limit\n");
> goto out_discard;
> diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> index a97cea35ca2e..68d3c0714057 100644
> --- a/fs/hfs/mdb.c
> +++ b/fs/hfs/mdb.c
> @@ -69,15 +69,15 @@ bool is_hfs_cnid_counts_valid(struct super_block *sb)
> struct hfs_sb_info *sbi = HFS_SB(sb);
> bool corrupted = false;
>
> - if (unlikely(atomic64_read(&sbi->next_id) > U32_MAX)) {
> + if (unlikely(atomic64_read(&sbi->next_id) >> 32)) {
Ditto.
> pr_warn("next CNID exceeds limit\n");
> corrupted = true;
> }
> - if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
> + if (unlikely(atomic64_read(&sbi->file_count) >> 32)) {
Ditto.
> pr_warn("file count exceeds limit\n");
> corrupted = true;
> }
> - if (unlikely(atomic64_read(&sbi->folder_count) > U32_MAX)) {
> + if (unlikely(atomic64_read(&sbi->folder_count) >> 32)) {
Ditto.
Thanks,
Slava.
> pr_warn("folder count exceeds limit\n");
> corrupted = true;
> }
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH] hfs: evaluate the upper 32bits for detecting overflow
2026-02-11 22:36 ` Viacheslav Dubeyko
@ 2026-02-12 13:27 ` Tetsuo Handa
2026-02-12 21:51 ` Viacheslav Dubeyko
0 siblings, 1 reply; 17+ messages in thread
From: Tetsuo Handa @ 2026-02-12 13:27 UTC (permalink / raw)
To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de,
frank.li@vivo.com, slava@dubeyko.com, jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On 2026/02/12 7:36, Viacheslav Dubeyko wrote:
>> Only compile tested.
>
> Maybe, it makes sense to run some tests? :)
>
I tried the diff below. While this diff worked, I came to feel that we don't need
to fail operations upon overflow of ->file_count or ->folder_count.
Since ->next_id is used as the inode number, we should check that next_id >= 16
(HFS_FIRSTUSER_CNID).
But ->file_count and ->folder_count are (if I understand correctly) only for
statistical purposes, and we are *currently checking for overflow on creation
but not on deletion*. There are also ->root_files and ->root_dirs, which are
likewise statistical and *currently not checked for overflow at all*.
Overflowing these counters is not fatal enough to make operations fail.
I think that we can use a 32-bit atomic_t for ->file_count / ->folder_count, and
cap the max/min range using atomic_add_unless(v, 1, -1) / atomic_add_unless(v, -1, 0).
diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
index 878535db64d6..b2c095555797 100644
--- a/fs/hfs/inode.c
+++ b/fs/hfs/inode.c
@@ -199,7 +199,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
spin_lock_init(&HFS_I(inode)->open_dir_lock);
hfs_cat_build_key(sb, (btree_key *)&HFS_I(inode)->cat_key, dir->i_ino, name);
next_id = atomic64_inc_return(&HFS_SB(sb)->next_id);
- if (next_id > U32_MAX) {
+ if (upper_32_bits(next_id)) {
atomic64_dec(&HFS_SB(sb)->next_id);
pr_err("cannot create new inode: next CNID exceeds limit\n");
goto out_discard;
@@ -217,7 +217,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
if (S_ISDIR(mode)) {
inode->i_size = 2;
folder_count = atomic64_inc_return(&HFS_SB(sb)->folder_count);
- if (folder_count> U32_MAX) {
+ if (upper_32_bits(folder_count)) {
atomic64_dec(&HFS_SB(sb)->folder_count);
pr_err("cannot create new inode: folder count exceeds limit\n");
goto out_discard;
@@ -231,7 +231,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
} else if (S_ISREG(mode)) {
HFS_I(inode)->clump_blocks = HFS_SB(sb)->clumpablks;
file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
- if (file_count > U32_MAX) {
+ if (upper_32_bits(file_count)) {
atomic64_dec(&HFS_SB(sb)->file_count);
pr_err("cannot create new inode: file count exceeds limit\n");
goto out_discard;
diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
index a97cea35ca2e..bdfa54833a4f 100644
--- a/fs/hfs/mdb.c
+++ b/fs/hfs/mdb.c
@@ -68,16 +68,17 @@ bool is_hfs_cnid_counts_valid(struct super_block *sb)
{
struct hfs_sb_info *sbi = HFS_SB(sb);
bool corrupted = false;
+ s64 next_id = atomic64_read(&sbi->next_id);
- if (unlikely(atomic64_read(&sbi->next_id) > U32_MAX)) {
+ if (unlikely(upper_32_bits(next_id) || next_id < HFS_FIRSTUSER_CNID)) {
pr_warn("next CNID exceeds limit\n");
corrupted = true;
}
- if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
+ if (unlikely(upper_32_bits(atomic64_read(&sbi->file_count)))) {
pr_warn("file count exceeds limit\n");
corrupted = true;
}
- if (unlikely(atomic64_read(&sbi->folder_count) > U32_MAX)) {
+ if (unlikely(upper_32_bits(atomic64_read(&sbi->folder_count)))) {
pr_warn("folder count exceeds limit\n");
corrupted = true;
}
^ permalink raw reply related [flat|nested] 17+ messages in thread
* RE: [PATCH] hfs: evaluate the upper 32bits for detecting overflow
2026-02-12 13:27 ` Tetsuo Handa
@ 2026-02-12 21:51 ` Viacheslav Dubeyko
2026-02-13 10:34 ` Tetsuo Handa
0 siblings, 1 reply; 17+ messages in thread
From: Viacheslav Dubeyko @ 2026-02-12 21:51 UTC (permalink / raw)
To: glaubitz@physik.fu-berlin.de, frank.li@vivo.com,
penguin-kernel@I-love.SAKURA.ne.jp, slava@dubeyko.com,
jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On Thu, 2026-02-12 at 22:27 +0900, Tetsuo Handa wrote:
> On 2026/02/12 7:36, Viacheslav Dubeyko wrote:
> > > Only compile tested.
> >
> > Maybe, it makes sense to run some tests? :)
> >
>
> I tried below diff. While this diff worked, I came to feel that we don't need to
> fail operations upon overflow of ->file_count or ->folder_count.
>
> Since ->next_id is used for inode number, we should check for next_id >= 16.
>
> But ->file_count and ->folder_count are (if I understand correctly) only for
> statistical purpose and *currently checking for overflow on creation and not
> checking for overflow on deletion*.
>
These fields do not exist for statistical purposes. We store these values in
struct hfs_mdb [1, 2] and, ultimately, on the volume. Both the file system
driver and the FSCK tool can use these values for comparison against the
b-trees' content.
As I remember, we are checking the deletion case too [3].
> There are ->root_files and ->root_dirs
> which are also for statistical purpose and *currently not checking for overflow*.
It's also not for statistical purposes. :) I think having the checking logic
for root_files and root_dirs would be good too.
> Overflowing on these counters are not fatal enough to make operations fail.
>
> I think that we can use 32bits atomic_t for ->file_count / ->folder_count, and cap
> max/min range using atomic_add_unless(v, 1, -1)/atomic_add_unless(v, -1, 0).
These values are __be32, which means that U32_MAX is a completely normal value.
This is why atomic64_t was selected.
Thanks,
Slava.
>
> diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
> index 878535db64d6..b2c095555797 100644
> --- a/fs/hfs/inode.c
> +++ b/fs/hfs/inode.c
> @@ -199,7 +199,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> spin_lock_init(&HFS_I(inode)->open_dir_lock);
> hfs_cat_build_key(sb, (btree_key *)&HFS_I(inode)->cat_key, dir->i_ino, name);
> next_id = atomic64_inc_return(&HFS_SB(sb)->next_id);
> - if (next_id > U32_MAX) {
> + if (upper_32_bits(next_id)) {
> atomic64_dec(&HFS_SB(sb)->next_id);
> pr_err("cannot create new inode: next CNID exceeds limit\n");
> goto out_discard;
> @@ -217,7 +217,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> if (S_ISDIR(mode)) {
> inode->i_size = 2;
> folder_count = atomic64_inc_return(&HFS_SB(sb)->folder_count);
> - if (folder_count> U32_MAX) {
> + if (upper_32_bits(folder_count)) {
> atomic64_dec(&HFS_SB(sb)->folder_count);
> pr_err("cannot create new inode: folder count exceeds limit\n");
> goto out_discard;
> @@ -231,7 +231,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> } else if (S_ISREG(mode)) {
> HFS_I(inode)->clump_blocks = HFS_SB(sb)->clumpablks;
> file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
> - if (file_count > U32_MAX) {
> + if (upper_32_bits(file_count)) {
> atomic64_dec(&HFS_SB(sb)->file_count);
> pr_err("cannot create new inode: file count exceeds limit\n");
> goto out_discard;
> diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> index a97cea35ca2e..bdfa54833a4f 100644
> --- a/fs/hfs/mdb.c
> +++ b/fs/hfs/mdb.c
> @@ -68,16 +68,17 @@ bool is_hfs_cnid_counts_valid(struct super_block *sb)
> {
> struct hfs_sb_info *sbi = HFS_SB(sb);
> bool corrupted = false;
> + s64 next_id = atomic64_read(&sbi->next_id);
>
> - if (unlikely(atomic64_read(&sbi->next_id) > U32_MAX)) {
> + if (unlikely(upper_32_bits(next_id) || next_id < HFS_FIRSTUSER_CNID)) {
> pr_warn("next CNID exceeds limit\n");
> corrupted = true;
> }
> - if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
> + if (unlikely(upper_32_bits(atomic64_read(&sbi->file_count)))) {
> pr_warn("file count exceeds limit\n");
> corrupted = true;
> }
> - if (unlikely(atomic64_read(&sbi->folder_count) > U32_MAX)) {
> + if (unlikely(upper_32_bits(atomic64_read(&sbi->folder_count)))) {
> pr_warn("folder count exceeds limit\n");
> corrupted = true;
> }
[1]
https://elixir.bootlin.com/linux/v6.19-rc5/source/include/linux/hfs_common.h#L217
[2] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L282
[3] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/inode.c#L264
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH] hfs: evaluate the upper 32bits for detecting overflow
2026-02-12 21:51 ` Viacheslav Dubeyko
@ 2026-02-13 10:34 ` Tetsuo Handa
2026-02-13 22:45 ` Viacheslav Dubeyko
0 siblings, 1 reply; 17+ messages in thread
From: Tetsuo Handa @ 2026-02-13 10:34 UTC (permalink / raw)
To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de,
frank.li@vivo.com, slava@dubeyko.com, jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On 2026/02/13 6:51, Viacheslav Dubeyko wrote:
> On Thu, 2026-02-12 at 22:27 +0900, Tetsuo Handa wrote:
>> On 2026/02/12 7:36, Viacheslav Dubeyko wrote:
>>>> Only compile tested.
>>>
>>> Maybe, it makes sense to run some tests? :)
>>>
>>
>> I tried below diff. While this diff worked, I came to feel that we don't need to
>> fail operations upon overflow of ->file_count or ->folder_count.
>>
>> Since ->next_id is used for inode number, we should check for next_id >= 16.
>>
>> But ->file_count and ->folder_count are (if I understand correctly) only for
>> statistical purpose and *currently checking for overflow on creation and not
>> checking for overflow on deletion*.
>>
>
> These fields exist not for statistical purpose. We store these values in struct
> hfs_mdb [1, 2] and, finally, on the volume. As file system driver as FSCK tool
> can use these values for comparing with b-trees' content.
>
> As I remember, we are checking the deletion case too [3].
>
>> There are ->root_files and ->root_dirs
>> which are also for statistical purpose and *currently not checking for overflow*.
>
> It's also not for statistical purpose. :) I think to have the checking logic
> for root_files and root_dirs will be good too.
Well, I said "statistical purpose" because inaccurate values do not cause serious
problems (such as memory corruption, a system crash, or loss of file data).
>
>> Overflowing on these counters are not fatal enough to make operations fail.
>>
>> I think that we can use 32bits atomic_t for ->file_count / ->folder_count, and cap
>> max/min range using atomic_add_unless(v, 1, -1)/atomic_add_unless(v, -1, 0).
>
> These values are __be32 and it means that U32_MAX is completely normal value.
> This is why atomic64_t was selected.
atomic_add_unless(v, 1, -1) is the atomic version of
if (v != -1) v++;
and atomic_add_unless(v, -1, 0) is the atomic version of
if (v != 0) v--;
The v can accept U32_MAX as a normal value.
Below is what I suggest: don't fail operations if the counter values for
files/directories overflow; instead, suggest fsck.hfs later when writing to the
MDB. This is a heuristic based on the assumption that legitimate users are
unlikely to create 65536+ files/directories under the root directory or
4294967296+ files/directories within one filesystem; in other words, overflow
of the counter values is likely a signal of filesystem corruption (or fuzz
testing).
fs/hfs/hfs_fs.h | 5 +++--
fs/hfs/inode.c | 54 +++++++++++++++++++++++++++++++-----------------------
fs/hfs/mdb.c | 30 +++++++++++-------------------
fs/hfs/super.c | 8 +++-----
4 files changed, 48 insertions(+), 49 deletions(-)
diff --git a/fs/hfs/hfs_fs.h b/fs/hfs/hfs_fs.h
index ac0e83f77a0f..779394f7f2b9 100644
--- a/fs/hfs/hfs_fs.h
+++ b/fs/hfs/hfs_fs.h
@@ -80,10 +80,10 @@ struct hfs_sb_info {
the extents b-tree */
struct hfs_btree *cat_tree; /* Information about
the catalog b-tree */
- atomic64_t file_count; /* The number of
+ atomic_t file_count; /* The number of
regular files in
the filesystem */
- atomic64_t folder_count; /* The number of
+ atomic_t folder_count; /* The number of
directories in the
filesystem */
atomic64_t next_id; /* The next available
@@ -132,6 +132,7 @@ struct hfs_sb_info {
int work_queued; /* non-zero delayed work is queued */
struct delayed_work mdb_work; /* MDB flush delayed work */
spinlock_t work_lock; /* protects mdb_work and work_queued */
+ bool suggest_fsck;
};
#define HFS_FLG_BITMAP_DIRTY 0
diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
index 878535db64d6..0ed145b92803 100644
--- a/fs/hfs/inode.c
+++ b/fs/hfs/inode.c
@@ -185,8 +185,6 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
struct super_block *sb = dir->i_sb;
struct inode *inode = new_inode(sb);
s64 next_id;
- s64 file_count;
- s64 folder_count;
int err = -ENOMEM;
if (!inode)
@@ -199,7 +197,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
spin_lock_init(&HFS_I(inode)->open_dir_lock);
hfs_cat_build_key(sb, (btree_key *)&HFS_I(inode)->cat_key, dir->i_ino, name);
next_id = atomic64_inc_return(&HFS_SB(sb)->next_id);
- if (next_id > U32_MAX) {
+ if (upper_32_bits(next_id)) {
atomic64_dec(&HFS_SB(sb)->next_id);
pr_err("cannot create new inode: next CNID exceeds limit\n");
goto out_discard;
@@ -216,28 +214,28 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
HFS_I(inode)->tz_secondswest = sys_tz.tz_minuteswest * 60;
if (S_ISDIR(mode)) {
inode->i_size = 2;
- folder_count = atomic64_inc_return(&HFS_SB(sb)->folder_count);
- if (folder_count> U32_MAX) {
- atomic64_dec(&HFS_SB(sb)->folder_count);
- pr_err("cannot create new inode: folder count exceeds limit\n");
- goto out_discard;
+ if (unlikely(!atomic_add_unless(&HFS_SB(sb)->folder_count, 1, -1)))
+ HFS_SB(sb)->suggest_fsck = true;
+ if (dir->i_ino == HFS_ROOT_CNID) {
+ if (unlikely(HFS_SB(sb)->root_dirs == U16_MAX))
+ HFS_SB(sb)->suggest_fsck = true;
+ else
+ HFS_SB(sb)->root_dirs++;
}
- if (dir->i_ino == HFS_ROOT_CNID)
- HFS_SB(sb)->root_dirs++;
inode->i_op = &hfs_dir_inode_operations;
inode->i_fop = &hfs_dir_operations;
inode->i_mode |= S_IRWXUGO;
inode->i_mode &= ~HFS_SB(inode->i_sb)->s_dir_umask;
} else if (S_ISREG(mode)) {
HFS_I(inode)->clump_blocks = HFS_SB(sb)->clumpablks;
- file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
- if (file_count > U32_MAX) {
- atomic64_dec(&HFS_SB(sb)->file_count);
- pr_err("cannot create new inode: file count exceeds limit\n");
- goto out_discard;
+ if (unlikely(!atomic_add_unless(&HFS_SB(sb)->file_count, 1, -1)))
+ HFS_SB(sb)->suggest_fsck = true;
+ if (dir->i_ino == HFS_ROOT_CNID) {
+ if (unlikely(HFS_SB(sb)->root_files == U16_MAX))
+ HFS_SB(sb)->suggest_fsck = true;
+ else
+ HFS_SB(sb)->root_files++;
}
- if (dir->i_ino == HFS_ROOT_CNID)
- HFS_SB(sb)->root_files++;
inode->i_op = &hfs_file_inode_operations;
inode->i_fop = &hfs_file_operations;
inode->i_mapping->a_ops = &hfs_aops;
@@ -272,17 +270,27 @@ void hfs_delete_inode(struct inode *inode)
hfs_dbg("ino %lu\n", inode->i_ino);
if (S_ISDIR(inode->i_mode)) {
- atomic64_dec(&HFS_SB(sb)->folder_count);
- if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
- HFS_SB(sb)->root_dirs--;
+ if (unlikely(!atomic_add_unless(&HFS_SB(sb)->folder_count, -1, 0)))
+ HFS_SB(sb)->suggest_fsck = true;
+ if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID)) {
+ if (unlikely(!HFS_SB(sb)->root_dirs))
+ HFS_SB(sb)->suggest_fsck = true;
+ else
+ HFS_SB(sb)->root_dirs--;
+ }
set_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags);
hfs_mark_mdb_dirty(sb);
return;
}
- atomic64_dec(&HFS_SB(sb)->file_count);
- if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
- HFS_SB(sb)->root_files--;
+ if (unlikely(!atomic_add_unless(&HFS_SB(sb)->file_count, -1, 0)))
+ HFS_SB(sb)->suggest_fsck = true;
+ if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID)) {
+ if (unlikely(!HFS_SB(sb)->root_files))
+ HFS_SB(sb)->suggest_fsck = true;
+ else
+ HFS_SB(sb)->root_files--;
+ }
if (S_ISREG(inode->i_mode)) {
if (!inode->i_nlink) {
inode->i_size = 0;
diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
index a97cea35ca2e..ef969e3c2192 100644
--- a/fs/hfs/mdb.c
+++ b/fs/hfs/mdb.c
@@ -67,22 +67,11 @@ static int hfs_get_last_session(struct super_block *sb,
bool is_hfs_cnid_counts_valid(struct super_block *sb)
{
struct hfs_sb_info *sbi = HFS_SB(sb);
- bool corrupted = false;
+ s64 next_id = atomic64_read(&sbi->next_id);
- if (unlikely(atomic64_read(&sbi->next_id) > U32_MAX)) {
- pr_warn("next CNID exceeds limit\n");
- corrupted = true;
- }
- if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
- pr_warn("file count exceeds limit\n");
- corrupted = true;
- }
- if (unlikely(atomic64_read(&sbi->folder_count) > U32_MAX)) {
- pr_warn("folder count exceeds limit\n");
- corrupted = true;
- }
-
- return !corrupted;
+ if (unlikely(upper_32_bits(next_id) || next_id < HFS_FIRSTUSER_CNID))
+ sbi->suggest_fsck = true;
+ return !sbi->suggest_fsck;
}
/*
@@ -177,8 +166,8 @@ int hfs_mdb_get(struct super_block *sb)
atomic64_set(&HFS_SB(sb)->next_id, be32_to_cpu(mdb->drNxtCNID));
HFS_SB(sb)->root_files = be16_to_cpu(mdb->drNmFls);
HFS_SB(sb)->root_dirs = be16_to_cpu(mdb->drNmRtDirs);
- atomic64_set(&HFS_SB(sb)->file_count, be32_to_cpu(mdb->drFilCnt));
- atomic64_set(&HFS_SB(sb)->folder_count, be32_to_cpu(mdb->drDirCnt));
+ atomic_set(&HFS_SB(sb)->file_count, be32_to_cpu(mdb->drFilCnt));
+ atomic_set(&HFS_SB(sb)->folder_count, be32_to_cpu(mdb->drDirCnt));
if (!is_hfs_cnid_counts_valid(sb)) {
pr_warn("filesystem possibly corrupted, running fsck.hfs is recommended. Mounting read-only.\n");
@@ -291,6 +280,9 @@ void hfs_mdb_commit(struct super_block *sb)
if (sb_rdonly(sb))
return;
+ if (!is_hfs_cnid_counts_valid(sb))
+ pr_warn("filesystem error was detected, running fsck.hfs is recommended.\n");
+
lock_buffer(HFS_SB(sb)->mdb_bh);
if (test_and_clear_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags)) {
/* These parameters may have been modified, so write them back */
@@ -301,9 +293,9 @@ void hfs_mdb_commit(struct super_block *sb)
mdb->drNmFls = cpu_to_be16(HFS_SB(sb)->root_files);
mdb->drNmRtDirs = cpu_to_be16(HFS_SB(sb)->root_dirs);
mdb->drFilCnt =
- cpu_to_be32((u32)atomic64_read(&HFS_SB(sb)->file_count));
+ cpu_to_be32((u32)atomic_read(&HFS_SB(sb)->file_count));
mdb->drDirCnt =
- cpu_to_be32((u32)atomic64_read(&HFS_SB(sb)->folder_count));
+ cpu_to_be32((u32)atomic_read(&HFS_SB(sb)->folder_count));
/* write MDB to disk */
mark_buffer_dirty(HFS_SB(sb)->mdb_bh);
diff --git a/fs/hfs/super.c b/fs/hfs/super.c
index 97546d6b41f4..499e89cacd82 100644
--- a/fs/hfs/super.c
+++ b/fs/hfs/super.c
@@ -34,7 +34,6 @@ MODULE_LICENSE("GPL");
static int hfs_sync_fs(struct super_block *sb, int wait)
{
- is_hfs_cnid_counts_valid(sb);
hfs_mdb_commit(sb);
return 0;
}
@@ -66,8 +65,6 @@ static void flush_mdb(struct work_struct *work)
sbi->work_queued = 0;
spin_unlock(&sbi->work_lock);
- is_hfs_cnid_counts_valid(sb);
-
hfs_mdb_commit(sb);
}
@@ -322,8 +319,9 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
int silent = fc->sb_flags & SB_SILENT;
int res;
- atomic64_set(&sbi->file_count, 0);
- atomic64_set(&sbi->folder_count, 0);
+ sbi->suggest_fsck = false;
+ atomic_set(&sbi->file_count, 0);
+ atomic_set(&sbi->folder_count, 0);
atomic64_set(&sbi->next_id, 0);
/* load_nls_default does not fail */
^ permalink raw reply related [flat|nested] 17+ messages in thread
* RE: [PATCH] hfs: evaluate the upper 32bits for detecting overflow
2026-02-13 10:34 ` Tetsuo Handa
@ 2026-02-13 22:45 ` Viacheslav Dubeyko
2026-02-14 7:09 ` Tetsuo Handa
0 siblings, 1 reply; 17+ messages in thread
From: Viacheslav Dubeyko @ 2026-02-13 22:45 UTC (permalink / raw)
To: glaubitz@physik.fu-berlin.de, frank.li@vivo.com,
penguin-kernel@I-love.SAKURA.ne.jp, slava@dubeyko.com,
jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On Fri, 2026-02-13 at 19:34 +0900, Tetsuo Handa wrote:
> On 2026/02/13 6:51, Viacheslav Dubeyko wrote:
> > On Thu, 2026-02-12 at 22:27 +0900, Tetsuo Handa wrote:
> > > On 2026/02/12 7:36, Viacheslav Dubeyko wrote:
> > > > > Only compile tested.
> > > >
> > > > Maybe, it makes sense to run some tests? :)
> > > >
> > >
> > > I tried below diff. While this diff worked, I came to feel that we don't need to
> > > fail operations upon overflow of ->file_count or ->folder_count.
> > >
> > > Since ->next_id is used for inode number, we should check for next_id >= 16.
> > >
> > > But ->file_count and ->folder_count are (if I understand correctly) only for
> > > statistical purpose and *currently checking for overflow on creation and not
> > > checking for overflow on deletion*.
> > >
> >
> > These fields exist not for statistical purpose. We store these values in struct
> > hfs_mdb [1, 2] and, finally, on the volume. As file system driver as FSCK tool
> > can use these values for comparing with b-trees' content.
> >
> > As I remember, we are checking the deletion case too [3].
> >
> > > There are ->root_files and ->root_dirs
> > > which are also for statistical purpose and *currently not checking for overflow*.
> >
> > It's also not for statistical purpose. :) I think to have the checking logic
> > for root_files and root_dirs will be good too.
>
> Well, I called "statistical purpose" because inaccurate values do not cause serious
> problems (such as memory corruption, system crash, loss of file data).
>
> >
> > > Overflowing on these counters are not fatal enough to make operations fail.
> > >
> > > I think that we can use 32bits atomic_t for ->file_count / ->folder_count, and cap
> > > max/min range using atomic_add_unless(v, 1, -1)/atomic_add_unless(v, -1, 0).
> >
> > These values are __be32 and it means that U32_MAX is completely normal value.
> > This is why atomic64_t was selected.
>
> atomic_add_unless(v, 1, -1) is atomic version of
>
> if (v != -1) v++;
>
> and atomic_add_unless(v, -1, 0) is atomic version of
>
> if (v != 0) v--;
>
> . The v can accept U32_MAX as normal value.
>
> Below is what I suggest; don't fail operations if counter values for files/directories
> overflowed, instead later suggest fsck.hfs when writing to mdb. This is a heuristic
> based on an assumption that "legitimate users unlikely crete 65536+ files/directories
> under the root directory and 4294967296+ files/directories within one filesystem";
> in other words, "overflow of the counter values is likely a signal for a filesystem
> corruption (or fuzz testing)".
>
>
typedef struct {
int counter;
} atomic_t;
UINT_MAX is 4,294,967,295 (0xffffffff in hexadecimal).
A 32-bit signed integer ranges from -2,147,483,648 to +2,147,483,647 (INT_MIN
to INT_MAX).
So, you cannot represent a __be32 value in a signed integer.
Thanks,
Slava.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH] hfs: evaluate the upper 32bits for detecting overflow
2026-02-13 22:45 ` Viacheslav Dubeyko
@ 2026-02-14 7:09 ` Tetsuo Handa
2026-02-17 0:59 ` Viacheslav Dubeyko
0 siblings, 1 reply; 17+ messages in thread
From: Tetsuo Handa @ 2026-02-14 7:09 UTC (permalink / raw)
To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de,
frank.li@vivo.com, slava@dubeyko.com, jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On 2026/02/14 7:45, Viacheslav Dubeyko wrote:
> typedef struct {
> int counter;
> } atomic_t;
>
> UINT_MAX is 4,294,967,295 (or 0xffffffff in hexadecimal).
> INT_MAX: 32-bit Signed Integer: Ranges from -2,147,483,648 to +2,147,483,647.
>
> So, you cannot represent __be32 in signed integer.
I can't follow what you are talking about.
There is no difference among e.g. s32, u32, int, unsigned int, __le32, and
__be32 in that each of these types can represent 4294967296 distinct integer
values. The difference among these types is in how the 32-bit pattern is
interpreted. The value -1 in int or s32 is the same as 4294967295 in
unsigned int or u32.
----------
#include <stdio.h>
int main(int argc, char *argv[])
{
int i = -1;
printf("%d\n", (int) i); // prints -1
printf("%u\n", (unsigned int) i); // prints 4294967295
return 0;
}
----------
We can represent a __be32 value using a signed 32-bit integer when counting
files/directories (which by definition cannot take a negative value), because
we can interpret the [-2147483648,-1] range as [2147483648,4294967295], given
that we never need to handle genuinely negative counts.
^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH] hfs: evaluate the upper 32bits for detecting overflow
2026-02-14 7:09 ` Tetsuo Handa
@ 2026-02-17 0:59 ` Viacheslav Dubeyko
2026-02-17 12:41 ` Tetsuo Handa
0 siblings, 1 reply; 17+ messages in thread
From: Viacheslav Dubeyko @ 2026-02-17 0:59 UTC (permalink / raw)
To: glaubitz@physik.fu-berlin.de, frank.li@vivo.com,
penguin-kernel@I-love.SAKURA.ne.jp, slava@dubeyko.com,
jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On Sat, 2026-02-14 at 16:09 +0900, Tetsuo Handa wrote:
> On 2026/02/14 7:45, Viacheslav Dubeyko wrote:
> > typedef struct {
> > int counter;
> > } atomic_t;
> >
> > UINT_MAX is 4,294,967,295 (or 0xffffffff in hexadecimal).
> > A 32-bit signed integer ranges from -2,147,483,648 (INT_MIN) to +2,147,483,647 (INT_MAX).
> >
> > So, you cannot represent a __be32 value in a signed integer.
>
> I can't follow what you are talking about.
>
> There is no difference among e.g. s32, u32, int, unsigned int, __le32 and __be32
> in that each of these types can represent 4294967296 distinct integer values.
> The difference among these types is how the 32-bit pattern is interpreted.
> The -1 in int or s32 has the same bit pattern as 4294967295 in unsigned int or u32.
>
> ----------
> #include <stdio.h>
>
> int main(int argc, char *argv[])
> {
> 	int i = -1;
> 	printf("%d\n", (int) i); // prints -1
> 	printf("%u\n", (unsigned int) i); // prints 4294967295
> 	return 0;
> }
> ----------
>
> We can represent a __be32 value using a signed 32-bit integer when counting the
> number of files/directories (which by definition cannot be negative), because
> we can interpret the [-2147483648,-1] range as the [2147483648,4294967295]
> range; we never need to treat those bit patterns as negative values.
If you suggest incrementing the atomic_t up to U32_MAX while somehow keeping in
mind that negative values must be treated as positive, then it's confusing and
nobody will follow this solution. It will easily introduce bugs. Using
atomic64_t is the clear solution: we don't need any tricks and everybody can
follow such a technique.
Thanks,
Slava.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH] hfs: evaluate the upper 32bits for detecting overflow
2026-02-17 0:59 ` Viacheslav Dubeyko
@ 2026-02-17 12:41 ` Tetsuo Handa
2026-02-18 22:07 ` Viacheslav Dubeyko
0 siblings, 1 reply; 17+ messages in thread
From: Tetsuo Handa @ 2026-02-17 12:41 UTC (permalink / raw)
To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de,
frank.li@vivo.com, slava@dubeyko.com, jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On 2026/02/17 9:59, Viacheslav Dubeyko wrote:
>> We can represent a __be32 value using a signed 32-bit integer when counting the
>> number of files/directories (which by definition cannot be negative), because
>> we can interpret the [-2147483648,-1] range as the [2147483648,4294967295]
>> range; we never need to treat those bit patterns as negative values.
>
> If you suggest incrementing the atomic_t up to U32_MAX while somehow keeping in
> mind that negative values must be treated as positive, then it's confusing and
> nobody will follow this solution. It will easily introduce bugs. Using
> atomic64_t is the clear solution: we don't need any tricks and everybody can
> follow such a technique.
Then, we can use u32 and U32_MIN/U32_MAX as below. (I'm not trying to convert
next_id to u32, because that needs more inspection before we can be convinced
there is no possibility of overflow.) Do you still dislike converting to u32?
fs/hfs/hfs_fs.h | 37 ++++++++++++++++++++++++++++++++++---
fs/hfs/inode.c | 54 +++++++++++++++++++++++++++++++-----------------------
fs/hfs/mdb.c | 31 ++++---------------------------
fs/hfs/super.c | 12 +++++++-----
4 files changed, 76 insertions(+), 58 deletions(-)
diff --git a/fs/hfs/hfs_fs.h b/fs/hfs/hfs_fs.h
index ac0e83f77a0f..73792460f938 100644
--- a/fs/hfs/hfs_fs.h
+++ b/fs/hfs/hfs_fs.h
@@ -80,10 +80,10 @@ struct hfs_sb_info {
the extents b-tree */
struct hfs_btree *cat_tree; /* Information about
the catalog b-tree */
- atomic64_t file_count; /* The number of
+ u32 file_count; /* The number of
regular files in
the filesystem */
- atomic64_t folder_count; /* The number of
+ u32 folder_count; /* The number of
directories in the
filesystem */
atomic64_t next_id; /* The next available
@@ -132,6 +132,7 @@ struct hfs_sb_info {
int work_queued; /* non-zero delayed work is queued */
struct delayed_work mdb_work; /* MDB flush delayed work */
spinlock_t work_lock; /* protects mdb_work and work_queued */
+ bool suggest_fsck;
};
#define HFS_FLG_BITMAP_DIRTY 0
@@ -199,7 +200,6 @@ extern void hfs_delete_inode(struct inode *inode);
extern const struct xattr_handler * const hfs_xattr_handlers[];
/* mdb.c */
-extern bool is_hfs_cnid_counts_valid(struct super_block *sb);
extern int hfs_mdb_get(struct super_block *sb);
extern void hfs_mdb_commit(struct super_block *sb);
extern void hfs_mdb_close(struct super_block *sb);
@@ -289,4 +289,35 @@ static inline void hfs_bitmap_dirty(struct super_block *sb)
__bh; \
})
+static inline bool inc_u32_unless_wraparound(u32 *v)
+{
+ while (1) {
+ const u32 t = data_race(READ_ONCE(*v)); /* silence KCSAN */
+
+ if (unlikely(t == U32_MAX))
+ return false;
+ if (likely(cmpxchg(v, t, t + 1) == t))
+ return true;
+ }
+}
+
+static inline bool dec_u32_unless_wraparound(u32 *v)
+{
+ while (1) {
+ const u32 t = data_race(READ_ONCE(*v)); /* silence KCSAN */
+
+ if (unlikely(t == U32_MIN))
+ return false;
+ if (likely(cmpxchg(v, t, t - 1) == t))
+ return true;
+ }
+}
+
+static inline bool is_hfs_cnid_counts_valid(struct super_block *sb)
+{
+ s64 next_id = atomic64_read(&HFS_SB(sb)->next_id);
+
+ return next_id >= HFS_FIRSTUSER_CNID && !upper_32_bits(next_id);
+}
+
#endif
diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
index 878535db64d6..5dc48629d27c 100644
--- a/fs/hfs/inode.c
+++ b/fs/hfs/inode.c
@@ -185,8 +185,6 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
struct super_block *sb = dir->i_sb;
struct inode *inode = new_inode(sb);
s64 next_id;
- s64 file_count;
- s64 folder_count;
int err = -ENOMEM;
if (!inode)
@@ -199,7 +197,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
spin_lock_init(&HFS_I(inode)->open_dir_lock);
hfs_cat_build_key(sb, (btree_key *)&HFS_I(inode)->cat_key, dir->i_ino, name);
next_id = atomic64_inc_return(&HFS_SB(sb)->next_id);
- if (next_id > U32_MAX) {
+ if (upper_32_bits(next_id)) {
atomic64_dec(&HFS_SB(sb)->next_id);
pr_err("cannot create new inode: next CNID exceeds limit\n");
goto out_discard;
@@ -216,28 +214,28 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
HFS_I(inode)->tz_secondswest = sys_tz.tz_minuteswest * 60;
if (S_ISDIR(mode)) {
inode->i_size = 2;
- folder_count = atomic64_inc_return(&HFS_SB(sb)->folder_count);
- if (folder_count> U32_MAX) {
- atomic64_dec(&HFS_SB(sb)->folder_count);
- pr_err("cannot create new inode: folder count exceeds limit\n");
- goto out_discard;
+ if (unlikely(!inc_u32_unless_wraparound(&HFS_SB(sb)->folder_count)))
+ HFS_SB(sb)->suggest_fsck = true;
+ if (dir->i_ino == HFS_ROOT_CNID) {
+ if (unlikely(HFS_SB(sb)->root_dirs == U16_MAX))
+ HFS_SB(sb)->suggest_fsck = true;
+ else
+ HFS_SB(sb)->root_dirs++;
}
- if (dir->i_ino == HFS_ROOT_CNID)
- HFS_SB(sb)->root_dirs++;
inode->i_op = &hfs_dir_inode_operations;
inode->i_fop = &hfs_dir_operations;
inode->i_mode |= S_IRWXUGO;
inode->i_mode &= ~HFS_SB(inode->i_sb)->s_dir_umask;
} else if (S_ISREG(mode)) {
HFS_I(inode)->clump_blocks = HFS_SB(sb)->clumpablks;
- file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
- if (file_count > U32_MAX) {
- atomic64_dec(&HFS_SB(sb)->file_count);
- pr_err("cannot create new inode: file count exceeds limit\n");
- goto out_discard;
+ if (unlikely(!inc_u32_unless_wraparound(&HFS_SB(sb)->file_count)))
+ HFS_SB(sb)->suggest_fsck = true;
+ if (dir->i_ino == HFS_ROOT_CNID) {
+ if (unlikely(HFS_SB(sb)->root_files == U16_MAX))
+ HFS_SB(sb)->suggest_fsck = true;
+ else
+ HFS_SB(sb)->root_files++;
}
- if (dir->i_ino == HFS_ROOT_CNID)
- HFS_SB(sb)->root_files++;
inode->i_op = &hfs_file_inode_operations;
inode->i_fop = &hfs_file_operations;
inode->i_mapping->a_ops = &hfs_aops;
@@ -272,17 +270,27 @@ void hfs_delete_inode(struct inode *inode)
hfs_dbg("ino %lu\n", inode->i_ino);
if (S_ISDIR(inode->i_mode)) {
- atomic64_dec(&HFS_SB(sb)->folder_count);
- if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
- HFS_SB(sb)->root_dirs--;
+ if (unlikely(!dec_u32_unless_wraparound(&HFS_SB(sb)->folder_count)))
+ HFS_SB(sb)->suggest_fsck = true;
+ if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID)) {
+ if (unlikely(!HFS_SB(sb)->root_dirs))
+ HFS_SB(sb)->suggest_fsck = true;
+ else
+ HFS_SB(sb)->root_dirs--;
+ }
set_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags);
hfs_mark_mdb_dirty(sb);
return;
}
- atomic64_dec(&HFS_SB(sb)->file_count);
- if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
- HFS_SB(sb)->root_files--;
+ if (unlikely(!dec_u32_unless_wraparound(&HFS_SB(sb)->file_count)))
+ HFS_SB(sb)->suggest_fsck = true;
+ if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID)) {
+ if (unlikely(!HFS_SB(sb)->root_files))
+ HFS_SB(sb)->suggest_fsck = true;
+ else
+ HFS_SB(sb)->root_files--;
+ }
if (S_ISREG(inode->i_mode)) {
if (!inode->i_nlink) {
inode->i_size = 0;
diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
index a97cea35ca2e..c0e55e7233c9 100644
--- a/fs/hfs/mdb.c
+++ b/fs/hfs/mdb.c
@@ -64,27 +64,6 @@ static int hfs_get_last_session(struct super_block *sb,
return 0;
}
-bool is_hfs_cnid_counts_valid(struct super_block *sb)
-{
- struct hfs_sb_info *sbi = HFS_SB(sb);
- bool corrupted = false;
-
- if (unlikely(atomic64_read(&sbi->next_id) > U32_MAX)) {
- pr_warn("next CNID exceeds limit\n");
- corrupted = true;
- }
- if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
- pr_warn("file count exceeds limit\n");
- corrupted = true;
- }
- if (unlikely(atomic64_read(&sbi->folder_count) > U32_MAX)) {
- pr_warn("folder count exceeds limit\n");
- corrupted = true;
- }
-
- return !corrupted;
-}
-
/*
* hfs_mdb_get()
*
@@ -177,8 +156,8 @@ int hfs_mdb_get(struct super_block *sb)
atomic64_set(&HFS_SB(sb)->next_id, be32_to_cpu(mdb->drNxtCNID));
HFS_SB(sb)->root_files = be16_to_cpu(mdb->drNmFls);
HFS_SB(sb)->root_dirs = be16_to_cpu(mdb->drNmRtDirs);
- atomic64_set(&HFS_SB(sb)->file_count, be32_to_cpu(mdb->drFilCnt));
- atomic64_set(&HFS_SB(sb)->folder_count, be32_to_cpu(mdb->drDirCnt));
+ HFS_SB(sb)->file_count = be32_to_cpu(mdb->drFilCnt);
+ HFS_SB(sb)->folder_count = be32_to_cpu(mdb->drDirCnt);
if (!is_hfs_cnid_counts_valid(sb)) {
pr_warn("filesystem possibly corrupted, running fsck.hfs is recommended. Mounting read-only.\n");
@@ -300,10 +279,8 @@ void hfs_mdb_commit(struct super_block *sb)
cpu_to_be32((u32)atomic64_read(&HFS_SB(sb)->next_id));
mdb->drNmFls = cpu_to_be16(HFS_SB(sb)->root_files);
mdb->drNmRtDirs = cpu_to_be16(HFS_SB(sb)->root_dirs);
- mdb->drFilCnt =
- cpu_to_be32((u32)atomic64_read(&HFS_SB(sb)->file_count));
- mdb->drDirCnt =
- cpu_to_be32((u32)atomic64_read(&HFS_SB(sb)->folder_count));
+ mdb->drFilCnt = cpu_to_be32(HFS_SB(sb)->file_count);
+ mdb->drDirCnt = cpu_to_be32(HFS_SB(sb)->folder_count);
/* write MDB to disk */
mark_buffer_dirty(HFS_SB(sb)->mdb_bh);
diff --git a/fs/hfs/super.c b/fs/hfs/super.c
index 97546d6b41f4..e150550a5ded 100644
--- a/fs/hfs/super.c
+++ b/fs/hfs/super.c
@@ -34,7 +34,6 @@ MODULE_LICENSE("GPL");
static int hfs_sync_fs(struct super_block *sb, int wait)
{
- is_hfs_cnid_counts_valid(sb);
hfs_mdb_commit(sb);
return 0;
}
@@ -50,6 +49,10 @@ static void hfs_put_super(struct super_block *sb)
{
cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
hfs_mdb_close(sb);
+
+ if (!is_hfs_cnid_counts_valid(sb) || HFS_SB(sb)->suggest_fsck)
+ pr_warn("filesystem error was detected, running fsck.hfs is recommended.\n");
+
/* release the MDB's resources */
hfs_mdb_put(sb);
}
@@ -66,8 +69,6 @@ static void flush_mdb(struct work_struct *work)
sbi->work_queued = 0;
spin_unlock(&sbi->work_lock);
- is_hfs_cnid_counts_valid(sb);
-
hfs_mdb_commit(sb);
}
@@ -322,8 +323,9 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
int silent = fc->sb_flags & SB_SILENT;
int res;
- atomic64_set(&sbi->file_count, 0);
- atomic64_set(&sbi->folder_count, 0);
+ sbi->suggest_fsck = false;
+ sbi->file_count = 0;
+ sbi->folder_count = 0;
atomic64_set(&sbi->next_id, 0);
/* load_nls_default does not fail */
^ permalink raw reply related [flat|nested] 17+ messages in thread
* RE: [PATCH] hfs: evaluate the upper 32bits for detecting overflow
2026-02-17 12:41 ` Tetsuo Handa
@ 2026-02-18 22:07 ` Viacheslav Dubeyko
2026-02-19 14:00 ` Tetsuo Handa
0 siblings, 1 reply; 17+ messages in thread
From: Viacheslav Dubeyko @ 2026-02-18 22:07 UTC (permalink / raw)
To: glaubitz@physik.fu-berlin.de, frank.li@vivo.com,
penguin-kernel@I-love.SAKURA.ne.jp, slava@dubeyko.com,
jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On Tue, 2026-02-17 at 21:41 +0900, Tetsuo Handa wrote:
> On 2026/02/17 9:59, Viacheslav Dubeyko wrote:
> > > We can represent a __be32 value using a signed 32-bit integer when counting the
> > > number of files/directories (which by definition cannot be negative), because
> > > we can interpret the [-2147483648,-1] range as the [2147483648,4294967295]
> > > range; we never need to treat those bit patterns as negative values.
> >
> > If you suggest incrementing the atomic_t up to U32_MAX while somehow keeping in
> > mind that negative values must be treated as positive, then it's confusing and
> > nobody will follow this solution. It will easily introduce bugs. Using
> > atomic64_t is the clear solution: we don't need any tricks and everybody can
> > follow such a technique.
>
> Then, we can use u32 and U32_MIN/U32_MAX as below. (I'm not trying to convert
> next_id to u32, because that needs more inspection before we can be convinced
> there is no possibility of overflow.) Do you still dislike converting to u32?
Please see my comments below. :)
>
> fs/hfs/hfs_fs.h | 37 ++++++++++++++++++++++++++++++++++---
> fs/hfs/inode.c | 54 +++++++++++++++++++++++++++++++-----------------------
> fs/hfs/mdb.c | 31 ++++---------------------------
> fs/hfs/super.c | 12 +++++++-----
> 4 files changed, 76 insertions(+), 58 deletions(-)
>
> diff --git a/fs/hfs/hfs_fs.h b/fs/hfs/hfs_fs.h
> index ac0e83f77a0f..73792460f938 100644
> --- a/fs/hfs/hfs_fs.h
> +++ b/fs/hfs/hfs_fs.h
> @@ -80,10 +80,10 @@ struct hfs_sb_info {
> the extents b-tree */
> struct hfs_btree *cat_tree; /* Information about
> the catalog b-tree */
> - atomic64_t file_count; /* The number of
> + u32 file_count; /* The number of
> regular files in
> the filesystem */
> - atomic64_t folder_count; /* The number of
> + u32 folder_count; /* The number of
> directories in the
> filesystem */
> atomic64_t next_id; /* The next available
> @@ -132,6 +132,7 @@ struct hfs_sb_info {
> int work_queued; /* non-zero delayed work is queued */
> struct delayed_work mdb_work; /* MDB flush delayed work */
> spinlock_t work_lock; /* protects mdb_work and work_queued */
> + bool suggest_fsck;
I completely fail to see the point in adding such a field. We can always emit
the warning message at the point where the problem is detected.
> };
>
> #define HFS_FLG_BITMAP_DIRTY 0
> @@ -199,7 +200,6 @@ extern void hfs_delete_inode(struct inode *inode);
> extern const struct xattr_handler * const hfs_xattr_handlers[];
>
> /* mdb.c */
> -extern bool is_hfs_cnid_counts_valid(struct super_block *sb);
> extern int hfs_mdb_get(struct super_block *sb);
> extern void hfs_mdb_commit(struct super_block *sb);
> extern void hfs_mdb_close(struct super_block *sb);
> @@ -289,4 +289,35 @@ static inline void hfs_bitmap_dirty(struct super_block *sb)
> __bh; \
> })
>
> +static inline bool inc_u32_unless_wraparound(u32 *v)
> +{
> + while (1) {
> + const u32 t = data_race(READ_ONCE(*v)); /* silence KCSAN */
> +
> + if (unlikely(t == U32_MAX))
> + return false;
> + if (likely(cmpxchg(v, t, t + 1) == t))
> + return true;
> + }
> +}
> +
> +static inline bool dec_u32_unless_wraparound(u32 *v)
> +{
> + while (1) {
> + const u32 t = data_race(READ_ONCE(*v)); /* silence KCSAN */
> +
> + if (unlikely(t == U32_MIN))
> + return false;
> + if (likely(cmpxchg(v, t, t - 1) == t))
> + return true;
> + }
> +}
What's the point of making the code so complicated? What's wrong with
atomic64_t? Are you trying to save 32 bits? :) I completely fail to see the
point in such complicated logic.
> +
> +static inline bool is_hfs_cnid_counts_valid(struct super_block *sb)
> +{
> + s64 next_id = atomic64_read(&HFS_SB(sb)->next_id);
> +
> + return next_id >= HFS_FIRSTUSER_CNID && !upper_32_bits(next_id);
> +}
> +
> #endif
> diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
> index 878535db64d6..5dc48629d27c 100644
> --- a/fs/hfs/inode.c
> +++ b/fs/hfs/inode.c
> @@ -185,8 +185,6 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> struct super_block *sb = dir->i_sb;
> struct inode *inode = new_inode(sb);
> s64 next_id;
> - s64 file_count;
> - s64 folder_count;
> int err = -ENOMEM;
>
> if (!inode)
> @@ -199,7 +197,7 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> spin_lock_init(&HFS_I(inode)->open_dir_lock);
> hfs_cat_build_key(sb, (btree_key *)&HFS_I(inode)->cat_key, dir->i_ino, name);
> next_id = atomic64_inc_return(&HFS_SB(sb)->next_id);
> - if (next_id > U32_MAX) {
> + if (upper_32_bits(next_id)) {
> atomic64_dec(&HFS_SB(sb)->next_id);
> pr_err("cannot create new inode: next CNID exceeds limit\n");
> goto out_discard;
> @@ -216,28 +214,28 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> HFS_I(inode)->tz_secondswest = sys_tz.tz_minuteswest * 60;
> if (S_ISDIR(mode)) {
> inode->i_size = 2;
> - folder_count = atomic64_inc_return(&HFS_SB(sb)->folder_count);
> - if (folder_count> U32_MAX) {
> - atomic64_dec(&HFS_SB(sb)->folder_count);
> - pr_err("cannot create new inode: folder count exceeds limit\n");
> - goto out_discard;
> + if (unlikely(!inc_u32_unless_wraparound(&HFS_SB(sb)->folder_count)))
> + HFS_SB(sb)->suggest_fsck = true;
> + if (dir->i_ino == HFS_ROOT_CNID) {
> + if (unlikely(HFS_SB(sb)->root_dirs == U16_MAX))
> + HFS_SB(sb)->suggest_fsck = true;
> + else
> + HFS_SB(sb)->root_dirs++;
> }
> - if (dir->i_ino == HFS_ROOT_CNID)
> - HFS_SB(sb)->root_dirs++;
> inode->i_op = &hfs_dir_inode_operations;
> inode->i_fop = &hfs_dir_operations;
> inode->i_mode |= S_IRWXUGO;
> inode->i_mode &= ~HFS_SB(inode->i_sb)->s_dir_umask;
> } else if (S_ISREG(mode)) {
> HFS_I(inode)->clump_blocks = HFS_SB(sb)->clumpablks;
> - file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
> - if (file_count > U32_MAX) {
> - atomic64_dec(&HFS_SB(sb)->file_count);
> - pr_err("cannot create new inode: file count exceeds limit\n");
> - goto out_discard;
> + if (unlikely(!inc_u32_unless_wraparound(&HFS_SB(sb)->file_count)))
> + HFS_SB(sb)->suggest_fsck = true;
> + if (dir->i_ino == HFS_ROOT_CNID) {
> + if (unlikely(HFS_SB(sb)->root_files == U16_MAX))
> + HFS_SB(sb)->suggest_fsck = true;
> + else
> + HFS_SB(sb)->root_files++;
> }
> - if (dir->i_ino == HFS_ROOT_CNID)
> - HFS_SB(sb)->root_files++;
> inode->i_op = &hfs_file_inode_operations;
> inode->i_fop = &hfs_file_operations;
> inode->i_mapping->a_ops = &hfs_aops;
> @@ -272,17 +270,27 @@ void hfs_delete_inode(struct inode *inode)
>
> hfs_dbg("ino %lu\n", inode->i_ino);
> if (S_ISDIR(inode->i_mode)) {
> - atomic64_dec(&HFS_SB(sb)->folder_count);
> - if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
> - HFS_SB(sb)->root_dirs--;
> + if (unlikely(!dec_u32_unless_wraparound(&HFS_SB(sb)->folder_count)))
> + HFS_SB(sb)->suggest_fsck = true;
> + if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID)) {
> + if (unlikely(!HFS_SB(sb)->root_dirs))
> + HFS_SB(sb)->suggest_fsck = true;
> + else
> + HFS_SB(sb)->root_dirs--;
> + }
> set_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags);
> hfs_mark_mdb_dirty(sb);
> return;
> }
>
> - atomic64_dec(&HFS_SB(sb)->file_count);
> - if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
> - HFS_SB(sb)->root_files--;
> + if (unlikely(!dec_u32_unless_wraparound(&HFS_SB(sb)->file_count)))
> + HFS_SB(sb)->suggest_fsck = true;
> + if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID)) {
> + if (unlikely(!HFS_SB(sb)->root_files))
> + HFS_SB(sb)->suggest_fsck = true;
> + else
> + HFS_SB(sb)->root_files--;
> + }
> if (S_ISREG(inode->i_mode)) {
> if (!inode->i_nlink) {
> inode->i_size = 0;
> diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> index a97cea35ca2e..c0e55e7233c9 100644
> --- a/fs/hfs/mdb.c
> +++ b/fs/hfs/mdb.c
> @@ -64,27 +64,6 @@ static int hfs_get_last_session(struct super_block *sb,
> return 0;
> }
>
> -bool is_hfs_cnid_counts_valid(struct super_block *sb)
> -{
> - struct hfs_sb_info *sbi = HFS_SB(sb);
> - bool corrupted = false;
> -
> - if (unlikely(atomic64_read(&sbi->next_id) > U32_MAX)) {
> - pr_warn("next CNID exceeds limit\n");
> - corrupted = true;
> - }
> - if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
> - pr_warn("file count exceeds limit\n");
> - corrupted = true;
> - }
> - if (unlikely(atomic64_read(&sbi->folder_count) > U32_MAX)) {
> - pr_warn("folder count exceeds limit\n");
> - corrupted = true;
> - }
> -
> - return !corrupted;
> -}
> -
> /*
> * hfs_mdb_get()
> *
> @@ -177,8 +156,8 @@ int hfs_mdb_get(struct super_block *sb)
> atomic64_set(&HFS_SB(sb)->next_id, be32_to_cpu(mdb->drNxtCNID));
> HFS_SB(sb)->root_files = be16_to_cpu(mdb->drNmFls);
> HFS_SB(sb)->root_dirs = be16_to_cpu(mdb->drNmRtDirs);
> - atomic64_set(&HFS_SB(sb)->file_count, be32_to_cpu(mdb->drFilCnt));
> - atomic64_set(&HFS_SB(sb)->folder_count, be32_to_cpu(mdb->drDirCnt));
> + HFS_SB(sb)->file_count = be32_to_cpu(mdb->drFilCnt);
> + HFS_SB(sb)->folder_count = be32_to_cpu(mdb->drDirCnt);
>
> if (!is_hfs_cnid_counts_valid(sb)) {
> pr_warn("filesystem possibly corrupted, running fsck.hfs is recommended. Mounting read-only.\n");
> @@ -300,10 +279,8 @@ void hfs_mdb_commit(struct super_block *sb)
> cpu_to_be32((u32)atomic64_read(&HFS_SB(sb)->next_id));
> mdb->drNmFls = cpu_to_be16(HFS_SB(sb)->root_files);
> mdb->drNmRtDirs = cpu_to_be16(HFS_SB(sb)->root_dirs);
> - mdb->drFilCnt =
> - cpu_to_be32((u32)atomic64_read(&HFS_SB(sb)->file_count));
> - mdb->drDirCnt =
> - cpu_to_be32((u32)atomic64_read(&HFS_SB(sb)->folder_count));
> + mdb->drFilCnt = cpu_to_be32(HFS_SB(sb)->file_count);
> + mdb->drDirCnt = cpu_to_be32(HFS_SB(sb)->folder_count);
>
> /* write MDB to disk */
> mark_buffer_dirty(HFS_SB(sb)->mdb_bh);
> diff --git a/fs/hfs/super.c b/fs/hfs/super.c
> index 97546d6b41f4..e150550a5ded 100644
> --- a/fs/hfs/super.c
> +++ b/fs/hfs/super.c
> @@ -34,7 +34,6 @@ MODULE_LICENSE("GPL");
>
> static int hfs_sync_fs(struct super_block *sb, int wait)
> {
> - is_hfs_cnid_counts_valid(sb);
We need to check it on every sync because, potentially, the file system could
stay mounted for days/weeks/months, and corruption could happen during any
metadata modification.
> hfs_mdb_commit(sb);
> return 0;
> }
> @@ -50,6 +49,10 @@ static void hfs_put_super(struct super_block *sb)
> {
> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
> hfs_mdb_close(sb);
> +
> + if (!is_hfs_cnid_counts_valid(sb) || HFS_SB(sb)->suggest_fsck)
> + pr_warn("filesystem error was detected, running fsck.hfs is recommended.\n");
> +
Nobody checks file system messages after unmount because nobody cares at that
point. But everyone really cares about mount and regular file system
operations. So, this check comes too late and is not necessary here.
> /* release the MDB's resources */
> hfs_mdb_put(sb);
> }
> @@ -66,8 +69,6 @@ static void flush_mdb(struct work_struct *work)
> sbi->work_queued = 0;
> spin_unlock(&sbi->work_lock);
>
> - is_hfs_cnid_counts_valid(sb);
> -
This check is important because it is done during the hfs_fill_super() call.
Thanks,
Slava.
> hfs_mdb_commit(sb);
> }
>
> @@ -322,8 +323,9 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
> int silent = fc->sb_flags & SB_SILENT;
> int res;
>
> - atomic64_set(&sbi->file_count, 0);
> - atomic64_set(&sbi->folder_count, 0);
> + sbi->suggest_fsck = false;
> + sbi->file_count = 0;
> + sbi->folder_count = 0;
> atomic64_set(&sbi->next_id, 0);
>
> /* load_nls_default does not fail */
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH] hfs: evaluate the upper 32bits for detecting overflow
2026-02-18 22:07 ` Viacheslav Dubeyko
@ 2026-02-19 14:00 ` Tetsuo Handa
2026-02-20 15:57 ` [PATCH] hfs: don't fail operations when files/directories counter overflows Tetsuo Handa
0 siblings, 1 reply; 17+ messages in thread
From: Tetsuo Handa @ 2026-02-19 14:00 UTC (permalink / raw)
To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de,
frank.li@vivo.com, slava@dubeyko.com, jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On 2026/02/19 7:07, Viacheslav Dubeyko wrote:
>> @@ -132,6 +132,7 @@ struct hfs_sb_info {
>> int work_queued; /* non-zero delayed work is queued */
>> struct delayed_work mdb_work; /* MDB flush delayed work */
>> spinlock_t work_lock; /* protects mdb_work and work_queued */
>> + bool suggest_fsck;
>
> I completely fail to see the point in adding such a field. We can always emit
> the warning message at the point where the problem is detected.
Calling printk() so frequently can cause stall warnings.
Calling printk() at mount or unmount time is fine, but
calling printk() at e.g. sync() time can become problematic.
> What's the point of making the code so complicated? What's wrong with
> atomic64_t? Are you trying to save 32 bits? :) I completely fail to see the
> point in such complicated logic.
These are u32 versions of atomic_add_unless(v, 1, -1) and atomic_add_unless(v, -1, 0),
since you said that it is confusing to keep in mind that a negative value needs
to be treated as positive.
You also said "It will easily introduce bugs.", but the reality is that
we had been failing to notice that the BUG_ON() in
	BUG_ON(atomic64_read(&HFS_SB(sb)->file_count) > U32_MAX);
	atomic64_dec(&HFS_SB(sb)->file_count);
does not fire when HFS_SB(sb)->file_count < 0, due to the s64 vs u32 comparison.
We can avoid such an oversight by changing it to a u32 vs u32 comparison using
dec_u32_unless_wraparound().
>> @@ -34,7 +34,6 @@ MODULE_LICENSE("GPL");
>>
>> static int hfs_sync_fs(struct super_block *sb, int wait)
>> {
>> - is_hfs_cnid_counts_valid(sb);
>
> We need to check it on every sync because, potentially, the file system could
> stay mounted for days/weeks/months, and corruption could happen during any
> metadata modification.
Calling printk() on every sync() might cause stall problems.
Also, if I understand the code correctly, these counter values are "for your
information" only, and are not guaranteed to be in sync with the actual number
of files/directories. It is not worth making operations fail over them.
> Nobody checks file system messages after unmount because nobody cares at that
> point. But everyone really cares about mount and regular file system
> operations. So, this check comes too late and is not necessary here.
Then, I suggest completely removing warning messages regarding these counter values.
>> @@ -66,8 +69,6 @@ static void flush_mdb(struct work_struct *work)
>> sbi->work_queued = 0;
>> spin_unlock(&sbi->work_lock);
>>
>> - is_hfs_cnid_counts_valid(sb);
>> -
>
> This check is important because it is done during the hfs_fill_super() call.
I couldn't follow that, for INIT_DELAYED_WORK(&sbi->mdb_work, flush_mdb)
in hfs_fill_super() does not call flush_mdb().
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH] hfs: don't fail operations when files/directories counter overflows
2026-02-19 14:00 ` Tetsuo Handa
@ 2026-02-20 15:57 ` Tetsuo Handa
2026-02-20 20:03 ` Viacheslav Dubeyko
0 siblings, 1 reply; 17+ messages in thread
From: Tetsuo Handa @ 2026-02-20 15:57 UTC (permalink / raw)
To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de,
frank.li@vivo.com, slava@dubeyko.com, jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
Commit b226804532a8 ("hfs: Replace BUG_ON with error handling for CNID
count checks") was an improvement in that it avoids a kernel crash, but
it fell short in several aspects.
The first aspect is that it increased the possibility of committing corrupted
counter values to the MDB, because that commit did not eliminate the
possibility of temporarily overflowed counter values even though these
counters might be concurrently read by hfs_mdb_commit(); nothing serializes
hfs_new_inode() and hfs_mdb_commit().
  hfs_new_inode() {                       hfs_mdb_commit() {
    (...snipped...)                         (...snipped...)
    // 4294967295 -> 4294967296
    file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
    // Detects overflow
    if (file_count > U32_MAX) {
                                            // 0 will be committed instead of 4294967295
                                            mdb->drFilCnt = cpu_to_be32((u32)atomic64_read(&HFS_SB(sb)->file_count));
      // 4294967296 -> 4294967295
      atomic64_dec(&HFS_SB(sb)->file_count);
      goto out_discard;
    }
    (...snipped...)                         (...snipped...)
  }                                       }
The second aspect is that is_hfs_cnid_counts_valid() cannot detect negative
counter values, due to the s64 vs u32 comparison:
	if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
		pr_warn("file count exceeds limit\n");
		corrupted = true;
	}
The third aspect is that is_hfs_cnid_counts_valid() and the related overflow
checks needlessly reject creation/deletion of files/directories, for overflow
of these counter values is not a fatal error condition. These counter values
are for informational use only (not guaranteed to be in sync with the actual
number of files/directories). The only fatal error condition that must be
rejected is that all available 32-bit inode numbers are already in use when
creating a file or directory. Other conditions can be checked and corrected
when fsck.hfs is run.
The fourth aspect is that is_hfs_cnid_counts_valid() calls printk() without
rate limiting. A filesystem could stay mounted for days/weeks/months, and
sync() could be requested e.g. 100 times per second. The consequence would be
stall problems due to printk() flooding and/or running out of disk space due
to unimportant kernel messages.
Fix some of these aspects by no longer allowing temporarily overflowed
counter values for files/directories and by removing the printk() calls for
the file/directory counters. This patch does not touch the next_id counter,
for which there are different topics to consider.
Technically speaking, we could shrink these counters from atomic64_t to
atomic_t, but this patch does not do so because Viacheslav Dubeyko does not
want to treat -1 (a negative value) as U32_MAX (a positive value).
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
fs/hfs/inode.c | 30 ++++++++++--------------------
fs/hfs/mdb.c | 8 --------
2 files changed, 10 insertions(+), 28 deletions(-)
diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
index 878535db64d6..e10f96d1711e 100644
--- a/fs/hfs/inode.c
+++ b/fs/hfs/inode.c
@@ -185,8 +185,6 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
struct super_block *sb = dir->i_sb;
struct inode *inode = new_inode(sb);
s64 next_id;
- s64 file_count;
- s64 folder_count;
int err = -ENOMEM;
if (!inode)
@@ -216,13 +214,8 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
HFS_I(inode)->tz_secondswest = sys_tz.tz_minuteswest * 60;
if (S_ISDIR(mode)) {
inode->i_size = 2;
- folder_count = atomic64_inc_return(&HFS_SB(sb)->folder_count);
- if (folder_count> U32_MAX) {
- atomic64_dec(&HFS_SB(sb)->folder_count);
- pr_err("cannot create new inode: folder count exceeds limit\n");
- goto out_discard;
- }
- if (dir->i_ino == HFS_ROOT_CNID)
+ atomic64_add_unless(&HFS_SB(sb)->folder_count, 1, U32_MAX);
+ if (dir->i_ino == HFS_ROOT_CNID && HFS_SB(sb)->root_dirs != U16_MAX)
HFS_SB(sb)->root_dirs++;
inode->i_op = &hfs_dir_inode_operations;
inode->i_fop = &hfs_dir_operations;
@@ -230,13 +223,8 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
inode->i_mode &= ~HFS_SB(inode->i_sb)->s_dir_umask;
} else if (S_ISREG(mode)) {
HFS_I(inode)->clump_blocks = HFS_SB(sb)->clumpablks;
- file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
- if (file_count > U32_MAX) {
- atomic64_dec(&HFS_SB(sb)->file_count);
- pr_err("cannot create new inode: file count exceeds limit\n");
- goto out_discard;
- }
- if (dir->i_ino == HFS_ROOT_CNID)
+ atomic64_add_unless(&HFS_SB(sb)->file_count, 1, U32_MAX);
+ if (dir->i_ino == HFS_ROOT_CNID && HFS_SB(sb)->root_files != U16_MAX)
HFS_SB(sb)->root_files++;
inode->i_op = &hfs_file_inode_operations;
inode->i_fop = &hfs_file_operations;
@@ -272,16 +260,18 @@ void hfs_delete_inode(struct inode *inode)
hfs_dbg("ino %lu\n", inode->i_ino);
if (S_ISDIR(inode->i_mode)) {
- atomic64_dec(&HFS_SB(sb)->folder_count);
- if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
+ atomic64_add_unless(&HFS_SB(sb)->folder_count, -1, 0);
+ if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID) &&
+ HFS_SB(sb)->root_dirs)
HFS_SB(sb)->root_dirs--;
set_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags);
hfs_mark_mdb_dirty(sb);
return;
}
- atomic64_dec(&HFS_SB(sb)->file_count);
- if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
+ atomic64_add_unless(&HFS_SB(sb)->file_count, -1, 0);
+ if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID) &&
+ HFS_SB(sb)->root_files)
HFS_SB(sb)->root_files--;
if (S_ISREG(inode->i_mode)) {
if (!inode->i_nlink) {
diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
index a97cea35ca2e..e79e36b7ed84 100644
--- a/fs/hfs/mdb.c
+++ b/fs/hfs/mdb.c
@@ -73,14 +73,6 @@ bool is_hfs_cnid_counts_valid(struct super_block *sb)
pr_warn("next CNID exceeds limit\n");
corrupted = true;
}
- if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
- pr_warn("file count exceeds limit\n");
- corrupted = true;
- }
- if (unlikely(atomic64_read(&sbi->folder_count) > U32_MAX)) {
- pr_warn("folder count exceeds limit\n");
- corrupted = true;
- }
return !corrupted;
}
--
2.53.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH] hfs: don't fail operations when files/directories counter overflows
2026-02-20 15:57 ` [PATCH] hfs: don't fail operations when files/directories counter overflows Tetsuo Handa
@ 2026-02-20 20:03 ` Viacheslav Dubeyko
2026-02-21 1:50 ` Tetsuo Handa
0 siblings, 1 reply; 17+ messages in thread
From: Viacheslav Dubeyko @ 2026-02-20 20:03 UTC (permalink / raw)
To: glaubitz@physik.fu-berlin.de, frank.li@vivo.com,
penguin-kernel@I-love.SAKURA.ne.jp, slava@dubeyko.com,
jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On Sat, 2026-02-21 at 00:57 +0900, Tetsuo Handa wrote:
> Commit b226804532a8 ("hfs: Replace BUG_ON with error handling for CNID
> count checks") was an improvement in that it avoids kernel crash, but
> was not an improvement in several aspects.
>
> The first aspect is that it increased the possibility of committing
> corrupted counter values to the mdb: that commit did not eliminate the
> possibility of temporarily overflowing counter values, even though these
> counters may be concurrently read by hfs_mdb_commit(), because nothing
> serializes hfs_new_inode() and hfs_mdb_commit().
>
> hfs_new_inode() { hfs_mdb_commit() {
> (...snipped...) (...snipped...)
> // 4294967295 -> 4294967296
> file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
> // Detects overflow
> if (file_count > U32_MAX) {
> // 0 will be committed instead of 4294967295
> mdb->drFilCnt = cpu_to_be32((u32)atomic64_read(&HFS_SB(sb)->file_count));
> // 4294967296 -> 4294967295
> atomic64_dec(&HFS_SB(sb)->file_count);
> goto out_discard;
> }
> (...snipped...) (...snipped...)
> } }
>
> The second aspect is that is_hfs_cnid_counts_valid() cannot detect
> negative counter values, due to the s64 vs u32 comparison.
>
> if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
> pr_warn("file count exceeds limit\n");
> corrupted = true;
> }
>
> The third aspect is that is_hfs_cnid_counts_valid() needlessly rejects
> creation/deletion of files/directories, for overflow of these counters
> is not a fatal error condition. These counters are only informational
> (and are not guaranteed to be in sync with the actual number of
> files/directories). The only fatal condition that must be rejected is
> that all available 32-bit inode numbers are already in use when creating
> a file or directory. Other conditions can be checked and corrected when
> fsck.hfs is run.
>
> The fourth aspect is that is_hfs_cnid_counts_valid() calls printk()
> without a ratelimit. A filesystem can stay mounted for days/weeks/months,
> and a sync() request could be issued, say, 100 times per second. The
> consequence would be stalls due to printk() flooding and/or running out
> of disk space because of unimportant kernel messages.
>
> Fix some of these aspects by disallowing temporarily overflowing counter
> values for files/directories, and removing the printk() calls for the
> file/directory counters. This patch does not touch the next_id counter,
> for there are different topics to consider there.
>
> Technically speaking, we could shrink these counters from atomic64_t
> to atomic_t, but this patch does not do so, because Viacheslav Dubeyko
> does not want to treat -1 (a negative value) as U32_MAX (a positive value).
I cannot accept the patch with such a comment, because I am not asking for a
favor. We should have a technically reasonable solution.
>
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> ---
> fs/hfs/inode.c | 30 ++++++++++--------------------
> fs/hfs/mdb.c | 8 --------
> 2 files changed, 10 insertions(+), 28 deletions(-)
>
> diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
> index 878535db64d6..e10f96d1711e 100644
> --- a/fs/hfs/inode.c
> +++ b/fs/hfs/inode.c
> @@ -185,8 +185,6 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> struct super_block *sb = dir->i_sb;
> struct inode *inode = new_inode(sb);
> s64 next_id;
> - s64 file_count;
> - s64 folder_count;
> int err = -ENOMEM;
>
> if (!inode)
> @@ -216,13 +214,8 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> HFS_I(inode)->tz_secondswest = sys_tz.tz_minuteswest * 60;
> if (S_ISDIR(mode)) {
> inode->i_size = 2;
> - folder_count = atomic64_inc_return(&HFS_SB(sb)->folder_count);
> - if (folder_count> U32_MAX) {
> - atomic64_dec(&HFS_SB(sb)->folder_count);
> - pr_err("cannot create new inode: folder count exceeds limit\n");
> - goto out_discard;
> - }
> - if (dir->i_ino == HFS_ROOT_CNID)
> + atomic64_add_unless(&HFS_SB(sb)->folder_count, 1, U32_MAX);
We cannot simply silently stop accounting folders count. We should complain and
must return error.
> + if (dir->i_ino == HFS_ROOT_CNID && HFS_SB(sb)->root_dirs != U16_MAX)
> HFS_SB(sb)->root_dirs++;
> inode->i_op = &hfs_dir_inode_operations;
> inode->i_fop = &hfs_dir_operations;
> @@ -230,13 +223,8 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
> inode->i_mode &= ~HFS_SB(inode->i_sb)->s_dir_umask;
> } else if (S_ISREG(mode)) {
> HFS_I(inode)->clump_blocks = HFS_SB(sb)->clumpablks;
> - file_count = atomic64_inc_return(&HFS_SB(sb)->file_count);
> - if (file_count > U32_MAX) {
> - atomic64_dec(&HFS_SB(sb)->file_count);
> - pr_err("cannot create new inode: file count exceeds limit\n");
> - goto out_discard;
> - }
> - if (dir->i_ino == HFS_ROOT_CNID)
> + atomic64_add_unless(&HFS_SB(sb)->file_count, 1, U32_MAX);
The same here. We cannot simply silently stop accounting files count. We should
complain and must return error.
> + if (dir->i_ino == HFS_ROOT_CNID && HFS_SB(sb)->root_files != U16_MAX)
> HFS_SB(sb)->root_files++;
> inode->i_op = &hfs_file_inode_operations;
> inode->i_fop = &hfs_file_operations;
> @@ -272,16 +260,18 @@ void hfs_delete_inode(struct inode *inode)
>
> hfs_dbg("ino %lu\n", inode->i_ino);
> if (S_ISDIR(inode->i_mode)) {
> - atomic64_dec(&HFS_SB(sb)->folder_count);
> - if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
> + atomic64_add_unless(&HFS_SB(sb)->folder_count, -1, 0);
I don't see point to change atomic64_dec() on atomic64_add_unless() here.
> + if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID) &&
> + HFS_SB(sb)->root_dirs)
> HFS_SB(sb)->root_dirs--;
> set_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags);
> hfs_mark_mdb_dirty(sb);
> return;
> }
>
> - atomic64_dec(&HFS_SB(sb)->file_count);
> - if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
> + atomic64_add_unless(&HFS_SB(sb)->file_count, -1, 0);
The same comment here.
> + if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID) &&
> + HFS_SB(sb)->root_files)
> HFS_SB(sb)->root_files--;
> if (S_ISREG(inode->i_mode)) {
> if (!inode->i_nlink) {
> diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> index a97cea35ca2e..e79e36b7ed84 100644
> --- a/fs/hfs/mdb.c
> +++ b/fs/hfs/mdb.c
> @@ -73,14 +73,6 @@ bool is_hfs_cnid_counts_valid(struct super_block *sb)
> pr_warn("next CNID exceeds limit\n");
> corrupted = true;
> }
> - if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
> - pr_warn("file count exceeds limit\n");
> - corrupted = true;
> - }
> - if (unlikely(atomic64_read(&sbi->folder_count) > U32_MAX)) {
> - pr_warn("folder count exceeds limit\n");
> - corrupted = true;
> - }
We need to have this check here. I don't see the point to remove these checks.
Thanks,
Slava.
>
> return !corrupted;
> }
* Re: [PATCH] hfs: don't fail operations when files/directories counter overflows
2026-02-20 20:03 ` Viacheslav Dubeyko
@ 2026-02-21 1:50 ` Tetsuo Handa
2026-02-26 14:01 ` Tetsuo Handa
0 siblings, 1 reply; 17+ messages in thread
From: Tetsuo Handa @ 2026-02-21 1:50 UTC (permalink / raw)
To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de,
frank.li@vivo.com, slava@dubeyko.com, jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On 2026/02/21 5:03, Viacheslav Dubeyko wrote:
>> @@ -216,13 +214,8 @@ struct inode *hfs_new_inode(struct inode *dir, const struct qstr *name, umode_t
>> HFS_I(inode)->tz_secondswest = sys_tz.tz_minuteswest * 60;
>> if (S_ISDIR(mode)) {
>> inode->i_size = 2;
>> - folder_count = atomic64_inc_return(&HFS_SB(sb)->folder_count);
>> - if (folder_count> U32_MAX) {
>> - atomic64_dec(&HFS_SB(sb)->folder_count);
>> - pr_err("cannot create new inode: folder count exceeds limit\n");
>> - goto out_discard;
>> - }
>> - if (dir->i_ino == HFS_ROOT_CNID)
>> + atomic64_add_unless(&HFS_SB(sb)->folder_count, 1, U32_MAX);
>
> We cannot simply silently stop accounting folders count. We should complain and
> must return error.
If there were no race window for committing stale counter values to the mdb,
and no possibility of mdb corruption, overflow would be impossible:
0 <= file_count + folder_count <= U32_MAX is guaranteed because a CNID is 32 bits.
But as the syzbot reports show, when the mdb is corrupted or inaccurate,
file_count can be larger or smaller than the actual number of files, and
folder_count can be larger or smaller than the actual number of folders. Also,
as I mentioned in this patch, there is always a race window for committing stale
counter values. Users won't notice this inaccuracy unless fsck.hfs is run, and
the inaccuracy needlessly disturbs the user's operations if we return an error.
I suggested reporting this inaccuracy via printk() once, but you said that doing
it at unmount time is too late. You also don't want to add per-mount flags for
tracking whether we have already reported the inaccuracy after detecting it. In
that case, we should not call printk() at all, because it would be too much noise.
Also, the current code already silently ignores overflow of the root
folders/files counters. Overflow of the root counters should be handled just
like overflow of the total folders/files counters.
>> @@ -272,16 +260,18 @@ void hfs_delete_inode(struct inode *inode)
>>
>> hfs_dbg("ino %lu\n", inode->i_ino);
>> if (S_ISDIR(inode->i_mode)) {
>> - atomic64_dec(&HFS_SB(sb)->folder_count);
>> - if (HFS_I(inode)->cat_key.ParID == cpu_to_be32(HFS_ROOT_CNID))
>> + atomic64_add_unless(&HFS_SB(sb)->folder_count, -1, 0);
>
> I don't see point to change atomic64_dec() on atomic64_add_unless() here.
When folder_count is smaller than the actual number of folders, folder_count
can become negative. But due to the s64 vs u32 comparison, a counter that has
gone negative is never reported. Underflow upon decrement should be handled
just like overflow upon increment, but you are not seeing that point.
>> @@ -73,14 +73,6 @@ bool is_hfs_cnid_counts_valid(struct super_block *sb)
>> pr_warn("next CNID exceeds limit\n");
>> corrupted = true;
>> }
>> - if (unlikely(atomic64_read(&sbi->file_count) > U32_MAX)) {
>> - pr_warn("file count exceeds limit\n");
>> - corrupted = true;
>> - }
>> - if (unlikely(atomic64_read(&sbi->folder_count) > U32_MAX)) {
>> - pr_warn("folder count exceeds limit\n");
>> - corrupted = true;
>> - }
>
> We need to have this check here. I don't see the point to remove these checks.
This check is useless here. Calling printk() *only once for each mount* when
overflow (on either increment or decrement) is detected for that mount would
be useful.
* Re: [PATCH] hfs: don't fail operations when files/directories counter overflows
2026-02-21 1:50 ` Tetsuo Handa
@ 2026-02-26 14:01 ` Tetsuo Handa
2026-02-26 17:45 ` Viacheslav Dubeyko
0 siblings, 1 reply; 17+ messages in thread
From: Tetsuo Handa @ 2026-02-26 14:01 UTC (permalink / raw)
To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de,
frank.li@vivo.com, slava@dubeyko.com, jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On 2026/02/21 10:50, Tetsuo Handa wrote:
>> We cannot simply silently stop accounting folders count. We should complain and
>> must return error.
Here is an opinion from Google AI mode.
https://share.google/aimode/VTH5mHPFmaH62fnwx (Expires in 7 days. Please save if needed.)
* RE: [PATCH] hfs: don't fail operations when files/directories counter overflows
2026-02-26 14:01 ` Tetsuo Handa
@ 2026-02-26 17:45 ` Viacheslav Dubeyko
2026-02-28 5:42 ` Tetsuo Handa
0 siblings, 1 reply; 17+ messages in thread
From: Viacheslav Dubeyko @ 2026-02-26 17:45 UTC (permalink / raw)
To: glaubitz@physik.fu-berlin.de, frank.li@vivo.com,
penguin-kernel@I-love.SAKURA.ne.jp, slava@dubeyko.com,
jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On Thu, 2026-02-26 at 23:01 +0900, Tetsuo Handa wrote:
> On 2026/02/21 10:50, Tetsuo Handa wrote:
> > > We cannot simply silently stop accounting folders count. We should complain and
> > > must return error.
>
> Here is an opinion from Google AI mode.
>
> https://share.google/aimode/VTH5mHPFmaH62fnwx (Expires in 7 days. Please save if needed.)
I am not going to check what absurdities GPUs and ML models can generate. :) I
hope you are not serious about this. Stupid hardware cannot have an opinion
because it has no soul, it cannot think, it has no consciousness, and it has no
idea about ethics.
Best regards,
Slava.
* Re: [PATCH] hfs: don't fail operations when files/directories counter overflows
2026-02-26 17:45 ` Viacheslav Dubeyko
@ 2026-02-28 5:42 ` Tetsuo Handa
0 siblings, 0 replies; 17+ messages in thread
From: Tetsuo Handa @ 2026-02-28 5:42 UTC (permalink / raw)
To: Viacheslav Dubeyko, glaubitz@physik.fu-berlin.de,
frank.li@vivo.com, slava@dubeyko.com, jkoolstra@xs4all.nl
Cc: linux-fsdevel@vger.kernel.org
On 2026/02/27 2:45, Viacheslav Dubeyko wrote:
> On Thu, 2026-02-26 at 23:01 +0900, Tetsuo Handa wrote:
>> On 2026/02/21 10:50, Tetsuo Handa wrote:
>>>> We cannot simply silently stop accounting folders count. We should complain and
>>>> must return error.
>>
>> Here is an opinion from Google AI mode.
>>
>> https://share.google/aimode/VTH5mHPFmaH62fnwx (Expires in 7 days. Please save if needed.)
>
> I am not going to check what absurdities GPUs and ML models can generate. :)
> I hope you are not serious about this. Stupid hardware cannot have an opinion
> because it has no soul, it cannot think, it has no consciousness, and it has
> no idea about ethics.
You are saying we "MUST return error" and I am saying we "NEED NOT return
error". A machine learning model compared several filesystems and agreed that
the total count variable is merely a "cache" used for speed, and that my
approach is therefore acceptable.
Since I don't like that you simply return an error without recalculating the
total count from the source of truth, I suggest recalculating it without
returning an error. The current code, which just returns an error, can
surprise users.
end of thread, other threads:[~2026-02-28 5:42 UTC | newest]
Thread overview: 17+ messages
2026-02-11 10:29 [PATCH] hfs: evaluate the upper 32bits for detecting overflow Tetsuo Handa
2026-02-11 22:36 ` Viacheslav Dubeyko
2026-02-12 13:27 ` Tetsuo Handa
2026-02-12 21:51 ` Viacheslav Dubeyko
2026-02-13 10:34 ` Tetsuo Handa
2026-02-13 22:45 ` Viacheslav Dubeyko
2026-02-14 7:09 ` Tetsuo Handa
2026-02-17 0:59 ` Viacheslav Dubeyko
2026-02-17 12:41 ` Tetsuo Handa
2026-02-18 22:07 ` Viacheslav Dubeyko
2026-02-19 14:00 ` Tetsuo Handa
2026-02-20 15:57 ` [PATCH] hfs: don't fail operations when files/directories counter overflows Tetsuo Handa
2026-02-20 20:03 ` Viacheslav Dubeyko
2026-02-21 1:50 ` Tetsuo Handa
2026-02-26 14:01 ` Tetsuo Handa
2026-02-26 17:45 ` Viacheslav Dubeyko
2026-02-28 5:42 ` Tetsuo Handa
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox