* [PATCH v7 0/2] hfsplus: prevent b-tree allocator corruption
@ 2026-03-18 7:38 Shardul Bankar
2026-03-18 7:38 ` [PATCH v7 1/2] hfsplus: refactor b-tree map page access and add node-type validation Shardul Bankar
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Shardul Bankar @ 2026-03-18 7:38 UTC (permalink / raw)
To: slava, glaubitz, frank.li, linux-fsdevel, linux-kernel
Cc: janak, janak, shardulsb08, Shardul Bankar
Hi all,
This series addresses a Syzkaller-reported vulnerability where fuzzed
HFS+ images mark the B-tree Header Node (Node 0) as free in the
allocation bitmap. This violates a core filesystem invariant and leads
to allocator corruption and kernel panics.
To fix this safely and cleanly, the series is split into two parts:
Patch 1 introduces a unified, robust API for B-tree map record access
(struct hfs_bmap_ctx, hfs_bmap_get_map_page, and hfs_bmap_clear_bit) and
refactors the boilerplate page mapping logic out of hfs_bmap_alloc() and
hfs_bmap_free().
Patch 2 utilizes this new API to perform a mount-time validation of Node
0 via hfs_bmap_test_bit(), forcing a safe read-only mount if structural
or bit-level corruption is detected.
Note on Allocator Optimization: Following discussions in v4,
there is a recognized opportunity to optimize hfs_bmap_alloc()
from a first-fit to a next-fit allocator by caching an in-core
allocation hint (roving pointer). To keep the scope of this series
strictly aligned with the Syzkaller corruption fix, that architectural
optimization is deferred to a separate, follow-up patchset/thread.
v7:
- Type Safety: Changed the return type of hfs_bmap_test_bit() to `bool`,
allowing the mount-time validation in hfs_btree_open() to cleanly
catch both IO errors and cleared bits in a single evaluation.
- Overflow Prevention: Added an explicit `(u32)` cast to `off16` during
page offset calculation to clearly document and prevent 16-bit integer
overflows.
- String Accuracy: Updated the CNID string macros to use their official
structural names ("Extents Overflow File", "Catalog File", "Attributes
File") per maintainer feedback.
- Documentation: Corrected a stale docstring error code (-EALREADY to
-EINVAL) above hfs_bmap_clear_bit().
v6:
- Symmetric Mapping: Updated hfs_bmap_get_map_page() to return an unmapped
struct page * instead of a mapped pointer. This ensures the caller
explicitly handles both kmap_local_page() and kunmap_local(), preventing
dangerous asymmetric mapping lifecycles.
- Bisectability: Moved the introduction of hfs_bmap_test_bit() from Patch 1
to Patch 2 where it is actually consumed, preventing a -Wunused-function
compiler warning and keeping the Git history perfectly bisectable.
- API Clarity: Renamed the bit_idx parameter to node_bit_idx in the bit-level
helpers to explicitly clarify that the index is strictly relative to the
target hfs_bnode's map record, preventing future absolute-index misuse.
- Naming & Style: Replaced hardcoded 8s with BITS_PER_BYTE, updated local
variable names (m to mask, data to bmap inside the new helpers), and added
kernel-doc field descriptions to struct hfs_bmap_ctx.
- Minimal Diff Scope: Restored the original variable names (data, m) inside
the legacy hfs_bmap_alloc() loop to keep the diff surgically focused on the
logical changes and preserve git blame history.
- Error Codes: Changed the error return in hfs_bmap_clear_bit() from
-EALREADY to -EINVAL.
- CNID String Lookup: Replaced the sparse string array with #define macros
and a standard switch statement for cleaner subsystem visibility, per
Slava's preference.
v5:
- API Encapsulation: Introduced struct hfs_bmap_ctx to cleanly bundle
offset, length, and page index state instead of passing multiple
pointers, addressing reviewer feedback.
- Bit-Level Helpers: Added hfs_bmap_test_bit() and hfs_bmap_clear_bit()
to safely encapsulate mapping/unmapping for single-bit accesses
(like the mount-time check and node freeing).
- Performance Retention: Retained the page-level mapping approach for
the linear scan inside hfs_bmap_alloc() to prevent the severe
performance regression of mapping/unmapping on a per-byte basis,
while refactoring it to use the new ctx struct.
- Hexagon Overflow Fix: Fixed a 0-day Kernel Test Robot warning on
architectures with 256KB page sizes by upgrading the offset variables
in the new struct hfs_bmap_ctx to unsigned int, preventing 16-bit shift
overflows.
Link: https://lore.kernel.org/all/202602270310.eBmeD8VX-lkp@intel.com/
- Map Record Spanning: Added a byte_offset parameter to the page mapper
to correctly handle large map records that span across multiple 4KB
pages.
- Loop Mask Revert: Reverted the 0x80 bitmask in the alloc() inner loop
back to its original state (and dropped the HFSPLUS_BTREE_NODE0_BIT
macro), as it represents a generic sliding mask, not specifically
Node 0.
- String Array Cleanup: Replaced the verbose switch(id) block in the
mount validation with a clean static array of constant strings for
the CNID names, per reviewer feedback.
v4:
- Split the changes into a 2-patch series (Refactoring + Bug Fix).
- Extracted map node traversal into a generic helper (hfs_bmap_get_map_page)
as per Slava's feedback, replacing manual offset/page management.
- Added node-type validation (HFS_NODE_HEADER vs HFS_NODE_MAP) inside the
helper to defend against structurally corrupted linkages.
- Replaced hardcoded values with named macros (HFSPLUS_BTREE_NODE0_BIT, etc).
- Handled invalid map offsets/lengths as corruption, continuing the mount
as SB_RDONLY instead of failing it completely, to allow data recovery.
v3:
- Moved validation logic inline into hfs_btree_open() to allow
reporting the specific corrupted tree ID.
- Replaced custom offset calculations with existing hfs_bnode_find()
and hfs_brec_lenoff() infrastructure to handle node sizes and
page boundaries correctly.
- Removed temporary 'btree_bitmap_corrupted' superblock flag; setup
SB_RDONLY directly upon detection.
- Moved logging to hfs_btree_open() to include the specific tree ID in
the warning message
- Used explicit bitwise check (&) instead of test_bit() to ensure
portability. test_bit() bit-numbering is architecture-dependent
(e.g., bit 0 vs bit 7 can swap meanings on BE vs LE), whereas
masking 0x80 consistently targets the MSB required by the HFS+
on-disk format.
v2:
- Fix a compiler warning about comparing the u16 bitmap_off with PAGE_SIZE,
which can exceed the u16 maximum on some architectures
- Cast bitmap_off to unsigned int for the PAGE_SIZE comparison to avoid
tautological constant-out-of-range comparison warning.
- Link: https://lore.kernel.org/oe-kbuild-all/202601251011.kJUhBF3P-lkp@intel.com/
Shardul Bankar (2):
hfsplus: refactor b-tree map page access and add node-type validation
hfsplus: validate b-tree node 0 bitmap at mount time
fs/hfsplus/btree.c | 229 +++++++++++++++++++++++++++++--------
include/linux/hfs_common.h | 2 +
2 files changed, 184 insertions(+), 47 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 6+ messages in thread
* [PATCH v7 1/2] hfsplus: refactor b-tree map page access and add node-type validation
2026-03-18 7:38 [PATCH v7 0/2] hfsplus: prevent b-tree allocator corruption Shardul Bankar
@ 2026-03-18 7:38 ` Shardul Bankar
2026-03-19 3:01 ` Viacheslav Dubeyko
2026-03-18 7:38 ` [PATCH v7 2/2] hfsplus: validate b-tree node 0 bitmap at mount time Shardul Bankar
2026-03-19 3:52 ` [PATCH v7 0/2] hfsplus: prevent b-tree allocator corruption Viacheslav Dubeyko
2 siblings, 1 reply; 6+ messages in thread
From: Shardul Bankar @ 2026-03-18 7:38 UTC (permalink / raw)
To: slava, glaubitz, frank.li, linux-fsdevel, linux-kernel
Cc: janak, janak, shardulsb08, Shardul Bankar
In HFS+ b-trees, the node allocation bitmap is stored across multiple
records. The first chunk resides in the b-tree Header Node at record
index 2, while all subsequent chunks are stored in dedicated Map Nodes
at record index 0.
This structural quirk forces callers like hfs_bmap_alloc() and
hfs_bmap_free() to duplicate boilerplate code to validate offsets, correct
lengths, and map the underlying pages via kmap_local_page(). There is
also currently no strict node-type validation before reading these
records, leaving the allocator vulnerable if a corrupted image points a
map linkage to an Index or Leaf node.
Introduce a unified bit-level API to encapsulate the map record access:
1. A new `struct hfs_bmap_ctx` to cleanly pass state and safely handle
page math across all architectures.
2. `hfs_bmap_get_map_page()`: Automatically validates node types
(HFS_NODE_HEADER vs HFS_NODE_MAP), infers the correct record index,
handles page-boundary math, and returns the unmapped `struct page *`
directly to the caller to avoid asymmetric mappings.
3. `hfs_bmap_clear_bit()`: A clean wrapper that internally handles page
mapping/unmapping for single-bit operations.
Refactor hfs_bmap_alloc() and hfs_bmap_free() to utilize this new API.
This deduplicates the allocator logic, hardens the map traversal against
fuzzed images, and provides the exact abstractions needed for upcoming
mount-time validation checks.
Signed-off-by: Shardul Bankar <shardul.b@mpiricsoftware.com>
---
fs/hfsplus/btree.c | 169 ++++++++++++++++++++++++++-----------
include/linux/hfs_common.h | 2 +
2 files changed, 124 insertions(+), 47 deletions(-)
diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c
index 1220a2f22737..64d168347b4b 100644
--- a/fs/hfsplus/btree.c
+++ b/fs/hfsplus/btree.c
@@ -129,6 +129,95 @@ u32 hfsplus_calc_btree_clump_size(u32 block_size, u32 node_size,
return clump_size;
}
+/* Context for iterating b-tree map pages
+ * @page_idx: The index of the page within the b-node's page array
+ * @off: The byte offset within the mapped page
+ * @len: The remaining length of the map record
+ */
+struct hfs_bmap_ctx {
+ unsigned int page_idx;
+ unsigned int off;
+ u16 len;
+};
+
+/*
+ * Finds the specific page containing the requested byte offset within the map
+ * record. Automatically handles the difference between header and map nodes.
+ * Returns the struct page pointer, or an ERR_PTR on failure.
+ * Note: The caller is responsible for mapping/unmapping the returned page.
+ */
+static struct page *hfs_bmap_get_map_page(struct hfs_bnode *node, struct hfs_bmap_ctx *ctx,
+ u32 byte_offset)
+{
+ u16 rec_idx, off16;
+ unsigned int page_off;
+
+ if (node->this == HFSPLUS_TREE_HEAD) {
+ if (node->type != HFS_NODE_HEADER) {
+ pr_err("hfsplus: invalid btree header node\n");
+ return ERR_PTR(-EIO);
+ }
+ rec_idx = HFSPLUS_BTREE_HDR_MAP_REC_INDEX;
+ } else {
+ if (node->type != HFS_NODE_MAP) {
+ pr_err("hfsplus: invalid btree map node\n");
+ return ERR_PTR(-EIO);
+ }
+ rec_idx = HFSPLUS_BTREE_MAP_NODE_REC_INDEX;
+ }
+
+ ctx->len = hfs_brec_lenoff(node, rec_idx, &off16);
+ if (!ctx->len)
+ return ERR_PTR(-ENOENT);
+
+ if (!is_bnode_offset_valid(node, off16))
+ return ERR_PTR(-EIO);
+
+ ctx->len = check_and_correct_requested_length(node, off16, ctx->len);
+
+ if (byte_offset >= ctx->len)
+ return ERR_PTR(-EINVAL);
+
+ page_off = (u32)off16 + node->page_offset + byte_offset;
+ ctx->page_idx = page_off >> PAGE_SHIFT;
+ ctx->off = page_off & ~PAGE_MASK;
+
+ return node->page[ctx->page_idx];
+}
+
+/**
+ * hfs_bmap_clear_bit - clear a bit in the b-tree map
+ * @node: the b-tree node containing the map record
+ * @node_bit_idx: the relative bit index within the node's map record
+ *
+ * Returns 0 on success, -EINVAL if the bit is already clear, or a negative error code.
+ */
+static int hfs_bmap_clear_bit(struct hfs_bnode *node, u32 node_bit_idx)
+{
+ struct hfs_bmap_ctx ctx;
+ struct page *page;
+ u8 *bmap, mask;
+
+ page = hfs_bmap_get_map_page(node, &ctx, node_bit_idx / BITS_PER_BYTE);
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+ bmap = kmap_local_page(page);
+
+ mask = 1 << (7 - (node_bit_idx % BITS_PER_BYTE));
+
+ if (!(bmap[ctx.off] & mask)) {
+ kunmap_local(bmap);
+ return -EINVAL;
+ }
+
+ bmap[ctx.off] &= ~mask;
+ set_page_dirty(page);
+ kunmap_local(bmap);
+
+ return 0;
+}
+
/* Get a reference to a B*Tree and do some initial checks */
struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
{
@@ -374,11 +463,9 @@ int hfs_bmap_reserve(struct hfs_btree *tree, u32 rsvd_nodes)
struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
{
struct hfs_bnode *node, *next_node;
- struct page **pagep;
+ struct hfs_bmap_ctx ctx;
+ struct page *page;
u32 nidx, idx;
- unsigned off;
- u16 off16;
- u16 len;
u8 *data, byte, m;
int i, res;
@@ -390,30 +477,26 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
node = hfs_bnode_find(tree, nidx);
if (IS_ERR(node))
return node;
- len = hfs_brec_lenoff(node, 2, &off16);
- off = off16;
- if (!is_bnode_offset_valid(node, off)) {
+ page = hfs_bmap_get_map_page(node, &ctx, 0);
+ if (IS_ERR(page)) {
+ res = PTR_ERR(page);
hfs_bnode_put(node);
- return ERR_PTR(-EIO);
+ return ERR_PTR(res);
}
- len = check_and_correct_requested_length(node, off, len);
- off += node->page_offset;
- pagep = node->page + (off >> PAGE_SHIFT);
- data = kmap_local_page(*pagep);
- off &= ~PAGE_MASK;
+ data = kmap_local_page(page);
idx = 0;
for (;;) {
- while (len) {
- byte = data[off];
+ while (ctx.len) {
+ byte = data[ctx.off];
if (byte != 0xff) {
for (m = 0x80, i = 0; i < 8; m >>= 1, i++) {
if (!(byte & m)) {
idx += i;
- data[off] |= m;
- set_page_dirty(*pagep);
+ data[ctx.off] |= m;
+ set_page_dirty(page);
kunmap_local(data);
tree->free_nodes--;
mark_inode_dirty(tree->inode);
@@ -423,13 +506,14 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
}
}
}
- if (++off >= PAGE_SIZE) {
+ if (++ctx.off >= PAGE_SIZE) {
kunmap_local(data);
- data = kmap_local_page(*++pagep);
- off = 0;
+ page = node->page[++ctx.page_idx];
+ data = kmap_local_page(page);
+ ctx.off = 0;
}
idx += 8;
- len--;
+ ctx.len--;
}
kunmap_local(data);
nidx = node->next;
@@ -443,22 +527,22 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
return next_node;
node = next_node;
- len = hfs_brec_lenoff(node, 0, &off16);
- off = off16;
- off += node->page_offset;
- pagep = node->page + (off >> PAGE_SHIFT);
- data = kmap_local_page(*pagep);
- off &= ~PAGE_MASK;
+ page = hfs_bmap_get_map_page(node, &ctx, 0);
+ if (IS_ERR(page)) {
+ res = PTR_ERR(page);
+ hfs_bnode_put(node);
+ return ERR_PTR(res);
+ }
+ data = kmap_local_page(page);
}
}
void hfs_bmap_free(struct hfs_bnode *node)
{
struct hfs_btree *tree;
- struct page *page;
u16 off, len;
u32 nidx;
- u8 *data, byte, m;
+ int res;
hfs_dbg("node %u\n", node->this);
BUG_ON(!node->this);
@@ -495,24 +579,15 @@ void hfs_bmap_free(struct hfs_bnode *node)
}
len = hfs_brec_lenoff(node, 0, &off);
}
- off += node->page_offset + nidx / 8;
- page = node->page[off >> PAGE_SHIFT];
- data = kmap_local_page(page);
- off &= ~PAGE_MASK;
- m = 1 << (~nidx & 7);
- byte = data[off];
- if (!(byte & m)) {
- pr_crit("trying to free free bnode "
- "%u(%d)\n",
- node->this, node->type);
- kunmap_local(data);
- hfs_bnode_put(node);
- return;
+
+ res = hfs_bmap_clear_bit(node, nidx);
+ if (res == -EINVAL) {
+ pr_crit("trying to free free bnode %u(%d)\n",
+ node->this, node->type);
+ } else if (!res) {
+ tree->free_nodes++;
+ mark_inode_dirty(tree->inode);
}
- data[off] = byte & ~m;
- set_page_dirty(page);
- kunmap_local(data);
+
hfs_bnode_put(node);
- tree->free_nodes++;
- mark_inode_dirty(tree->inode);
}
diff --git a/include/linux/hfs_common.h b/include/linux/hfs_common.h
index dadb5e0aa8a3..be24c687858e 100644
--- a/include/linux/hfs_common.h
+++ b/include/linux/hfs_common.h
@@ -510,6 +510,8 @@ struct hfs_btree_header_rec {
#define HFSPLUS_NODE_MXSZ 32768
#define HFSPLUS_ATTR_TREE_NODE_SIZE 8192
#define HFSPLUS_BTREE_HDR_NODE_RECS_COUNT 3
+#define HFSPLUS_BTREE_HDR_MAP_REC_INDEX 2 /* Map (bitmap) record in Header node */
+#define HFSPLUS_BTREE_MAP_NODE_REC_INDEX 0 /* Map record in Map Node */
#define HFSPLUS_BTREE_HDR_USER_BYTES 128
/* btree key type */
--
2.34.1
* [PATCH v7 2/2] hfsplus: validate b-tree node 0 bitmap at mount time
2026-03-18 7:38 [PATCH v7 0/2] hfsplus: prevent b-tree allocator corruption Shardul Bankar
2026-03-18 7:38 ` [PATCH v7 1/2] hfsplus: refactor b-tree map page access and add node-type validation Shardul Bankar
@ 2026-03-18 7:38 ` Shardul Bankar
2026-03-19 3:01 ` Viacheslav Dubeyko
2026-03-19 3:52 ` [PATCH v7 0/2] hfsplus: prevent b-tree allocator corruption Viacheslav Dubeyko
2 siblings, 1 reply; 6+ messages in thread
From: Shardul Bankar @ 2026-03-18 7:38 UTC (permalink / raw)
To: slava, glaubitz, frank.li, linux-fsdevel, linux-kernel
Cc: janak, janak, shardulsb08, Shardul Bankar,
syzbot+1c8ff72d0cd8a50dfeaa
Syzkaller reported an issue with corrupted HFS+ images where the b-tree
allocation bitmap indicates that the header node (Node 0) is free. Node 0
must always be allocated as it contains the b-tree header record and the
allocation bitmap itself. Violating this invariant leads to allocator
corruption, which cascades into kernel panics or undefined behavior when
the filesystem attempts to allocate blocks.
Avoid trusting corrupted allocator state by adding a validation check
during hfs_btree_open(). Introduce the hfs_bmap_test_bit() helper
(utilizing the newly added map-access API) to safely verify that the MSB
of the first bitmap byte (representing Node 0) is marked as allocated.
The helper returns a boolean, allowing the caller to safely catch both
structural IO errors and illegally cleared bits in a single check.
If corruption is detected, print a warning identifying the specific
corrupted tree and force the filesystem to mount read-only (SB_RDONLY).
This prevents kernel panics from corrupted images while enabling data
recovery.
As a minor cleanup to support the warning logs, replace the verbose CNID
logic with cleaner macro definitions (using official structural names like
"Extents Overflow File") and a dedicated string lookup helper.
Reported-by: syzbot+1c8ff72d0cd8a50dfeaa@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?extid=1c8ff72d0cd8a50dfeaa
Link: https://lore.kernel.org/all/20260315172005.2066677-1-shardul.b@mpiricsoftware.com/
Signed-off-by: Shardul Bankar <shardul.b@mpiricsoftware.com>
---
fs/hfsplus/btree.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 60 insertions(+)
diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c
index 64d168347b4b..04304f952f6b 100644
--- a/fs/hfsplus/btree.c
+++ b/fs/hfsplus/btree.c
@@ -185,6 +185,32 @@ static struct page *hfs_bmap_get_map_page(struct hfs_bnode *node, struct hfs_bma
return node->page[ctx->page_idx];
}
+/**
+ * hfs_bmap_test_bit - test a bit in the b-tree map
+ * @node: the b-tree node containing the map record
+ * @node_bit_idx: the relative bit index within the node's map record
+ *
+ * Returns true if set, false if clear or on failure.
+ */
+static bool hfs_bmap_test_bit(struct hfs_bnode *node, u32 node_bit_idx)
+{
+ struct hfs_bmap_ctx ctx;
+ struct page *page;
+ u8 *bmap, byte, mask;
+
+ page = hfs_bmap_get_map_page(node, &ctx, node_bit_idx / BITS_PER_BYTE);
+ if (IS_ERR(page))
+ return false;
+
+ bmap = kmap_local_page(page);
+ byte = bmap[ctx.off];
+ kunmap_local(bmap);
+
+ mask = 1 << (7 - (node_bit_idx % BITS_PER_BYTE));
+ return (byte & mask) != 0;
+}
+
+
/**
* hfs_bmap_clear_bit - clear a bit in the b-tree map
* @node: the b-tree node containing the map record
@@ -218,12 +244,32 @@ static int hfs_bmap_clear_bit(struct hfs_bnode *node, u32 node_bit_idx)
return 0;
}
+#define HFS_EXTENT_TREE_NAME "Extents Overflow File"
+#define HFS_CATALOG_TREE_NAME "Catalog File"
+#define HFS_ATTR_TREE_NAME "Attributes File"
+#define HFS_UNKNOWN_TREE_NAME "Unknown B-tree"
+
+static const char *hfs_btree_name(u32 cnid)
+{
+ switch (cnid) {
+ case HFSPLUS_EXT_CNID:
+ return HFS_EXTENT_TREE_NAME;
+ case HFSPLUS_CAT_CNID:
+ return HFS_CATALOG_TREE_NAME;
+ case HFSPLUS_ATTR_CNID:
+ return HFS_ATTR_TREE_NAME;
+ default:
+ return HFS_UNKNOWN_TREE_NAME;
+ }
+}
+
/* Get a reference to a B*Tree and do some initial checks */
struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
{
struct hfs_btree *tree;
struct hfs_btree_header_rec *head;
struct address_space *mapping;
+ struct hfs_bnode *node;
struct inode *inode;
struct page *page;
unsigned int size;
@@ -331,6 +377,20 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
kunmap_local(head);
put_page(page);
+
+ node = hfs_bnode_find(tree, HFSPLUS_TREE_HEAD);
+ if (IS_ERR(node))
+ goto free_inode;
+
+ if (!hfs_bmap_test_bit(node, 0)) {
+ pr_warn("(%s): %s (cnid 0x%x) map record invalid or bitmap corruption detected, forcing read-only.\n",
+ sb->s_id, hfs_btree_name(id), id);
+ pr_warn("Run fsck.hfsplus to repair.\n");
+ sb->s_flags |= SB_RDONLY;
+ }
+
+ hfs_bnode_put(node);
+
return tree;
fail_page:
--
2.34.1
* Re: [PATCH v7 1/2] hfsplus: refactor b-tree map page access and add node-type validation
2026-03-18 7:38 ` [PATCH v7 1/2] hfsplus: refactor b-tree map page access and add node-type validation Shardul Bankar
@ 2026-03-19 3:01 ` Viacheslav Dubeyko
0 siblings, 0 replies; 6+ messages in thread
From: Viacheslav Dubeyko @ 2026-03-19 3:01 UTC (permalink / raw)
To: Shardul Bankar, glaubitz, frank.li, linux-fsdevel, linux-kernel
Cc: janak, janak, shardulsb08
On Wed, 2026-03-18 at 13:08 +0530, Shardul Bankar wrote:
> [...]
Looks good.
Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com>
The xfstests run hasn't revealed any new issues.
Tested-by: Viacheslav Dubeyko <slava@dubeyko.com>
Thanks,
Slava.
* Re: [PATCH v7 2/2] hfsplus: validate b-tree node 0 bitmap at mount time
2026-03-18 7:38 ` [PATCH v7 2/2] hfsplus: validate b-tree node 0 bitmap at mount time Shardul Bankar
@ 2026-03-19 3:01 ` Viacheslav Dubeyko
0 siblings, 0 replies; 6+ messages in thread
From: Viacheslav Dubeyko @ 2026-03-19 3:01 UTC (permalink / raw)
To: Shardul Bankar, glaubitz, frank.li, linux-fsdevel, linux-kernel
Cc: janak, janak, shardulsb08, syzbot+1c8ff72d0cd8a50dfeaa
On Wed, 2026-03-18 at 13:08 +0530, Shardul Bankar wrote:
> Syzkaller reported an issue with corrupted HFS+ images where the
> b-tree allocation bitmap indicates that the header node (Node 0) is
> free. Node 0 must always be allocated as it contains the b-tree header
> record and the allocation bitmap itself. Violating this invariant
> leads to allocator corruption, which cascades into kernel panics or
> undefined behavior when the filesystem attempts to allocate blocks.
>
> Prevent trusting a corrupted allocator state by adding a validation
> check during hfs_btree_open(). Introduce the hfs_bmap_test_bit()
> helper (utilizing the newly added map-access API) to safely verify
> that the MSB of the first bitmap byte (representing Node 0) is marked
> as allocated. The helper returns a boolean, allowing the caller to
> safely catch both structural IO errors and illegally cleared bits in
> a single check.
>
> If corruption is detected, print a warning identifying the specific
> corrupted tree and force the filesystem to mount read-only
> (SB_RDONLY). This prevents kernel panics from corrupted images while
> enabling data recovery.
>
> As a minor cleanup to support the warning logs, replace the verbose
> CNID logic with cleaner macro definitions (using official structural
> names like "Extents Overflow File") and a dedicated string lookup
> helper.
>
> Reported-by: syzbot+1c8ff72d0cd8a50dfeaa@syzkaller.appspotmail.com
> Link: https://syzkaller.appspot.com/bug?extid=1c8ff72d0cd8a50dfeaa
> Link: https://lore.kernel.org/all/20260315172005.2066677-1-shardul.b@mpiricsoftware.com/
> Signed-off-by: Shardul Bankar <shardul.b@mpiricsoftware.com>
> ---
> fs/hfsplus/btree.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 60 insertions(+)
>
> diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c
> index 64d168347b4b..04304f952f6b 100644
> --- a/fs/hfsplus/btree.c
> +++ b/fs/hfsplus/btree.c
> @@ -185,6 +185,32 @@ static struct page *hfs_bmap_get_map_page(struct hfs_bnode *node, struct hfs_bma
> 	return node->page[ctx->page_idx];
> }
>
> +/**
> + * hfs_bmap_test_bit - test a bit in the b-tree map
> + * @node: the b-tree node containing the map record
> + * @node_bit_idx: the relative bit index within the node's map record
> + *
> + * Returns true if set, false if clear or on failure.
> + */
> +static bool hfs_bmap_test_bit(struct hfs_bnode *node, u32 node_bit_idx)
> +{
> +	struct hfs_bmap_ctx ctx;
> +	struct page *page;
> +	u8 *bmap, byte, mask;
> +
> +	page = hfs_bmap_get_map_page(node, &ctx, node_bit_idx / BITS_PER_BYTE);
> +	if (IS_ERR(page))
> +		return false;
> +
> +	bmap = kmap_local_page(page);
> +	byte = bmap[ctx.off];
> +	kunmap_local(bmap);
> +
> +	mask = 1 << (7 - (node_bit_idx % BITS_PER_BYTE));
> +	return (byte & mask) != 0;
> +}
> +
> +
> /**
> * hfs_bmap_clear_bit - clear a bit in the b-tree map
> * @node: the b-tree node containing the map record
> @@ -218,12 +244,32 @@ static int hfs_bmap_clear_bit(struct hfs_bnode *node, u32 node_bit_idx)
> return 0;
> }
>
> +#define HFS_EXTENT_TREE_NAME "Extents Overflow File"
> +#define HFS_CATALOG_TREE_NAME "Catalog File"
> +#define HFS_ATTR_TREE_NAME "Attributes File"
> +#define HFS_UNKNOWN_TREE_NAME "Unknown B-tree"
> +
> +static const char *hfs_btree_name(u32 cnid)
> +{
> + switch (cnid) {
> + case HFSPLUS_EXT_CNID:
> + return HFS_EXTENT_TREE_NAME;
> + case HFSPLUS_CAT_CNID:
> + return HFS_CATALOG_TREE_NAME;
> + case HFSPLUS_ATTR_CNID:
> + return HFS_ATTR_TREE_NAME;
> + default:
> + return HFS_UNKNOWN_TREE_NAME;
> + }
> +}
> +
> /* Get a reference to a B*Tree and do some initial checks */
> struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
> {
> struct hfs_btree *tree;
> struct hfs_btree_header_rec *head;
> struct address_space *mapping;
> + struct hfs_bnode *node;
> struct inode *inode;
> struct page *page;
> unsigned int size;
> @@ -331,6 +377,20 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
>
> kunmap_local(head);
> put_page(page);
> +
> + node = hfs_bnode_find(tree, HFSPLUS_TREE_HEAD);
> + if (IS_ERR(node))
> + goto free_inode;
> +
> +	if (!hfs_bmap_test_bit(node, 0)) {
> +		pr_warn("(%s): %s (cnid 0x%x) map record invalid or bitmap corruption detected, forcing read-only.\n",
> +			sb->s_id, hfs_btree_name(id), id);
> +		pr_warn("Run fsck.hfsplus to repair.\n");
> +		sb->s_flags |= SB_RDONLY;
> +	}
> +
> + hfs_bnode_put(node);
> +
> return tree;
>
> fail_page:
Looks good.
Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com>
The xfstests run hasn't revealed any new issues.
Tested-by: Viacheslav Dubeyko <slava@dubeyko.com>
Thanks,
Slava.
* Re: [PATCH v7 0/2] hfsplus: prevent b-tree allocator corruption
2026-03-18 7:38 [PATCH v7 0/2] hfsplus: prevent b-tree allocator corruption Shardul Bankar
2026-03-18 7:38 ` [PATCH v7 1/2] hfsplus: refactor b-tree map page access and add node-type validation Shardul Bankar
2026-03-18 7:38 ` [PATCH v7 2/2] hfsplus: validate b-tree node 0 bitmap at mount time Shardul Bankar
@ 2026-03-19 3:52 ` Viacheslav Dubeyko
2 siblings, 0 replies; 6+ messages in thread
From: Viacheslav Dubeyko @ 2026-03-19 3:52 UTC (permalink / raw)
To: Shardul Bankar, glaubitz, frank.li, linux-fsdevel, linux-kernel
Cc: janak, janak, shardulsb08
On Wed, 2026-03-18 at 13:08 +0530, Shardul Bankar wrote:
> Hi all,
>
> This series addresses a Syzkaller-reported vulnerability where fuzzed
> HFS+ images mark the B-tree Header Node (Node 0) as free in the
> allocation bitmap. This violates a core filesystem invariant and
> leads to allocator corruption and kernel panics.
>
> To fix this safely and cleanly, the series is split into two parts:
>
> Patch 1 introduces a unified, robust API for B-tree map record access
> (struct hfs_bmap_ctx, hfs_bmap_get_map_page, and hfs_bmap_clear_bit)
> and refactors the boilerplate page mapping logic out of
> hfs_bmap_alloc() and hfs_bmap_free().
> Patch 2 utilizes this new API to perform a mount-time validation of
> Node 0 via hfs_bmap_test_bit(), forcing a safe read-only mount if
> structural or bit-level corruption is detected.
>
> Note on Allocator Optimization: Following discussions in v4, there is
> a recognized opportunity to optimize hfs_bmap_alloc() from a
> first-fit to a next-fit allocator by caching an in-core allocation
> hint (roving pointer). To keep the scope of this series strictly
> aligned with the Syzkaller corruption fix, that architectural
> optimization is deferred to a separate, follow-up patchset/thread.
>
> v7:
> - Type Safety: Changed the return type of hfs_bmap_test_bit() to
> `bool`, allowing the mount-time validation in hfs_btree_open() to
> cleanly catch both IO errors and cleared bits in a single evaluation.
>
> - Overflow Prevention: Added an explicit `(u32)` cast to `off16`
> during page offset calculation to clearly document and prevent 16-bit
> integer overflows.
>
> - String Accuracy: Updated the CNID string macros to use their
> official structural names ("Extents Overflow File", "Catalog File",
> "Attributes File") per maintainer feedback.
>
> - Documentation: Corrected a stale docstring error code (-EALREADY to
> -EINVAL) above hfs_bmap_clear_bit().
>
> v6:
> - Symmetric Mapping: Updated hfs_bmap_get_map_page() to return an
> unmapped struct page * instead of a mapped pointer. This ensures the
> caller explicitly handles both kmap_local_page() and kunmap_local(),
> preventing dangerous asymmetric mapping lifecycles.
>
> - Bisectability: Moved the introduction of hfs_bmap_test_bit() from
> Patch 1 to Patch 2 where it is actually consumed, preventing a
> -Wunused-function compiler warning and keeping the Git history
> perfectly bisectable.
>
> - API Clarity: Renamed the bit_idx parameter to node_bit_idx in the
> bit-level helpers to explicitly clarify that the index is strictly
> relative to the target hfs_bnode's map record, preventing future
> absolute-index misuse.
>
> - Naming & Style: Replaced hardcoded 8s with BITS_PER_BYTE, updated
> local variable names (m to mask, data to bmap inside the new
> helpers), and added kernel-doc field descriptions to struct
> hfs_bmap_ctx.
>
> - Minimal Diff Scope: Restored the original variable names (data, m)
> inside the legacy hfs_bmap_alloc() loop to keep the diff surgically
> focused on the logical changes and preserve git blame history.
>
> - Error Codes: Changed the error return in hfs_bmap_clear_bit() from
> -EALREADY to -EINVAL.
>
> - CNID String Lookup: Replaced the sparse string array with #define
> macros and a standard switch statement for cleaner subsystem
> visibility, per Slava's preference.
>
>
> v5:
> - API Encapsulation: Introduced struct hfs_bmap_ctx to cleanly bundle
> offset, length, and page index state instead of passing multiple
> pointers, addressing reviewer feedback.
>
> - Bit-Level Helpers: Added hfs_bmap_test_bit() and
> hfs_bmap_clear_bit() to safely encapsulate mapping/unmapping for
> single-bit accesses (like the mount-time check and node freeing).
>
> - Performance Retention: Retained the page-level mapping approach for
> the linear scan inside hfs_bmap_alloc() to prevent the severe
> performance regression of mapping/unmapping on a per-byte basis,
> while refactoring it to use the new ctx struct.
>
> - Hexagon Overflow Fix: Fixed a 0-day Kernel Test Robot warning on
> architectures with 256KB page sizes by upgrading the offset variables
> in the new struct hfs_bmap_ctx to unsigned int, preventing 16-bit
> shift overflows.
> Link: https://lore.kernel.org/all/202602270310.eBmeD8VX-lkp@intel.com/
>
> - Map Record Spanning: Added a byte_offset parameter to the page
> mapper to correctly handle large map records that span across
> multiple 4KB pages.
>
> - Loop Mask Revert: Reverted the 0x80 bitmask in the alloc() inner
> loop back to its original state (and dropped the
> HFSPLUS_BTREE_NODE0_BIT macro), as it represents a generic sliding
> mask, not specifically Node 0.
>
> - String Array Cleanup: Replaced the verbose switch(id) block in the
> mount validation with a clean static array of constant strings for
> the CNID names, per reviewer feedback.
>
> v4:
> - Split the changes into a 2-patch series (Refactoring + Bug Fix).
> - Extracted map node traversal into a generic helper
> (hfs_bmap_get_map_page) as per Slava's feedback, replacing manual
> offset/page management.
> - Added node-type validation (HFS_NODE_HEADER vs HFS_NODE_MAP) inside
> the helper to defend against structurally corrupted linkages.
> - Replaced hardcoded values with named macros
> (HFSPLUS_BTREE_NODE0_BIT, etc).
> - Handled invalid map offsets/lengths as corruption, continuing the
> mount as SB_RDONLY instead of failing it completely to preserve data
> recovery.
>
> v3:
> - Moved validation logic inline into hfs_btree_open() to allow
> reporting the specific corrupted tree ID.
> - Replaced custom offset calculations with existing hfs_bnode_find()
> and hfs_brec_lenoff() infrastructure to handle node sizes and page
> boundaries correctly.
> - Removed temporary 'btree_bitmap_corrupted' superblock flag; setup
> SB_RDONLY directly upon detection.
> - Moved logging to hfs_btree_open() to include the specific tree ID
> in the warning message.
> - Used explicit bitwise check (&) instead of test_bit() to ensure
> portability. test_bit() bit-numbering is architecture-dependent
> (e.g., bit 0 vs bit 7 can swap meanings on BE vs LE), whereas
> masking 0x80 consistently targets the MSB required by the HFS+
> on-disk format.
>
> v2:
> - Fix compiler warning about comparing u16 bitmap_off with PAGE_SIZE,
> which can exceed the u16 maximum on some architectures.
> - Cast bitmap_off to unsigned int for the PAGE_SIZE comparison to
> avoid a tautological constant-out-of-range comparison warning.
> - Link: https://lore.kernel.org/oe-kbuild-all/202601251011.kJUhBF3P-lkp@intel.com/
>
>
> Shardul Bankar (2):
>   hfsplus: refactor b-tree map page access and add node-type validation
>   hfsplus: validate b-tree node 0 bitmap at mount time
>
>  fs/hfsplus/btree.c         | 229 +++++++++++++++++++++++++++++--------
>  include/linux/hfs_common.h |   2 +
>  2 files changed, 184 insertions(+), 47 deletions(-)
Applied on for-next branch of HFS/HFS+ git tree.
Thanks,
Slava.
2026-03-18 7:38 [PATCH v7 0/2] hfsplus: prevent b-tree allocator corruption Shardul Bankar
2026-03-18 7:38 ` [PATCH v7 1/2] hfsplus: refactor b-tree map page access and add node-type validation Shardul Bankar
2026-03-19 3:01 ` Viacheslav Dubeyko
2026-03-18 7:38 ` [PATCH v7 2/2] hfsplus: validate b-tree node 0 bitmap at mount time Shardul Bankar
2026-03-19 3:01 ` Viacheslav Dubeyko
2026-03-19 3:52 ` [PATCH v7 0/2] hfsplus: prevent b-tree allocator corruption Viacheslav Dubeyko