public inbox for linux-fsdevel@vger.kernel.org
 help / color / mirror / Atom feed
From: "Darrick J. Wong" <djwong@kernel.org>
To: Aurelien DESBRIERES <aurelien@hackers.camp>
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	viro@zeniv.linux.org.uk, brauner@kernel.org
Subject: Re: [RFC PATCH 06/10] ftrfs: add block and inode allocator
Date: Mon, 13 Apr 2026 08:21:01 -0700	[thread overview]
Message-ID: <20260413152101.GX6202@frogsfrogsfrogs> (raw)
In-Reply-To: <20260413142357.515792-7-aurelien@hackers.camp>

On Mon, Apr 13, 2026 at 04:23:52PM +0200, Aurelien DESBRIERES wrote:
> From: Aurélien DESBRIERES <aurelien@hackers.camp>
> 
> Implement in-memory bitmap allocator for blocks and inodes:
> 
> - ftrfs_setup_bitmap(): allocate and initialize the free block bitmap
>   from superblock s_free_blocks count at mount time
> - ftrfs_destroy_bitmap(): release bitmap at umount
> - ftrfs_alloc_block(): find-first-bit allocator, updates on-disk
>   superblock s_free_blocks counter via mark_buffer_dirty()
> - ftrfs_free_block(): return block to pool, double-free detection
> - ftrfs_alloc_inode_num(): linear scan of inode table for a free
>   slot (i_mode == 0), updates s_free_inodes counter
> 
> The bitmap is loaded from the superblock free block count at mount
> and persisted incrementally on each allocation/free. A dedicated
> on-disk bitmap block is planned for a future revision.
> 
> Signed-off-by: Aurélien DESBRIERES <aurelien@hackers.camp>
> ---
>  fs/ftrfs/alloc.c | 251 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 251 insertions(+)
>  create mode 100644 fs/ftrfs/alloc.c
> 
> diff --git a/fs/ftrfs/alloc.c b/fs/ftrfs/alloc.c
> new file mode 100644
> index 000000000..753eb67cf
> --- /dev/null
> +++ b/fs/ftrfs/alloc.c
> @@ -0,0 +1,251 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * FTRFS — Block and inode allocator
> + * Author: Aurélien DESBRIERES <aurelien@hackers.camp>
> + *
> + * Simple bitmap allocator. The free block bitmap is stored in-memory
> + * (loaded at mount time) and persisted to disk on each allocation/free.
> + *
> + * Layout assumption (from mkfs.ftrfs):
> + *   Block 0          : superblock
> + *   Block 1..N       : inode table
> + *   Block N+1        : root dir data
> + *   Block N+2..end   : data blocks
> + *
> + * The bitmap itself is stored in the first data block after the inode
> + * table. Each bit represents one data block (1 = free, 0 = used).
> + */
> +
> +#include <linux/fs.h>
> +#include <linux/buffer_head.h>
> +#include <linux/bitmap.h>
> +#include <linux/slab.h>
> +#include "ftrfs.h"
> +
> +/*
> + * ftrfs_setup_bitmap — allocate and initialize the in-memory block bitmap
> + * Called from ftrfs_fill_super() after the superblock is read.
> + *
> + * For the skeleton we use a simple in-memory bitmap initialized from
> + * s_free_blocks. A full implementation would read the on-disk bitmap block.
> + */
> +int ftrfs_setup_bitmap(struct super_block *sb)
> +{
> +	struct ftrfs_sb_info *sbi = FTRFS_SB(sb);
> +	unsigned long total_blocks;
> +	unsigned long data_start;
> +
> +	total_blocks = le64_to_cpu(sbi->s_ftrfs_sb->s_block_count);
> +	data_start   = le64_to_cpu(sbi->s_ftrfs_sb->s_data_start_blk);
> +
> +	if (total_blocks <= data_start) {
> +		pr_err("ftrfs: invalid block layout (total=%lu data_start=%lu)\n",
> +		       total_blocks, data_start);
> +		return -EINVAL;
> +	}
> +
> +	sbi->s_nblocks     = total_blocks - data_start;
> +	sbi->s_data_start  = data_start;
> +
> +	/* Allocate bitmap: one bit per data block */
> +	sbi->s_block_bitmap = bitmap_zalloc(sbi->s_nblocks, GFP_KERNEL);
> +	if (!sbi->s_block_bitmap)
> +		return -ENOMEM;
> +
> +	/*
> +	 * Mark all blocks as free initially.
> +	 * A full implementation would read the on-disk bitmap here.
> +	 * For now we derive free blocks from s_free_blocks in the superblock.
> +	 */
> +	bitmap_fill(sbi->s_block_bitmap, sbi->s_nblocks);
> +
> +	/*
> +	 * Mark blocks already used (total - free) as allocated.
> +	 * We mark from block 0 of the data area upward.
> +	 */
> +	{
> +		unsigned long used = sbi->s_nblocks - sbi->s_free_blocks;
> +		unsigned long i;
> +
> +		for (i = 0; i < used && i < sbi->s_nblocks; i++)
> +			clear_bit(i, sbi->s_block_bitmap);
> +	}
> +
> +	pr_info("ftrfs: bitmap initialized (%lu data blocks, %lu free)\n",
> +		sbi->s_nblocks, sbi->s_free_blocks);
> +
> +	return 0;
> +}
> +
> +/*
> + * ftrfs_destroy_bitmap — free the in-memory bitmap
> + * Called from ftrfs_put_super().
> + */
> +void ftrfs_destroy_bitmap(struct super_block *sb)
> +{
> +	struct ftrfs_sb_info *sbi = FTRFS_SB(sb);
> +
> +	if (sbi->s_block_bitmap) {
> +		bitmap_free(sbi->s_block_bitmap);
> +		sbi->s_block_bitmap = NULL;
> +	}
> +}
> +
> +/*
> + * ftrfs_alloc_block — allocate a free data block
> + * @sb:  superblock
> + *
> + * Returns the absolute block number (>= s_data_start) on success,
> + * or 0 on failure (0 is the superblock, never a valid data block).
> + *
> + * Caller must hold sbi->s_lock.
> + */
> +u64 ftrfs_alloc_block(struct super_block *sb)
> +{
> +	struct ftrfs_sb_info *sbi = FTRFS_SB(sb);
> +	unsigned long bit;
> +
> +	if (!sbi->s_block_bitmap) {
> +		pr_err("ftrfs: bitmap not initialized\n");
> +		return 0;
> +	}
> +
> +	spin_lock(&sbi->s_lock);
> +
> +	if (sbi->s_free_blocks == 0) {
> +		spin_unlock(&sbi->s_lock);
> +		pr_warn("ftrfs: no free blocks\n");
> +		return 0;
> +	}
> +
> +	/* Find first free bit (set = free in our convention) */
> +	bit = find_first_bit(sbi->s_block_bitmap, sbi->s_nblocks);
> +	if (bit >= sbi->s_nblocks) {
> +		spin_unlock(&sbi->s_lock);
> +		pr_err("ftrfs: bitmap inconsistency (free_blocks=%lu but no free bit)\n",
> +		       sbi->s_free_blocks);
> +		return 0;
> +	}
> +
> +	/* Mark as used */
> +	clear_bit(bit, sbi->s_block_bitmap);
> +	sbi->s_free_blocks--;
> +
> +	/* Update on-disk superblock counter */
> +	sbi->s_ftrfs_sb->s_free_blocks = cpu_to_le64(sbi->s_free_blocks);
> +	mark_buffer_dirty(sbi->s_sbh);

No journalling?  Or even COW metadata?  How is this fault tolerant??

--D

> +
> +	spin_unlock(&sbi->s_lock);
> +
> +	/* Return absolute block number */
> +	return (u64)(sbi->s_data_start + bit);
> +}
> +
> +/*
> + * ftrfs_free_block — release a data block back to the free pool
> + * @sb:    superblock
> + * @block: absolute block number to free
> + */
> +void ftrfs_free_block(struct super_block *sb, u64 block)
> +{
> +	struct ftrfs_sb_info *sbi = FTRFS_SB(sb);
> +	unsigned long bit;
> +
> +	if (block < sbi->s_data_start) {
> +		pr_err("ftrfs: attempt to free non-data block %llu\n", block);
> +		return;
> +	}
> +
> +	bit = (unsigned long)(block - sbi->s_data_start);
> +
> +	if (bit >= sbi->s_nblocks) {
> +		pr_err("ftrfs: block %llu out of range\n", block);
> +		return;
> +	}
> +
> +	spin_lock(&sbi->s_lock);
> +
> +	if (test_bit(bit, sbi->s_block_bitmap)) {
> +		pr_warn("ftrfs: double free of block %llu\n", block);
> +		spin_unlock(&sbi->s_lock);
> +		return;
> +	}
> +
> +	set_bit(bit, sbi->s_block_bitmap);
> +	sbi->s_free_blocks++;
> +
> +	/* Update on-disk superblock counter */
> +	sbi->s_ftrfs_sb->s_free_blocks = cpu_to_le64(sbi->s_free_blocks);
> +	mark_buffer_dirty(sbi->s_sbh);
> +
> +	spin_unlock(&sbi->s_lock);
> +}
> +
> +/*
> + * ftrfs_alloc_inode_num — allocate a free inode number
> + * @sb: superblock
> + *
> + * Returns inode number >= 2 on success (1 = root, reserved),
> + * or 0 on failure.
> + *
> + * Simple linear scan of the inode table for a free slot.
> + * A full implementation uses an inode bitmap block.
> + */
> +u64 ftrfs_alloc_inode_num(struct super_block *sb)
> +{
> +	struct ftrfs_sb_info    *sbi = FTRFS_SB(sb);
> +	struct ftrfs_inode      *raw;
> +	struct buffer_head      *bh;
> +	unsigned long            inodes_per_block;
> +	unsigned long            inode_table_blk;
> +	unsigned long            total_inodes;
> +	unsigned long            block, i;
> +	u64                      ino = 0;
> +
> +	inodes_per_block = FTRFS_BLOCK_SIZE / sizeof(struct ftrfs_inode);
> +	inode_table_blk  = le64_to_cpu(sbi->s_ftrfs_sb->s_inode_table_blk);
> +	total_inodes     = le64_to_cpu(sbi->s_ftrfs_sb->s_inode_count);
> +
> +	spin_lock(&sbi->s_lock);
> +
> +	if (sbi->s_free_inodes == 0) {
> +		spin_unlock(&sbi->s_lock);
> +		return 0;
> +	}
> +
> +	/* Scan inode table blocks looking for a free inode (i_mode == 0) */
> +	for (block = 0; block * inodes_per_block < total_inodes; block++) {
> +		bh = sb_bread(sb, inode_table_blk + block);
> +		if (!bh)
> +			continue;
> +
> +		raw = (struct ftrfs_inode *)bh->b_data;
> +
> +		for (i = 0; i < inodes_per_block; i++) {
> +			unsigned long ino_num = block * inodes_per_block + i + 1;
> +
> +			if (ino_num > total_inodes)
> +				break;
> +
> +			/* inode 1 = root, always reserved */
> +			if (ino_num == 1)
> +				continue;
> +
> +			if (le16_to_cpu(raw[i].i_mode) == 0) {
> +				/* Found a free inode slot */
> +				ino = (u64)ino_num;
> +				sbi->s_free_inodes--;
> +				sbi->s_ftrfs_sb->s_free_inodes =
> +					cpu_to_le64(sbi->s_free_inodes);
> +				mark_buffer_dirty(sbi->s_sbh);
> +				brelse(bh);
> +				goto found;
> +			}
> +		}
> +		brelse(bh);
> +	}
> +
> +found:
> +	spin_unlock(&sbi->s_lock);
> +	return ino;
> +}
> -- 
> 2.52.0
> 
> 

  reply	other threads:[~2026-04-13 15:21 UTC|newest]

Thread overview: 26+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2026-04-13 14:23 [RFC PATCH 0/10] ftrfs: Fault-Tolerant Radiation-Robust Filesystem Aurelien DESBRIERES
2026-04-13 14:23 ` [RFC PATCH 01/10] ftrfs: add on-disk format and in-memory data structures Aurelien DESBRIERES
2026-04-13 15:11   ` Darrick J. Wong
2026-04-13 17:26     ` Aurelien DESBRIERES
2026-04-13 14:23 ` [RFC PATCH 02/10] ftrfs: add superblock operations Aurelien DESBRIERES
2026-04-13 14:23 ` [RFC PATCH 03/10] ftrfs: add inode operations Aurelien DESBRIERES
2026-04-13 14:23 ` [RFC PATCH 04/10] ftrfs: add directory operations Aurelien DESBRIERES
2026-04-13 14:23 ` [RFC PATCH 05/10] ftrfs: add file operations Aurelien DESBRIERES
2026-04-13 15:09   ` Matthew Wilcox
     [not found]     ` <CAM=40tU5NppEZ9x07qDVkSxLw6Ga4nVg7sDCqcvhfQ51VbsS9Q@mail.gmail.com>
2026-04-13 17:41       ` Matthew Wilcox
2026-04-13 14:23 ` [RFC PATCH 06/10] ftrfs: add block and inode allocator Aurelien DESBRIERES
2026-04-13 15:21   ` Darrick J. Wong [this message]
2026-04-14 14:11     ` Aurelien DESBRIERES
2026-04-13 14:23 ` [RFC PATCH 07/10] ftrfs: add filename and directory entry operations Aurelien DESBRIERES
2026-04-13 14:23 ` [RFC PATCH 08/10] ftrfs: add CRC32 checksumming and Reed-Solomon FEC skeleton Aurelien DESBRIERES
2026-04-13 14:23 ` [RFC PATCH 09/10] ftrfs: add Kconfig, Makefile and fs/ tree integration Aurelien DESBRIERES
2026-04-13 14:23 ` [RFC PATCH 10/10] MAINTAINERS: add entry for FTRFS filesystem Aurelien DESBRIERES
2026-04-13 15:04 ` [RFC PATCH 0/10] ftrfs: Fault-Tolerant Radiation-Robust Filesystem Pedro Falcato
2026-04-13 18:03   ` Andreas Dilger
2026-04-14  2:56     ` Gao Xiang
2026-04-14 14:11     ` Aurelien DESBRIERES
2026-04-14 13:30   ` Aurelien DESBRIERES
2026-04-13 15:06 ` Matthew Wilcox
2026-04-13 18:11   ` Darrick J. Wong
2026-04-14 14:11     ` Aurelien DESBRIERES
2026-04-14 13:31   ` Aurelien DESBRIERES

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20260413152101.GX6202@frogsfrogsfrogs \
    --to=djwong@kernel.org \
    --cc=aurelien@hackers.camp \
    --cc=brauner@kernel.org \
    --cc=linux-fsdevel@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=viro@zeniv.linux.org.uk \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox