* PI and data checksumming for XFS
@ 2025-02-03 9:43 Christoph Hellwig
2025-02-03 9:43 ` [PATCH 1/7] block: support integrity generation and verification from file systems Christoph Hellwig
` (7 more replies)
0 siblings, 8 replies; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-03 9:43 UTC (permalink / raw)
To: Kanchan Joshi, Martin K . Petersen
Cc: Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
Hi all,
with all the PI and checksumming discussions I decided to dust off my old
XFS PI and data checksumming prototypes. This is pre-alpha code, so
handle it with care. I tried to document most issues and limitations
in the patches, but I might have missed some. It survives an xfstests
quick run with just three failures, one of which is a pre-existing
failure on a PI-disabled device when creating dm-thin.
As it depends on various other in-flight patch series, anyone
daring to try it should use the git branch here:
git://git.infradead.org/users/hch/misc.git xfs-data-crc
instead.
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH 1/7] block: support integrity generation and verification from file systems
2025-02-03 9:43 PI and data checksumming for XFS Christoph Hellwig
@ 2025-02-03 9:43 ` Christoph Hellwig
2025-02-03 19:47 ` Martin K. Petersen
2025-04-21 2:30 ` Anuj gupta
2025-02-03 9:43 ` [PATCH 2/7] iomap: introduce iomap_read_folio_ops Christoph Hellwig
` (6 subsequent siblings)
7 siblings, 2 replies; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-03 9:43 UTC (permalink / raw)
To: Kanchan Joshi, Martin K . Petersen
Cc: Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
Add a new blk_integrity_verify_all helper that uses the _all iterator to
verify the entire bio as built by the file system, and thus doesn't require
the extra bvec_iter used by blk_integrity_verify_iter. Also export
blk_integrity_generate, which can be used as-is.
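For a sense of how a file-system consumer might use the two exports (a hedged sketch, not code from this series; the fs_* names are hypothetical and error handling is elided):

```c
/*
 * Sketch: a file system that owns the whole bio generates PI before
 * submission and verifies it at completion time.  Assumes an integrity
 * payload was attached to the bio beforehand (e.g. via
 * bio_integrity_add_page()) and that the starting sector of the I/O
 * was recorded somewhere the completion handler can find it.
 */
static void fs_submit_write(struct bio *bio)
{
	blk_integrity_generate(bio);	/* fill in guard/ref tags */
	submit_bio(bio);
}

static int fs_complete_read(struct bio *bio, sector_t start_sector)
{
	/* returns 0, or a negative errno on a PI mismatch */
	return blk_integrity_verify_all(bio, start_sector);
}
```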
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk.h | 1 -
block/t10-pi.c | 90 ++++++++++++++++++++++++-----------
include/linux/bio-integrity.h | 12 +++++
3 files changed, 75 insertions(+), 28 deletions(-)
diff --git a/block/blk.h b/block/blk.h
index 8f5554a6989e..176b04cdddda 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -709,7 +709,6 @@ int bdev_open(struct block_device *bdev, blk_mode_t mode, void *holder,
const struct blk_holder_ops *hops, struct file *bdev_file);
int bdev_permission(dev_t dev, blk_mode_t mode, void *holder);
-void blk_integrity_generate(struct bio *bio);
void blk_integrity_verify_iter(struct bio *bio, struct bvec_iter *saved_iter);
void blk_integrity_prepare(struct request *rq);
void blk_integrity_complete(struct request *rq, unsigned int nr_bytes);
diff --git a/block/t10-pi.c b/block/t10-pi.c
index de172d56b1f3..b59db61a8104 100644
--- a/block/t10-pi.c
+++ b/block/t10-pi.c
@@ -403,42 +403,51 @@ void blk_integrity_generate(struct bio *bio)
kunmap_local(kaddr);
}
}
+EXPORT_SYMBOL_GPL(blk_integrity_generate);
+static blk_status_t blk_integrity_verify_bvec(struct blk_integrity *bi,
+ struct blk_integrity_iter *iter, struct bio_vec *bv)
+{
+ void *kaddr = bvec_kmap_local(bv);
+ blk_status_t ret = BLK_STS_OK;
+
+ iter->data_buf = kaddr;
+ iter->data_size = bv->bv_len;
+ switch (bi->csum_type) {
+ case BLK_INTEGRITY_CSUM_CRC64:
+ ret = ext_pi_crc64_verify(iter, bi);
+ break;
+ case BLK_INTEGRITY_CSUM_CRC:
+ case BLK_INTEGRITY_CSUM_IP:
+ ret = t10_pi_verify(iter, bi);
+ break;
+ default:
+ break;
+ }
+ kunmap_local(kaddr);
+ return ret;
+}
+
+/*
+ * At the moment verify is called, bi_iter could have been advanced by splits
+ * and completions, thus we have to use the saved copy here.
+ */
void blk_integrity_verify_iter(struct bio *bio, struct bvec_iter *saved_iter)
{
struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
struct bio_integrity_payload *bip = bio_integrity(bio);
- struct blk_integrity_iter iter;
+ struct blk_integrity_iter iter = {
+ .disk_name = bio->bi_bdev->bd_disk->disk_name,
+ .interval = 1 << bi->interval_exp,
+ .seed = saved_iter->bi_sector,
+ .prot_buf = bvec_virt(bip->bip_vec),
+ };
struct bvec_iter bviter;
struct bio_vec bv;
+ blk_status_t ret;
- /*
- * At the moment verify is called bi_iter has been advanced during split
- * and completion, so use the copy created during submission here.
- */
- iter.disk_name = bio->bi_bdev->bd_disk->disk_name;
- iter.interval = 1 << bi->interval_exp;
- iter.seed = saved_iter->bi_sector;
- iter.prot_buf = bvec_virt(bip->bip_vec);
__bio_for_each_segment(bv, bio, bviter, *saved_iter) {
- void *kaddr = bvec_kmap_local(&bv);
- blk_status_t ret = BLK_STS_OK;
-
- iter.data_buf = kaddr;
- iter.data_size = bv.bv_len;
- switch (bi->csum_type) {
- case BLK_INTEGRITY_CSUM_CRC64:
- ret = ext_pi_crc64_verify(&iter, bi);
- break;
- case BLK_INTEGRITY_CSUM_CRC:
- case BLK_INTEGRITY_CSUM_IP:
- ret = t10_pi_verify(&iter, bi);
- break;
- default:
- break;
- }
- kunmap_local(kaddr);
-
+ ret = blk_integrity_verify_bvec(bi, &iter, &bv);
if (ret) {
bio->bi_status = ret;
return;
@@ -446,6 +455,33 @@ void blk_integrity_verify_iter(struct bio *bio, struct bvec_iter *saved_iter)
}
}
+/*
+ * For use by the file system which owns the entire bio.
+ */
+int blk_integrity_verify_all(struct bio *bio, sector_t seed)
+{
+ struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
+ struct bio_integrity_payload *bip = bio_integrity(bio);
+ struct blk_integrity_iter iter = {
+ .disk_name = bio->bi_bdev->bd_disk->disk_name,
+ .interval = 1 << bi->interval_exp,
+ .seed = seed,
+ .prot_buf = bvec_virt(bip->bip_vec),
+ };
+ struct bvec_iter_all bviter;
+ struct bio_vec *bv;
+ blk_status_t ret;
+
+ bio_for_each_segment_all(bv, bio, bviter) {
+ ret = blk_integrity_verify_bvec(bi, &iter, bv);
+ if (ret)
+ return blk_status_to_errno(ret);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(blk_integrity_verify_all);
+
void blk_integrity_prepare(struct request *rq)
{
struct blk_integrity *bi = &rq->q->limits.integrity;
diff --git a/include/linux/bio-integrity.h b/include/linux/bio-integrity.h
index 0a25716820fe..26419eb5425a 100644
--- a/include/linux/bio-integrity.h
+++ b/include/linux/bio-integrity.h
@@ -81,6 +81,9 @@ void bio_integrity_advance(struct bio *bio, unsigned int bytes_done);
void bio_integrity_trim(struct bio *bio);
int bio_integrity_clone(struct bio *bio, struct bio *bio_src, gfp_t gfp_mask);
+void blk_integrity_generate(struct bio *bio);
+int blk_integrity_verify_all(struct bio *bio, sector_t seed);
+
#else /* CONFIG_BLK_DEV_INTEGRITY */
static inline struct bio_integrity_payload *bio_integrity(struct bio *bio)
@@ -138,5 +141,14 @@ static inline int bio_integrity_add_page(struct bio *bio, struct page *page,
{
return 0;
}
+
+static inline void blk_integrity_generate(struct bio *bio)
+{
+}
+
+static inline int blk_integrity_verify_all(struct bio *bio, sector_t seed)
+{
+ return 0;
+}
#endif /* CONFIG_BLK_DEV_INTEGRITY */
#endif /* _LINUX_BIO_INTEGRITY_H */
--
2.45.2
* [PATCH 2/7] iomap: introduce iomap_read_folio_ops
2025-02-03 9:43 PI and data checksumming for XFS Christoph Hellwig
2025-02-03 9:43 ` [PATCH 1/7] block: support integrity generation and verification from file systems Christoph Hellwig
@ 2025-02-03 9:43 ` Christoph Hellwig
2025-02-03 9:43 ` [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset Christoph Hellwig
` (5 subsequent siblings)
7 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-03 9:43 UTC (permalink / raw)
To: Kanchan Joshi, Martin K . Petersen
Cc: Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
From: Goldwyn Rodrigues <rgoldwyn@suse.com>
iomap_read_folio_ops provides additional hooks to allocate or submit
the bio. Filesystems such as btrfs perform additional operations on
bios, such as verifying data checksums. Adding a bio submission hook
allows the filesystem to process and verify the bio.
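As a usage sketch (the ops struct and callback signature match the interface introduced here; the my_fs_* names are hypothetical):

```c
/* Hypothetical filesystem wiring up the new read submission hook. */
static void my_fs_submit_read_io(struct inode *inode, struct bio *bio,
		loff_t file_offset)
{
	/*
	 * Stash per-I/O state keyed by file_offset (e.g. for checksum
	 * verification at completion), then send the bio on its way.
	 */
	submit_bio(bio);
}

static const struct iomap_read_folio_ops my_fs_read_folio_ops = {
	.submit_io	= my_fs_submit_read_io,
};

static int my_fs_read_folio(struct file *file, struct folio *folio)
{
	return iomap_read_folio(folio, &my_fs_iomap_ops,
			&my_fs_read_folio_ops);
}
```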
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
[hch: add a helper, pass file offset to ->submit_io]
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/fops.c | 4 ++--
fs/erofs/data.c | 4 ++--
fs/gfs2/aops.c | 4 ++--
fs/iomap/buffered-io.c | 27 ++++++++++++++++++++++-----
fs/xfs/xfs_aops.c | 4 ++--
fs/zonefs/file.c | 4 ++--
include/linux/iomap.h | 16 ++++++++++++++--
7 files changed, 46 insertions(+), 17 deletions(-)
diff --git a/block/fops.c b/block/fops.c
index be9f1dbea9ce..f4c971311c6c 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -505,12 +505,12 @@ const struct address_space_operations def_blk_aops = {
#else /* CONFIG_BUFFER_HEAD */
static int blkdev_read_folio(struct file *file, struct folio *folio)
{
- return iomap_read_folio(folio, &blkdev_iomap_ops);
+ return iomap_read_folio(folio, &blkdev_iomap_ops, NULL);
}
static void blkdev_readahead(struct readahead_control *rac)
{
- iomap_readahead(rac, &blkdev_iomap_ops);
+ iomap_readahead(rac, &blkdev_iomap_ops, NULL);
}
static int blkdev_map_blocks(struct iomap_writepage_ctx *wpc,
diff --git a/fs/erofs/data.c b/fs/erofs/data.c
index 0cd6b5c4df98..b0f0db855971 100644
--- a/fs/erofs/data.c
+++ b/fs/erofs/data.c
@@ -370,12 +370,12 @@ int erofs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
*/
static int erofs_read_folio(struct file *file, struct folio *folio)
{
- return iomap_read_folio(folio, &erofs_iomap_ops);
+ return iomap_read_folio(folio, &erofs_iomap_ops, NULL);
}
static void erofs_readahead(struct readahead_control *rac)
{
- return iomap_readahead(rac, &erofs_iomap_ops);
+ return iomap_readahead(rac, &erofs_iomap_ops, NULL);
}
static sector_t erofs_bmap(struct address_space *mapping, sector_t block)
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 68fc8af14700..f0debbe048a6 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -422,7 +422,7 @@ static int gfs2_read_folio(struct file *file, struct folio *folio)
if (!gfs2_is_jdata(ip) ||
(i_blocksize(inode) == PAGE_SIZE && !folio_buffers(folio))) {
- error = iomap_read_folio(folio, &gfs2_iomap_ops);
+ error = iomap_read_folio(folio, &gfs2_iomap_ops, NULL);
} else if (gfs2_is_stuffed(ip)) {
error = stuffed_read_folio(ip, folio);
} else {
@@ -497,7 +497,7 @@ static void gfs2_readahead(struct readahead_control *rac)
else if (gfs2_is_jdata(ip))
mpage_readahead(rac, gfs2_block_map);
else
- iomap_readahead(rac, &gfs2_iomap_ops);
+ iomap_readahead(rac, &gfs2_iomap_ops, NULL);
}
/**
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 4abff64998fe..804527dcc9ba 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -320,7 +320,9 @@ struct iomap_readpage_ctx {
struct folio *cur_folio;
bool cur_folio_in_bio;
struct bio *bio;
+ loff_t bio_start_pos;
struct readahead_control *rac;
+ const struct iomap_read_folio_ops *ops;
};
/**
@@ -362,6 +364,15 @@ static inline bool iomap_block_needs_zeroing(const struct iomap_iter *iter,
pos >= i_size_read(iter->inode);
}
+static void iomap_read_submit_bio(const struct iomap_iter *iter,
+ struct iomap_readpage_ctx *ctx)
+{
+ if (ctx->ops && ctx->ops->submit_io)
+ ctx->ops->submit_io(iter->inode, ctx->bio, ctx->bio_start_pos);
+ else
+ submit_bio(ctx->bio);
+}
+
static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
struct iomap_readpage_ctx *ctx, loff_t offset)
{
@@ -405,8 +416,9 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE);
if (ctx->bio)
- submit_bio(ctx->bio);
+ iomap_read_submit_bio(iter, ctx);
+ ctx->bio_start_pos = offset;
if (ctx->rac) /* same as readahead_gfp_mask */
gfp |= __GFP_NORETRY | __GFP_NOWARN;
ctx->bio = bio_alloc(iomap->bdev, bio_max_segs(nr_vecs),
@@ -455,7 +467,8 @@ static loff_t iomap_read_folio_iter(const struct iomap_iter *iter,
return done;
}
-int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops)
+int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops,
+ const struct iomap_read_folio_ops *read_folio_ops)
{
struct iomap_iter iter = {
.inode = folio->mapping->host,
@@ -464,6 +477,7 @@ int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops)
};
struct iomap_readpage_ctx ctx = {
.cur_folio = folio,
+ .ops = read_folio_ops,
};
int ret;
@@ -473,7 +487,7 @@ int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops)
iter.processed = iomap_read_folio_iter(&iter, &ctx);
if (ctx.bio) {
- submit_bio(ctx.bio);
+ iomap_read_submit_bio(&iter, &ctx);
WARN_ON_ONCE(!ctx.cur_folio_in_bio);
} else {
WARN_ON_ONCE(ctx.cur_folio_in_bio);
@@ -518,6 +532,7 @@ static loff_t iomap_readahead_iter(const struct iomap_iter *iter,
* iomap_readahead - Attempt to read pages from a file.
* @rac: Describes the pages to be read.
* @ops: The operations vector for the filesystem.
+ * @read_folio_ops: Function hooks for filesystems for special bio submissions
*
* This function is for filesystems to call to implement their readahead
* address_space operation.
@@ -529,7 +544,8 @@ static loff_t iomap_readahead_iter(const struct iomap_iter *iter,
* function is called with memalloc_nofs set, so allocations will not cause
* the filesystem to be reentered.
*/
-void iomap_readahead(struct readahead_control *rac, const struct iomap_ops *ops)
+void iomap_readahead(struct readahead_control *rac, const struct iomap_ops *ops,
+ const struct iomap_read_folio_ops *read_folio_ops)
{
struct iomap_iter iter = {
.inode = rac->mapping->host,
@@ -538,6 +554,7 @@ void iomap_readahead(struct readahead_control *rac, const struct iomap_ops *ops)
};
struct iomap_readpage_ctx ctx = {
.rac = rac,
+ .ops = read_folio_ops,
};
trace_iomap_readahead(rac->mapping->host, readahead_count(rac));
@@ -546,7 +563,7 @@ void iomap_readahead(struct readahead_control *rac, const struct iomap_ops *ops)
iter.processed = iomap_readahead_iter(&iter, &ctx);
if (ctx.bio)
- submit_bio(ctx.bio);
+ iomap_read_submit_bio(&iter, &ctx);
if (ctx.cur_folio) {
if (!ctx.cur_folio_in_bio)
folio_unlock(ctx.cur_folio);
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 8e60ceeb1520..3e42a684cce1 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -522,14 +522,14 @@ xfs_vm_read_folio(
struct file *unused,
struct folio *folio)
{
- return iomap_read_folio(folio, &xfs_read_iomap_ops);
+ return iomap_read_folio(folio, &xfs_read_iomap_ops, NULL);
}
STATIC void
xfs_vm_readahead(
struct readahead_control *rac)
{
- iomap_readahead(rac, &xfs_read_iomap_ops);
+ iomap_readahead(rac, &xfs_read_iomap_ops, NULL);
}
static int
diff --git a/fs/zonefs/file.c b/fs/zonefs/file.c
index 35166c92420c..a70fa1cecef8 100644
--- a/fs/zonefs/file.c
+++ b/fs/zonefs/file.c
@@ -112,12 +112,12 @@ static const struct iomap_ops zonefs_write_iomap_ops = {
static int zonefs_read_folio(struct file *unused, struct folio *folio)
{
- return iomap_read_folio(folio, &zonefs_read_iomap_ops);
+ return iomap_read_folio(folio, &zonefs_read_iomap_ops, NULL);
}
static void zonefs_readahead(struct readahead_control *rac)
{
- iomap_readahead(rac, &zonefs_read_iomap_ops);
+ iomap_readahead(rac, &zonefs_read_iomap_ops, NULL);
}
/*
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index b4be07e8ec94..2930861d1ef1 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -303,8 +303,20 @@ static inline bool iomap_want_unshare_iter(const struct iomap_iter *iter)
ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
const struct iomap_ops *ops, void *private);
-int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops);
-void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops);
+
+struct iomap_read_folio_ops {
+ /*
+ * Optional, allows the filesystem to perform a custom submission of
+ * bio, such as csum calculations or multi-device bio split
+ */
+ void (*submit_io)(struct inode *inode, struct bio *bio,
+ loff_t file_offset);
+};
+
+int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops,
+ const struct iomap_read_folio_ops *);
+void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops,
+ const struct iomap_read_folio_ops *);
bool iomap_is_partially_uptodate(struct folio *, size_t from, size_t count);
struct folio *iomap_get_folio(struct iomap_iter *iter, loff_t pos, size_t len);
bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags);
--
2.45.2
* [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset
2025-02-03 9:43 PI and data checksumming for XFS Christoph Hellwig
2025-02-03 9:43 ` [PATCH 1/7] block: support integrity generation and verification from file systems Christoph Hellwig
2025-02-03 9:43 ` [PATCH 2/7] iomap: introduce iomap_read_folio_ops Christoph Hellwig
@ 2025-02-03 9:43 ` Christoph Hellwig
2025-02-03 22:23 ` Darrick J. Wong
2025-03-13 13:53 ` Matthew Wilcox
2025-02-03 9:43 ` [PATCH 4/7] iomap: support ioends for reads Christoph Hellwig
` (4 subsequent siblings)
7 siblings, 2 replies; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-03 9:43 UTC (permalink / raw)
To: Kanchan Joshi, Martin K . Petersen
Cc: Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
From: Goldwyn Rodrigues <rgoldwyn@suse.com>
Allocate the bio from the bioset provided in iomap_read_folio_ops.
If no bioset is provided, fs_bio_set, the standard bioset for
filesystems, is used.
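A sketch of what this enables (hypothetical my_fs_* names; patch 6 of this series does the same thing with iomap_ioend_bioset):

```c
/*
 * Sketch: a filesystem providing its own bio_set so read bios come
 * embedded in a larger per-I/O structure, front-padded the way
 * bioset_init() was told to.
 */
static struct bio_set my_fs_bioset;	/* bioset_init() at mount/module init */

static const struct iomap_read_folio_ops my_fs_read_folio_ops = {
	.bio_set	= &my_fs_bioset,
	.submit_io	= my_fs_submit_read_io,
};
```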
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
[hch: factor out two helpers]
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/iomap/buffered-io.c | 51 ++++++++++++++++++++++++++++--------------
include/linux/iomap.h | 6 +++++
2 files changed, 40 insertions(+), 17 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 804527dcc9ba..eaffa23eb8e4 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -364,6 +364,39 @@ static inline bool iomap_block_needs_zeroing(const struct iomap_iter *iter,
pos >= i_size_read(iter->inode);
}
+static struct bio_set *iomap_read_bio_set(struct iomap_readpage_ctx *ctx)
+{
+ if (ctx->ops && ctx->ops->bio_set)
+ return ctx->ops->bio_set;
+ return &fs_bio_set;
+}
+
+static struct bio *iomap_read_alloc_bio(const struct iomap_iter *iter,
+ struct iomap_readpage_ctx *ctx, loff_t length)
+{
+ unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE);
+ struct block_device *bdev = iter->iomap.bdev;
+ struct bio_set *bio_set = iomap_read_bio_set(ctx);
+ gfp_t gfp = mapping_gfp_constraint(iter->inode->i_mapping, GFP_KERNEL);
+ gfp_t orig_gfp = gfp;
+ struct bio *bio;
+
+ if (ctx->rac) /* same as readahead_gfp_mask */
+ gfp |= __GFP_NORETRY | __GFP_NOWARN;
+
+ bio = bio_alloc_bioset(bdev, bio_max_segs(nr_vecs), REQ_OP_READ, gfp,
+ bio_set);
+
+ /*
+ * If the bio_alloc fails, try it again for a single page to avoid
+ * having to deal with partial page reads. This emulates what
+ * do_mpage_read_folio does.
+ */
+ if (!bio)
+ bio = bio_alloc_bioset(bdev, 1, REQ_OP_READ, orig_gfp, bio_set);
+ return bio;
+}
+
static void iomap_read_submit_bio(const struct iomap_iter *iter,
struct iomap_readpage_ctx *ctx)
{
@@ -411,27 +444,11 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
if (!ctx->bio ||
bio_end_sector(ctx->bio) != sector ||
!bio_add_folio(ctx->bio, folio, plen, poff)) {
- gfp_t gfp = mapping_gfp_constraint(folio->mapping, GFP_KERNEL);
- gfp_t orig_gfp = gfp;
- unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE);
-
if (ctx->bio)
iomap_read_submit_bio(iter, ctx);
ctx->bio_start_pos = offset;
- if (ctx->rac) /* same as readahead_gfp_mask */
- gfp |= __GFP_NORETRY | __GFP_NOWARN;
- ctx->bio = bio_alloc(iomap->bdev, bio_max_segs(nr_vecs),
- REQ_OP_READ, gfp);
- /*
- * If the bio_alloc fails, try it again for a single page to
- * avoid having to deal with partial page reads. This emulates
- * what do_mpage_read_folio does.
- */
- if (!ctx->bio) {
- ctx->bio = bio_alloc(iomap->bdev, 1, REQ_OP_READ,
- orig_gfp);
- }
+ ctx->bio = iomap_read_alloc_bio(iter, ctx, length);
if (ctx->rac)
ctx->bio->bi_opf |= REQ_RAHEAD;
ctx->bio->bi_iter.bi_sector = sector;
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 2930861d1ef1..304be88ecd23 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -311,6 +311,12 @@ struct iomap_read_folio_ops {
*/
void (*submit_io)(struct inode *inode, struct bio *bio,
loff_t file_offset);
+
+ /*
+ * Optional, allows filesystem to specify own bio_set, so new bio's
+ * can be allocated from the provided bio_set.
+ */
+ struct bio_set *bio_set;
};
int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops,
--
2.45.2
* [PATCH 4/7] iomap: support ioends for reads
2025-02-03 9:43 PI and data checksumming for XFS Christoph Hellwig
` (2 preceding siblings ...)
2025-02-03 9:43 ` [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset Christoph Hellwig
@ 2025-02-03 9:43 ` Christoph Hellwig
2025-02-03 22:24 ` Darrick J. Wong
2025-02-03 9:43 ` [PATCH 5/7] iomap: limit buffered I/O size to 128M Christoph Hellwig
` (3 subsequent siblings)
7 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-03 9:43 UTC (permalink / raw)
To: Kanchan Joshi, Martin K . Petersen
Cc: Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
Support using the ioend structure to defer I/O completion for
reads in addition to writes. This requires a check on the operation
so that reads and writes are not merged, and for buffered I/O a call
into the buffered read I/O completion handler from iomap_finish_ioend.
For direct I/O the existing call into the direct I/O completion
handler already handles reads just fine.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/iomap/buffered-io.c | 23 ++++++++++++++++++-----
fs/iomap/internal.h | 3 ++-
fs/iomap/ioend.c | 6 +++++-
3 files changed, 25 insertions(+), 7 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index eaffa23eb8e4..06990e012884 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -306,14 +306,27 @@ static void iomap_finish_folio_read(struct folio *folio, size_t off,
folio_end_read(folio, uptodate);
}
-static void iomap_read_end_io(struct bio *bio)
+static u32 __iomap_read_end_io(struct bio *bio, int error)
{
- int error = blk_status_to_errno(bio->bi_status);
struct folio_iter fi;
+ u32 folio_count = 0;
- bio_for_each_folio_all(fi, bio)
+ bio_for_each_folio_all(fi, bio) {
iomap_finish_folio_read(fi.folio, fi.offset, fi.length, error);
+ folio_count++;
+ }
bio_put(bio);
+ return folio_count;
+}
+
+static void iomap_read_end_io(struct bio *bio)
+{
+ __iomap_read_end_io(bio, blk_status_to_errno(bio->bi_status));
+}
+
+u32 iomap_finish_ioend_buffered_read(struct iomap_ioend *ioend)
+{
+ return __iomap_read_end_io(&ioend->io_bio, ioend->io_error);
}
struct iomap_readpage_ctx {
@@ -1568,7 +1581,7 @@ static void iomap_finish_folio_write(struct inode *inode, struct folio *folio,
* state, release holds on bios, and finally free up memory. Do not use the
* ioend after this.
*/
-u32 iomap_finish_ioend_buffered(struct iomap_ioend *ioend)
+u32 iomap_finish_ioend_buffered_write(struct iomap_ioend *ioend)
{
struct inode *inode = ioend->io_inode;
struct bio *bio = &ioend->io_bio;
@@ -1600,7 +1613,7 @@ static void iomap_writepage_end_bio(struct bio *bio)
struct iomap_ioend *ioend = iomap_ioend_from_bio(bio);
ioend->io_error = blk_status_to_errno(bio->bi_status);
- iomap_finish_ioend_buffered(ioend);
+ iomap_finish_ioend_buffered_write(ioend);
}
/*
diff --git a/fs/iomap/internal.h b/fs/iomap/internal.h
index f6992a3bf66a..c824e74a3526 100644
--- a/fs/iomap/internal.h
+++ b/fs/iomap/internal.h
@@ -4,7 +4,8 @@
#define IOEND_BATCH_SIZE 4096
-u32 iomap_finish_ioend_buffered(struct iomap_ioend *ioend);
+u32 iomap_finish_ioend_buffered_read(struct iomap_ioend *ioend);
+u32 iomap_finish_ioend_buffered_write(struct iomap_ioend *ioend);
u32 iomap_finish_ioend_direct(struct iomap_ioend *ioend);
#endif /* _IOMAP_INTERNAL_H */
diff --git a/fs/iomap/ioend.c b/fs/iomap/ioend.c
index 18894ebba6db..2dd29403dc10 100644
--- a/fs/iomap/ioend.c
+++ b/fs/iomap/ioend.c
@@ -44,7 +44,9 @@ static u32 iomap_finish_ioend(struct iomap_ioend *ioend, int error)
return 0;
if (ioend->io_flags & IOMAP_IOEND_DIRECT)
return iomap_finish_ioend_direct(ioend);
- return iomap_finish_ioend_buffered(ioend);
+ if (bio_op(&ioend->io_bio) == REQ_OP_READ)
+ return iomap_finish_ioend_buffered_read(ioend);
+ return iomap_finish_ioend_buffered_write(ioend);
}
/*
@@ -83,6 +85,8 @@ EXPORT_SYMBOL_GPL(iomap_finish_ioends);
static bool iomap_ioend_can_merge(struct iomap_ioend *ioend,
struct iomap_ioend *next)
{
+ if (bio_op(&ioend->io_bio) != bio_op(&next->io_bio))
+ return false;
if (ioend->io_bio.bi_status != next->io_bio.bi_status)
return false;
if (next->io_flags & IOMAP_IOEND_BOUNDARY)
--
2.45.2
* [PATCH 5/7] iomap: limit buffered I/O size to 128M
2025-02-03 9:43 PI and data checksumming for XFS Christoph Hellwig
` (3 preceding siblings ...)
2025-02-03 9:43 ` [PATCH 4/7] iomap: support ioends for reads Christoph Hellwig
@ 2025-02-03 9:43 ` Christoph Hellwig
2025-02-03 22:22 ` Darrick J. Wong
2025-02-03 9:43 ` [PATCH 6/7] xfs: support T10 protection information Christoph Hellwig
` (2 subsequent siblings)
7 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-03 9:43 UTC (permalink / raw)
To: Kanchan Joshi, Martin K . Petersen
Cc: Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
Currently iomap can build extremely large bios (I've seen sizes
up to 480MB). Cap them at 128M so that the soon-to-be-added
per-ioend integrity buffer doesn't go beyond what the
page allocator can support.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/iomap/buffered-io.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 06990e012884..71bb676d4998 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -456,6 +456,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
sector = iomap_sector(iomap, pos);
if (!ctx->bio ||
bio_end_sector(ctx->bio) != sector ||
+ ctx->bio->bi_iter.bi_size > SZ_128M ||
!bio_add_folio(ctx->bio, folio, plen, poff)) {
if (ctx->bio)
iomap_read_submit_bio(iter, ctx);
@@ -1674,6 +1675,8 @@ static struct iomap_ioend *iomap_alloc_ioend(struct iomap_writepage_ctx *wpc,
static bool iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t pos,
u16 ioend_flags)
{
+ if (wpc->ioend->io_bio.bi_iter.bi_size > SZ_128M)
+ return false;
if (ioend_flags & IOMAP_IOEND_BOUNDARY)
return false;
if ((ioend_flags & IOMAP_IOEND_NOMERGE_FLAGS) !=
--
2.45.2
* [PATCH 6/7] xfs: support T10 protection information
2025-02-03 9:43 PI and data checksumming for XFS Christoph Hellwig
` (4 preceding siblings ...)
2025-02-03 9:43 ` [PATCH 5/7] iomap: limit buffered I/O size to 128M Christoph Hellwig
@ 2025-02-03 9:43 ` Christoph Hellwig
2025-02-03 22:21 ` Darrick J. Wong
2025-02-03 9:43 ` [PATCH 7/7] xfs: implement block-metadata based data checksums Christoph Hellwig
2025-02-03 19:51 ` PI and data checksumming for XFS Martin K. Petersen
7 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-03 9:43 UTC (permalink / raw)
To: Kanchan Joshi, Martin K . Petersen
Cc: Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
Add support for generating / verifying protection information in the
file system. This is done by hooking into the bio submission in
iomap and then using the generic PI helpers. Compared to just using
the block layer auto PI, this extends the protection envelope and also
prepares for eventually passing PI through from userspace, at least
for direct I/O.
Right now this is still pretty hacky, e.g. the single PI buffer can
get pretty gigantic and has no mempool backing it. The deferring of
I/O completions is done unconditionally instead of only when needed,
and we assume the device can actually handle these huge segments.
The latter should be fixed by doing proper splitting based on metadata
limits in the block layer, but the rest needs to be addressed here.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/Makefile | 1 +
fs/xfs/xfs_aops.c | 29 +++++++++++++++--
fs/xfs/xfs_aops.h | 1 +
fs/xfs/xfs_data_csum.c | 73 ++++++++++++++++++++++++++++++++++++++++++
fs/xfs/xfs_data_csum.h | 7 ++++
fs/xfs/xfs_file.c | 27 +++++++++++++++-
fs/xfs/xfs_inode.h | 6 ++--
7 files changed, 136 insertions(+), 8 deletions(-)
create mode 100644 fs/xfs/xfs_data_csum.c
create mode 100644 fs/xfs/xfs_data_csum.h
diff --git a/fs/xfs/Makefile b/fs/xfs/Makefile
index 7afa51e41427..aa8749d640e7 100644
--- a/fs/xfs/Makefile
+++ b/fs/xfs/Makefile
@@ -73,6 +73,7 @@ xfs-y += xfs_aops.o \
xfs_bmap_util.o \
xfs_bio_io.o \
xfs_buf.o \
+ xfs_data_csum.o \
xfs_dahash_test.o \
xfs_dir2_readdir.o \
xfs_discard.o \
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 3e42a684cce1..291f5d4dbce6 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -19,6 +19,7 @@
#include "xfs_reflink.h"
#include "xfs_errortag.h"
#include "xfs_error.h"
+#include "xfs_data_csum.h"
struct xfs_writepage_ctx {
struct iomap_writepage_ctx ctx;
@@ -122,6 +123,11 @@ xfs_end_ioend(
goto done;
}
+ if (bio_op(&ioend->io_bio) == REQ_OP_READ) {
+ error = xfs_data_csum_verify(ioend);
+ goto done;
+ }
+
/*
* Success: commit the COW or unwritten blocks if needed.
*/
@@ -175,7 +181,7 @@ xfs_end_io(
}
}
-STATIC void
+void
xfs_end_bio(
struct bio *bio)
{
@@ -417,6 +423,8 @@ xfs_submit_ioend(
memalloc_nofs_restore(nofs_flag);
+ xfs_data_csum_generate(&ioend->io_bio);
+
/* send ioends that might require a transaction to the completion wq */
if (xfs_ioend_is_append(ioend) ||
(ioend->io_flags & (IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_SHARED)))
@@ -517,19 +525,34 @@ xfs_vm_bmap(
return iomap_bmap(mapping, block, &xfs_read_iomap_ops);
}
+static void xfs_buffered_read_submit_io(struct inode *inode,
+ struct bio *bio, loff_t file_offset)
+{
+ xfs_data_csum_alloc(bio);
+ iomap_init_ioend(inode, bio, file_offset, 0);
+ bio->bi_end_io = xfs_end_bio;
+ submit_bio(bio);
+}
+
+static const struct iomap_read_folio_ops xfs_iomap_read_ops = {
+ .bio_set = &iomap_ioend_bioset,
+ .submit_io = xfs_buffered_read_submit_io,
+};
+
STATIC int
xfs_vm_read_folio(
struct file *unused,
struct folio *folio)
{
- return iomap_read_folio(folio, &xfs_read_iomap_ops, NULL);
+ return iomap_read_folio(folio, &xfs_read_iomap_ops,
+ &xfs_iomap_read_ops);
}
STATIC void
xfs_vm_readahead(
struct readahead_control *rac)
{
- iomap_readahead(rac, &xfs_read_iomap_ops, NULL);
+ iomap_readahead(rac, &xfs_read_iomap_ops, &xfs_iomap_read_ops);
}
static int
diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
index e0bd68419764..efed1ab59dbf 100644
--- a/fs/xfs/xfs_aops.h
+++ b/fs/xfs/xfs_aops.h
@@ -10,5 +10,6 @@ extern const struct address_space_operations xfs_address_space_operations;
extern const struct address_space_operations xfs_dax_aops;
int xfs_setfilesize(struct xfs_inode *ip, xfs_off_t offset, size_t size);
+void xfs_end_bio(struct bio *bio);
#endif /* __XFS_AOPS_H__ */
diff --git a/fs/xfs/xfs_data_csum.c b/fs/xfs/xfs_data_csum.c
new file mode 100644
index 000000000000..d9d3620654b1
--- /dev/null
+++ b/fs/xfs/xfs_data_csum.c
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022-2025 Christoph Hellwig.
+ */
+#include "xfs.h"
+#include "xfs_format.h"
+#include "xfs_shared.h"
+#include "xfs_trans_resv.h"
+#include "xfs_mount.h"
+#include "xfs_inode.h"
+#include "xfs_cksum.h"
+#include "xfs_data_csum.h"
+#include <linux/iomap.h>
+#include <linux/blk-integrity.h>
+#include <linux/bio-integrity.h>
+
+void *
+xfs_data_csum_alloc(
+ struct bio *bio)
+{
+ struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
+ struct bio_integrity_payload *bip;
+ unsigned int buf_size;
+ void *buf;
+
+ if (!bi)
+ return NULL;
+
+ buf_size = bio_integrity_bytes(bi, bio_sectors(bio));
+ /* XXX: this needs proper mempools */
+ /* XXX: needs (partial) zeroing if tuple_size > csum_size */
+ buf = kmalloc(buf_size, GFP_NOFS | __GFP_NOFAIL);
+ bip = bio_integrity_alloc(bio, GFP_NOFS | __GFP_NOFAIL, 1);
+ if (!bio_integrity_add_page(bio, virt_to_page(buf), buf_size,
+ offset_in_page(buf)))
+ WARN_ON_ONCE(1);
+
+ if (bi->csum_type) {
+ if (bi->csum_type == BLK_INTEGRITY_CSUM_IP)
+ bip->bip_flags |= BIP_IP_CHECKSUM;
+ bip->bip_flags |= BIP_CHECK_GUARD;
+ }
+ if (bi->flags & BLK_INTEGRITY_REF_TAG)
+ bip->bip_flags |= BIP_CHECK_REFTAG;
+ bip_set_seed(bip, bio->bi_iter.bi_sector);
+
+ return buf;
+}
+
+void
+xfs_data_csum_generate(
+ struct bio *bio)
+{
+ struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
+
+ if (!bi || !bi->csum_type)
+ return;
+
+ xfs_data_csum_alloc(bio);
+ blk_integrity_generate(bio);
+}
+
+int
+xfs_data_csum_verify(
+ struct iomap_ioend *ioend)
+{
+ struct bio *bio = &ioend->io_bio;
+ struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
+
+ if (!bi || !bi->csum_type)
+ return 0;
+ return blk_integrity_verify_all(bio, ioend->io_sector);
+}
diff --git a/fs/xfs/xfs_data_csum.h b/fs/xfs/xfs_data_csum.h
new file mode 100644
index 000000000000..f32215e8f46c
--- /dev/null
+++ b/fs/xfs/xfs_data_csum.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+struct iomap_ioend;
+
+void *xfs_data_csum_alloc(struct bio *bio);
+void xfs_data_csum_generate(struct bio *bio);
+int xfs_data_csum_verify(struct iomap_ioend *ioend);
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index f7a7d89c345e..0d64c54017f0 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -25,6 +25,8 @@
#include "xfs_iomap.h"
#include "xfs_reflink.h"
#include "xfs_file.h"
+#include "xfs_data_csum.h"
+#include "xfs_aops.h"
#include <linux/dax.h>
#include <linux/falloc.h>
@@ -227,6 +229,20 @@ xfs_ilock_iocb_for_write(
return 0;
}
+static void xfs_dio_read_submit_io(const struct iomap_iter *iter,
+ struct bio *bio, loff_t file_offset)
+{
+ xfs_data_csum_alloc(bio);
+ iomap_init_ioend(iter->inode, bio, file_offset, IOMAP_IOEND_DIRECT);
+ bio->bi_end_io = xfs_end_bio;
+ submit_bio(bio);
+}
+
+static const struct iomap_dio_ops xfs_dio_read_ops = {
+ .bio_set = &iomap_ioend_bioset,
+ .submit_io = xfs_dio_read_submit_io,
+};
+
STATIC ssize_t
xfs_file_dio_read(
struct kiocb *iocb,
@@ -245,7 +261,8 @@ xfs_file_dio_read(
ret = xfs_ilock_iocb(iocb, XFS_IOLOCK_SHARED);
if (ret)
return ret;
- ret = iomap_dio_rw(iocb, to, &xfs_read_iomap_ops, NULL, 0, NULL, 0);
+ ret = iomap_dio_rw(iocb, to, &xfs_read_iomap_ops, &xfs_dio_read_ops, 0,
+ NULL, 0);
xfs_iunlock(ip, XFS_IOLOCK_SHARED);
return ret;
@@ -578,8 +595,16 @@ xfs_dio_write_end_io(
return error;
}
+static void xfs_dio_write_submit_io(const struct iomap_iter *iter,
+ struct bio *bio, loff_t file_offset)
+{
+ xfs_data_csum_generate(bio);
+ submit_bio(bio);
+}
+
static const struct iomap_dio_ops xfs_dio_write_ops = {
.end_io = xfs_dio_write_end_io,
+ .submit_io = xfs_dio_write_submit_io,
};
/*
diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
index c08093a65352..ff346bbe454c 100644
--- a/fs/xfs/xfs_inode.h
+++ b/fs/xfs/xfs_inode.h
@@ -609,10 +609,8 @@ int xfs_break_layouts(struct inode *inode, uint *iolock,
static inline void xfs_update_stable_writes(struct xfs_inode *ip)
{
- if (bdev_stable_writes(xfs_inode_buftarg(ip)->bt_bdev))
- mapping_set_stable_writes(VFS_I(ip)->i_mapping);
- else
- mapping_clear_stable_writes(VFS_I(ip)->i_mapping);
+ /* XXX: unconditional for now */
+ mapping_set_stable_writes(VFS_I(ip)->i_mapping);
}
/*
--
2.45.2
* [PATCH 7/7] xfs: implement block-metadata based data checksums
2025-02-03 9:43 PI and data checksumming for XFS Christoph Hellwig
` (5 preceding siblings ...)
2025-02-03 9:43 ` [PATCH 6/7] xfs: support T10 protection information Christoph Hellwig
@ 2025-02-03 9:43 ` Christoph Hellwig
2025-02-03 22:20 ` Darrick J. Wong
2025-02-03 19:51 ` PI and data checksumming for XFS Martin K. Petersen
7 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-03 9:43 UTC (permalink / raw)
To: Kanchan Joshi, Martin K . Petersen
Cc: Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
This is a quick hack to demonstrate how data checksumming can be
implemented when it can be stored in the out-of-line metadata for each
logical block.  It builds on top of the previous PI infrastructure,
and instead of generating/verifying protection information it simply
generates and verifies a crc32c checksum and stores it in the non-PI
metadata.  It is still missing a feature bit in the superblock, a check
that enough space is available in the metadata, and many other things.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_data_csum.c | 79 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 76 insertions(+), 3 deletions(-)
diff --git a/fs/xfs/xfs_data_csum.c b/fs/xfs/xfs_data_csum.c
index d9d3620654b1..862388803398 100644
--- a/fs/xfs/xfs_data_csum.c
+++ b/fs/xfs/xfs_data_csum.c
@@ -14,6 +14,73 @@
#include <linux/blk-integrity.h>
#include <linux/bio-integrity.h>
+static inline void *xfs_csum_buf(struct bio *bio)
+{
+ return bvec_virt(bio_integrity(bio)->bip_vec);
+}
+
+static inline __le32
+xfs_data_csum(
+ void *data,
+ unsigned int len)
+{
+ return xfs_end_cksum(crc32c(XFS_CRC_SEED, data, len));
+}
+
+static void
+__xfs_data_csum_generate(
+ struct bio *bio)
+{
+ unsigned int ssize = bdev_logical_block_size(bio->bi_bdev);
+ __le32 *csum_buf = xfs_csum_buf(bio);
+ struct bvec_iter_all iter;
+ struct bio_vec *bv;
+ int c = 0;
+
+ bio_for_each_segment_all(bv, bio, iter) {
+ void *p;
+ unsigned int off;
+
+ p = bvec_kmap_local(bv);
+ for (off = 0; off < bv->bv_len; off += ssize)
+ csum_buf[c++] = xfs_data_csum(p + off, ssize);
+ kunmap_local(p);
+ }
+}
+
+static int
+__xfs_data_csum_verify(
+ struct bio *bio,
+ struct xfs_inode *ip,
+ xfs_off_t file_offset)
+{
+ unsigned int ssize = bdev_logical_block_size(bio->bi_bdev);
+ __le32 *csum_buf = xfs_csum_buf(bio);
+ int c = 0;
+ struct bvec_iter_all iter;
+ struct bio_vec *bv;
+
+ bio_for_each_segment_all(bv, bio, iter) {
+ void *p;
+ unsigned int off;
+
+ p = bvec_kmap_local(bv);
+ for (off = 0; off < bv->bv_len; off += ssize) {
+ if (xfs_data_csum(p + off, ssize) != csum_buf[c++]) {
+ kunmap_local(p);
+ xfs_warn(ip->i_mount,
+"checksum mismatch at inode 0x%llx offset %lld",
+ ip->i_ino, file_offset);
+ return -EFSBADCRC;
+ }
+ file_offset += ssize;
+ }
+ kunmap_local(p);
+ }
+
+ return 0;
+}
+
void *
xfs_data_csum_alloc(
struct bio *bio)
@@ -53,11 +120,14 @@ xfs_data_csum_generate(
{
struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
- if (!bi || !bi->csum_type)
+ if (!bi)
return;
xfs_data_csum_alloc(bio);
- blk_integrity_generate(bio);
+ if (!bi->csum_type)
+ __xfs_data_csum_generate(bio);
+ else
+ blk_integrity_generate(bio);
}
int
@@ -67,7 +137,10 @@ xfs_data_csum_verify(
struct bio *bio = &ioend->io_bio;
struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
- if (!bi || !bi->csum_type)
+ if (!bi)
return 0;
+ if (!bi->csum_type)
+ return __xfs_data_csum_verify(&ioend->io_bio,
+ XFS_I(ioend->io_inode), ioend->io_offset);
return blk_integrity_verify_all(bio, ioend->io_sector);
}
--
2.45.2
* Re: [PATCH 1/7] block: support integrity generation and verification from file systems
2025-02-03 9:43 ` [PATCH 1/7] block: support integrity generation and verification from file systems Christoph Hellwig
@ 2025-02-03 19:47 ` Martin K. Petersen
2025-04-21 2:30 ` Anuj gupta
1 sibling, 0 replies; 23+ messages in thread
From: Martin K. Petersen @ 2025-02-03 19:47 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, Martin K . Petersen, Johannes Thumshirn, Qu Wenruo,
Goldwyn Rodrigues, linux-block, linux-fsdevel, linux-xfs
Christoph,
> Add a new blk_integrity_verify_all helper that uses the _all iterator
> to verify the entire bio as built by the file system and doesn't
> require the extra bvec_iter used by blk_integrity_verify_iter and
> export blk_integrity_generate which can be used as-is.
LGTM.
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
--
Martin K. Petersen Oracle Linux Engineering
* Re: PI and data checksumming for XFS
2025-02-03 9:43 PI and data checksumming for XFS Christoph Hellwig
` (6 preceding siblings ...)
2025-02-03 9:43 ` [PATCH 7/7] xfs: implement block-metadata based data checksums Christoph Hellwig
@ 2025-02-03 19:51 ` Martin K. Petersen
7 siblings, 0 replies; 23+ messages in thread
From: Martin K. Petersen @ 2025-02-03 19:51 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, Martin K . Petersen, Johannes Thumshirn, Qu Wenruo,
Goldwyn Rodrigues, linux-block, linux-fsdevel, linux-xfs
Christoph,
> with all the PI and checksumming discussions I decided to dust of my
> old XFS PI and data checksumming prototypes. This is pre-alpha code so
> handle it with care. I tried to document most issues and limitations
> in the patch, but I might have missed some. It survives an xfstests
> quick run with just three failures, one of which is a pre-existing
> failure on a PI disable device when creating dm-thin.
This is along the lines of how I was originally intending the integrity
infrastructure to be used by filesystems. So I'm happy to see some
momentum in that department!
--
Martin K. Petersen Oracle Linux Engineering
* Re: [PATCH 7/7] xfs: implement block-metadata based data checksums
2025-02-03 9:43 ` [PATCH 7/7] xfs: implement block-metadata based data checksums Christoph Hellwig
@ 2025-02-03 22:20 ` Darrick J. Wong
2025-02-04 5:00 ` Christoph Hellwig
0 siblings, 1 reply; 23+ messages in thread
From: Darrick J. Wong @ 2025-02-03 22:20 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, Martin K . Petersen, Johannes Thumshirn, Qu Wenruo,
Goldwyn Rodrigues, linux-block, linux-fsdevel, linux-xfs
On Mon, Feb 03, 2025 at 10:43:11AM +0100, Christoph Hellwig wrote:
> This is a quick hack to demonstrate how data checksumming can be
> implemented when it can be stored in the out of line metadata for each
> logical block. It builds on top of the previous PI infrastructure
> and instead of generating/verifying protection information it simply
> generates and verifies a crc32c checksum and stores it in the non-PI
PI can do crc32c now? I thought it could only do that old crc16 from
like 15 years ago and crc64? If we try to throw crc32c at a device,
won't it then reject the "incorrect" checksums? Or is there some other
magic in here where it works and I'm just too out of date to know?
<shrug>
The crc32c generation and validation look decent, though we're
definitely going to want an inode flag so that we're not stuck with
stable page writes.
--D
> metadata. It misses a feature bit in the superblock, checking that
> enough size is available in the metadata and many other things.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> fs/xfs/xfs_data_csum.c | 79 ++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 76 insertions(+), 3 deletions(-)
>
> diff --git a/fs/xfs/xfs_data_csum.c b/fs/xfs/xfs_data_csum.c
> index d9d3620654b1..862388803398 100644
> --- a/fs/xfs/xfs_data_csum.c
> +++ b/fs/xfs/xfs_data_csum.c
> @@ -14,6 +14,73 @@
> #include <linux/blk-integrity.h>
> #include <linux/bio-integrity.h>
>
> +static inline void *xfs_csum_buf(struct bio *bio)
> +{
> + return bvec_virt(bio_integrity(bio)->bip_vec);
> +}
> +
> +static inline __le32
> +xfs_data_csum(
> + void *data,
> + unsigned int len)
> +{
> + return xfs_end_cksum(crc32c(XFS_CRC_SEED, data, len));
> +}
> +
> +static void
> +__xfs_data_csum_generate(
> + struct bio *bio)
> +{
> + unsigned int ssize = bdev_logical_block_size(bio->bi_bdev);
> + __le32 *csum_buf = xfs_csum_buf(bio);
> + struct bvec_iter_all iter;
> + struct bio_vec *bv;
> + int c = 0;
> +
> + bio_for_each_segment_all(bv, bio, iter) {
> + void *p;
> + unsigned int off;
> +
> + p = bvec_kmap_local(bv);
> + for (off = 0; off < bv->bv_len; off += ssize)
> + csum_buf[c++] = xfs_data_csum(p + off, ssize);
> + kunmap_local(p);
> + }
> +}
> +
> +static int
> +__xfs_data_csum_verify(
> + struct bio *bio,
> + struct xfs_inode *ip,
> + xfs_off_t file_offset)
> +{
> + unsigned int ssize = bdev_logical_block_size(bio->bi_bdev);
> + __le32 *csum_buf = xfs_csum_buf(bio);
> + int c = 0;
> + struct bvec_iter_all iter;
> + struct bio_vec *bv;
> +
> + bio_for_each_segment_all(bv, bio, iter) {
> + void *p;
> + unsigned int off;
> +
> + p = bvec_kmap_local(bv);
> + for (off = 0; off < bv->bv_len; off += ssize) {
> + if (xfs_data_csum(p + off, ssize) != csum_buf[c++]) {
> + kunmap_local(p);
> + xfs_warn(ip->i_mount,
> +"checksum mismatch at inode 0x%llx offset %lld",
> + ip->i_ino, file_offset);
> + return -EFSBADCRC;
> + }
> + file_offset += ssize;
> + }
> + kunmap_local(p);
> + }
> +
> + return 0;
> +}
> +
> void *
> xfs_data_csum_alloc(
> struct bio *bio)
> @@ -53,11 +120,14 @@ xfs_data_csum_generate(
> {
> struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
>
> - if (!bi || !bi->csum_type)
> + if (!bi)
> return;
>
> xfs_data_csum_alloc(bio);
> - blk_integrity_generate(bio);
> + if (!bi->csum_type)
> + __xfs_data_csum_generate(bio);
> + else
> + blk_integrity_generate(bio);
> }
>
> int
> @@ -67,7 +137,10 @@ xfs_data_csum_verify(
> struct bio *bio = &ioend->io_bio;
> struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
>
> - if (!bi || !bi->csum_type)
> + if (!bi)
> return 0;
> + if (!bi->csum_type)
> + return __xfs_data_csum_verify(&ioend->io_bio,
> + XFS_I(ioend->io_inode), ioend->io_offset);
> return blk_integrity_verify_all(bio, ioend->io_sector);
> }
> --
> 2.45.2
>
>
* Re: [PATCH 6/7] xfs: support T10 protection information
2025-02-03 9:43 ` [PATCH 6/7] xfs: support T10 protection information Christoph Hellwig
@ 2025-02-03 22:21 ` Darrick J. Wong
0 siblings, 0 replies; 23+ messages in thread
From: Darrick J. Wong @ 2025-02-03 22:21 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, Martin K . Petersen, Johannes Thumshirn, Qu Wenruo,
Goldwyn Rodrigues, linux-block, linux-fsdevel, linux-xfs
On Mon, Feb 03, 2025 at 10:43:10AM +0100, Christoph Hellwig wrote:
> Add support for generating / verifying protection information in the
> file system. This is done by hooking into the bio submission in
> iomap and then using the generic PI helpers. Compared to just using
> the block layer auto PI this extends the protection envelope and also
> prepares for eventually passing through PI from userspace at least
> for direct I/O.
>
> Right now this is still pretty hacky, e.g. the single PI buffer can
> get pretty gigantic and has no mempool backing it. The deferring of
> I/O completions is done unconditionally instead only when needed,
> and we assume the device can actually handle these huge segments.
> The latter should be fixed by doing proper splitting based on metadata
> limits in the block layer, but the rest needs to be addressed here.
Seems reasonable to me modulo the XXX parts. :)
--D
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> fs/xfs/Makefile | 1 +
> fs/xfs/xfs_aops.c | 29 +++++++++++++++--
> fs/xfs/xfs_aops.h | 1 +
> fs/xfs/xfs_data_csum.c | 73 ++++++++++++++++++++++++++++++++++++++++++
> fs/xfs/xfs_data_csum.h | 7 ++++
> fs/xfs/xfs_file.c | 27 +++++++++++++++-
> fs/xfs/xfs_inode.h | 6 ++--
> 7 files changed, 136 insertions(+), 8 deletions(-)
> create mode 100644 fs/xfs/xfs_data_csum.c
> create mode 100644 fs/xfs/xfs_data_csum.h
>
> diff --git a/fs/xfs/Makefile b/fs/xfs/Makefile
> index 7afa51e41427..aa8749d640e7 100644
> --- a/fs/xfs/Makefile
> +++ b/fs/xfs/Makefile
> @@ -73,6 +73,7 @@ xfs-y += xfs_aops.o \
> xfs_bmap_util.o \
> xfs_bio_io.o \
> xfs_buf.o \
> + xfs_data_csum.o \
> xfs_dahash_test.o \
> xfs_dir2_readdir.o \
> xfs_discard.o \
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index 3e42a684cce1..291f5d4dbce6 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -19,6 +19,7 @@
> #include "xfs_reflink.h"
> #include "xfs_errortag.h"
> #include "xfs_error.h"
> +#include "xfs_data_csum.h"
>
> struct xfs_writepage_ctx {
> struct iomap_writepage_ctx ctx;
> @@ -122,6 +123,11 @@ xfs_end_ioend(
> goto done;
> }
>
> + if (bio_op(&ioend->io_bio) == REQ_OP_READ) {
> + error = xfs_data_csum_verify(ioend);
> + goto done;
> + }
> +
> /*
> * Success: commit the COW or unwritten blocks if needed.
> */
> @@ -175,7 +181,7 @@ xfs_end_io(
> }
> }
>
> -STATIC void
> +void
> xfs_end_bio(
> struct bio *bio)
> {
> @@ -417,6 +423,8 @@ xfs_submit_ioend(
>
> memalloc_nofs_restore(nofs_flag);
>
> + xfs_data_csum_generate(&ioend->io_bio);
> +
> /* send ioends that might require a transaction to the completion wq */
> if (xfs_ioend_is_append(ioend) ||
> (ioend->io_flags & (IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_SHARED)))
> @@ -517,19 +525,34 @@ xfs_vm_bmap(
> return iomap_bmap(mapping, block, &xfs_read_iomap_ops);
> }
>
> +static void xfs_buffered_read_submit_io(struct inode *inode,
> + struct bio *bio, loff_t file_offset)
> +{
> + xfs_data_csum_alloc(bio);
> + iomap_init_ioend(inode, bio, file_offset, 0);
> + bio->bi_end_io = xfs_end_bio;
> + submit_bio(bio);
> +}
> +
> +static const struct iomap_read_folio_ops xfs_iomap_read_ops = {
> + .bio_set = &iomap_ioend_bioset,
> + .submit_io = xfs_buffered_read_submit_io,
> +};
> +
> STATIC int
> xfs_vm_read_folio(
> struct file *unused,
> struct folio *folio)
> {
> - return iomap_read_folio(folio, &xfs_read_iomap_ops, NULL);
> + return iomap_read_folio(folio, &xfs_read_iomap_ops,
> + &xfs_iomap_read_ops);
> }
>
> STATIC void
> xfs_vm_readahead(
> struct readahead_control *rac)
> {
> - iomap_readahead(rac, &xfs_read_iomap_ops, NULL);
> + iomap_readahead(rac, &xfs_read_iomap_ops, &xfs_iomap_read_ops);
> }
>
> static int
> diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
> index e0bd68419764..efed1ab59dbf 100644
> --- a/fs/xfs/xfs_aops.h
> +++ b/fs/xfs/xfs_aops.h
> @@ -10,5 +10,6 @@ extern const struct address_space_operations xfs_address_space_operations;
> extern const struct address_space_operations xfs_dax_aops;
>
> int xfs_setfilesize(struct xfs_inode *ip, xfs_off_t offset, size_t size);
> +void xfs_end_bio(struct bio *bio);
>
> #endif /* __XFS_AOPS_H__ */
> diff --git a/fs/xfs/xfs_data_csum.c b/fs/xfs/xfs_data_csum.c
> new file mode 100644
> index 000000000000..d9d3620654b1
> --- /dev/null
> +++ b/fs/xfs/xfs_data_csum.c
> @@ -0,0 +1,73 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2022-2025 Christoph Hellwig.
> + */
> +#include "xfs.h"
> +#include "xfs_format.h"
> +#include "xfs_shared.h"
> +#include "xfs_trans_resv.h"
> +#include "xfs_mount.h"
> +#include "xfs_inode.h"
> +#include "xfs_cksum.h"
> +#include "xfs_data_csum.h"
> +#include <linux/iomap.h>
> +#include <linux/blk-integrity.h>
> +#include <linux/bio-integrity.h>
> +
> +void *
> +xfs_data_csum_alloc(
> + struct bio *bio)
> +{
> + struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
> + struct bio_integrity_payload *bip;
> + unsigned int buf_size;
> + void *buf;
> +
> + if (!bi)
> + return NULL;
> +
> + buf_size = bio_integrity_bytes(bi, bio_sectors(bio));
> + /* XXX: this needs proper mempools */
> + /* XXX: needs (partial) zeroing if tuple_size > csum_size */
> + buf = kmalloc(buf_size, GFP_NOFS | __GFP_NOFAIL);
> + bip = bio_integrity_alloc(bio, GFP_NOFS | __GFP_NOFAIL, 1);
> + if (!bio_integrity_add_page(bio, virt_to_page(buf), buf_size,
> + offset_in_page(buf)))
> + WARN_ON_ONCE(1);
> +
> + if (bi->csum_type) {
> + if (bi->csum_type == BLK_INTEGRITY_CSUM_IP)
> + bip->bip_flags |= BIP_IP_CHECKSUM;
> + bip->bip_flags |= BIP_CHECK_GUARD;
> + }
> + if (bi->flags & BLK_INTEGRITY_REF_TAG)
> + bip->bip_flags |= BIP_CHECK_REFTAG;
> + bip_set_seed(bip, bio->bi_iter.bi_sector);
> +
> + return buf;
> +}
> +
> +void
> +xfs_data_csum_generate(
> + struct bio *bio)
> +{
> + struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
> +
> + if (!bi || !bi->csum_type)
> + return;
> +
> + xfs_data_csum_alloc(bio);
> + blk_integrity_generate(bio);
> +}
> +
> +int
> +xfs_data_csum_verify(
> + struct iomap_ioend *ioend)
> +{
> + struct bio *bio = &ioend->io_bio;
> + struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
> +
> + if (!bi || !bi->csum_type)
> + return 0;
> + return blk_integrity_verify_all(bio, ioend->io_sector);
> +}
> diff --git a/fs/xfs/xfs_data_csum.h b/fs/xfs/xfs_data_csum.h
> new file mode 100644
> index 000000000000..f32215e8f46c
> --- /dev/null
> +++ b/fs/xfs/xfs_data_csum.h
> @@ -0,0 +1,7 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +struct iomap_ioend;
> +
> +void *xfs_data_csum_alloc(struct bio *bio);
> +void xfs_data_csum_generate(struct bio *bio);
> +int xfs_data_csum_verify(struct iomap_ioend *ioend);
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index f7a7d89c345e..0d64c54017f0 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -25,6 +25,8 @@
> #include "xfs_iomap.h"
> #include "xfs_reflink.h"
> #include "xfs_file.h"
> +#include "xfs_data_csum.h"
> +#include "xfs_aops.h"
>
> #include <linux/dax.h>
> #include <linux/falloc.h>
> @@ -227,6 +229,20 @@ xfs_ilock_iocb_for_write(
> return 0;
> }
>
> +static void xfs_dio_read_submit_io(const struct iomap_iter *iter,
> + struct bio *bio, loff_t file_offset)
> +{
> + xfs_data_csum_alloc(bio);
> + iomap_init_ioend(iter->inode, bio, file_offset, IOMAP_IOEND_DIRECT);
> + bio->bi_end_io = xfs_end_bio;
> + submit_bio(bio);
> +}
> +
> +static const struct iomap_dio_ops xfs_dio_read_ops = {
> + .bio_set = &iomap_ioend_bioset,
> + .submit_io = xfs_dio_read_submit_io,
> +};
> +
> STATIC ssize_t
> xfs_file_dio_read(
> struct kiocb *iocb,
> @@ -245,7 +261,8 @@ xfs_file_dio_read(
> ret = xfs_ilock_iocb(iocb, XFS_IOLOCK_SHARED);
> if (ret)
> return ret;
> - ret = iomap_dio_rw(iocb, to, &xfs_read_iomap_ops, NULL, 0, NULL, 0);
> + ret = iomap_dio_rw(iocb, to, &xfs_read_iomap_ops, &xfs_dio_read_ops, 0,
> + NULL, 0);
> xfs_iunlock(ip, XFS_IOLOCK_SHARED);
>
> return ret;
> @@ -578,8 +595,16 @@ xfs_dio_write_end_io(
> return error;
> }
>
> +static void xfs_dio_write_submit_io(const struct iomap_iter *iter,
> + struct bio *bio, loff_t file_offset)
> +{
> + xfs_data_csum_generate(bio);
> + submit_bio(bio);
> +}
> +
> static const struct iomap_dio_ops xfs_dio_write_ops = {
> .end_io = xfs_dio_write_end_io,
> + .submit_io = xfs_dio_write_submit_io,
> };
>
> /*
> diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
> index c08093a65352..ff346bbe454c 100644
> --- a/fs/xfs/xfs_inode.h
> +++ b/fs/xfs/xfs_inode.h
> @@ -609,10 +609,8 @@ int xfs_break_layouts(struct inode *inode, uint *iolock,
>
> static inline void xfs_update_stable_writes(struct xfs_inode *ip)
> {
> - if (bdev_stable_writes(xfs_inode_buftarg(ip)->bt_bdev))
> - mapping_set_stable_writes(VFS_I(ip)->i_mapping);
> - else
> - mapping_clear_stable_writes(VFS_I(ip)->i_mapping);
> + /* XXX: unconditional for now */
> + mapping_set_stable_writes(VFS_I(ip)->i_mapping);
> }
>
> /*
> --
> 2.45.2
>
>
* Re: [PATCH 5/7] iomap: limit buffered I/O size to 128M
2025-02-03 9:43 ` [PATCH 5/7] iomap: limit buffered I/O size to 128M Christoph Hellwig
@ 2025-02-03 22:22 ` Darrick J. Wong
0 siblings, 0 replies; 23+ messages in thread
From: Darrick J. Wong @ 2025-02-03 22:22 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, Martin K . Petersen, Johannes Thumshirn, Qu Wenruo,
Goldwyn Rodrigues, linux-block, linux-fsdevel, linux-xfs
On Mon, Feb 03, 2025 at 10:43:09AM +0100, Christoph Hellwig wrote:
> Currently iomap can build extremely large bios (I've seen sizes
> up to 480MB). Limit this to a lower bound so that the soon to
> be added per-ioend integrity buffer doesn't go beyond what the
> page allocator can support.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> fs/iomap/buffered-io.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 06990e012884..71bb676d4998 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -456,6 +456,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
> sector = iomap_sector(iomap, pos);
> if (!ctx->bio ||
> bio_end_sector(ctx->bio) != sector ||
> + ctx->bio->bi_iter.bi_size > SZ_128M ||
I imagine this is one of the XXX parts, but we probably shouldn't limit
the bios for !pi filesystems that won't care.
--D
> !bio_add_folio(ctx->bio, folio, plen, poff)) {
> if (ctx->bio)
> iomap_read_submit_bio(iter, ctx);
> @@ -1674,6 +1675,8 @@ static struct iomap_ioend *iomap_alloc_ioend(struct iomap_writepage_ctx *wpc,
> static bool iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t pos,
> u16 ioend_flags)
> {
> + if (wpc->ioend->io_bio.bi_iter.bi_size > SZ_128M)
> + return false;
> if (ioend_flags & IOMAP_IOEND_BOUNDARY)
> return false;
> if ((ioend_flags & IOMAP_IOEND_NOMERGE_FLAGS) !=
> --
> 2.45.2
>
>
* Re: [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset
2025-02-03 9:43 ` [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset Christoph Hellwig
@ 2025-02-03 22:23 ` Darrick J. Wong
2025-02-04 4:58 ` Christoph Hellwig
2025-03-13 13:53 ` Matthew Wilcox
1 sibling, 1 reply; 23+ messages in thread
From: Darrick J. Wong @ 2025-02-03 22:23 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, Martin K . Petersen, Johannes Thumshirn, Qu Wenruo,
Goldwyn Rodrigues, linux-block, linux-fsdevel, linux-xfs
On Mon, Feb 03, 2025 at 10:43:07AM +0100, Christoph Hellwig wrote:
> From: Goldwyn Rodrigues <rgoldwyn@suse.com>
>
> Allocate the bio from the bioset provided in iomap_read_folio_ops.
> If no bioset is provided, fs_bio_set is used which is the standard
> bioset for filesystems.
>
> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
I feel like I've seen this patch and the last one floating around for
quite a while; would you and/or Goldwyn like to merge it for 6.15?
--D
> [hch: factor out two helpers]
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> fs/iomap/buffered-io.c | 51 ++++++++++++++++++++++++++++--------------
> include/linux/iomap.h | 6 +++++
> 2 files changed, 40 insertions(+), 17 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 804527dcc9ba..eaffa23eb8e4 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -364,6 +364,39 @@ static inline bool iomap_block_needs_zeroing(const struct iomap_iter *iter,
> pos >= i_size_read(iter->inode);
> }
>
> +static struct bio_set *iomap_read_bio_set(struct iomap_readpage_ctx *ctx)
> +{
> + if (ctx->ops && ctx->ops->bio_set)
> + return ctx->ops->bio_set;
> + return &fs_bio_set;
> +}
> +
> +static struct bio *iomap_read_alloc_bio(const struct iomap_iter *iter,
> + struct iomap_readpage_ctx *ctx, loff_t length)
> +{
> + unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE);
> + struct block_device *bdev = iter->iomap.bdev;
> + struct bio_set *bio_set = iomap_read_bio_set(ctx);
> + gfp_t gfp = mapping_gfp_constraint(iter->inode->i_mapping, GFP_KERNEL);
> + gfp_t orig_gfp = gfp;
> + struct bio *bio;
> +
> + if (ctx->rac) /* same as readahead_gfp_mask */
> + gfp |= __GFP_NORETRY | __GFP_NOWARN;
> +
> + bio = bio_alloc_bioset(bdev, bio_max_segs(nr_vecs), REQ_OP_READ, gfp,
> + bio_set);
> +
> + /*
> + * If the bio_alloc fails, try it again for a single page to avoid
> + * having to deal with partial page reads. This emulates what
> + * do_mpage_read_folio does.
> + */
> + if (!bio)
> + bio = bio_alloc_bioset(bdev, 1, REQ_OP_READ, orig_gfp, bio_set);
> + return bio;
> +}
> +
> static void iomap_read_submit_bio(const struct iomap_iter *iter,
> struct iomap_readpage_ctx *ctx)
> {
> @@ -411,27 +444,11 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
> if (!ctx->bio ||
> bio_end_sector(ctx->bio) != sector ||
> !bio_add_folio(ctx->bio, folio, plen, poff)) {
> - gfp_t gfp = mapping_gfp_constraint(folio->mapping, GFP_KERNEL);
> - gfp_t orig_gfp = gfp;
> - unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE);
> -
> if (ctx->bio)
> iomap_read_submit_bio(iter, ctx);
>
> ctx->bio_start_pos = offset;
> - if (ctx->rac) /* same as readahead_gfp_mask */
> - gfp |= __GFP_NORETRY | __GFP_NOWARN;
> - ctx->bio = bio_alloc(iomap->bdev, bio_max_segs(nr_vecs),
> - REQ_OP_READ, gfp);
> - /*
> - * If the bio_alloc fails, try it again for a single page to
> - * avoid having to deal with partial page reads. This emulates
> - * what do_mpage_read_folio does.
> - */
> - if (!ctx->bio) {
> - ctx->bio = bio_alloc(iomap->bdev, 1, REQ_OP_READ,
> - orig_gfp);
> - }
> + ctx->bio = iomap_read_alloc_bio(iter, ctx, length);
> if (ctx->rac)
> ctx->bio->bi_opf |= REQ_RAHEAD;
> ctx->bio->bi_iter.bi_sector = sector;
> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> index 2930861d1ef1..304be88ecd23 100644
> --- a/include/linux/iomap.h
> +++ b/include/linux/iomap.h
> @@ -311,6 +311,12 @@ struct iomap_read_folio_ops {
> */
> void (*submit_io)(struct inode *inode, struct bio *bio,
> loff_t file_offset);
> +
> + /*
> + * Optional, allows filesystem to specify own bio_set, so new bio's
> + * can be allocated from the provided bio_set.
> + */
> + struct bio_set *bio_set;
> };
>
> int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops,
> --
> 2.45.2
>
>
* Re: [PATCH 4/7] iomap: support ioends for reads
2025-02-03 9:43 ` [PATCH 4/7] iomap: support ioends for reads Christoph Hellwig
@ 2025-02-03 22:24 ` Darrick J. Wong
0 siblings, 0 replies; 23+ messages in thread
From: Darrick J. Wong @ 2025-02-03 22:24 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, Martin K . Petersen, Johannes Thumshirn, Qu Wenruo,
Goldwyn Rodrigues, linux-block, linux-fsdevel, linux-xfs
On Mon, Feb 03, 2025 at 10:43:08AM +0100, Christoph Hellwig wrote:
> Support using the ioend structure to defer I/O completion for
> reads in addition to writes. This requires a check for the operation
> to not merge reads and writes, and for buffere I/O a call into the
buffered
> buffered read I/O completion handler from iomap_finish_ioend. For
> direct I/O the existing call into the direct I/O completion handler
> handles reads just fine already.
Otherwise everything looks ok to me.
--D
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> fs/iomap/buffered-io.c | 23 ++++++++++++++++++-----
> fs/iomap/internal.h | 3 ++-
> fs/iomap/ioend.c | 6 +++++-
> 3 files changed, 25 insertions(+), 7 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index eaffa23eb8e4..06990e012884 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -306,14 +306,27 @@ static void iomap_finish_folio_read(struct folio *folio, size_t off,
> folio_end_read(folio, uptodate);
> }
>
> -static void iomap_read_end_io(struct bio *bio)
> +static u32 __iomap_read_end_io(struct bio *bio, int error)
> {
> - int error = blk_status_to_errno(bio->bi_status);
> struct folio_iter fi;
> + u32 folio_count = 0;
>
> - bio_for_each_folio_all(fi, bio)
> + bio_for_each_folio_all(fi, bio) {
> iomap_finish_folio_read(fi.folio, fi.offset, fi.length, error);
> + folio_count++;
> + }
> bio_put(bio);
> + return folio_count;
> +}
> +
> +static void iomap_read_end_io(struct bio *bio)
> +{
> + __iomap_read_end_io(bio, blk_status_to_errno(bio->bi_status));
> +}
> +
> +u32 iomap_finish_ioend_buffered_read(struct iomap_ioend *ioend)
> +{
> + return __iomap_read_end_io(&ioend->io_bio, ioend->io_error);
> }
>
> struct iomap_readpage_ctx {
> @@ -1568,7 +1581,7 @@ static void iomap_finish_folio_write(struct inode *inode, struct folio *folio,
> * state, release holds on bios, and finally free up memory. Do not use the
> * ioend after this.
> */
> -u32 iomap_finish_ioend_buffered(struct iomap_ioend *ioend)
> +u32 iomap_finish_ioend_buffered_write(struct iomap_ioend *ioend)
> {
> struct inode *inode = ioend->io_inode;
> struct bio *bio = &ioend->io_bio;
> @@ -1600,7 +1613,7 @@ static void iomap_writepage_end_bio(struct bio *bio)
> struct iomap_ioend *ioend = iomap_ioend_from_bio(bio);
>
> ioend->io_error = blk_status_to_errno(bio->bi_status);
> - iomap_finish_ioend_buffered(ioend);
> + iomap_finish_ioend_buffered_write(ioend);
> }
>
> /*
> diff --git a/fs/iomap/internal.h b/fs/iomap/internal.h
> index f6992a3bf66a..c824e74a3526 100644
> --- a/fs/iomap/internal.h
> +++ b/fs/iomap/internal.h
> @@ -4,7 +4,8 @@
>
> #define IOEND_BATCH_SIZE 4096
>
> -u32 iomap_finish_ioend_buffered(struct iomap_ioend *ioend);
> +u32 iomap_finish_ioend_buffered_read(struct iomap_ioend *ioend);
> +u32 iomap_finish_ioend_buffered_write(struct iomap_ioend *ioend);
> u32 iomap_finish_ioend_direct(struct iomap_ioend *ioend);
>
> #endif /* _IOMAP_INTERNAL_H */
> diff --git a/fs/iomap/ioend.c b/fs/iomap/ioend.c
> index 18894ebba6db..2dd29403dc10 100644
> --- a/fs/iomap/ioend.c
> +++ b/fs/iomap/ioend.c
> @@ -44,7 +44,9 @@ static u32 iomap_finish_ioend(struct iomap_ioend *ioend, int error)
> return 0;
> if (ioend->io_flags & IOMAP_IOEND_DIRECT)
> return iomap_finish_ioend_direct(ioend);
> - return iomap_finish_ioend_buffered(ioend);
> + if (bio_op(&ioend->io_bio) == REQ_OP_READ)
> + return iomap_finish_ioend_buffered_read(ioend);
> + return iomap_finish_ioend_buffered_write(ioend);
> }
>
> /*
> @@ -83,6 +85,8 @@ EXPORT_SYMBOL_GPL(iomap_finish_ioends);
> static bool iomap_ioend_can_merge(struct iomap_ioend *ioend,
> struct iomap_ioend *next)
> {
> + if (bio_op(&ioend->io_bio) != bio_op(&next->io_bio))
> + return false;
> if (ioend->io_bio.bi_status != next->io_bio.bi_status)
> return false;
> if (next->io_flags & IOMAP_IOEND_BOUNDARY)
> --
> 2.45.2
>
>
* Re: [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset
2025-02-03 22:23 ` Darrick J. Wong
@ 2025-02-04 4:58 ` Christoph Hellwig
0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-04 4:58 UTC (permalink / raw)
To: Darrick J. Wong
Cc: Christoph Hellwig, Kanchan Joshi, Martin K . Petersen,
Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
On Mon, Feb 03, 2025 at 02:23:27PM -0800, Darrick J. Wong wrote:
> > Allocate the bio from the bioset provided in iomap_read_folio_ops.
> > If no bioset is provided, fs_bio_set is used which is the standard
> > bioset for filesystems.
> >
> > Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
>
> I feel like I've seen this patch and the last one floating around for
> quite a while; would you and/or Goldwyn like to merge it for 6.15?
I think Goldwyn posted it once or twice and this is my first take on
it (I had a similar one in a local tree, but I don't think that ever
made it out to the public).
But until we actually grow a user I'd rather not have it queue up
as dead code. I'm not sure what the timeline of iomap in btrfs is,
but I'm sure 6.15 is the absolute earliest that the PI support for
XFS could make it.
* Re: [PATCH 7/7] xfs: implement block-metadata based data checksums
2025-02-03 22:20 ` Darrick J. Wong
@ 2025-02-04 5:00 ` Christoph Hellwig
2025-02-04 18:36 ` Darrick J. Wong
0 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-04 5:00 UTC (permalink / raw)
To: Darrick J. Wong
Cc: Christoph Hellwig, Kanchan Joshi, Martin K . Petersen,
Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
On Mon, Feb 03, 2025 at 02:20:31PM -0800, Darrick J. Wong wrote:
> On Mon, Feb 03, 2025 at 10:43:11AM +0100, Christoph Hellwig wrote:
> > This is a quick hack to demonstrate how data checksumming can be
> > implemented when it can be stored in the out of line metadata for each
> > logical block. It builds on top of the previous PI infrastructure
> > and instead of generating/verifying protection information it simply
> > generates and verifies a crc32c checksum and stores it in the non-PI
>
> PI can do crc32c now? I thought it could only do that old crc16 from
> like 15 years ago and crc64?
NVMe has a protection information format with a crc32c, but it's not
supported by Linux yet.
> If we try to throw crc32c at a device,
> won't it then reject the "incorrect" checksums? Or is there some other
> magic in here where it works and I'm just too out of date to know?
This patch implements XFS-level data checksums on devices that implement
non-PI metadata, that is, the device allows storing extra data with each
LBA but doesn't actually interpret or verify it in any way.
> The crc32c generation and validation looks decent though we're
> definitely going to want an inode flag so that we're not stuck with
> stable page writes.
Yeah, support the NOCOW flag, have an sb flag to enable the checksums,
maybe even a field about what checksum to use, yadda, yadda.
* Re: [PATCH 7/7] xfs: implement block-metadata based data checksums
2025-02-04 5:00 ` Christoph Hellwig
@ 2025-02-04 18:36 ` Darrick J. Wong
2025-02-06 6:05 ` Christoph Hellwig
0 siblings, 1 reply; 23+ messages in thread
From: Darrick J. Wong @ 2025-02-04 18:36 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, Martin K . Petersen, Johannes Thumshirn, Qu Wenruo,
Goldwyn Rodrigues, linux-block, linux-fsdevel, linux-xfs
On Tue, Feb 04, 2025 at 06:00:25AM +0100, Christoph Hellwig wrote:
> On Mon, Feb 03, 2025 at 02:20:31PM -0800, Darrick J. Wong wrote:
> > On Mon, Feb 03, 2025 at 10:43:11AM +0100, Christoph Hellwig wrote:
> > > This is a quick hack to demonstrate how data checksumming can be
> > > implemented when it can be stored in the out of line metadata for each
> > > logical block. It builds on top of the previous PI infrastructure
> > > and instead of generating/verifying protection information it simply
> > > generates and verifies a crc32c checksum and stores it in the non-PI
> >
> > PI can do crc32c now? I thought it could only do that old crc16 from
> > like 15 years ago and crc64?
>
> NVMe has a protection information format with a crc32c, but it's not
> supported by Linux yet.
Ah. Missed that!
> > If we try to throw crc32c at a device,
> > won't it then reject the "incorrect" checksums? Or is there some other
> > magic in here where it works and I'm just too out of date to know?
>
> This patch implements XFS-level data checksums on devices that implement
> non-PI metadata, that is, the device allows storing extra data with each
> LBA but doesn't actually interpret or verify it in any way.
Ohhhhh. So the ondisk metadata /would/ need to capture the checksum
type and which inodes are participating.
> > The crc32c generation and validation looks decent though we're
> > definitely going to want an inode flag so that we're not stuck with
> > stable page writes.
>
> > Yeah, support the NOCOW flag, have an sb flag to enable the checksums,
> > maybe even a field about what checksum to use, yadda, yadda.
Why do we need nocow? Won't the block contents and the PI data get
written in an untorn fashion?
--D
* Re: [PATCH 7/7] xfs: implement block-metadata based data checksums
2025-02-04 18:36 ` Darrick J. Wong
@ 2025-02-06 6:05 ` Christoph Hellwig
0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2025-02-06 6:05 UTC (permalink / raw)
To: Darrick J. Wong
Cc: Christoph Hellwig, Kanchan Joshi, Martin K . Petersen,
Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
On Tue, Feb 04, 2025 at 10:36:51AM -0800, Darrick J. Wong wrote:
> > > The crc32c generation and validation looks decent though we're
> > > definitely going to want an inode flag so that we're not stuck with
> > > stable page writes.
> >
> > Yeah, support the NOCOW flag, have an sb flag to enable the checksums,
> > maybe even a field about what checksum to use, yadda, yadda.
>
> Why do we need nocow? Won't the block contents and the PI data get
> written in an untorn fashion?
I mean to say NODATASUM, not NOCOW. Sorry for the confusion that
this caused.
* Re: [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset
2025-02-03 9:43 ` [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset Christoph Hellwig
2025-02-03 22:23 ` Darrick J. Wong
@ 2025-03-13 13:53 ` Matthew Wilcox
2025-03-14 16:53 ` Darrick J. Wong
2025-03-17 5:52 ` Christoph Hellwig
1 sibling, 2 replies; 23+ messages in thread
From: Matthew Wilcox @ 2025-03-13 13:53 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, Martin K . Petersen, Johannes Thumshirn, Qu Wenruo,
Goldwyn Rodrigues, linux-block, linux-fsdevel, linux-xfs
On Mon, Feb 03, 2025 at 10:43:07AM +0100, Christoph Hellwig wrote:
> Allocate the bio from the bioset provided in iomap_read_folio_ops.
> If no bioset is provided, fs_bio_set is used which is the standard
> bioset for filesystems.
It feels weird to have an 'ops' that contains a bioset rather than a
function pointer. Is there a better name we could be using? ctx seems
wrong because it's not a per-op struct.
> +++ b/include/linux/iomap.h
> @@ -311,6 +311,12 @@ struct iomap_read_folio_ops {
> */
> void (*submit_io)(struct inode *inode, struct bio *bio,
> loff_t file_offset);
> +
> + /*
> +	 * Optional: allows the filesystem to specify its own bio_set, so
> +	 * new bios can be allocated from the provided bio_set.
> + */
> + struct bio_set *bio_set;
> };
* Re: [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset
2025-03-13 13:53 ` Matthew Wilcox
@ 2025-03-14 16:53 ` Darrick J. Wong
2025-03-17 5:52 ` Christoph Hellwig
1 sibling, 0 replies; 23+ messages in thread
From: Darrick J. Wong @ 2025-03-14 16:53 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Christoph Hellwig, Kanchan Joshi, Martin K . Petersen,
Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
On Thu, Mar 13, 2025 at 01:53:59PM +0000, Matthew Wilcox wrote:
> On Mon, Feb 03, 2025 at 10:43:07AM +0100, Christoph Hellwig wrote:
> > Allocate the bio from the bioset provided in iomap_read_folio_ops.
> > If no bioset is provided, fs_bio_set is used which is the standard
> > bioset for filesystems.
>
> It feels weird to have an 'ops' that contains a bioset rather than a
> function pointer. Is there a better name we could be using? ctx seems
> wrong because it's not a per-op struct.
"profile" is the closest I can come up with, and that feels wrong to me.
There's at least some precedent in fs-land for ops structs that have
non-function pointer fields such as magic numbers, descriptive names,
or crc block offsets.
--D
>
> > +++ b/include/linux/iomap.h
> > @@ -311,6 +311,12 @@ struct iomap_read_folio_ops {
> > */
> > void (*submit_io)(struct inode *inode, struct bio *bio,
> > loff_t file_offset);
> > +
> > + /*
> > +	 * Optional: allows the filesystem to specify its own bio_set, so
> > +	 * new bios can be allocated from the provided bio_set.
> > + */
> > + struct bio_set *bio_set;
> > };
>
* Re: [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset
2025-03-13 13:53 ` Matthew Wilcox
2025-03-14 16:53 ` Darrick J. Wong
@ 2025-03-17 5:52 ` Christoph Hellwig
1 sibling, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2025-03-17 5:52 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Christoph Hellwig, Kanchan Joshi, Martin K . Petersen,
Johannes Thumshirn, Qu Wenruo, Goldwyn Rodrigues, linux-block,
linux-fsdevel, linux-xfs
On Thu, Mar 13, 2025 at 01:53:59PM +0000, Matthew Wilcox wrote:
> On Mon, Feb 03, 2025 at 10:43:07AM +0100, Christoph Hellwig wrote:
> > Allocate the bio from the bioset provided in iomap_read_folio_ops.
> > If no bioset is provided, fs_bio_set is used which is the standard
> > bioset for filesystems.
>
> It feels weird to have an 'ops' that contains a bioset rather than a
> function pointer. Is there a better name we could be using? ctx seems
> wrong because it's not a per-op struct.
As Darrick pointed out, ops structs commonly have non-method static
fields of some kind. After all, it still is mostly about ops; the
bio_set pointer just avoids having to add a special alloc indirection
that would all end up using the same code, just with a different
bio_set.
* Re: [PATCH 1/7] block: support integrity generation and verification from file systems
2025-02-03 9:43 ` [PATCH 1/7] block: support integrity generation and verification from file systems Christoph Hellwig
2025-02-03 19:47 ` Martin K. Petersen
2025-04-21 2:30 ` Anuj Gupta
1 sibling, 0 replies; 23+ messages in thread
From: Anuj gupta @ 2025-04-21 2:30 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kanchan Joshi, Martin K . Petersen, Johannes Thumshirn, Qu Wenruo,
Goldwyn Rodrigues, linux-block, linux-fsdevel, linux-xfs
> +EXPORT_SYMBOL_GPL(blk_integrity_generate);
Since this is now exported, it should have a kernel-doc style comment.
Otherwise looks good to me:
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
end of thread, other threads: [~2025-04-21 2:31 UTC | newest]
Thread overview: 23+ messages
2025-02-03 9:43 PI and data checksumming for XFS Christoph Hellwig
2025-02-03 9:43 ` [PATCH 1/7] block: support integrity generation and verification from file systems Christoph Hellwig
2025-02-03 19:47 ` Martin K. Petersen
2025-04-21 2:30 ` Anuj Gupta
2025-02-03 9:43 ` [PATCH 2/7] iomap: introduce iomap_read_folio_ops Christoph Hellwig
2025-02-03 9:43 ` [PATCH 3/7] iomap: add bioset in iomap_read_folio_ops for filesystems to use own bioset Christoph Hellwig
2025-02-03 22:23 ` Darrick J. Wong
2025-02-04 4:58 ` Christoph Hellwig
2025-03-13 13:53 ` Matthew Wilcox
2025-03-14 16:53 ` Darrick J. Wong
2025-03-17 5:52 ` Christoph Hellwig
2025-02-03 9:43 ` [PATCH 4/7] iomap: support ioends for reads Christoph Hellwig
2025-02-03 22:24 ` Darrick J. Wong
2025-02-03 9:43 ` [PATCH 5/7] iomap: limit buffered I/O size to 128M Christoph Hellwig
2025-02-03 22:22 ` Darrick J. Wong
2025-02-03 9:43 ` [PATCH 6/7] xfs: support T10 protection information Christoph Hellwig
2025-02-03 22:21 ` Darrick J. Wong
2025-02-03 9:43 ` [PATCH 7/7] xfs: implement block-metadata based data checksums Christoph Hellwig
2025-02-03 22:20 ` Darrick J. Wong
2025-02-04 5:00 ` Christoph Hellwig
2025-02-04 18:36 ` Darrick J. Wong
2025-02-06 6:05 ` Christoph Hellwig
2025-02-03 19:51 ` PI and data checksumming for XFS Martin K. Petersen