public inbox for linux-block@vger.kernel.org
* bio allocation cleanups
@ 2026-03-16 16:11 Christoph Hellwig
  2026-03-16 16:11 ` [PATCH 1/3] block: mark bvec_{alloc,free} static Christoph Hellwig
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Christoph Hellwig @ 2026-03-16 16:11 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block

Hi all,

I recently looked into a batch bio allocator for a project I'm working on,
and noticed how convoluted the bio allocator has become.

This series unwinds it so that the fast-path slab allocation is
better separated from the mempool fallback, which reduces the code
complexity a lot, and avoids indirect calls for common cases.

Note that we could also avoid the indirect calls for the free path by
using mempool_free_bulk.  Should I give this a spin or wait for a
workload where we can actually see a difference?
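For illustration, the fast-path/fallback shape the series moves toward can be sketched in plain userspace C. This is a toy stand-in, not kernel API: malloc() plays the role of kmem_cache_alloc() and reserve_alloc() the role of mempool_alloc().

```c
#include <stdlib.h>

/*
 * Userspace sketch of the allocation pattern: try a cheap fast-path
 * allocation first, and only dip into a guaranteed reserve when it
 * fails.  reserve_alloc() is a toy stand-in for mempool_alloc(), and
 * malloc() stands in for kmem_cache_alloc(); none of this is kernel API.
 */
static char reserve[64];
static int reserve_used;

static void *reserve_alloc(void)
{
	if (reserve_used)
		return NULL;		/* reserve exhausted */
	reserve_used = 1;
	return reserve;
}

static void *alloc_with_fallback(size_t size, int may_block)
{
	void *p = malloc(size);		/* fast path, may fail */

	if (p || !may_block)
		return p;		/* non-blocking callers get NULL */
	return reserve_alloc();		/* slow path: guaranteed reserve */
}
```

The point of the restructuring below is to make this two-step shape visible in bio_alloc_bioset itself instead of hiding it behind helpers.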

Diffstat:
 block/bio.c         |  199 +++++++++++++++++++++-------------------------------
 block/blk.h         |    5 -
 include/linux/bio.h |    3 
 3 files changed, 82 insertions(+), 125 deletions(-)

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/3] block: mark bvec_{alloc,free} static
  2026-03-16 16:11 bio allocation cleanups Christoph Hellwig
@ 2026-03-16 16:11 ` Christoph Hellwig
  2026-03-17 13:38   ` Johannes Thumshirn
  2026-03-18  0:26   ` Chaitanya Kulkarni
  2026-03-16 16:11 ` [PATCH 2/3] block: split bio_alloc_bioset more clearly into a fast and slowpath Christoph Hellwig
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 11+ messages in thread
From: Christoph Hellwig @ 2026-03-16 16:11 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block

Only used in bio.c these days.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c | 7 +++++--
 block/blk.h | 5 -----
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index bf1f3670e85a..6131ccb7284a 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -34,6 +34,8 @@ struct bio_alloc_cache {
 	unsigned int		nr_irq;
 };
 
+#define BIO_INLINE_VECS 4
+
 static struct biovec_slab {
 	int nr_vecs;
 	char *name;
@@ -159,7 +161,8 @@ static void bio_put_slab(struct bio_set *bs)
 	mutex_unlock(&bio_slab_lock);
 }
 
-void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned short nr_vecs)
+static void bvec_free(struct mempool *pool, struct bio_vec *bv,
+		      unsigned short nr_vecs)
 {
 	BUG_ON(nr_vecs > BIO_MAX_VECS);
 
@@ -179,7 +182,7 @@ static inline gfp_t bvec_alloc_gfp(gfp_t gfp)
 		__GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN;
 }
 
-struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
+static struct bio_vec *bvec_alloc(struct mempool *pool, unsigned short *nr_vecs,
 		gfp_t gfp_mask)
 {
 	struct biovec_slab *bvs = biovec_slab(*nr_vecs);
diff --git a/block/blk.h b/block/blk.h
index c5b2115b9ea4..103cb1d0b9cb 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -108,11 +108,6 @@ static inline void blk_wait_io(struct completion *done)
 struct block_device *blkdev_get_no_open(dev_t dev, bool autoload);
 void blkdev_put_no_open(struct block_device *bdev);
 
-#define BIO_INLINE_VECS 4
-struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
-		gfp_t gfp_mask);
-void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned short nr_vecs);
-
 bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
 		struct page *page, unsigned len, unsigned offset);
 
-- 
2.47.3

* [PATCH 2/3] block: split bio_alloc_bioset more clearly into a fast and slowpath
  2026-03-16 16:11 bio allocation cleanups Christoph Hellwig
  2026-03-16 16:11 ` [PATCH 1/3] block: mark bvec_{alloc,free} static Christoph Hellwig
@ 2026-03-16 16:11 ` Christoph Hellwig
  2026-03-18  0:27   ` Chaitanya Kulkarni
  2026-03-16 16:11 ` [PATCH 3/3] block: remove bvec_free Christoph Hellwig
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2026-03-16 16:11 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block

bio_alloc_bioset tries non-waiting slab allocations first for the bio and
bvec array, but does so in a somewhat convoluted way.

Restructure the function so that it first open codes these slab
allocations, and then falls back to the mempools with the original
gfp mask.
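The non-waiting first attempt works purely by masking gfp flags, as the patch's try_alloc_gfp() helper does. A userspace sketch of that masking follows; the flag values here are illustrative stand-ins only, not the kernel's actual definitions (those live in include/linux/gfp_types.h and vary between versions):

```c
typedef unsigned int gfp_t;

/* Illustrative flag values only; NOT the kernel's actual definitions. */
#define __GFP_DIRECT_RECLAIM	0x000400u	/* may block waiting for reclaim */
#define __GFP_IO		0x000040u	/* may start physical I/O */
#define __GFP_NOMEMALLOC	0x010000u	/* don't touch emergency reserves */
#define __GFP_NORETRY		0x001000u	/* fail fast, no retry loop */
#define __GFP_NOWARN		0x000200u	/* no warning on failure */

/*
 * Mirrors the patch's try_alloc_gfp(): strip the flags that allow
 * blocking and starting I/O, and silence failure warnings, because a
 * failed first attempt simply falls through to the mempool.
 */
static gfp_t try_alloc_gfp(gfp_t gfp)
{
	return (gfp & ~(__GFP_DIRECT_RECLAIM | __GFP_IO)) |
		__GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN;
}
```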

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c         | 180 ++++++++++++++++++--------------------------
 include/linux/bio.h |   3 +-
 2 files changed, 74 insertions(+), 109 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 6131ccb7284a..5982bf069cef 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -176,43 +176,12 @@ static void bvec_free(struct mempool *pool, struct bio_vec *bv,
  * Make the first allocation restricted and don't dump info on allocation
  * failures, since we'll fall back to the mempool in case of failure.
  */
-static inline gfp_t bvec_alloc_gfp(gfp_t gfp)
+static inline gfp_t try_alloc_gfp(gfp_t gfp)
 {
 	return (gfp & ~(__GFP_DIRECT_RECLAIM | __GFP_IO)) |
 		__GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN;
 }
 
-static struct bio_vec *bvec_alloc(struct mempool *pool, unsigned short *nr_vecs,
-		gfp_t gfp_mask)
-{
-	struct biovec_slab *bvs = biovec_slab(*nr_vecs);
-
-	if (WARN_ON_ONCE(!bvs))
-		return NULL;
-
-	/*
-	 * Upgrade the nr_vecs request to take full advantage of the allocation.
-	 * We also rely on this in the bvec_free path.
-	 */
-	*nr_vecs = bvs->nr_vecs;
-
-	/*
-	 * Try a slab allocation first for all smaller allocations.  If that
-	 * fails and __GFP_DIRECT_RECLAIM is set retry with the mempool.
-	 * The mempool is sized to handle up to BIO_MAX_VECS entries.
-	 */
-	if (*nr_vecs < BIO_MAX_VECS) {
-		struct bio_vec *bvl;
-
-		bvl = kmem_cache_alloc(bvs->slab, bvec_alloc_gfp(gfp_mask));
-		if (likely(bvl) || !(gfp_mask & __GFP_DIRECT_RECLAIM))
-			return bvl;
-		*nr_vecs = BIO_MAX_VECS;
-	}
-
-	return mempool_alloc(pool, gfp_mask);
-}
-
 void bio_uninit(struct bio *bio)
 {
 #ifdef CONFIG_BLK_CGROUP
@@ -433,13 +402,31 @@ static void bio_alloc_rescue(struct work_struct *work)
 	}
 }
 
+/*
+ * submit_bio_noacct() converts recursion to iteration; this means if we're
+ * running beneath it, any bios we allocate and submit will not be submitted
+ * (and thus freed) until after we return.
+ *
+ * This exposes us to a potential deadlock if we allocate multiple bios from the
+ * same bio_set while running underneath submit_bio_noacct().  If we were to
+ * allocate multiple bios (say a stacking block driver that was splitting bios),
+ * we would deadlock if we exhausted the mempool's reserve.
+ *
+ * We solve this, and guarantee forward progress by punting the bios on
+ * current->bio_list to a per bio_set rescuer workqueue before blocking to wait
+ * for elements being returned to the mempool.
+ */
 static void punt_bios_to_rescuer(struct bio_set *bs)
 {
 	struct bio_list punt, nopunt;
 	struct bio *bio;
 
-	if (WARN_ON_ONCE(!bs->rescue_workqueue))
+	if (!current->bio_list || !bs->rescue_workqueue)
 		return;
+	if (bio_list_empty(&current->bio_list[0]) &&
+	    bio_list_empty(&current->bio_list[1]))
+		return;
+
 	/*
 	 * In order to guarantee forward progress we must punt only bios that
 	 * were allocated from this bio_set; otherwise, if there was a bio on
@@ -486,9 +473,7 @@ static void bio_alloc_irq_cache_splice(struct bio_alloc_cache *cache)
 	local_irq_restore(flags);
 }
 
-static struct bio *bio_alloc_percpu_cache(struct block_device *bdev,
-		unsigned short nr_vecs, blk_opf_t opf, gfp_t gfp,
-		struct bio_set *bs)
+static struct bio *bio_alloc_percpu_cache(struct bio_set *bs)
 {
 	struct bio_alloc_cache *cache;
 	struct bio *bio;
@@ -506,11 +491,6 @@ static struct bio *bio_alloc_percpu_cache(struct block_device *bdev,
 	cache->free_list = bio->bi_next;
 	cache->nr--;
 	put_cpu();
-
-	if (nr_vecs)
-		bio_init_inline(bio, bdev, nr_vecs, opf);
-	else
-		bio_init(bio, bdev, NULL, nr_vecs, opf);
 	bio->bi_pool = bs;
 	return bio;
 }
@@ -520,7 +500,7 @@ static struct bio *bio_alloc_percpu_cache(struct block_device *bdev,
  * @bdev:	block device to allocate the bio for (can be %NULL)
  * @nr_vecs:	number of bvecs to pre-allocate
  * @opf:	operation and flags for bio
- * @gfp_mask:   the GFP_* mask given to the slab allocator
+ * @gfp:	the GFP_* mask given to the slab allocator
  * @bs:		the bio_set to allocate from.
  *
  * Allocate a bio from the mempools in @bs.
@@ -550,91 +530,77 @@ static struct bio *bio_alloc_percpu_cache(struct block_device *bdev,
  * Returns: Pointer to new bio on success, NULL on failure.
  */
 struct bio *bio_alloc_bioset(struct block_device *bdev, unsigned short nr_vecs,
-			     blk_opf_t opf, gfp_t gfp_mask,
-			     struct bio_set *bs)
+			     blk_opf_t opf, gfp_t gfp, struct bio_set *bs)
 {
-	gfp_t saved_gfp = gfp_mask;
-	struct bio *bio;
+	struct bio_vec *bvecs = NULL;
+	struct bio *bio = NULL;
+	gfp_t saved_gfp = gfp;
 	void *p;
 
 	/* should not use nobvec bioset for nr_vecs > 0 */
 	if (WARN_ON_ONCE(!mempool_initialized(&bs->bvec_pool) && nr_vecs > 0))
 		return NULL;
 
+	gfp = try_alloc_gfp(gfp);
 	if (bs->cache && nr_vecs <= BIO_INLINE_VECS) {
-		opf |= REQ_ALLOC_CACHE;
-		bio = bio_alloc_percpu_cache(bdev, nr_vecs, opf,
-					     gfp_mask, bs);
-		if (bio)
-			return bio;
 		/*
-		 * No cached bio available, bio returned below marked with
-		 * REQ_ALLOC_CACHE to participate in per-cpu alloc cache.
+		 * Set REQ_ALLOC_CACHE even if no cached bio is available to
+		 * return the allocated bio to the percpu cache when done.
 		 */
-	} else
+		opf |= REQ_ALLOC_CACHE;
+		bio = bio_alloc_percpu_cache(bs);
+	} else {
 		opf &= ~REQ_ALLOC_CACHE;
-
-	/*
-	 * submit_bio_noacct() converts recursion to iteration; this means if
-	 * we're running beneath it, any bios we allocate and submit will not be
-	 * submitted (and thus freed) until after we return.
-	 *
-	 * This exposes us to a potential deadlock if we allocate multiple bios
-	 * from the same bio_set() while running underneath submit_bio_noacct().
-	 * If we were to allocate multiple bios (say a stacking block driver
-	 * that was splitting bios), we would deadlock if we exhausted the
-	 * mempool's reserve.
-	 *
-	 * We solve this, and guarantee forward progress, with a rescuer
-	 * workqueue per bio_set. If we go to allocate and there are bios on
-	 * current->bio_list, we first try the allocation without
-	 * __GFP_DIRECT_RECLAIM; if that fails, we punt those bios we would be
-	 * blocking to the rescuer workqueue before we retry with the original
-	 * gfp_flags.
-	 */
-	if (current->bio_list &&
-	    (!bio_list_empty(&current->bio_list[0]) ||
-	     !bio_list_empty(&current->bio_list[1])) &&
-	    bs->rescue_workqueue)
-		gfp_mask &= ~__GFP_DIRECT_RECLAIM;
-
-	p = mempool_alloc(&bs->bio_pool, gfp_mask);
-	if (!p && gfp_mask != saved_gfp) {
-		punt_bios_to_rescuer(bs);
-		gfp_mask = saved_gfp;
-		p = mempool_alloc(&bs->bio_pool, gfp_mask);
+		p = kmem_cache_alloc(bs->bio_slab, gfp);
+		if (p)
+			bio = p + bs->front_pad;
 	}
-	if (unlikely(!p))
-		return NULL;
-	if (!mempool_is_saturated(&bs->bio_pool))
-		opf &= ~REQ_ALLOC_CACHE;
 
-	bio = p + bs->front_pad;
-	if (nr_vecs > BIO_INLINE_VECS) {
-		struct bio_vec *bvl = NULL;
+	if (bio && nr_vecs > BIO_INLINE_VECS) {
+		struct biovec_slab *bvs = biovec_slab(nr_vecs);
 
-		bvl = bvec_alloc(&bs->bvec_pool, &nr_vecs, gfp_mask);
-		if (!bvl && gfp_mask != saved_gfp) {
-			punt_bios_to_rescuer(bs);
-			gfp_mask = saved_gfp;
-			bvl = bvec_alloc(&bs->bvec_pool, &nr_vecs, gfp_mask);
+		/*
+		 * Upgrade nr_vecs to take full advantage of the allocation.
+		 * We also rely on this in bvec_free().
+		 */
+		nr_vecs = bvs->nr_vecs;
+		bvecs = kmem_cache_alloc(bvs->slab, gfp);
+		if (unlikely(!bvecs)) {
+			kmem_cache_free(bs->bio_slab, p);
+			bio = NULL;
 		}
-		if (unlikely(!bvl))
-			goto err_free;
+	}
 
-		bio_init(bio, bdev, bvl, nr_vecs, opf);
-	} else if (nr_vecs) {
-		bio_init_inline(bio, bdev, BIO_INLINE_VECS, opf);
-	} else {
-		bio_init(bio, bdev, NULL, 0, opf);
+	if (unlikely(!bio)) {
+		/*
+		 * Give up if we are not allowed to sleep, as non-blocking
+		 * mempool allocations just go back to the slab allocator.
+		 */
+		if (!(saved_gfp & __GFP_DIRECT_RECLAIM))
+			return NULL;
+
+		punt_bios_to_rescuer(bs);
+
+		/*
+		 * Don't rob the mempools by returning to the per-CPU cache if
+		 * we're tight on memory.
+		 */
+		opf &= ~REQ_ALLOC_CACHE;
+
+		p = mempool_alloc(&bs->bio_pool, gfp);
+		bio = p + bs->front_pad;
+		if (nr_vecs > BIO_INLINE_VECS) {
+			nr_vecs = BIO_MAX_VECS;
+			bvecs = mempool_alloc(&bs->bvec_pool, gfp);
+		}
 	}
 
+	if (nr_vecs && nr_vecs <= BIO_INLINE_VECS)
+		bio_init_inline(bio, bdev, nr_vecs, opf);
+	else
+		bio_init(bio, bdev, bvecs, nr_vecs, opf);
 	bio->bi_pool = bs;
 	return bio;
-
-err_free:
-	mempool_free(p, &bs->bio_pool);
-	return NULL;
 }
 EXPORT_SYMBOL(bio_alloc_bioset);
 
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 9693a0d6fefe..984844d2870b 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -350,8 +350,7 @@ extern void bioset_exit(struct bio_set *);
 extern int biovec_init_pool(mempool_t *pool, int pool_entries);
 
 struct bio *bio_alloc_bioset(struct block_device *bdev, unsigned short nr_vecs,
-			     blk_opf_t opf, gfp_t gfp_mask,
-			     struct bio_set *bs);
+			     blk_opf_t opf, gfp_t gfp, struct bio_set *bs);
 struct bio *bio_kmalloc(unsigned short nr_vecs, gfp_t gfp_mask);
 extern void bio_put(struct bio *);
 
-- 
2.47.3

* [PATCH 3/3] block: remove bvec_free
  2026-03-16 16:11 bio allocation cleanups Christoph Hellwig
  2026-03-16 16:11 ` [PATCH 1/3] block: mark bvec_{alloc,free} static Christoph Hellwig
  2026-03-16 16:11 ` [PATCH 2/3] block: split bio_alloc_bioset more clearly into a fast and slowpath Christoph Hellwig
@ 2026-03-16 16:11 ` Christoph Hellwig
  2026-03-17 13:40   ` Johannes Thumshirn
  2026-03-18  0:27   ` Chaitanya Kulkarni
  2026-03-18  1:21 ` bio allocation cleanups Martin K. Petersen
  2026-03-18  1:27 ` Jens Axboe
  4 siblings, 2 replies; 11+ messages in thread
From: Christoph Hellwig @ 2026-03-16 16:11 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block

bvec_free is only called by bio_free, so inline it there.
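The freeing decision being inlined dispatches on the vector count. A small userspace sketch of that three-way dispatch (the constant values are illustrative stand-ins; BIO_MAX_VECS is defined elsewhere in the kernel):

```c
#define BIO_INLINE_VECS	4	/* matches the patch series */
#define BIO_MAX_VECS	256	/* illustrative stand-in for the kernel value */

enum bvec_free_path { FREE_NONE, FREE_SLAB, FREE_MEMPOOL };

/*
 * Mirrors the dispatch inlined into bio_free(): inline vectors live in
 * the bio itself and need no separate free, mid-sized arrays came from
 * a per-size slab cache, and only full-size arrays came from the mempool.
 */
static enum bvec_free_path bvec_free_path(unsigned short nr_vecs)
{
	if (nr_vecs == BIO_MAX_VECS)
		return FREE_MEMPOOL;
	if (nr_vecs > BIO_INLINE_VECS)
		return FREE_SLAB;
	return FREE_NONE;
}
```

This relies on the allocation path having rounded nr_vecs up to the slab's size class, which is why the comment in patch 2 notes that bio_free() depends on the upgrade.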

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 5982bf069cef..b58bce6b5fea 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -161,17 +161,6 @@ static void bio_put_slab(struct bio_set *bs)
 	mutex_unlock(&bio_slab_lock);
 }
 
-static void bvec_free(struct mempool *pool, struct bio_vec *bv,
-		      unsigned short nr_vecs)
-{
-	BUG_ON(nr_vecs > BIO_MAX_VECS);
-
-	if (nr_vecs == BIO_MAX_VECS)
-		mempool_free(bv, pool);
-	else if (nr_vecs > BIO_INLINE_VECS)
-		kmem_cache_free(biovec_slab(nr_vecs)->slab, bv);
-}
-
 /*
  * Make the first allocation restricted and don't dump info on allocation
  * failures, since we'll fall back to the mempool in case of failure.
@@ -203,9 +192,14 @@ static void bio_free(struct bio *bio)
 	void *p = bio;
 
 	WARN_ON_ONCE(!bs);
+	WARN_ON_ONCE(bio->bi_max_vecs > BIO_MAX_VECS);
 
 	bio_uninit(bio);
-	bvec_free(&bs->bvec_pool, bio->bi_io_vec, bio->bi_max_vecs);
+	if (bio->bi_max_vecs == BIO_MAX_VECS)
+		mempool_free(bio->bi_io_vec, &bs->bvec_pool);
+	else if (bio->bi_max_vecs > BIO_INLINE_VECS)
+		kmem_cache_free(biovec_slab(bio->bi_max_vecs)->slab,
+				bio->bi_io_vec);
 	mempool_free(p - bs->front_pad, &bs->bio_pool);
 }
 
@@ -561,7 +555,7 @@ struct bio *bio_alloc_bioset(struct block_device *bdev, unsigned short nr_vecs,
 
 		/*
 		 * Upgrade nr_vecs to take full advantage of the allocation.
-		 * We also rely on this in bvec_free().
+		 * We also rely on this in bio_free().
 		 */
 		nr_vecs = bvs->nr_vecs;
 		bvecs = kmem_cache_alloc(bvs->slab, gfp);
-- 
2.47.3

* Re: [PATCH 1/3] block: mark bvec_{alloc,free} static
  2026-03-16 16:11 ` [PATCH 1/3] block: mark bvec_{alloc,free} static Christoph Hellwig
@ 2026-03-17 13:38   ` Johannes Thumshirn
  2026-03-18  0:26   ` Chaitanya Kulkarni
  1 sibling, 0 replies; 11+ messages in thread
From: Johannes Thumshirn @ 2026-03-17 13:38 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: linux-block@vger.kernel.org

Looks good, and I think this can go in on its own.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

* Re: [PATCH 3/3] block: remove bvec_free
  2026-03-16 16:11 ` [PATCH 3/3] block: remove bvec_free Christoph Hellwig
@ 2026-03-17 13:40   ` Johannes Thumshirn
  2026-03-18  0:27   ` Chaitanya Kulkarni
  1 sibling, 0 replies; 11+ messages in thread
From: Johannes Thumshirn @ 2026-03-17 13:40 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: linux-block@vger.kernel.org

Same as with 1/3, I think this can go in on its own.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

* Re: [PATCH 1/3] block: mark bvec_{alloc,free} static
  2026-03-16 16:11 ` [PATCH 1/3] block: mark bvec_{alloc,free} static Christoph Hellwig
  2026-03-17 13:38   ` Johannes Thumshirn
@ 2026-03-18  0:26   ` Chaitanya Kulkarni
  1 sibling, 0 replies; 11+ messages in thread
From: Chaitanya Kulkarni @ 2026-03-18  0:26 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: linux-block@vger.kernel.org

On 3/16/26 09:11, Christoph Hellwig wrote:
> Only used in bio.c these days.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>


Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> -ck

* Re: [PATCH 2/3] block: split bio_alloc_bioset more clearly into a fast and slowpath
  2026-03-16 16:11 ` [PATCH 2/3] block: split bio_alloc_bioset more clearly into a fast and slowpath Christoph Hellwig
@ 2026-03-18  0:27   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 11+ messages in thread
From: Chaitanya Kulkarni @ 2026-03-18  0:27 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: linux-block@vger.kernel.org

On 3/16/26 09:11, Christoph Hellwig wrote:
> bio_alloc_bioset tries non-waiting slab allocations first for the bio and
> bvec array, but does so in a somewhat convoluted way.
>
> Restructure the function so that it first open codes these slab
> allocations, and then falls back to the mempools with the original
> gfp mask.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> -ck

* Re: [PATCH 3/3] block: remove bvec_free
  2026-03-16 16:11 ` [PATCH 3/3] block: remove bvec_free Christoph Hellwig
  2026-03-17 13:40   ` Johannes Thumshirn
@ 2026-03-18  0:27   ` Chaitanya Kulkarni
  1 sibling, 0 replies; 11+ messages in thread
From: Chaitanya Kulkarni @ 2026-03-18  0:27 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: linux-block@vger.kernel.org

On 3/16/26 09:11, Christoph Hellwig wrote:
> bvec_free is only called by bio_free, so inline it there.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> -ck

* Re: bio allocation cleanups
  2026-03-16 16:11 bio allocation cleanups Christoph Hellwig
                   ` (2 preceding siblings ...)
  2026-03-16 16:11 ` [PATCH 3/3] block: remove bvec_free Christoph Hellwig
@ 2026-03-18  1:21 ` Martin K. Petersen
  2026-03-18  1:27 ` Jens Axboe
  4 siblings, 0 replies; 11+ messages in thread
From: Martin K. Petersen @ 2026-03-18  1:21 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block


Christoph,

> I recently looked into a batch bio allocator for a project I'm working
> on, and noticed how convoluted the bio allocator has become.
>
> This series unwinds it so that the fast-path slab allocation is
> better separated from the mempool fallback, which reduces the code
> complexity a lot, and avoids indirect calls for common cases.

LGTM.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen

* Re: bio allocation cleanups
  2026-03-16 16:11 bio allocation cleanups Christoph Hellwig
                   ` (3 preceding siblings ...)
  2026-03-18  1:21 ` bio allocation cleanups Martin K. Petersen
@ 2026-03-18  1:27 ` Jens Axboe
  4 siblings, 0 replies; 11+ messages in thread
From: Jens Axboe @ 2026-03-18  1:27 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block


On Mon, 16 Mar 2026 17:11:28 +0100, Christoph Hellwig wrote:
> I recently looked into a batch bio allocator for a project I'm working on,
> and noticed how convoluted the bio allocator has become.
> 
> This series unwinds it so that the fast-path slab allocation is
> better separated from the mempool fallback, which reduces the code
> complexity a lot, and avoids indirect calls for common cases.
> 
> [...]

Applied, thanks!

[1/3] block: mark bvec_{alloc,free} static
      commit: fed406f3c1c2feb97adcbc557218713c5f7ec6a7
[2/3] block: split bio_alloc_bioset more clearly into a fast and slowpath
      commit: b520c4eef83dd406591431f936de0908c3ed7fb9
[3/3] block: remove bvec_free
      commit: e80fd7a08940093aad5ea247a42046b57709a7bd

Best regards,
-- 
Jens Axboe
