* drop some broken zone append support code
@ 2024-10-30 5:18 Christoph Hellwig
2024-10-30 5:18 ` [PATCH 1/2] block: remove zone append special casing from the direct I/O path Christoph Hellwig
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Christoph Hellwig @ 2024-10-30 5:18 UTC (permalink / raw)
To: Jens Axboe; +Cc: Chaitanya Kulkarni, Kundan Kumar, linux-block
Hi Jens,
when porting my zoned XFS code I ran into a regression in
__bio_iov_iter_get_pages in 6.12, which isn't all that surprising given that
this path isn't used upstream. After spending some time trying to fix it
I gave up and ported my code to the scheme used in btrfs, where the file
system splits bios to the hardware boundaries, which more closely mirrors
what we do for the "normal" bio path.
Either way we should not carry dead code, so patch 1 removes that.
Patch 2 also removes our other zone append helper, because for the same
reason no one but semi-passthrough interfaces like nvmet should use it,
and those can simply use bio_add_pc_page.
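To make the scheme a bit more concrete, here is a rough sketch of the kind
of splitting such a submission handler would do.  This is for illustration
only and not taken from the patches or from the zoned XFS code: the myfs_*
names are invented, split alignment, error handling and the completion
accounting a real file system needs across the splits are all omitted, and
a zoned block device is assumed.

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Each append reports the sector it landed on in bi_sector; a real file
 * system would record that in its metadata here.
 */
static void myfs_zone_append_end_io(struct bio *bio)
{
	bio_put(bio);
}

static void myfs_submit_write(struct bio *bio, struct bio_set *bs)
{
	unsigned int max_sectors = bdev_max_zone_append_sectors(bio->bi_bdev);

	while (bio_sectors(bio) > max_sectors) {
		struct bio *split;

		/* split while this still is a plain write */
		split = bio_split(bio, max_sectors, GFP_NOFS, bs);
		split->bi_opf = (split->bi_opf & ~REQ_OP_MASK) |
				REQ_OP_ZONE_APPEND;
		split->bi_end_io = myfs_zone_append_end_io;
		submit_bio(split);
	}

	bio->bi_opf = (bio->bi_opf & ~REQ_OP_MASK) | REQ_OP_ZONE_APPEND;
	bio->bi_end_io = myfs_zone_append_end_io;
	submit_bio(bio);
}

The important bit is that each chunk is only converted to a zone append at
the point it is actually submitted: the split happens while the bio is
still a plain write, because zone append bios themselves cannot be split.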
* [PATCH 1/2] block: remove zone append special casing from the direct I/O path
2024-10-30 5:18 drop some broken zone append support code Christoph Hellwig
@ 2024-10-30 5:18 ` Christoph Hellwig
2024-10-30 18:32 ` Chaitanya Kulkarni
2024-10-31 16:54 ` Jens Axboe
2024-10-30 5:18 ` [PATCH 2/2] block: remove bio_add_zone_append_page Christoph Hellwig
2024-10-30 18:47 ` drop some broken zone append support code Johannes Thumshirn
2 siblings, 2 replies; 10+ messages in thread
From: Christoph Hellwig @ 2024-10-30 5:18 UTC (permalink / raw)
To: Jens Axboe; +Cc: Chaitanya Kulkarni, Kundan Kumar, linux-block
This code is unused, and all future zoned file systems should follow
the btrfs lead of splitting the bios themselves to the zoned limits
in the I/O submission handler, because if they didn't, they would be
hit by commit ed9832bc08db ("block: introduce folio awareness and add
a bigger size from folio"), which breaks this code when the zone append
limit (that is usually the max_hw_sectors limit) is smaller than the
largest possible folio size.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/bio.c | 34 ++--------------------------------
1 file changed, 2 insertions(+), 32 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index ac4d77c88932..6a60d62a529d 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1206,21 +1206,12 @@ EXPORT_SYMBOL_GPL(__bio_release_pages);
void bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)
{
- size_t size = iov_iter_count(iter);
-
WARN_ON_ONCE(bio->bi_max_vecs);
- if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
- struct request_queue *q = bdev_get_queue(bio->bi_bdev);
- size_t max_sectors = queue_max_zone_append_sectors(q);
-
- size = min(size, max_sectors << SECTOR_SHIFT);
- }
-
bio->bi_vcnt = iter->nr_segs;
bio->bi_io_vec = (struct bio_vec *)iter->bvec;
bio->bi_iter.bi_bvec_done = iter->iov_offset;
- bio->bi_iter.bi_size = size;
+ bio->bi_iter.bi_size = iov_iter_count(iter);
bio_set_flag(bio, BIO_CLONED);
}
@@ -1245,20 +1236,6 @@ static int bio_iov_add_folio(struct bio *bio, struct folio *folio, size_t len,
return 0;
}
-static int bio_iov_add_zone_append_folio(struct bio *bio, struct folio *folio,
- size_t len, size_t offset)
-{
- struct request_queue *q = bdev_get_queue(bio->bi_bdev);
- bool same_page = false;
-
- if (bio_add_hw_folio(q, bio, folio, len, offset,
- queue_max_zone_append_sectors(q), &same_page) != len)
- return -EINVAL;
- if (same_page && bio_flagged(bio, BIO_PAGE_PINNED))
- unpin_user_folio(folio, 1);
- return 0;
-}
-
static unsigned int get_contig_folio_len(unsigned int *num_pages,
struct page **pages, unsigned int i,
struct folio *folio, size_t left,
@@ -1365,14 +1342,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
len = get_contig_folio_len(&num_pages, pages, i,
folio, left, offset);
- if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
- ret = bio_iov_add_zone_append_folio(bio, folio, len,
- folio_offset);
- if (ret)
- break;
- } else
- bio_iov_add_folio(bio, folio, len, folio_offset);
-
+ bio_iov_add_folio(bio, folio, len, folio_offset);
offset = 0;
}
--
2.45.2
* [PATCH 2/2] block: remove bio_add_zone_append_page
2024-10-30 5:18 drop some broken zone append support code Christoph Hellwig
2024-10-30 5:18 ` [PATCH 1/2] block: remove zone append special casing from the direct I/O path Christoph Hellwig
@ 2024-10-30 5:18 ` Christoph Hellwig
2024-10-30 7:30 ` Chaitanya Kulkarni
2024-10-30 18:47 ` drop some broken zone append support code Johannes Thumshirn
2 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2024-10-30 5:18 UTC (permalink / raw)
To: Jens Axboe; +Cc: Chaitanya Kulkarni, Kundan Kumar, linux-block
This is only used by the nvmet zns passthrough code, which can trivially
just use bio_add_pc_page and do the sanity check for the max zone append
limit itself.
All future zoned file systems should follow the btrfs lead and let the
upper layers fill up bios unlimited by hardware constraints and split
them to the limits in the I/O submission handler.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/bio.c | 33 ---------------------------------
drivers/nvme/target/zns.c | 21 +++++++++++++--------
include/linux/bio.h | 2 --
3 files changed, 13 insertions(+), 43 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 6a60d62a529d..daceb0a5c1d7 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1064,39 +1064,6 @@ int bio_add_pc_page(struct request_queue *q, struct bio *bio,
}
EXPORT_SYMBOL(bio_add_pc_page);
-/**
- * bio_add_zone_append_page - attempt to add page to zone-append bio
- * @bio: destination bio
- * @page: page to add
- * @len: vec entry length
- * @offset: vec entry offset
- *
- * Attempt to add a page to the bio_vec maplist of a bio that will be submitted
- * for a zone-append request. This can fail for a number of reasons, such as the
- * bio being full or the target block device is not a zoned block device or
- * other limitations of the target block device. The target block device must
- * allow bio's up to PAGE_SIZE, so it is always possible to add a single page
- * to an empty bio.
- *
- * Returns: number of bytes added to the bio, or 0 in case of a failure.
- */
-int bio_add_zone_append_page(struct bio *bio, struct page *page,
- unsigned int len, unsigned int offset)
-{
- struct request_queue *q = bdev_get_queue(bio->bi_bdev);
- bool same_page = false;
-
- if (WARN_ON_ONCE(bio_op(bio) != REQ_OP_ZONE_APPEND))
- return 0;
-
- if (WARN_ON_ONCE(!bdev_is_zoned(bio->bi_bdev)))
- return 0;
-
- return bio_add_hw_page(q, bio, page, len, offset,
- queue_max_zone_append_sectors(q), &same_page);
-}
-EXPORT_SYMBOL_GPL(bio_add_zone_append_page);
-
/**
* __bio_add_page - add page(s) to a bio in a new segment
* @bio: destination bio
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index af9e13be7678..3aef35b05111 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -537,6 +537,7 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
u16 status = NVME_SC_SUCCESS;
unsigned int total_len = 0;
struct scatterlist *sg;
+ u32 data_len = nvmet_rw_data_len(req);
struct bio *bio;
int sg_cnt;
@@ -544,6 +545,13 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
return;
+ if (data_len >
+ bdev_max_zone_append_sectors(req->ns->bdev) << SECTOR_SHIFT) {
+ req->error_loc = offsetof(struct nvme_rw_command, length);
+ status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
+ goto out;
+ }
+
if (!req->sg_cnt) {
nvmet_req_complete(req, 0);
return;
@@ -576,20 +584,17 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
bio->bi_opf |= REQ_FUA;
for_each_sg(req->sg, sg, req->sg_cnt, sg_cnt) {
- struct page *p = sg_page(sg);
- unsigned int l = sg->length;
- unsigned int o = sg->offset;
- unsigned int ret;
+ unsigned int len = sg->length;
- ret = bio_add_zone_append_page(bio, p, l, o);
- if (ret != sg->length) {
+ if (bio_add_pc_page(bdev_get_queue(bio->bi_bdev), bio,
+ sg_page(sg), len, sg->offset) != len) {
status = NVME_SC_INTERNAL;
goto out_put_bio;
}
- total_len += sg->length;
+ total_len += len;
}
- if (total_len != nvmet_rw_data_len(req)) {
+ if (total_len != data_len) {
status = NVME_SC_INTERNAL | NVME_STATUS_DNR;
goto out_put_bio;
}
diff --git a/include/linux/bio.h b/include/linux/bio.h
index faceadb040f9..4a1bf43ca53d 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -418,8 +418,6 @@ bool __must_check bio_add_folio(struct bio *bio, struct folio *folio,
size_t len, size_t off);
extern int bio_add_pc_page(struct request_queue *, struct bio *, struct page *,
unsigned int, unsigned int);
-int bio_add_zone_append_page(struct bio *bio, struct page *page,
- unsigned int len, unsigned int offset);
void __bio_add_page(struct bio *bio, struct page *page,
unsigned int len, unsigned int off);
void bio_add_folio_nofail(struct bio *bio, struct folio *folio, size_t len,
--
2.45.2
* Re: [PATCH 2/2] block: remove bio_add_zone_append_page
2024-10-30 5:18 ` [PATCH 2/2] block: remove bio_add_zone_append_page Christoph Hellwig
@ 2024-10-30 7:30 ` Chaitanya Kulkarni
2024-10-30 13:47 ` Christoph Hellwig
0 siblings, 1 reply; 10+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-30 7:30 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Chaitanya Kulkarni, Kundan Kumar, linux-block@vger.kernel.org,
Jens Axboe
On 10/29/24 22:18, Christoph Hellwig wrote:
> This is only used by the nvmet zns passthrough code, which can trivially
> just use bio_add_pc_page and do the sanity check for the max zone append
> limit itself.
>
> All future zoned file systems should follow the btrfs lead and let the
> upper layers fill up bios unlimited by hardware constraints and split
> them to the limits in the I/O submission handler.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> block/bio.c | 33 ---------------------------------
> drivers/nvme/target/zns.c | 21 +++++++++++++--------
> include/linux/bio.h | 2 --
> 3 files changed, 13 insertions(+), 43 deletions(-)
>
> diff --git a/block/bio.c b/block/bio.c
> index 6a60d62a529d..daceb0a5c1d7 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -1064,39 +1064,6 @@ int bio_add_pc_page(struct request_queue *q, struct bio *bio,
> }
> EXPORT_SYMBOL(bio_add_pc_page);
>
> -/**
> - * bio_add_zone_append_page - attempt to add page to zone-append bio
> - * @bio: destination bio
> - * @page: page to add
> - * @len: vec entry length
> - * @offset: vec entry offset
> - *
> - * Attempt to add a page to the bio_vec maplist of a bio that will be submitted
> - * for a zone-append request. This can fail for a number of reasons, such as the
> - * bio being full or the target block device is not a zoned block device or
> - * other limitations of the target block device. The target block device must
> - * allow bio's up to PAGE_SIZE, so it is always possible to add a single page
> - * to an empty bio.
> - *
> - * Returns: number of bytes added to the bio, or 0 in case of a failure.
> - */
> -int bio_add_zone_append_page(struct bio *bio, struct page *page,
> - unsigned int len, unsigned int offset)
> -{
> - struct request_queue *q = bdev_get_queue(bio->bi_bdev);
> - bool same_page = false;
> -
> - if (WARN_ON_ONCE(bio_op(bio) != REQ_OP_ZONE_APPEND))
> - return 0;
> -
> - if (WARN_ON_ONCE(!bdev_is_zoned(bio->bi_bdev)))
> - return 0;
> -
> - return bio_add_hw_page(q, bio, page, len, offset,
> - queue_max_zone_append_sectors(q), &same_page);
> -}
> -EXPORT_SYMBOL_GPL(bio_add_zone_append_page);
> -
> /**
> * __bio_add_page - add page(s) to a bio in a new segment
> * @bio: destination bio
> diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
> index af9e13be7678..3aef35b05111 100644
> --- a/drivers/nvme/target/zns.c
> +++ b/drivers/nvme/target/zns.c
> @@ -537,6 +537,7 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
> u16 status = NVME_SC_SUCCESS;
> unsigned int total_len = 0;
> struct scatterlist *sg;
> + u32 data_len = nvmet_rw_data_len(req);
> struct bio *bio;
> int sg_cnt;
>
> @@ -544,6 +545,13 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
> if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
> return;
>
> + if (data_len >
> + bdev_max_zone_append_sectors(req->ns->bdev) << SECTOR_SHIFT) {
> + req->error_loc = offsetof(struct nvme_rw_command, length);
> + status = NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
> + goto out;
> + }
> +
> if (!req->sg_cnt) {
> nvmet_req_complete(req, 0);
> return;
> @@ -576,20 +584,17 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
> bio->bi_opf |= REQ_FUA;
>
> for_each_sg(req->sg, sg, req->sg_cnt, sg_cnt) {
> - struct page *p = sg_page(sg);
> - unsigned int l = sg->length;
> - unsigned int o = sg->offset;
> - unsigned int ret;
> + unsigned int len = sg->length;
>
> - ret = bio_add_zone_append_page(bio, p, l, o);
> - if (ret != sg->length) {
> + if (bio_add_pc_page(bdev_get_queue(bio->bi_bdev), bio,
> + sg_page(sg), len, sg->offset) != len) {
The bio_add_pc_page() comment says: "This should only be used by
passthrough bios."
Sorry, I didn't understand the use of the passthru bio interface here.
From host/nvme.h:nvme_req_op(): REQ_OP_DRV_OUT/REQ_OP_DRV_IN are the
passthru requests, and nvme_req_op() is used in
nvmet_passthru_execute_cmd() to map the incoming nvme passthru command
into a block layer passthru request, i.e. REQ_OP_DRV_IN or REQ_OP_DRV_OUT.
nvme/target/passthru.c:
319 rq = blk_mq_alloc_request(q, nvme_req_op(req->cmd), 0);
In nvmet_bdev_execute_zone_append() bio->bi_opf is set to the opf local
variable, which is initialized at the start of the function to:
const blk_opf_t opf = REQ_OP_ZONE_APPEND | REQ_SYNC | REQ_IDLE;
Hence this is not a passthru request but a zone append request?
If that is true, should we just use bio_add_hw_page()? bio_add_pc_page()
is a simple wrapper over bio_add_hw_page() without the additional checks
present in bio_add_zone_append_page().
Unless for some reason I failed to understand that REQ_OP_ZONE_APPEND is
categorized here as a passthru request, in which case sorry for wasting
your time ...
-ck
* Re: [PATCH 2/2] block: remove bio_add_zone_append_page
2024-10-30 7:30 ` Chaitanya Kulkarni
@ 2024-10-30 13:47 ` Christoph Hellwig
2024-10-30 18:31 ` Chaitanya Kulkarni
0 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2024-10-30 13:47 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: Christoph Hellwig, Kundan Kumar, linux-block@vger.kernel.org,
Jens Axboe
On Wed, Oct 30, 2024 at 07:30:49AM +0000, Chaitanya Kulkarni wrote:
> If that is true, should we just use bio_add_hw_page()? bio_add_pc_page()
> is a simple wrapper over bio_add_hw_page() without the additional checks
> present in bio_add_zone_append_page().
bio_add_hw_page is currently static. Renaming bio_add_pc_page to
bio_add_hw_page and finding a different name for the version with the
same_page argument sounds like a good idea, but that's for a follow-on
patch. I can look into that.
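To make that concrete, here is a minimal sketch of how the renamed
exported helper could end up looking.  This is purely illustrative and not
part of this series; the bio_add_hw_page_vec() name is invented here as a
stand-in for whatever the variant with the same_page argument gets called:

int bio_add_hw_page(struct request_queue *q, struct bio *bio,
		struct page *page, unsigned int len, unsigned int offset)
{
	bool same_page = false;

	/* same body as today's bio_add_pc_page(), just under the new name */
	return bio_add_hw_page_vec(q, bio, page, len, offset,
			queue_max_hw_sectors(q), &same_page);
}
EXPORT_SYMBOL(bio_add_hw_page);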
> Unless for some reason I failed to understand that REQ_OP_ZONE_APPEND is
> categorized here as a passthru request, in which case sorry for wasting
> your time ...
It is not a passthrough request, but it is treated as one.
* Re: [PATCH 2/2] block: remove bio_add_zone_append_page
2024-10-30 13:47 ` Christoph Hellwig
@ 2024-10-30 18:31 ` Chaitanya Kulkarni
0 siblings, 0 replies; 10+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-30 18:31 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Kundan Kumar, linux-block@vger.kernel.org, Jens Axboe
On 10/30/24 06:47, Christoph Hellwig wrote:
> On Wed, Oct 30, 2024 at 07:30:49AM +0000, Chaitanya Kulkarni wrote:
>> If that is true, should we just use bio_add_hw_page()? bio_add_pc_page()
>> is a simple wrapper over bio_add_hw_page() without the additional checks
>> present in bio_add_zone_append_page().
> bio_add_hw_page is currently static. Renaming bio_add_pc_page to
> bio_add_hw_page and finding a different name for the version with the
> same_page argument sounds like a good idea, but that's for a follow-on
> patch. I can look into that.
>
sounds like a plan, looks good.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
* Re: [PATCH 1/2] block: remove zone append special casing from the direct I/O path
2024-10-30 5:18 ` [PATCH 1/2] block: remove zone append special casing from the direct I/O path Christoph Hellwig
@ 2024-10-30 18:32 ` Chaitanya Kulkarni
2024-10-31 16:54 ` Jens Axboe
1 sibling, 0 replies; 10+ messages in thread
From: Chaitanya Kulkarni @ 2024-10-30 18:32 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe
Cc: Chaitanya Kulkarni, Kundan Kumar, linux-block@vger.kernel.org
On 10/29/24 22:18, Christoph Hellwig wrote:
> This code is unused, and all future zoned file systems should follow
> the btrfs lead of splitting the bios themselves to the zoned limits
> in the I/O submission handler, because if they didn't, they would be
> hit by commit ed9832bc08db ("block: introduce folio awareness and add
> a bigger size from folio"), which breaks this code when the zone append
> limit (that is usually the max_hw_sectors limit) is smaller than the
> largest possible folio size.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Looks good.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
* Re: drop some broken zone append support code
2024-10-30 5:18 drop some broken zone append support code Christoph Hellwig
2024-10-30 5:18 ` [PATCH 1/2] block: remove zone append special casing from the direct I/O path Christoph Hellwig
2024-10-30 5:18 ` [PATCH 2/2] block: remove bio_add_zone_append_page Christoph Hellwig
@ 2024-10-30 18:47 ` Johannes Thumshirn
2024-10-31 12:58 ` hch
2 siblings, 1 reply; 10+ messages in thread
From: Johannes Thumshirn @ 2024-10-30 18:47 UTC (permalink / raw)
To: hch, Jens Axboe
Cc: Chaitanya Kulkarni, Kundan Kumar, linux-block@vger.kernel.org
On 30.10.24 06:19, Christoph Hellwig wrote:
> Hi Jens,
>
> when porting my zoned XFS code I ran into a regression in
> __bio_iov_iter_get_pages in 6.12, which isn't all that surprising given that
> this path isn't used upstream. After spending some time trying to fix it
> I gave up and ported my code to the scheme used in btrfs, where the file
> system splits bios to the hardware boundaries, which more closely mirrors
> what we do for the "normal" bio path.
>
> Either way we should not carry dead code, so patch 1 removes that.
> Patch 2 also removes our other zone append helper, because for the same
> reason no one but semi-passthrough interfaces like nvmet should use it,
> and those can simply use bio_add_pc_page.
IIRC this code was once used by the zone-append code we were using in
zonefs, but that code has since been ripped out.
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: drop some broken zone append support code
2024-10-30 18:47 ` drop some broken zone append support code Johannes Thumshirn
@ 2024-10-31 12:58 ` hch
0 siblings, 0 replies; 10+ messages in thread
From: hch @ 2024-10-31 12:58 UTC (permalink / raw)
To: Johannes Thumshirn
Cc: hch, Jens Axboe, Chaitanya Kulkarni, Kundan Kumar,
linux-block@vger.kernel.org
On Wed, Oct 30, 2024 at 06:47:26PM +0000, Johannes Thumshirn wrote:
> IIRC this code was once used by the zone-append code we were using in
> zonefs, but that code has since been ripped out.
Yes, I already had a chat with Damien about how that can be done the same
way as in btrfs and zoned XFS.
* Re: [PATCH 1/2] block: remove zone append special casing from the direct I/O path
2024-10-30 5:18 ` [PATCH 1/2] block: remove zone append special casing from the direct I/O path Christoph Hellwig
2024-10-30 18:32 ` Chaitanya Kulkarni
@ 2024-10-31 16:54 ` Jens Axboe
1 sibling, 0 replies; 10+ messages in thread
From: Jens Axboe @ 2024-10-31 16:54 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Chaitanya Kulkarni, Kundan Kumar, linux-block
On Wed, 30 Oct 2024 06:18:51 +0100, Christoph Hellwig wrote:
> This code is unused, and all future zoned file systems should follow
> the btrfs lead of splitting the bios themselves to the zoned limits
> in the I/O submission handler, because if they didn't, they would be
> hit by commit ed9832bc08db ("block: introduce folio awareness and add
> a bigger size from folio"), which breaks this code when the zone append
> limit (that is usually the max_hw_sectors limit) is smaller than the
> largest possible folio size.
>
> [...]
Applied, thanks!
[1/2] block: remove zone append special casing from the direct I/O path
commit: cafd00d0e90956c1c570a0a96cd86298897d247b
[2/2] block: remove bio_add_zone_append_page
commit: f187b9bf1a639090893c31030ddb60f9beae23f0
Best regards,
--
Jens Axboe