* [PATCHv2 1/4] null_blk: simplify copy_from_nullb
2025-11-06 1:54 [PATCHv2 0/4] null_blk: relaxed memory alignments Keith Busch
@ 2025-11-06 1:54 ` Keith Busch
2025-11-06 3:51 ` Chaitanya Kulkarni
` (3 more replies)
2025-11-06 1:54 ` [PATCHv2 2/4] null_blk: consistently use blk_status_t Keith Busch
` (5 subsequent siblings)
6 siblings, 4 replies; 28+ messages in thread
From: Keith Busch @ 2025-11-06 1:54 UTC (permalink / raw)
To: linux-block, hch, axboe, dlemoal, hans.holmberg; +Cc: Keith Busch
From: Keith Busch <kbusch@kernel.org>
It always returns success, so the code that saves the error status but
proceeds without checking it looks a bit odd. Clean this up.
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
drivers/block/null_blk/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 0ee55f889cfdd..a8bbbd132534a 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1161,7 +1161,7 @@ static int copy_to_nullb(struct nullb *nullb, struct page *source,
return 0;
}
-static int copy_from_nullb(struct nullb *nullb, struct page *dest,
+static void copy_from_nullb(struct nullb *nullb, struct page *dest,
unsigned int off, sector_t sector, size_t n)
{
size_t temp, count = 0;
@@ -1184,7 +1184,6 @@ static int copy_from_nullb(struct nullb *nullb, struct page *dest,
count += temp;
sector += temp >> SECTOR_SHIFT;
}
- return 0;
}
static void nullb_fill_pattern(struct nullb *nullb, struct page *page,
@@ -1248,8 +1247,8 @@ static int null_transfer(struct nullb *nullb, struct page *page,
sector, len);
if (valid_len) {
- err = copy_from_nullb(nullb, page, off,
- sector, valid_len);
+ copy_from_nullb(nullb, page, off, sector,
+ valid_len);
off += valid_len;
len -= valid_len;
}
--
2.47.3
* Re: [PATCHv2 1/4] null_blk: simplify copy_from_nullb
2025-11-06 1:54 ` [PATCHv2 1/4] null_blk: simplify copy_from_nullb Keith Busch
@ 2025-11-06 3:51 ` Chaitanya Kulkarni
2025-11-06 4:12 ` Damien Le Moal
` (2 subsequent siblings)
3 siblings, 0 replies; 28+ messages in thread
From: Chaitanya Kulkarni @ 2025-11-06 3:51 UTC (permalink / raw)
To: Keith Busch, linux-block@vger.kernel.org, hch@lst.de,
axboe@kernel.dk, dlemoal@kernel.org, hans.holmberg@wdc.com
Cc: Keith Busch
On 11/5/25 17:54, Keith Busch wrote:
> From: Keith Busch<kbusch@kernel.org>
>
> It always returns success, so the code that saves the error status but
> proceeds without checking it looks a bit odd. Clean this up.
>
> Signed-off-by: Keith Busch<kbusch@kernel.org>
Looks good.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
* Re: [PATCHv2 1/4] null_blk: simplify copy_from_nullb
2025-11-06 1:54 ` [PATCHv2 1/4] null_blk: simplify copy_from_nullb Keith Busch
2025-11-06 3:51 ` Chaitanya Kulkarni
@ 2025-11-06 4:12 ` Damien Le Moal
2025-11-06 10:48 ` Johannes Thumshirn
2025-11-06 11:51 ` Christoph Hellwig
3 siblings, 0 replies; 28+ messages in thread
From: Damien Le Moal @ 2025-11-06 4:12 UTC (permalink / raw)
To: Keith Busch, linux-block, hch, axboe, hans.holmberg; +Cc: Keith Busch
On 11/6/25 10:54 AM, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> It always returns success, so the code that saves the error status but
> proceeds without checking it looks a bit odd. Clean this up.
>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
--
Damien Le Moal
Western Digital Research
* Re: [PATCHv2 1/4] null_blk: simplify copy_from_nullb
2025-11-06 1:54 ` [PATCHv2 1/4] null_blk: simplify copy_from_nullb Keith Busch
2025-11-06 3:51 ` Chaitanya Kulkarni
2025-11-06 4:12 ` Damien Le Moal
@ 2025-11-06 10:48 ` Johannes Thumshirn
2025-11-06 11:51 ` Christoph Hellwig
3 siblings, 0 replies; 28+ messages in thread
From: Johannes Thumshirn @ 2025-11-06 10:48 UTC (permalink / raw)
To: Keith Busch, linux-block@vger.kernel.org, hch, axboe@kernel.dk,
dlemoal@kernel.org, Hans Holmberg
Cc: Keith Busch
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: [PATCHv2 1/4] null_blk: simplify copy_from_nullb
2025-11-06 1:54 ` [PATCHv2 1/4] null_blk: simplify copy_from_nullb Keith Busch
` (2 preceding siblings ...)
2025-11-06 10:48 ` Johannes Thumshirn
@ 2025-11-06 11:51 ` Christoph Hellwig
3 siblings, 0 replies; 28+ messages in thread
From: Christoph Hellwig @ 2025-11-06 11:51 UTC (permalink / raw)
To: Keith Busch; +Cc: linux-block, hch, axboe, dlemoal, hans.holmberg, Keith Busch
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* [PATCHv2 2/4] null_blk: consistently use blk_status_t
2025-11-06 1:54 [PATCHv2 0/4] null_blk: relaxed memory alignments Keith Busch
2025-11-06 1:54 ` [PATCHv2 1/4] null_blk: simplify copy_from_nullb Keith Busch
@ 2025-11-06 1:54 ` Keith Busch
2025-11-06 3:52 ` Chaitanya Kulkarni
` (3 more replies)
2025-11-06 1:54 ` [PATCHv2 3/4] null_blk: single kmap per bio segment Keith Busch
` (4 subsequent siblings)
6 siblings, 4 replies; 28+ messages in thread
From: Keith Busch @ 2025-11-06 1:54 UTC (permalink / raw)
To: linux-block, hch, axboe, dlemoal, hans.holmberg; +Cc: Keith Busch
From: Keith Busch <kbusch@kernel.org>
No need to mix errno and blk_status_t error types. Just use the standard
block layer type.
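For context, a minimal sketch (helper names are hypothetical, not from this
patch): blk_mq completes requests with a blk_status_t, so an errno-returning
helper forces an errno_to_blk_status() translation at the boundary, while a
blk_status_t helper can be completed directly.
        /* errno-based helper: translation needed before completion */
        static void complete_cmd_errno(struct request *rq, int err)
        {
                blk_mq_end_request(rq, errno_to_blk_status(err));
        }
        /* blk_status_t-based helper: completion status passes straight through */
        static void complete_cmd_status(struct request *rq, blk_status_t sts)
        {
                blk_mq_end_request(rq, sts);
        }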
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
drivers/block/null_blk/main.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index a8bbbd132534a..ff53c8bd5d832 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1129,7 +1129,7 @@ static int null_make_cache_space(struct nullb *nullb, unsigned long n)
return 0;
}
-static int copy_to_nullb(struct nullb *nullb, struct page *source,
+static blk_status_t copy_to_nullb(struct nullb *nullb, struct page *source,
unsigned int off, sector_t sector, size_t n, bool is_fua)
{
size_t temp, count = 0;
@@ -1146,7 +1146,7 @@ static int copy_to_nullb(struct nullb *nullb, struct page *source,
t_page = null_insert_page(nullb, sector,
!null_cache_active(nullb) || is_fua);
if (!t_page)
- return -ENOSPC;
+ return BLK_STS_NOSPC;
memcpy_page(t_page->page, offset, source, off + count, temp);
@@ -1158,7 +1158,7 @@ static int copy_to_nullb(struct nullb *nullb, struct page *source,
count += temp;
sector += temp >> SECTOR_SHIFT;
}
- return 0;
+ return BLK_STS_OK;
}
static void copy_from_nullb(struct nullb *nullb, struct page *dest,
@@ -1233,13 +1233,13 @@ static blk_status_t null_handle_flush(struct nullb *nullb)
return errno_to_blk_status(err);
}
-static int null_transfer(struct nullb *nullb, struct page *page,
+static blk_status_t null_transfer(struct nullb *nullb, struct page *page,
unsigned int len, unsigned int off, bool is_write, sector_t sector,
bool is_fua)
{
struct nullb_device *dev = nullb->dev;
+ blk_status_t err = BLK_STS_OK;
unsigned int valid_len = len;
- int err = 0;
if (!is_write) {
if (dev->zoned)
@@ -1273,7 +1273,7 @@ static blk_status_t null_handle_data_transfer(struct nullb_cmd *cmd,
{
struct request *rq = blk_mq_rq_from_pdu(cmd);
struct nullb *nullb = cmd->nq->dev->nullb;
- int err = 0;
+ blk_status_t err = BLK_STS_OK;
unsigned int len;
sector_t sector = blk_rq_pos(rq);
unsigned int max_bytes = nr_sectors << SECTOR_SHIFT;
@@ -1298,7 +1298,7 @@ static blk_status_t null_handle_data_transfer(struct nullb_cmd *cmd,
}
spin_unlock_irq(&nullb->lock);
- return errno_to_blk_status(err);
+ return err;
}
static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
--
2.47.3
* Re: [PATCHv2 2/4] null_blk: consistently use blk_status_t
2025-11-06 1:54 ` [PATCHv2 2/4] null_blk: consistently use blk_status_t Keith Busch
@ 2025-11-06 3:52 ` Chaitanya Kulkarni
2025-11-06 4:13 ` Damien Le Moal
` (2 subsequent siblings)
3 siblings, 0 replies; 28+ messages in thread
From: Chaitanya Kulkarni @ 2025-11-06 3:52 UTC (permalink / raw)
To: Keith Busch, linux-block@vger.kernel.org, hch@lst.de,
axboe@kernel.dk, dlemoal@kernel.org, hans.holmberg@wdc.com
Cc: Keith Busch
On 11/5/25 17:54, Keith Busch wrote:
> From: Keith Busch<kbusch@kernel.org>
>
> No need to mix errno and blk_status_t error types. Just use the standard
> block layer type.
>
> Signed-off-by: Keith Busch<kbusch@kernel.org>
Looks good.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
* Re: [PATCHv2 2/4] null_blk: consistently use blk_status_t
2025-11-06 1:54 ` [PATCHv2 2/4] null_blk: consistently use blk_status_t Keith Busch
2025-11-06 3:52 ` Chaitanya Kulkarni
@ 2025-11-06 4:13 ` Damien Le Moal
2025-11-06 10:52 ` Johannes Thumshirn
2025-11-06 11:52 ` Christoph Hellwig
3 siblings, 0 replies; 28+ messages in thread
From: Damien Le Moal @ 2025-11-06 4:13 UTC (permalink / raw)
To: Keith Busch, linux-block, hch, axboe, hans.holmberg; +Cc: Keith Busch
On 11/6/25 10:54 AM, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> No need to mix errno and blk_status_t error types. Just use the standard
> block layer type.
>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
--
Damien Le Moal
Western Digital Research
* Re: [PATCHv2 2/4] null_blk: consistently use blk_status_t
2025-11-06 1:54 ` [PATCHv2 2/4] null_blk: consistently use blk_status_t Keith Busch
2025-11-06 3:52 ` Chaitanya Kulkarni
2025-11-06 4:13 ` Damien Le Moal
@ 2025-11-06 10:52 ` Johannes Thumshirn
2025-11-06 11:52 ` Christoph Hellwig
3 siblings, 0 replies; 28+ messages in thread
From: Johannes Thumshirn @ 2025-11-06 10:52 UTC (permalink / raw)
To: Keith Busch, linux-block@vger.kernel.org, hch, axboe@kernel.dk,
dlemoal@kernel.org, Hans Holmberg
Cc: Keith Busch
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: [PATCHv2 2/4] null_blk: consistently use blk_status_t
2025-11-06 1:54 ` [PATCHv2 2/4] null_blk: consistently use blk_status_t Keith Busch
` (2 preceding siblings ...)
2025-11-06 10:52 ` Johannes Thumshirn
@ 2025-11-06 11:52 ` Christoph Hellwig
3 siblings, 0 replies; 28+ messages in thread
From: Christoph Hellwig @ 2025-11-06 11:52 UTC (permalink / raw)
To: Keith Busch; +Cc: linux-block, hch, axboe, dlemoal, hans.holmberg, Keith Busch
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* [PATCHv2 3/4] null_blk: single kmap per bio segment
2025-11-06 1:54 [PATCHv2 0/4] null_blk: relaxed memory alignments Keith Busch
2025-11-06 1:54 ` [PATCHv2 1/4] null_blk: simplify copy_from_nullb Keith Busch
2025-11-06 1:54 ` [PATCHv2 2/4] null_blk: consistently use blk_status_t Keith Busch
@ 2025-11-06 1:54 ` Keith Busch
2025-11-06 3:58 ` Chaitanya Kulkarni
` (3 more replies)
2025-11-06 1:54 ` [PATCHv2 4/4] null_blk: allow byte aligned memory offsets Keith Busch
` (3 subsequent siblings)
6 siblings, 4 replies; 28+ messages in thread
From: Keith Busch @ 2025-11-06 1:54 UTC (permalink / raw)
To: linux-block, hch, axboe, dlemoal, hans.holmberg; +Cc: Keith Busch
From: Keith Busch <kbusch@kernel.org>
Rather than kmap the the request bio segment for each sector, do
the mapping just once.
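For context, a minimal sketch of the map-once pattern (the helper and the
process_sector() call are hypothetical stand-ins, not from this patch): map
the segment's page a single time, walk it in sector-sized chunks through the
same mapping, then unmap.
        static void handle_segment(struct page *page, unsigned int off,
                                   unsigned int len)
        {
                void *p = kmap_local_page(page) + off;
                unsigned int done = 0;
                while (done < len) {
                        unsigned int chunk = min_t(unsigned int, SECTOR_SIZE,
                                                   len - done);
                        /* hypothetical per-sector work, reusing the one mapping */
                        process_sector(p + done, chunk);
                        done += chunk;
                }
                kunmap_local(p);
        }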
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
drivers/block/null_blk/main.c | 32 ++++++++++++++------------------
1 file changed, 14 insertions(+), 18 deletions(-)
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index ff53c8bd5d832..34346590d4eee 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1129,8 +1129,8 @@ static int null_make_cache_space(struct nullb *nullb, unsigned long n)
return 0;
}
-static blk_status_t copy_to_nullb(struct nullb *nullb, struct page *source,
- unsigned int off, sector_t sector, size_t n, bool is_fua)
+static blk_status_t copy_to_nullb(struct nullb *nullb, void *source,
+ sector_t sector, size_t n, bool is_fua)
{
size_t temp, count = 0;
unsigned int offset;
@@ -1148,7 +1148,7 @@ static blk_status_t copy_to_nullb(struct nullb *nullb, struct page *source,
if (!t_page)
return BLK_STS_NOSPC;
- memcpy_page(t_page->page, offset, source, off + count, temp);
+ memcpy_to_page(t_page->page, offset, source + count, temp);
__set_bit(sector & SECTOR_MASK, t_page->bitmap);
@@ -1161,8 +1161,8 @@ static blk_status_t copy_to_nullb(struct nullb *nullb, struct page *source,
return BLK_STS_OK;
}
-static void copy_from_nullb(struct nullb *nullb, struct page *dest,
- unsigned int off, sector_t sector, size_t n)
+static void copy_from_nullb(struct nullb *nullb, void *dest, sector_t sector,
+ size_t n)
{
size_t temp, count = 0;
unsigned int offset;
@@ -1176,22 +1176,16 @@ static void copy_from_nullb(struct nullb *nullb, struct page *dest,
!null_cache_active(nullb));
if (t_page)
- memcpy_page(dest, off + count, t_page->page, offset,
- temp);
+ memcpy_from_page(dest + count, t_page->page, offset,
+ temp);
else
- memzero_page(dest, off + count, temp);
+ memset(dest + count, 0, temp);
count += temp;
sector += temp >> SECTOR_SHIFT;
}
}
-static void nullb_fill_pattern(struct nullb *nullb, struct page *page,
- unsigned int len, unsigned int off)
-{
- memset_page(page, off, 0xff, len);
-}
-
blk_status_t null_handle_discard(struct nullb_device *dev,
sector_t sector, sector_t nr_sectors)
{
@@ -1240,27 +1234,29 @@ static blk_status_t null_transfer(struct nullb *nullb, struct page *page,
struct nullb_device *dev = nullb->dev;
blk_status_t err = BLK_STS_OK;
unsigned int valid_len = len;
+ void *p;
+ p = kmap_local_page(page) + off;
if (!is_write) {
if (dev->zoned)
valid_len = null_zone_valid_read_len(nullb,
sector, len);
if (valid_len) {
- copy_from_nullb(nullb, page, off, sector,
- valid_len);
+ copy_from_nullb(nullb, p, sector, valid_len);
off += valid_len;
len -= valid_len;
}
if (len)
- nullb_fill_pattern(nullb, page, len, off);
+ memset(p + valid_len, 0xff, len);
flush_dcache_page(page);
} else {
flush_dcache_page(page);
- err = copy_to_nullb(nullb, page, off, sector, len, is_fua);
+ err = copy_to_nullb(nullb, p, sector, len, is_fua);
}
+ kunmap_local(p);
return err;
}
--
2.47.3
* Re: [PATCHv2 3/4] null_blk: single kmap per bio segment
2025-11-06 1:54 ` [PATCHv2 3/4] null_blk: single kmap per bio segment Keith Busch
@ 2025-11-06 3:58 ` Chaitanya Kulkarni
2025-11-06 4:26 ` Damien Le Moal
` (2 subsequent siblings)
3 siblings, 0 replies; 28+ messages in thread
From: Chaitanya Kulkarni @ 2025-11-06 3:58 UTC (permalink / raw)
To: Keith Busch, linux-block@vger.kernel.org, hch@lst.de,
axboe@kernel.dk, dlemoal@kernel.org, hans.holmberg@wdc.com
Cc: Keith Busch
On 11/5/25 17:54, Keith Busch wrote:
> From: Keith Busch<kbusch@kernel.org>
>
> Rather than kmap the the request bio segment for each sector, do
> the mapping just once.
>
> Signed-off-by: Keith Busch<kbusch@kernel.org>
Looks good.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
* Re: [PATCHv2 3/4] null_blk: single kmap per bio segment
2025-11-06 1:54 ` [PATCHv2 3/4] null_blk: single kmap per bio segment Keith Busch
2025-11-06 3:58 ` Chaitanya Kulkarni
@ 2025-11-06 4:26 ` Damien Le Moal
2025-11-06 11:22 ` Johannes Thumshirn
2025-11-06 11:53 ` Christoph Hellwig
3 siblings, 0 replies; 28+ messages in thread
From: Damien Le Moal @ 2025-11-06 4:26 UTC (permalink / raw)
To: Keith Busch, linux-block, hch, axboe, hans.holmberg; +Cc: Keith Busch
On 11/6/25 10:54 AM, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> Rather than kmap the the request bio segment for each sector, do
One extra "the" above.
> the mapping just once.
>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
Other than that, looks OK to me.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
--
Damien Le Moal
Western Digital Research
* Re: [PATCHv2 3/4] null_blk: single kmap per bio segment
2025-11-06 1:54 ` [PATCHv2 3/4] null_blk: single kmap per bio segment Keith Busch
2025-11-06 3:58 ` Chaitanya Kulkarni
2025-11-06 4:26 ` Damien Le Moal
@ 2025-11-06 11:22 ` Johannes Thumshirn
2025-11-06 11:53 ` Christoph Hellwig
3 siblings, 0 replies; 28+ messages in thread
From: Johannes Thumshirn @ 2025-11-06 11:22 UTC (permalink / raw)
To: Keith Busch, linux-block@vger.kernel.org, hch, axboe@kernel.dk,
dlemoal@kernel.org, Hans Holmberg
Cc: Keith Busch
Apart from Damien's remark
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: [PATCHv2 3/4] null_blk: single kmap per bio segment
2025-11-06 1:54 ` [PATCHv2 3/4] null_blk: single kmap per bio segment Keith Busch
` (2 preceding siblings ...)
2025-11-06 11:22 ` Johannes Thumshirn
@ 2025-11-06 11:53 ` Christoph Hellwig
3 siblings, 0 replies; 28+ messages in thread
From: Christoph Hellwig @ 2025-11-06 11:53 UTC (permalink / raw)
To: Keith Busch; +Cc: linux-block, hch, axboe, dlemoal, hans.holmberg, Keith Busch
On Wed, Nov 05, 2025 at 05:54:46PM -0800, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> Rather than kmap the the request bio segment for each sector, do
> the mapping just once.
This looks fine:
Reviewed-by: Christoph Hellwig <hch@lst.de>
Although I'd still prefer to pass the bvec down as far as possible..
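For reference, a rough sketch of what passing the bvec down could look like,
using the patch 3 signatures; the zoned valid_len handling, read fill pattern
and dcache flushing are omitted, and the function name is hypothetical (this
is not code from the series):
        static blk_status_t null_transfer_bvec(struct nullb *nullb,
                                               struct bio_vec *bvec, bool is_write,
                                               sector_t sector, bool is_fua)
        {
                void *p = bvec_kmap_local(bvec);        /* maps bv_page + bv_offset */
                blk_status_t err = BLK_STS_OK;
                if (is_write)
                        err = copy_to_nullb(nullb, p, sector, bvec->bv_len, is_fua);
                else
                        copy_from_nullb(nullb, p, sector, bvec->bv_len);
                kunmap_local(p);
                return err;
        }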
* [PATCHv2 4/4] null_blk: allow byte aligned memory offsets
2025-11-06 1:54 [PATCHv2 0/4] null_blk: relaxed memory alignments Keith Busch
` (2 preceding siblings ...)
2025-11-06 1:54 ` [PATCHv2 3/4] null_blk: single kmap per bio segment Keith Busch
@ 2025-11-06 1:54 ` Keith Busch
2025-11-06 4:09 ` Chaitanya Kulkarni
` (3 more replies)
2025-11-06 4:11 ` [PATCHv2 0/4] null_blk: relaxed memory alignments Damien Le Moal
` (2 subsequent siblings)
6 siblings, 4 replies; 28+ messages in thread
From: Keith Busch @ 2025-11-06 1:54 UTC (permalink / raw)
To: linux-block, hch, axboe, dlemoal, hans.holmberg; +Cc: Keith Busch
From: Keith Busch <kbusch@kernel.org>
Allowing byte aligned memory provides a nice testing ground for
direct-io.
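For context, a minimal userspace sketch of the kind of test this enables (the
device path, sizes and the 2-byte buffer offset are assumptions; the file
offset and length still have to respect the logical block size, only the
memory alignment is relaxed):
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        int main(void)
        {
                void *buf;
                ssize_t ret;
                int fd = open("/dev/nullb0", O_RDONLY | O_DIRECT);
                if (fd < 0 || posix_memalign(&buf, 4096, 8192))
                        return 1;
                /* deliberately odd memory alignment: 2 bytes into the buffer */
                ret = pread(fd, (char *)buf + 2, 4096, 0);
                printf("pread returned %zd\n", ret);
                free(buf);
                close(fd);
                return ret == 4096 ? 0 : 1;
        }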
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
drivers/block/null_blk/main.c | 46 ++++++++++++++++++----------------
drivers/block/null_blk/zoned.c | 2 +-
2 files changed, 25 insertions(+), 23 deletions(-)
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 34346590d4eee..f1e67962ecaeb 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1130,25 +1130,27 @@ static int null_make_cache_space(struct nullb *nullb, unsigned long n)
}
static blk_status_t copy_to_nullb(struct nullb *nullb, void *source,
- sector_t sector, size_t n, bool is_fua)
+ loff_t pos, size_t n, bool is_fua)
{
size_t temp, count = 0;
- unsigned int offset;
struct nullb_page *t_page;
+ sector_t sector;
while (count < n) {
- temp = min_t(size_t, nullb->dev->blocksize, n - count);
+ temp = min3(nullb->dev->blocksize, n - count,
+ PAGE_SIZE - offset_in_page(pos));
+ sector = pos >> SECTOR_SHIFT;
if (null_cache_active(nullb) && !is_fua)
null_make_cache_space(nullb, PAGE_SIZE);
- offset = (sector & SECTOR_MASK) << SECTOR_SHIFT;
t_page = null_insert_page(nullb, sector,
!null_cache_active(nullb) || is_fua);
if (!t_page)
return BLK_STS_NOSPC;
- memcpy_to_page(t_page->page, offset, source + count, temp);
+ memcpy_to_page(t_page->page, offset_in_page(pos),
+ source + count, temp);
__set_bit(sector & SECTOR_MASK, t_page->bitmap);
@@ -1156,33 +1158,33 @@ static blk_status_t copy_to_nullb(struct nullb *nullb, void *source,
null_free_sector(nullb, sector, true);
count += temp;
- sector += temp >> SECTOR_SHIFT;
+ pos += temp;
}
return BLK_STS_OK;
}
-static void copy_from_nullb(struct nullb *nullb, void *dest, sector_t sector,
+static void copy_from_nullb(struct nullb *nullb, void *dest, loff_t pos,
size_t n)
{
size_t temp, count = 0;
- unsigned int offset;
struct nullb_page *t_page;
+ sector_t sector;
while (count < n) {
- temp = min_t(size_t, nullb->dev->blocksize, n - count);
+ temp = min3(nullb->dev->blocksize, n - count,
+ PAGE_SIZE - offset_in_page(pos));
+ sector = pos >> SECTOR_SHIFT;
- offset = (sector & SECTOR_MASK) << SECTOR_SHIFT;
t_page = null_lookup_page(nullb, sector, false,
!null_cache_active(nullb));
-
if (t_page)
- memcpy_from_page(dest + count, t_page->page, offset,
- temp);
+ memcpy_from_page(dest + count, t_page->page,
+ offset_in_page(pos), temp);
else
memset(dest + count, 0, temp);
count += temp;
- sector += temp >> SECTOR_SHIFT;
+ pos += temp;
}
}
@@ -1228,7 +1230,7 @@ static blk_status_t null_handle_flush(struct nullb *nullb)
}
static blk_status_t null_transfer(struct nullb *nullb, struct page *page,
- unsigned int len, unsigned int off, bool is_write, sector_t sector,
+ unsigned int len, unsigned int off, bool is_write, loff_t pos,
bool is_fua)
{
struct nullb_device *dev = nullb->dev;
@@ -1240,10 +1242,10 @@ static blk_status_t null_transfer(struct nullb *nullb, struct page *page,
if (!is_write) {
if (dev->zoned)
valid_len = null_zone_valid_read_len(nullb,
- sector, len);
+ pos >> SECTOR_SHIFT, len);
if (valid_len) {
- copy_from_nullb(nullb, p, sector, valid_len);
+ copy_from_nullb(nullb, p, pos, valid_len);
off += valid_len;
len -= valid_len;
}
@@ -1253,7 +1255,7 @@ static blk_status_t null_transfer(struct nullb *nullb, struct page *page,
flush_dcache_page(page);
} else {
flush_dcache_page(page);
- err = copy_to_nullb(nullb, p, sector, len, is_fua);
+ err = copy_to_nullb(nullb, p, pos, len, is_fua);
}
kunmap_local(p);
@@ -1271,7 +1273,7 @@ static blk_status_t null_handle_data_transfer(struct nullb_cmd *cmd,
struct nullb *nullb = cmd->nq->dev->nullb;
blk_status_t err = BLK_STS_OK;
unsigned int len;
- sector_t sector = blk_rq_pos(rq);
+ loff_t pos = blk_rq_pos(rq) << SECTOR_SHIFT;
unsigned int max_bytes = nr_sectors << SECTOR_SHIFT;
unsigned int transferred_bytes = 0;
struct req_iterator iter;
@@ -1283,11 +1285,11 @@ static blk_status_t null_handle_data_transfer(struct nullb_cmd *cmd,
if (transferred_bytes + len > max_bytes)
len = max_bytes - transferred_bytes;
err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
- op_is_write(req_op(rq)), sector,
+ op_is_write(req_op(rq)), pos,
rq->cmd_flags & REQ_FUA);
if (err)
break;
- sector += len >> SECTOR_SHIFT;
+ pos += len;
transferred_bytes += len;
if (transferred_bytes >= max_bytes)
break;
@@ -1944,7 +1946,7 @@ static int null_add_dev(struct nullb_device *dev)
.logical_block_size = dev->blocksize,
.physical_block_size = dev->blocksize,
.max_hw_sectors = dev->max_sectors,
- .dma_alignment = dev->blocksize - 1,
+ .dma_alignment = 1,
};
struct nullb *nullb;
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 6a93b12a06ff7..dbf292a8eae96 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -242,7 +242,7 @@ size_t null_zone_valid_read_len(struct nullb *nullb,
{
struct nullb_device *dev = nullb->dev;
struct nullb_zone *zone = &dev->zones[null_zone_no(dev, sector)];
- unsigned int nr_sectors = len >> SECTOR_SHIFT;
+ unsigned int nr_sectors = DIV_ROUND_UP(len, SECTOR_SHIFT);
/* Read must be below the write pointer position */
if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL ||
--
2.47.3
* Re: [PATCHv2 4/4] null_blk: allow byte aligned memory offsets
2025-11-06 1:54 ` [PATCHv2 4/4] null_blk: allow byte aligned memory offsets Keith Busch
@ 2025-11-06 4:09 ` Chaitanya Kulkarni
2025-11-06 4:31 ` Damien Le Moal
` (2 subsequent siblings)
3 siblings, 0 replies; 28+ messages in thread
From: Chaitanya Kulkarni @ 2025-11-06 4:09 UTC (permalink / raw)
To: Keith Busch
Cc: Keith Busch, linux-block@vger.kernel.org, hans.holmberg@wdc.com,
axboe@kernel.dk, dlemoal@kernel.org, hch@lst.de
On 11/5/25 17:54, Keith Busch wrote:
> From: Keith Busch<kbusch@kernel.org>
>
> Allowing byte aligned memory provides a nice testing ground for
> direct-io.
>
> Signed-off-by: Keith Busch<kbusch@kernel.org>
Looks good.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
* Re: [PATCHv2 4/4] null_blk: allow byte aligned memory offsets
2025-11-06 1:54 ` [PATCHv2 4/4] null_blk: allow byte aligned memory offsets Keith Busch
2025-11-06 4:09 ` Chaitanya Kulkarni
@ 2025-11-06 4:31 ` Damien Le Moal
2025-11-06 4:40 ` Keith Busch
2025-11-06 11:23 ` Johannes Thumshirn
2025-11-06 12:01 ` Christoph Hellwig
3 siblings, 1 reply; 28+ messages in thread
From: Damien Le Moal @ 2025-11-06 4:31 UTC (permalink / raw)
To: Keith Busch, linux-block, hch, axboe, hans.holmberg; +Cc: Keith Busch
On 11/6/25 10:54 AM, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> Allowing byte aligned memory provides a nice testing ground for
.dma_alignment = 1 means a minimum of 2-byte alignment, no?
> direct-io.
>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Note: on top of this, for testing, I think it would be nice to add a config
parameter to allow changing dma_alignment to higher values, and check requests
when processing them against that alignment. That could allow testing corner
cases or emulate devices with weird DMA alignment constraints.
--
Damien Le Moal
Western Digital Research
* Re: [PATCHv2 4/4] null_blk: allow byte aligned memory offsets
2025-11-06 4:31 ` Damien Le Moal
@ 2025-11-06 4:40 ` Keith Busch
0 siblings, 0 replies; 28+ messages in thread
From: Keith Busch @ 2025-11-06 4:40 UTC (permalink / raw)
To: Damien Le Moal; +Cc: Keith Busch, linux-block, hch, axboe, hans.holmberg
On Thu, Nov 06, 2025 at 01:31:24PM +0900, Damien Le Moal wrote:
>
> Note: on top of this, for testing, I think it would be nice to add a config
> parameter to allow changing dma_alignment to higher values, and check requests
> when processing them against that alignment. That could allow testing corner
> cases or emulate devices with weird DMA alignment constraints.
Yeah, that sounds like it fits in with this limit: there are other
similar limits parameterized by null_blk, like the virtual boundary, so
user-defined dma alignment makes sense.
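A rough sketch of what such a knob could look like (the module-parameter name
and default are assumptions, and a configfs attribute would need the same
plumbing):
        static unsigned int g_dma_alignment = 1;
        module_param_named(dma_alignment, g_dma_alignment, uint, 0444);
        MODULE_PARM_DESC(dma_alignment, "DMA alignment mask reported in the queue limits (default: 1)");
        /* ... and in null_add_dev(), feed it into the queue limits: */
        struct queue_limits lim = {
                .logical_block_size     = dev->blocksize,
                .physical_block_size    = dev->blocksize,
                .max_hw_sectors         = dev->max_sectors,
                .dma_alignment          = g_dma_alignment,
        };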
* Re: [PATCHv2 4/4] null_blk: allow byte aligned memory offsets
2025-11-06 1:54 ` [PATCHv2 4/4] null_blk: allow byte aligned memory offsets Keith Busch
2025-11-06 4:09 ` Chaitanya Kulkarni
2025-11-06 4:31 ` Damien Le Moal
@ 2025-11-06 11:23 ` Johannes Thumshirn
2025-11-06 12:01 ` Christoph Hellwig
3 siblings, 0 replies; 28+ messages in thread
From: Johannes Thumshirn @ 2025-11-06 11:23 UTC (permalink / raw)
To: Keith Busch, linux-block@vger.kernel.org, hch, axboe@kernel.dk,
dlemoal@kernel.org, Hans Holmberg
Cc: Keith Busch
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: [PATCHv2 4/4] null_blk: allow byte aligned memory offsets
2025-11-06 1:54 ` [PATCHv2 4/4] null_blk: allow byte aligned memory offsets Keith Busch
` (2 preceding siblings ...)
2025-11-06 11:23 ` Johannes Thumshirn
@ 2025-11-06 12:01 ` Christoph Hellwig
2025-11-06 15:24 ` Keith Busch
3 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2025-11-06 12:01 UTC (permalink / raw)
To: Keith Busch; +Cc: linux-block, hch, axboe, dlemoal, hans.holmberg, Keith Busch
On Wed, Nov 05, 2025 at 05:54:47PM -0800, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> Allowing byte aligned memory provides a nice testing ground for
> direct-io.
>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
> drivers/block/null_blk/main.c | 46 ++++++++++++++++++----------------
> drivers/block/null_blk/zoned.c | 2 +-
> 2 files changed, 25 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> index 34346590d4eee..f1e67962ecaeb 100644
> --- a/drivers/block/null_blk/main.c
> +++ b/drivers/block/null_blk/main.c
> @@ -1130,25 +1130,27 @@ static int null_make_cache_space(struct nullb *nullb, unsigned long n)
> }
>
> static blk_status_t copy_to_nullb(struct nullb *nullb, void *source,
> - sector_t sector, size_t n, bool is_fua)
> + loff_t pos, size_t n, bool is_fua)
Is it just me, or is n a way too non-descriptive argument name? Can
we fix that if you touch it anyway?
> {
> size_t temp, count = 0;
> - unsigned int offset;
> struct nullb_page *t_page;
> + sector_t sector;
>
> while (count < n) {
And count here should be some kind of offset. I really had a bit of a hard
time following the code due to the naming.
> static blk_status_t null_transfer(struct nullb *nullb, struct page *page,
> - unsigned int len, unsigned int off, bool is_write, sector_t sector,
> + unsigned int len, unsigned int off, bool is_write, loff_t pos,
> bool is_fua)
.. and the indentation here could use fixing if we touch it anyway.
Otherwise looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCHv2 4/4] null_blk: allow byte aligned memory offsets
2025-11-06 12:01 ` Christoph Hellwig
@ 2025-11-06 15:24 ` Keith Busch
2025-11-06 15:42 ` Johannes Thumshirn
0 siblings, 1 reply; 28+ messages in thread
From: Keith Busch @ 2025-11-06 15:24 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Keith Busch, linux-block, axboe, dlemoal, hans.holmberg
On Thu, Nov 06, 2025 at 01:01:31PM +0100, Christoph Hellwig wrote:
> > static blk_status_t copy_to_nullb(struct nullb *nullb, void *source,
> > - sector_t sector, size_t n, bool is_fua)
> > + loff_t pos, size_t n, bool is_fua)
>
> Is it just me, or is n a way too non-descriptive argument name? Can
> we fix that if you touch it anyway?
>
> > {
> > size_t temp, count = 0;
> > - unsigned int offset;
> > struct nullb_page *t_page;
> > + sector_t sector;
> >
> > while (count < n) {
>
> And count here should be some kind of offset. I really had a bit of a hard
> time following the code due to the naming.
>
> > static blk_status_t null_transfer(struct nullb *nullb, struct page *page,
> > - unsigned int len, unsigned int off, bool is_write, sector_t sector,
> > + unsigned int len, unsigned int off, bool is_write, loff_t pos,
> > bool is_fua)
>
> .. and the indentation here could use fixing if we touch it anyway.
I actually had an earlier branch with all sorts of little refactors like
what you're mentioning. I don't even like the loops counting upward: we
can remove "count" entirely and loop backwards from "n".
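A sketch of that shape for copy_from_nullb() as it looks after this series
(illustrative only, not a posted patch): drop "count", advance the destination
pointer, and consume "n" directly.
        static void copy_from_nullb(struct nullb *nullb, void *dest, loff_t pos,
                                    size_t n)
        {
                while (n) {
                        size_t temp = min3(nullb->dev->blocksize, n,
                                           PAGE_SIZE - offset_in_page(pos));
                        sector_t sector = pos >> SECTOR_SHIFT;
                        struct nullb_page *t_page;
                        t_page = null_lookup_page(nullb, sector, false,
                                                  !null_cache_active(nullb));
                        if (t_page)
                                memcpy_from_page(dest, t_page->page,
                                                 offset_in_page(pos), temp);
                        else
                                memset(dest, 0, temp);
                        dest += temp;
                        pos += temp;
                        n -= temp;
                }
        }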
Anyway, it started to look like all those little cleanups were
distracting from the feature, but I can redo the series with more prep
patches to tidy things up.
* Re: [PATCHv2 4/4] null_blk: allow byte aligned memory offsets
2025-11-06 15:24 ` Keith Busch
@ 2025-11-06 15:42 ` Johannes Thumshirn
2025-11-06 15:43 ` hch
0 siblings, 1 reply; 28+ messages in thread
From: Johannes Thumshirn @ 2025-11-06 15:42 UTC (permalink / raw)
To: Keith Busch, hch
Cc: Keith Busch, linux-block@vger.kernel.org, axboe@kernel.dk,
dlemoal@kernel.org, Hans Holmberg
On 11/6/25 4:25 PM, Keith Busch wrote:
> Anyway, it started to look like all those little cleanups were
> distracting from the feature, but I can redo the series with more prep
> patches to tidy things up.
Or just merge this series as of now and do the cleanup on top? I mean,
it's a small feature and has no negative review comments.
* Re: [PATCHv2 4/4] null_blk: allow byte aligned memory offsets
2025-11-06 15:42 ` Johannes Thumshirn
@ 2025-11-06 15:43 ` hch
0 siblings, 0 replies; 28+ messages in thread
From: hch @ 2025-11-06 15:43 UTC (permalink / raw)
To: Johannes Thumshirn
Cc: Keith Busch, hch, Keith Busch, linux-block@vger.kernel.org,
axboe@kernel.dk, dlemoal@kernel.org, Hans Holmberg
On Thu, Nov 06, 2025 at 03:42:14PM +0000, Johannes Thumshirn wrote:
> On 11/6/25 4:25 PM, Keith Busch wrote:
> > Anyway, it started to look like all those little cleanups were
> > distracting from the feature, but I can redo the series with more prep
> > patches to tidy things up.
>
> Or just merge this series as of now and do the cleanup on top? I mean,
> it's a small feature and has no negative review comments.
Yeah, that's probably easier.
* Re: [PATCHv2 0/4] null_blk: relaxed memory alignments
2025-11-06 1:54 [PATCHv2 0/4] null_blk: relaxed memory alignments Keith Busch
` (3 preceding siblings ...)
2025-11-06 1:54 ` [PATCHv2 4/4] null_blk: allow byte aligned memory offsets Keith Busch
@ 2025-11-06 4:11 ` Damien Le Moal
2025-11-06 8:35 ` Hans Holmberg
2025-11-06 23:30 ` Jens Axboe
6 siblings, 0 replies; 28+ messages in thread
From: Damien Le Moal @ 2025-11-06 4:11 UTC (permalink / raw)
To: Keith Busch, linux-block, hch, axboe, hans.holmberg; +Cc: Keith Busch
On 11/6/25 10:54 AM, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> The direct-io can work with arbitrary memory alignments, based on what
> the block device's queue limits report. This series enhances the
> null_blk driver by removing the software limitations that required
> block size memory and length alignment.
>
> Note, funny thing I noticed: this patch could allow null_blk to use
> byte aligned memory, but the queue limits doesn't have a way to express
> that. The smallest we can set the mask is 1, meaning 2-byte alignment,
> because setting the mask to 0 is overridden by the default 511 mask. I'm
> pretty sure at least some drivers are depending on the default, so can't
> really change that.
Maybe we could special case UINT_MAX for the no-restriction case?
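(For reference, a simplified sketch of the check such a mask drives, not the
exact block layer code: dma_alignment is a mask, so 1 requires 2-byte aligned
addresses and lengths, while 0 currently reads as "unset" and is replaced by
the 511 default, as noted in the cover letter.)
        static bool buffer_allowed(unsigned long addr, unsigned int len,
                                   unsigned int dma_alignment_mask)
        {
                return ((addr | len) & dma_alignment_mask) == 0;
        }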
--
Damien Le Moal
Western Digital Research
* Re: [PATCHv2 0/4] null_blk: relaxed memory alignments
2025-11-06 1:54 [PATCHv2 0/4] null_blk: relaxed memory alignments Keith Busch
` (4 preceding siblings ...)
2025-11-06 4:11 ` [PATCHv2 0/4] null_blk: relaxed memory alignments Damien Le Moal
@ 2025-11-06 8:35 ` Hans Holmberg
2025-11-06 23:30 ` Jens Axboe
6 siblings, 0 replies; 28+ messages in thread
From: Hans Holmberg @ 2025-11-06 8:35 UTC (permalink / raw)
To: Keith Busch, linux-block@vger.kernel.org, hch, axboe@kernel.dk,
dlemoal@kernel.org
Cc: Keith Busch
On 06/11/2025 02:55, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> The direct-io can work with arbitrary memory alignments, based on what
> the block device's queue limits report. This series enhances the
> null_blk driver by removing the software limitations that required
> block size memory and length alignment.
>
> Note, funny thing I noticed: this patch could allow null_blk to use
> byte aligned memory, but the queue limits doesn't have a way to express
> that. The smallest we can set the mask is 1, meaning 2-byte alignment,
> because setting the mask to 0 is overridden by the default 511 mask. I'm
> pretty sure at least some drivers are depending on the default, so can't
> really change that.
>
>
> Changes from v1:
>
> - A couple cosmetic patches to clean up some of the error handling, as
> noted by Damien.
>
> - Fixed up the buffer overruns that Hans reported.
>
> - Moved the kmap'ing to a layer lower in the call stack as suggested by
> Christoph, which also made it easier to fixup relying on
> virt_to_page. This part of the patch is split out into a prep patch
> this time.
>
> Keith Busch (4):
> null_blk: simplify copy_from_nullb
> null_blk: consistently use blk_status_t
> null_blk: single kmap per bio segment
> null_blk: allow byte aligned memory offsets
>
> drivers/block/null_blk/main.c | 77 ++++++++++++++++------------------
> drivers/block/null_blk/zoned.c | 2 +-
> 2 files changed, 38 insertions(+), 41 deletions(-)
>
I applied the series on top of 6.18-rc4 and ran the same reproducer
(xfstest xfs/538 with kasan and slab debug enabled) that I used previously
to detect memory corruption. No issues detected.
So, for the series:
Tested-by: Hans Holmberg <hans.holmberg@wdc.com>
* Re: [PATCHv2 0/4] null_blk: relaxed memory alignments
2025-11-06 1:54 [PATCHv2 0/4] null_blk: relaxed memory alignments Keith Busch
` (5 preceding siblings ...)
2025-11-06 8:35 ` Hans Holmberg
@ 2025-11-06 23:30 ` Jens Axboe
6 siblings, 0 replies; 28+ messages in thread
From: Jens Axboe @ 2025-11-06 23:30 UTC (permalink / raw)
To: linux-block, hch, dlemoal, hans.holmberg, Keith Busch; +Cc: Keith Busch
On Wed, 05 Nov 2025 17:54:43 -0800, Keith Busch wrote:
> The direct-io can work with arbitrary memory alignments, based on what
> the block device's queue limits report. This series enhances the
> null_blk driver by removing the software limitations that required
> block size memory and length alignment.
>
> Note, funny thing I noticed: this patch could allow null_blk to use
> byte aligned memory, but the queue limits doesn't have a way to express
> that. The smallest we can set the mask is 1, meaning 2-byte alignment,
> because setting the mask to 0 is overridden by the default 511 mask. I'm
> pretty sure at least some drivers are depending on the default, so can't
> really change that.
>
> [...]
Applied, thanks!
[1/4] null_blk: simplify copy_from_nullb
commit: 1165d20f4d1abba59ff2f032df271605ad49c255
[2/4] null_blk: consistently use blk_status_t
commit: 845928381963c61a537b932b6b3f494ce0ccea2d
[3/4] null_blk: single kmap per bio segment
commit: 262a3dd04e729386bececffeb095d31f7a9c43d5
[4/4] null_blk: allow byte aligned memory offsets
commit: 3451cf34f51bb70c24413abb20b423e64486161b
Best regards,
--
Jens Axboe