* [PATCH 00/23] ublk: add UBLK_F_BATCH_IO
@ 2025-09-01 10:02 Ming Lei
2025-09-01 10:02 ` [PATCH 01/23] ublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg() Ming Lei
` (22 more replies)
0 siblings, 23 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Hello,
This patchset adds the UBLK_F_BATCH_IO feature for communicating between the
kernel and the ublk server in a batched way:
- **Per-queue vs Per-I/O**: Commands operate on queues rather than individual I/Os
- **Batch processing**: Multiple I/Os are handled in a single operation
- **Multishot commands**: Use io_uring multishot uring_cmd to reduce submission overhead
- **Flexible task assignment**: Any task can handle any I/O (no per-I/O daemons)
- **Better load balancing**: Tasks can adjust their workload dynamically
- **Groundwork for future optimizations**:
  - blk-mq batch tag allocation/free
  - easier io-poll support
  - per-task batching to avoid per-io locks
Selftests for this feature are provided.
This patchset depends on uring_cmd multishot support, which has been merged
into for-6.18/io_uring.
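For a rough idea of how a per-queue ublk server loop looks with this feature,
here is a sketch; the queue_*/commit_*/handle_* helpers are purely
illustrative and are not part of this series or liburing, only the command
names come from this patchset:

  #include <liburing.h>

  /* illustrative helpers only -- not provided by this series or liburing */
  void queue_prep_cmds(struct io_uring *r, int ublk_ch_fd, unsigned q_id);
  void queue_fetch_cmd(struct io_uring *r, int ublk_ch_fd, unsigned q_id);
  void commit_done_ios(struct io_uring *r, int ublk_ch_fd, unsigned q_id);
  void handle_cqe(struct io_uring *r, struct io_uring_cqe *cqe);

  static void batch_queue_loop(struct io_uring *ring, int ublk_ch_fd,
                               unsigned q_id)
  {
          struct io_uring_cqe *cqe;

          /* one UBLK_U_IO_PREP_IO_CMDS covering all tags of this queue */
          queue_prep_cmds(ring, ublk_ch_fd, q_id);
          /* one (or more) multishot UBLK_U_IO_FETCH_IO_CMDS */
          queue_fetch_cmd(ring, ublk_ch_fd, q_id);
          io_uring_submit(ring);

          for (;;) {
                  if (io_uring_wait_cqe(ring, &cqe))
                          break;
                  /* fetch CQEs carry a provided buffer of request tags */
                  handle_cqe(ring, cqe);
                  io_uring_cqe_seen(ring, cqe);
                  /* completed IOs go back via UBLK_U_IO_COMMIT_IO_CMDS */
                  commit_done_ios(ring, ublk_ch_fd, q_id);
                  io_uring_submit(ring);
          }
  }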
Thanks,
Ming
Ming Lei (23):
ublk: add parameter `struct io_uring_cmd *` to
ublk_prep_auto_buf_reg()
ublk: add `union ublk_io_buf` with improved naming
ublk: refactor auto buffer register in ublk_dispatch_req()
ublk: add helper of __ublk_fetch()
ublk: define ublk_ch_batch_io_fops for the coming feature F_BATCH_IO
ublk: prepare for not tracking task context for command batch
ublk: add new batch command UBLK_U_IO_PREP_IO_CMDS &
UBLK_U_IO_COMMIT_IO_CMDS
ublk: handle UBLK_U_IO_PREP_IO_CMDS
ublk: handle UBLK_U_IO_COMMIT_IO_CMDS
ublk: add io events fifo structure
ublk: add batch I/O dispatch infrastructure
ublk: add UBLK_U_IO_FETCH_IO_CMDS for batch I/O processing
ublk: abort requests filled in event kfifo
ublk: add new feature UBLK_F_BATCH_IO
ublk: document feature UBLK_F_BATCH_IO
selftests: ublk: replace assert() with ublk_assert()
selftests: ublk: add ublk_io_buf_idx() for returning io buffer index
selftests: ublk: add batch buffer management infrastructure
selftests: ublk: handle UBLK_U_IO_PREP_IO_CMDS
selftests: ublk: handle UBLK_U_IO_COMMIT_IO_CMDS
selftests: ublk: handle UBLK_U_IO_FETCH_IO_CMDS
selftests: ublk: add --batch/-b for enabling F_BATCH_IO
selftests: ublk: support arbitrary threads/queues combination
Documentation/block/ublk.rst | 60 +-
drivers/block/ublk_drv.c | 1208 +++++++++++++++--
include/uapi/linux/ublk_cmd.h | 91 ++
tools/testing/selftests/ublk/Makefile | 7 +-
tools/testing/selftests/ublk/batch.c | 610 +++++++++
tools/testing/selftests/ublk/common.c | 2 +-
tools/testing/selftests/ublk/file_backed.c | 11 +-
tools/testing/selftests/ublk/kublk.c | 128 +-
tools/testing/selftests/ublk/kublk.h | 190 ++-
tools/testing/selftests/ublk/null.c | 18 +-
tools/testing/selftests/ublk/stripe.c | 17 +-
.../testing/selftests/ublk/test_generic_13.sh | 32 +
.../testing/selftests/ublk/test_generic_14.sh | 30 +
.../testing/selftests/ublk/test_generic_15.sh | 30 +
.../testing/selftests/ublk/test_stress_06.sh | 45 +
.../testing/selftests/ublk/test_stress_07.sh | 44 +
tools/testing/selftests/ublk/utils.h | 64 +
17 files changed, 2431 insertions(+), 156 deletions(-)
create mode 100644 tools/testing/selftests/ublk/batch.c
create mode 100755 tools/testing/selftests/ublk/test_generic_13.sh
create mode 100755 tools/testing/selftests/ublk/test_generic_14.sh
create mode 100755 tools/testing/selftests/ublk/test_generic_15.sh
create mode 100755 tools/testing/selftests/ublk/test_stress_06.sh
create mode 100755 tools/testing/selftests/ublk/test_stress_07.sh
--
2.47.0
* [PATCH 01/23] ublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg()
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-03 3:47 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 02/23] ublk: add `union ublk_io_buf` with improved naming Ming Lei
` (21 subsequent siblings)
22 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add a `struct io_uring_cmd *` parameter to ublk_prep_auto_buf_reg() to
prepare for reusing this helper for the coming UBLK_F_BATCH_IO feature,
which can fetch & commit one batch of io commands via a single uring_cmd.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 99abd67b708b..040528ad5d30 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1213,11 +1213,12 @@ ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, struct ublk_io *io)
}
static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
- struct ublk_io *io, unsigned int issue_flags)
+ struct ublk_io *io, struct io_uring_cmd *cmd,
+ unsigned int issue_flags)
{
int ret;
- ret = io_buffer_register_bvec(io->cmd, req, ublk_io_release,
+ ret = io_buffer_register_bvec(cmd, req, ublk_io_release,
io->buf.index, issue_flags);
if (ret) {
if (io->buf.flags & UBLK_AUTO_BUF_REG_FALLBACK) {
@@ -1229,18 +1230,19 @@ static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
}
io->task_registered_buffers = 1;
- io->buf_ctx_handle = io_uring_cmd_ctx_handle(io->cmd);
+ io->buf_ctx_handle = io_uring_cmd_ctx_handle(cmd);
io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG;
return true;
}
static bool ublk_prep_auto_buf_reg(struct ublk_queue *ubq,
struct request *req, struct ublk_io *io,
+ struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
ublk_init_req_ref(ubq, io);
if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req))
- return ublk_auto_buf_reg(ubq, req, io, issue_flags);
+ return ublk_auto_buf_reg(ubq, req, io, cmd, issue_flags);
return true;
}
@@ -1315,7 +1317,7 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
if (!ublk_start_io(ubq, req, io))
return;
- if (ublk_prep_auto_buf_reg(ubq, req, io, issue_flags))
+ if (ublk_prep_auto_buf_reg(ubq, req, io, io->cmd, issue_flags))
ublk_complete_io_cmd(io, req, UBLK_IO_RES_OK, issue_flags);
}
--
2.47.0
* [PATCH 02/23] ublk: add `union ublk_io_buf` with improved naming
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
2025-09-01 10:02 ` [PATCH 01/23] ublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg() Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-03 4:01 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 03/23] ublk: refactor auto buffer register in ublk_dispatch_req() Ming Lei
` (20 subsequent siblings)
22 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add `union ublk_io_buf` and apply it to `struct ublk_io` for storing either
the ublk auto buffer register data or the ublk server io buffer address.
The union uses clear field names:
- `addr`: for regular ublk server io buffer addresses
- `auto_reg`: for ublk auto buffer registration data
This eliminates confusing access patterns and improves code readability.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 40 ++++++++++++++++++++++------------------
1 file changed, 22 insertions(+), 18 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 040528ad5d30..9185978abeb7 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -155,12 +155,13 @@ struct ublk_uring_cmd_pdu {
*/
#define UBLK_REFCOUNT_INIT (REFCOUNT_MAX / 2)
+union ublk_io_buf {
+ __u64 addr;
+ struct ublk_auto_buf_reg auto_reg;
+};
+
struct ublk_io {
- /* userspace buffer address from io cmd */
- union {
- __u64 addr;
- struct ublk_auto_buf_reg buf;
- };
+ union ublk_io_buf buf;
unsigned int flags;
int res;
@@ -500,7 +501,7 @@ static blk_status_t ublk_setup_iod_zoned(struct ublk_queue *ubq,
iod->op_flags = ublk_op | ublk_req_build_flags(req);
iod->nr_sectors = blk_rq_sectors(req);
iod->start_sector = blk_rq_pos(req);
- iod->addr = io->addr;
+ iod->addr = io->buf.addr;
return BLK_STS_OK;
}
@@ -1012,7 +1013,7 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
struct iov_iter iter;
const int dir = ITER_DEST;
- import_ubuf(dir, u64_to_user_ptr(io->addr), rq_bytes, &iter);
+ import_ubuf(dir, u64_to_user_ptr(io->buf.addr), rq_bytes, &iter);
return ublk_copy_user_pages(req, 0, &iter, dir);
}
return rq_bytes;
@@ -1033,7 +1034,7 @@ static int ublk_unmap_io(const struct ublk_queue *ubq,
WARN_ON_ONCE(io->res > rq_bytes);
- import_ubuf(dir, u64_to_user_ptr(io->addr), io->res, &iter);
+ import_ubuf(dir, u64_to_user_ptr(io->buf.addr), io->res, &iter);
return ublk_copy_user_pages(req, 0, &iter, dir);
}
return rq_bytes;
@@ -1104,7 +1105,7 @@ static blk_status_t ublk_setup_iod(struct ublk_queue *ubq, struct request *req)
iod->op_flags = ublk_op | ublk_req_build_flags(req);
iod->nr_sectors = blk_rq_sectors(req);
iod->start_sector = blk_rq_pos(req);
- iod->addr = io->addr;
+ iod->addr = io->buf.addr;
return BLK_STS_OK;
}
@@ -1219,9 +1220,9 @@ static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
int ret;
ret = io_buffer_register_bvec(cmd, req, ublk_io_release,
- io->buf.index, issue_flags);
+ io->buf.auto_reg.index, issue_flags);
if (ret) {
- if (io->buf.flags & UBLK_AUTO_BUF_REG_FALLBACK) {
+ if (io->buf.auto_reg.flags & UBLK_AUTO_BUF_REG_FALLBACK) {
ublk_auto_buf_reg_fallback(ubq, io);
return true;
}
@@ -1513,7 +1514,7 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
*/
io->flags &= UBLK_IO_FLAG_CANCELED;
io->cmd = NULL;
- io->addr = 0;
+ io->buf.addr = 0;
/*
* old task is PF_EXITING, put it now
@@ -2007,13 +2008,16 @@ static inline int ublk_check_cmd_op(u32 cmd_op)
static inline int ublk_set_auto_buf_reg(struct ublk_io *io, struct io_uring_cmd *cmd)
{
- io->buf = ublk_sqe_addr_to_auto_buf_reg(READ_ONCE(cmd->sqe->addr));
+ struct ublk_auto_buf_reg buf;
+
+ buf = ublk_sqe_addr_to_auto_buf_reg(READ_ONCE(cmd->sqe->addr));
- if (io->buf.reserved0 || io->buf.reserved1)
+ if (buf.reserved0 || buf.reserved1)
return -EINVAL;
- if (io->buf.flags & ~UBLK_AUTO_BUF_REG_F_MASK)
+ if (buf.flags & ~UBLK_AUTO_BUF_REG_F_MASK)
return -EINVAL;
+ io->buf.auto_reg = buf;
return 0;
}
@@ -2035,7 +2039,7 @@ static int ublk_handle_auto_buf_reg(struct ublk_io *io,
* this ublk request gets stuck.
*/
if (io->buf_ctx_handle == io_uring_cmd_ctx_handle(cmd))
- *buf_idx = io->buf.index;
+ *buf_idx = io->buf.auto_reg.index;
}
return ublk_set_auto_buf_reg(io, cmd);
@@ -2063,7 +2067,7 @@ ublk_config_io_buf(const struct ublk_queue *ubq, struct ublk_io *io,
if (ublk_support_auto_buf_reg(ubq))
return ublk_handle_auto_buf_reg(io, cmd, buf_idx);
- io->addr = buf_addr;
+ io->buf.addr = buf_addr;
return 0;
}
@@ -2259,7 +2263,7 @@ static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io,
*/
io->flags &= ~UBLK_IO_FLAG_NEED_GET_DATA;
/* update iod->addr because ublksrv may have passed a new io buffer */
- ublk_get_iod(ubq, req->tag)->addr = io->addr;
+ ublk_get_iod(ubq, req->tag)->addr = io->buf.addr;
pr_devel("%s: update iod->addr: qid %d tag %d io_flags %x addr %llx\n",
__func__, ubq->q_id, req->tag, io->flags,
ublk_get_iod(ubq, req->tag)->addr);
--
2.47.0
* [PATCH 03/23] ublk: refactor auto buffer register in ublk_dispatch_req()
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
2025-09-01 10:02 ` [PATCH 01/23] ublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg() Ming Lei
2025-09-01 10:02 ` [PATCH 02/23] ublk: add `union ublk_io_buf` with improved naming Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-03 4:41 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 04/23] ublk: add helper of __ublk_fetch() Ming Lei
` (19 subsequent siblings)
22 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Refactor the auto buffer register code to prepare for supporting the batch
IO feature. The main motivation is to group the 'ublk_io' operations
together so that a per-io lock can be applied to that code block.
The key changes are:
- Rename ublk_auto_buf_reg() to ublk_do_auto_buf_reg()
- Introduce an enum `auto_buf_reg_res` to represent the result of
the buffer registration attempt (FAIL, FALLBACK, OK).
- Split the existing `ublk_do_auto_buf_reg` function into two:
- `__ublk_do_auto_buf_reg`: Performs the actual buffer registration
and returns the `auto_buf_reg_res` status.
- `ublk_do_auto_buf_reg`: A wrapper that calls the internal function
and handles the I/O preparation based on the result.
- Introduce `ublk_prep_auto_buf_reg_io` to encapsulate the logic for
preparing the I/O for completion after buffer registration.
- Pass the `tag` directly to `ublk_auto_buf_reg_fallback` to avoid
recalculating it.
This refactoring makes the control flow clearer and isolates the different
stages of the auto buffer registration process.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 65 +++++++++++++++++++++++++++-------------
1 file changed, 44 insertions(+), 21 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 9185978abeb7..e53f623b0efe 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1205,17 +1205,36 @@ static inline void __ublk_abort_rq(struct ublk_queue *ubq,
}
static void
-ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, struct ublk_io *io)
+ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, unsigned tag)
{
- unsigned tag = io - ubq->ios;
struct ublksrv_io_desc *iod = ublk_get_iod(ubq, tag);
iod->op_flags |= UBLK_IO_F_NEED_REG_BUF;
}
-static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
- struct ublk_io *io, struct io_uring_cmd *cmd,
- unsigned int issue_flags)
+enum auto_buf_reg_res {
+ AUTO_BUF_REG_FAIL,
+ AUTO_BUF_REG_FALLBACK,
+ AUTO_BUF_REG_OK,
+};
+
+static void ublk_prep_auto_buf_reg_io(const struct ublk_queue *ubq,
+ struct request *req, struct ublk_io *io,
+ struct io_uring_cmd *cmd, bool registered)
+{
+ if (registered) {
+ io->task_registered_buffers = 1;
+ io->buf_ctx_handle = io_uring_cmd_ctx_handle(cmd);
+ io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG;
+ }
+ ublk_init_req_ref(ubq, io);
+ __ublk_prep_compl_io_cmd(io, req);
+}
+
+static enum auto_buf_reg_res
+__ublk_do_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
+ struct ublk_io *io, struct io_uring_cmd *cmd,
+ unsigned int issue_flags)
{
int ret;
@@ -1223,29 +1242,27 @@ static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
io->buf.auto_reg.index, issue_flags);
if (ret) {
if (io->buf.auto_reg.flags & UBLK_AUTO_BUF_REG_FALLBACK) {
- ublk_auto_buf_reg_fallback(ubq, io);
- return true;
+ ublk_auto_buf_reg_fallback(ubq, req->tag);
+ return AUTO_BUF_REG_FALLBACK;
}
blk_mq_end_request(req, BLK_STS_IOERR);
- return false;
+ return AUTO_BUF_REG_FAIL;
}
- io->task_registered_buffers = 1;
- io->buf_ctx_handle = io_uring_cmd_ctx_handle(cmd);
- io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG;
- return true;
+ return AUTO_BUF_REG_OK;
}
-static bool ublk_prep_auto_buf_reg(struct ublk_queue *ubq,
- struct request *req, struct ublk_io *io,
- struct io_uring_cmd *cmd,
- unsigned int issue_flags)
+static void ublk_do_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
+ struct ublk_io *io, struct io_uring_cmd *cmd,
+ unsigned int issue_flags)
{
- ublk_init_req_ref(ubq, io);
- if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req))
- return ublk_auto_buf_reg(ubq, req, io, cmd, issue_flags);
+ enum auto_buf_reg_res res = __ublk_do_auto_buf_reg(ubq, req, io, cmd,
+ issue_flags);
- return true;
+ if (res != AUTO_BUF_REG_FAIL) {
+ ublk_prep_auto_buf_reg_io(ubq, req, io, cmd, res == AUTO_BUF_REG_OK);
+ io_uring_cmd_done(cmd, UBLK_IO_RES_OK, 0, issue_flags);
+ }
}
static bool ublk_start_io(const struct ublk_queue *ubq, struct request *req,
@@ -1318,8 +1335,14 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
if (!ublk_start_io(ubq, req, io))
return;
- if (ublk_prep_auto_buf_reg(ubq, req, io, io->cmd, issue_flags))
+ if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req)) {
+ struct io_uring_cmd *cmd = io->cmd;
+
+ ublk_do_auto_buf_reg(ubq, req, io, cmd, issue_flags);
+ } else {
+ ublk_init_req_ref(ubq, io);
ublk_complete_io_cmd(io, req, UBLK_IO_RES_OK, issue_flags);
+ }
}
static void ublk_cmd_tw_cb(struct io_uring_cmd *cmd,
--
2.47.0
* [PATCH 04/23] ublk: add helper of __ublk_fetch()
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (2 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 03/23] ublk: refactor auto buffer register in ublk_dispatch_req() Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-03 4:42 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 05/23] ublk: define ublk_ch_batch_io_fops for the coming feature F_BATCH_IO Ming Lei
` (18 subsequent siblings)
22 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add the helper __ublk_fetch() for the coming batch io feature.
Meanwhile, move ublk_config_io_buf() out of __ublk_fetch() because batch io
has a new interface for configuring buffers.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 31 ++++++++++++++++++++-----------
1 file changed, 20 insertions(+), 11 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index e53f623b0efe..f265795a8d57 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2206,18 +2206,12 @@ static int ublk_check_fetch_buf(const struct ublk_queue *ubq, __u64 buf_addr)
return 0;
}
-static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
- struct ublk_io *io, __u64 buf_addr)
+static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
+ struct ublk_io *io)
{
struct ublk_device *ub = ubq->dev;
int ret = 0;
- /*
- * When handling FETCH command for setting up ublk uring queue,
- * ub->mutex is the innermost lock, and we won't block for handling
- * FETCH, so it is fine even for IO_URING_F_NONBLOCK.
- */
- mutex_lock(&ub->mutex);
/* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
if (ublk_queue_ready(ubq)) {
ret = -EBUSY;
@@ -2233,13 +2227,28 @@ static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV);
ublk_fill_io_cmd(io, cmd);
- ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
- if (ret)
- goto out;
WRITE_ONCE(io->task, get_task_struct(current));
ublk_mark_io_ready(ub, ubq);
out:
+ return ret;
+}
+
+static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
+ struct ublk_io *io, __u64 buf_addr)
+{
+ struct ublk_device *ub = ubq->dev;
+ int ret;
+
+ /*
+ * When handling FETCH command for setting up ublk uring queue,
+ * ub->mutex is the innermost lock, and we won't block for handling
+ * FETCH, so it is fine even for IO_URING_F_NONBLOCK.
+ */
+ mutex_lock(&ub->mutex);
+ ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
+ if (!ret)
+ ret = __ublk_fetch(cmd, ubq, io);
mutex_unlock(&ub->mutex);
return ret;
}
--
2.47.0
* [PATCH 05/23] ublk: define ublk_ch_batch_io_fops for the coming feature F_BATCH_IO
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (3 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 04/23] ublk: add helper of __ublk_fetch() Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-06 18:47 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 06/23] ublk: prepare for not tracking task context for command batch Ming Lei
` (17 subsequent siblings)
22 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Introduce the basic structure for the batched I/O feature in the ublk
driver. Add placeholder functions and a new file_operations structure,
ublk_ch_batch_io_fops, which will be used for fetching and committing I/O
commands in batches. Currently the feature is disabled and its uring_cmd
handler returns -EOPNOTSUPP.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index f265795a8d57..a0dfad8a56f0 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -256,6 +256,11 @@ static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
size_t offset);
static inline unsigned int ublk_req_build_flags(struct request *req);
+static inline bool ublk_dev_support_batch_io(const struct ublk_device *ub)
+{
+ return false;
+}
+
static inline struct ublksrv_io_desc *
ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
{
@@ -2509,6 +2514,12 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
return ublk_ch_uring_cmd_local(cmd, issue_flags);
}
+static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
+ unsigned int issue_flags)
+{
+ return -EOPNOTSUPP;
+}
+
static inline bool ublk_check_ubuf_dir(const struct request *req,
int ubuf_dir)
{
@@ -2624,6 +2635,16 @@ static const struct file_operations ublk_ch_fops = {
.mmap = ublk_ch_mmap,
};
+static const struct file_operations ublk_ch_batch_io_fops = {
+ .owner = THIS_MODULE,
+ .open = ublk_ch_open,
+ .release = ublk_ch_release,
+ .read_iter = ublk_ch_read_iter,
+ .write_iter = ublk_ch_write_iter,
+ .uring_cmd = ublk_ch_batch_io_uring_cmd,
+ .mmap = ublk_ch_mmap,
+};
+
static void ublk_deinit_queue(struct ublk_device *ub, int q_id)
{
int size = ublk_queue_cmd_buf_size(ub, q_id);
@@ -2761,7 +2782,10 @@ static int ublk_add_chdev(struct ublk_device *ub)
if (ret)
goto fail;
- cdev_init(&ub->cdev, &ublk_ch_fops);
+ if (ublk_dev_support_batch_io(ub))
+ cdev_init(&ub->cdev, &ublk_ch_batch_io_fops);
+ else
+ cdev_init(&ub->cdev, &ublk_ch_fops);
ret = cdev_device_add(&ub->cdev, dev);
if (ret)
goto fail;
--
2.47.0
* [PATCH 06/23] ublk: prepare for not tracking task context for command batch
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (4 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 05/23] ublk: define ublk_ch_batch_io_fops for the coming feature F_BATCH_IO Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-06 18:48 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 07/23] ublk: add new batch command UBLK_U_IO_PREP_IO_CMDS & UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
` (16 subsequent siblings)
22 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Batch io is designed to be independent of task context, so task context will
not be tracked for the batch io feature.
Warn if a batch-io queue ever reaches the existing task-context-dependent
code paths.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index a0dfad8a56f0..46be5b656f22 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -261,6 +261,11 @@ static inline bool ublk_dev_support_batch_io(const struct ublk_device *ub)
return false;
}
+static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
+{
+ return false;
+}
+
static inline struct ublksrv_io_desc *
ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
{
@@ -1309,6 +1314,8 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
__func__, ubq->q_id, req->tag, io->flags,
ublk_get_iod(ubq, req->tag)->addr);
+ WARN_ON_ONCE(ublk_support_batch_io(ubq));
+
/*
* Task is exiting if either:
*
@@ -1868,6 +1875,8 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
if (WARN_ON_ONCE(pdu->tag >= ubq->q_depth))
return;
+ WARN_ON_ONCE(ublk_support_batch_io(ubq));
+
task = io_uring_cmd_get_task(cmd);
io = &ubq->ios[pdu->tag];
if (WARN_ON_ONCE(task && task != io->task))
@@ -2233,7 +2242,10 @@ static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
ublk_fill_io_cmd(io, cmd);
- WRITE_ONCE(io->task, get_task_struct(current));
+ if (ublk_support_batch_io(ubq))
+ WRITE_ONCE(io->task, NULL);
+ else
+ WRITE_ONCE(io->task, get_task_struct(current));
ublk_mark_io_ready(ub, ubq);
out:
return ret;
@@ -2347,6 +2359,8 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
if (tag >= ubq->q_depth)
goto out;
+ WARN_ON_ONCE(ublk_support_batch_io(ubq));
+
io = &ubq->ios[tag];
/* UBLK_IO_FETCH_REQ can be handled on any task, which sets io->task */
if (unlikely(_IOC_NR(cmd_op) == UBLK_IO_FETCH_REQ)) {
--
2.47.0
* [PATCH 07/23] ublk: add new batch command UBLK_U_IO_PREP_IO_CMDS & UBLK_U_IO_COMMIT_IO_CMDS
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (5 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 06/23] ublk: prepare for not tracking task context for command batch Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-06 18:50 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS Ming Lei
` (15 subsequent siblings)
22 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add the new command UBLK_U_IO_PREP_IO_CMDS, which is the batch version of
UBLK_IO_FETCH_REQ.
Add the new command UBLK_U_IO_COMMIT_IO_CMDS, which is for committing io
command results only, again in batch form.
The new command header type is `struct ublk_batch_io`, and a fixed buffer is
required for these two uring_cmds.
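For reference, a minimal sketch of how the ublk server side could fill the
SQE for one of these commands; the field usage is inferred from the
kernel-side checks in this patch (fixed buffer, `sqe->len` covering
`nr_elem * elem_bytes`), so treat it as an assumption rather than a final
ABI description, and it needs the updated <linux/ublk_cmd.h> from this
series:

  #include <liburing.h>
  #include <linux/ublk_cmd.h>
  #include <stdint.h>
  #include <string.h>

  /*
   * Sketch: prepare UBLK_U_IO_PREP_IO_CMDS for queue 'q_id'.  The element
   * buffer 'buf' must be a registered (fixed) buffer with registration
   * index 'buf_index'.  'elem_bytes' has to match 'flags': 8 bytes for the
   * bare header, +8 with UBLK_BATCH_F_HAS_BUF_ADDR, +8 with
   * UBLK_BATCH_F_HAS_ZONE_LBA.
   */
  static void sketch_prep_batch_sqe(struct io_uring_sqe *sqe, int ublk_ch_fd,
                                    void *buf, unsigned buf_index,
                                    __u16 q_id, __u16 nr_elem,
                                    __u16 flags, __u8 elem_bytes)
  {
          struct ublk_batch_io *uc;

          memset(sqe, 0, sizeof(*sqe));
          sqe->opcode = IORING_OP_URING_CMD;
          sqe->fd = ublk_ch_fd;
          sqe->cmd_op = UBLK_U_IO_PREP_IO_CMDS;
          /* both batch commands require a fixed buffer */
          sqe->uring_cmd_flags = IORING_URING_CMD_FIXED;
          sqe->buf_index = buf_index;
          sqe->addr = (__u64)(uintptr_t)buf;
          sqe->len = nr_elem * elem_bytes;

          uc = (struct ublk_batch_io *)sqe->cmd;
          uc->q_id = q_id;
          uc->flags = flags;
          uc->nr_elem = nr_elem;
          uc->elem_bytes = elem_bytes;
  }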
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 102 +++++++++++++++++++++++++++++++++-
include/uapi/linux/ublk_cmd.h | 49 ++++++++++++++++
2 files changed, 149 insertions(+), 2 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 46be5b656f22..4da0dbbd7e16 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -85,6 +85,11 @@
UBLK_PARAM_TYPE_DEVT | UBLK_PARAM_TYPE_ZONED | \
UBLK_PARAM_TYPE_DMA_ALIGN | UBLK_PARAM_TYPE_SEGMENT)
+#define UBLK_BATCH_F_ALL \
+ (UBLK_BATCH_F_HAS_ZONE_LBA | \
+ UBLK_BATCH_F_HAS_BUF_ADDR | \
+ UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK)
+
struct ublk_uring_cmd_pdu {
/*
* Store requests in same batch temporarily for queuing them to
@@ -108,6 +113,11 @@ struct ublk_uring_cmd_pdu {
u16 tag;
};
+struct ublk_batch_io_data {
+ struct ublk_queue *ubq;
+ struct io_uring_cmd *cmd;
+};
+
/*
* io command is active: sqe cmd is received, and its cqe isn't done
*
@@ -277,7 +287,7 @@ static inline bool ublk_dev_is_zoned(const struct ublk_device *ub)
return ub->dev_info.flags & UBLK_F_ZONED;
}
-static inline bool ublk_queue_is_zoned(struct ublk_queue *ubq)
+static inline bool ublk_queue_is_zoned(const struct ublk_queue *ubq)
{
return ubq->flags & UBLK_F_ZONED;
}
@@ -2528,10 +2538,98 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
return ublk_ch_uring_cmd_local(cmd, issue_flags);
}
+static int ublk_check_batch_cmd_flags(const struct ublk_batch_io *uc)
+{
+ const unsigned short mask = UBLK_BATCH_F_HAS_BUF_ADDR |
+ UBLK_BATCH_F_HAS_ZONE_LBA;
+
+ if (uc->flags & ~UBLK_BATCH_F_ALL)
+ return -EINVAL;
+
+ /* UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK requires buffer index */
+ if ((uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) &&
+ (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR))
+ return -EINVAL;
+
+ switch (uc->flags & mask) {
+ case 0:
+ if (uc->elem_bytes != 8)
+ return -EINVAL;
+ break;
+ case UBLK_BATCH_F_HAS_ZONE_LBA:
+ case UBLK_BATCH_F_HAS_BUF_ADDR:
+ if (uc->elem_bytes != 8 + 8)
+ return -EINVAL;
+ break;
+ case UBLK_BATCH_F_HAS_ZONE_LBA | UBLK_BATCH_F_HAS_BUF_ADDR:
+ if (uc->elem_bytes != 8 + 8 + 8)
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int ublk_check_batch_cmd(const struct ublk_batch_io_data *data,
+ const struct ublk_batch_io *uc)
+{
+ if (!(data->cmd->flags & IORING_URING_CMD_FIXED))
+ return -EINVAL;
+
+ if (uc->nr_elem * uc->elem_bytes > data->cmd->sqe->len)
+ return -E2BIG;
+
+ if (uc->nr_elem > data->ubq->q_depth)
+ return -E2BIG;
+
+ if ((uc->flags & UBLK_BATCH_F_HAS_ZONE_LBA) &&
+ !ublk_queue_is_zoned(data->ubq))
+ return -EINVAL;
+
+ if ((uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR) &&
+ !ublk_need_map_io(data->ubq))
+ return -EINVAL;
+
+ if ((uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) &&
+ !ublk_support_auto_buf_reg(data->ubq))
+ return -EINVAL;
+
+ if (uc->reserved || uc->reserved2)
+ return -EINVAL;
+
+ return ublk_check_batch_cmd_flags(uc);
+}
+
static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
- return -EOPNOTSUPP;
+ const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
+ struct ublk_device *ub = cmd->file->private_data;
+ struct ublk_batch_io_data data = {
+ .cmd = cmd,
+ };
+ u32 cmd_op = cmd->cmd_op;
+ int ret = -EINVAL;
+
+ if (uc->q_id >= ub->dev_info.nr_hw_queues)
+ goto out;
+ data.ubq = ublk_get_queue(ub, uc->q_id);
+
+ switch (cmd_op) {
+ case UBLK_U_IO_PREP_IO_CMDS:
+ case UBLK_U_IO_COMMIT_IO_CMDS:
+ ret = ublk_check_batch_cmd(&data, uc);
+ if (ret)
+ goto out;
+ ret = -EOPNOTSUPP;
+ break;
+ default:
+ ret = -EOPNOTSUPP;
+ }
+out:
+ return ret;
}
static inline bool ublk_check_ubuf_dir(const struct request *req,
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index ec77dabba45b..01d3af52cfb4 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -102,6 +102,10 @@
_IOWR('u', 0x23, struct ublksrv_io_cmd)
#define UBLK_U_IO_UNREGISTER_IO_BUF \
_IOWR('u', 0x24, struct ublksrv_io_cmd)
+#define UBLK_U_IO_PREP_IO_CMDS \
+ _IOWR('u', 0x25, struct ublk_batch_io)
+#define UBLK_U_IO_COMMIT_IO_CMDS \
+ _IOWR('u', 0x26, struct ublk_batch_io)
/* only ABORT means that no re-fetch */
#define UBLK_IO_RES_OK 0
@@ -525,6 +529,51 @@ struct ublksrv_io_cmd {
};
};
+struct ublk_elem_header {
+ __u16 tag; /* IO tag */
+
+ /*
+ * Buffer index for incoming io command, only valid iff
+ * UBLK_F_AUTO_BUF_REG is set
+ */
+ __u16 buf_index;
+ __u32 result; /* I/O completion result (commit only) */
+};
+
+/*
+ * uring_cmd buffer structure
+ *
+ * buffer includes multiple elements, which number is specified by
+ * `nr_elem`. Each element buffer is organized in the following order:
+ *
+ * struct ublk_elem_buffer {
+ * // Mandatory fields (8 bytes)
+ * struct ublk_elem_header header;
+ *
+ * // Optional fields (8 bytes each, included based on flags)
+ *
+ * // Buffer address (if UBLK_BATCH_F_HAS_BUF_ADDR) for copying data
+ * // between ublk request and ublk server buffer
+ * __u64 buf_addr;
+ *
+ * // returned Zone append LBA (if UBLK_BATCH_F_HAS_ZONE_LBA)
+ * __u64 zone_lba;
+ * }
+ *
+ * Used for `UBLK_U_IO_PREP_IO_CMDS` and `UBLK_U_IO_COMMIT_IO_CMDS`
+ */
+struct ublk_batch_io {
+ __u16 q_id;
+#define UBLK_BATCH_F_HAS_ZONE_LBA (1 << 0)
+#define UBLK_BATCH_F_HAS_BUF_ADDR (1 << 1)
+#define UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK (1 << 2)
+ __u16 flags;
+ __u16 nr_elem;
+ __u8 elem_bytes;
+ __u8 reserved;
+ __u64 reserved2;
+};
+
struct ublk_param_basic {
#define UBLK_ATTR_READ_ONLY (1 << 0)
#define UBLK_ATTR_ROTATIONAL (1 << 1)
--
2.47.0
* [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (6 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 07/23] ublk: add new batch command UBLK_U_IO_PREP_IO_CMDS & UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-06 19:48 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 09/23] ublk: handle UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
` (14 subsequent siblings)
22 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Implement handling of the UBLK_U_IO_PREP_IO_CMDS command, which allows
userspace to prepare a batch of I/O requests.
The core of this change is the `ublk_walk_cmd_buf` function, which iterates
over the elements in the uring_cmd fixed buffer. For each element, it parses
the I/O details, finds the corresponding `ublk_io` structure, and prepares it
for future dispatch.
Add a per-io lock to protect against concurrent delivery and committing.
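As an illustration of the element layout that `ublk_walk_cmd_buf` expects,
here is a hedged sketch of how a server might encode one prep element;
sketch_encode_prep_elem() is a hypothetical helper, and the layout follows
the `ublk_elem_buffer` description added in the previous patch:

  #include <linux/ublk_cmd.h>
  #include <string.h>

  /*
   * Sketch: encode one UBLK_U_IO_PREP_IO_CMDS element at 'dst'.  The
   * ublk_elem_header comes first; an 8-byte buffer address follows only
   * when UBLK_BATCH_F_HAS_BUF_ADDR is set.  Returns the bytes written,
   * which must equal the 'elem_bytes' advertised in struct ublk_batch_io.
   */
  static unsigned sketch_encode_prep_elem(unsigned char *dst, __u16 tag,
                                          __u16 buf_index, __u16 batch_flags,
                                          __u64 buf_addr)
  {
          struct ublk_elem_header hdr = {
                  .tag = tag,
                  .buf_index = buf_index, /* only used with UBLK_F_AUTO_BUF_REG */
                  .result = 0,            /* ignored for prep */
          };
          unsigned off = sizeof(hdr);

          memcpy(dst, &hdr, sizeof(hdr));
          if (batch_flags & UBLK_BATCH_F_HAS_BUF_ADDR) {
                  memcpy(dst + off, &buf_addr, sizeof(buf_addr));
                  off += sizeof(buf_addr);
          }
          return off;
  }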
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 191 +++++++++++++++++++++++++++++++++-
include/uapi/linux/ublk_cmd.h | 5 +
2 files changed, 195 insertions(+), 1 deletion(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 4da0dbbd7e16..a4bae3d1562a 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -116,6 +116,10 @@ struct ublk_uring_cmd_pdu {
struct ublk_batch_io_data {
struct ublk_queue *ubq;
struct io_uring_cmd *cmd;
+ unsigned int issue_flags;
+
+ /* set when walking the element buffer */
+ const struct ublk_elem_header *elem;
};
/*
@@ -200,6 +204,7 @@ struct ublk_io {
unsigned task_registered_buffers;
void *buf_ctx_handle;
+ spinlock_t lock;
} ____cacheline_aligned_in_smp;
struct ublk_queue {
@@ -276,6 +281,16 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
return false;
}
+static inline void ublk_io_lock(struct ublk_io *io)
+{
+ spin_lock(&io->lock);
+}
+
+static inline void ublk_io_unlock(struct ublk_io *io)
+{
+ spin_unlock(&io->lock);
+}
+
static inline struct ublksrv_io_desc *
ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
{
@@ -2538,6 +2553,171 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
return ublk_ch_uring_cmd_local(cmd, issue_flags);
}
+static inline __u64 ublk_batch_buf_addr(const struct ublk_batch_io *uc,
+ const struct ublk_elem_header *elem)
+{
+ const void *buf = (const void *)elem;
+
+ if (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR)
+ return *(__u64 *)(buf + sizeof(*elem));
+ return -1;
+}
+
+static struct ublk_auto_buf_reg
+ublk_batch_auto_buf_reg(const struct ublk_batch_io *uc,
+ const struct ublk_elem_header *elem)
+{
+ struct ublk_auto_buf_reg reg = {
+ .index = elem->buf_index,
+ .flags = (uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) ?
+ UBLK_AUTO_BUF_REG_FALLBACK : 0,
+ };
+
+ return reg;
+}
+
+/* 48 can cover any type of buffer element(8, 16 and 24 bytes) */
+#define UBLK_CMD_BATCH_TMP_BUF_SZ (48 * 10)
+struct ublk_batch_io_iter {
+ /* copy to this buffer from iterator first */
+ unsigned char buf[UBLK_CMD_BATCH_TMP_BUF_SZ];
+ struct iov_iter iter;
+ unsigned done, total;
+ unsigned char elem_bytes;
+};
+
+static int __ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
+ struct ublk_batch_io_data *data,
+ unsigned bytes,
+ int (*cb)(struct ublk_io *io,
+ const struct ublk_batch_io_data *data))
+{
+ int i, ret = 0;
+
+ for (i = 0; i < bytes; i += iter->elem_bytes) {
+ const struct ublk_elem_header *elem =
+ (const struct ublk_elem_header *)&iter->buf[i];
+ struct ublk_io *io;
+
+ if (unlikely(elem->tag >= data->ubq->q_depth)) {
+ ret = -EINVAL;
+ break;
+ }
+
+ io = &data->ubq->ios[elem->tag];
+ data->elem = elem;
+ ret = cb(io, data);
+ if (unlikely(ret))
+ break;
+ }
+ iter->done += i;
+ return ret;
+}
+
+static int ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
+ struct ublk_batch_io_data *data,
+ int (*cb)(struct ublk_io *io,
+ const struct ublk_batch_io_data *data))
+{
+ int ret = 0;
+
+ while (iter->done < iter->total) {
+ unsigned int len = min(sizeof(iter->buf), iter->total - iter->done);
+
+ ret = copy_from_iter(iter->buf, len, &iter->iter);
+ if (ret != len) {
+ pr_warn("ublk%d: read batch cmd buffer failed %u/%u\n",
+ data->ubq->dev->dev_info.dev_id, ret, len);
+ ret = -EINVAL;
+ break;
+ }
+
+ ret = __ublk_walk_cmd_buf(iter, data, len, cb);
+ if (ret)
+ break;
+ }
+ return ret;
+}
+
+static int ublk_batch_unprep_io(struct ublk_io *io,
+ const struct ublk_batch_io_data *data)
+{
+ if (ublk_queue_ready(data->ubq))
+ data->ubq->dev->nr_queues_ready--;
+
+ ublk_io_lock(io);
+ io->flags = 0;
+ ublk_io_unlock(io);
+ data->ubq->nr_io_ready--;
+ return 0;
+}
+
+static void ublk_batch_revert_prep_cmd(struct ublk_batch_io_iter *iter,
+ struct ublk_batch_io_data *data)
+{
+ int ret;
+
+ if (!iter->done)
+ return;
+
+ iov_iter_revert(&iter->iter, iter->done);
+ iter->total = iter->done;
+ iter->done = 0;
+
+ ret = ublk_walk_cmd_buf(iter, data, ublk_batch_unprep_io);
+ WARN_ON_ONCE(ret);
+}
+
+static int ublk_batch_prep_io(struct ublk_io *io,
+ const struct ublk_batch_io_data *data)
+{
+ const struct ublk_batch_io *uc = io_uring_sqe_cmd(data->cmd->sqe);
+ union ublk_io_buf buf = { 0 };
+ int ret;
+
+ if (ublk_support_auto_buf_reg(data->ubq))
+ buf.auto_reg = ublk_batch_auto_buf_reg(uc, data->elem);
+ else if (ublk_need_map_io(data->ubq)) {
+ buf.addr = ublk_batch_buf_addr(uc, data->elem);
+
+ ret = ublk_check_fetch_buf(data->ubq, buf.addr);
+ if (ret)
+ return ret;
+ }
+
+ ublk_io_lock(io);
+ ret = __ublk_fetch(data->cmd, data->ubq, io);
+ if (!ret)
+ io->buf = buf;
+ ublk_io_unlock(io);
+
+ return ret;
+}
+
+static int ublk_handle_batch_prep_cmd(struct ublk_batch_io_data *data)
+{
+ struct io_uring_cmd *cmd = data->cmd;
+ const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
+ struct ublk_batch_io_iter iter = {
+ .total = uc->nr_elem * uc->elem_bytes,
+ .elem_bytes = uc->elem_bytes,
+ };
+ int ret;
+
+ ret = io_uring_cmd_import_fixed(cmd->sqe->addr, cmd->sqe->len,
+ WRITE, &iter.iter, cmd, data->issue_flags);
+ if (ret)
+ return ret;
+
+ mutex_lock(&data->ubq->dev->mutex);
+ ret = ublk_walk_cmd_buf(&iter, data, ublk_batch_prep_io);
+
+ if (ret && iter.done)
+ ublk_batch_revert_prep_cmd(&iter, data);
+ mutex_unlock(&data->ubq->dev->mutex);
+ return ret;
+}
+
static int ublk_check_batch_cmd_flags(const struct ublk_batch_io *uc)
{
const unsigned short mask = UBLK_BATCH_F_HAS_BUF_ADDR |
@@ -2609,6 +2789,7 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
struct ublk_device *ub = cmd->file->private_data;
struct ublk_batch_io_data data = {
.cmd = cmd,
+ .issue_flags = issue_flags,
};
u32 cmd_op = cmd->cmd_op;
int ret = -EINVAL;
@@ -2619,6 +2800,11 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
switch (cmd_op) {
case UBLK_U_IO_PREP_IO_CMDS:
+ ret = ublk_check_batch_cmd(&data, uc);
+ if (ret)
+ goto out;
+ ret = ublk_handle_batch_prep_cmd(&data);
+ break;
case UBLK_U_IO_COMMIT_IO_CMDS:
ret = ublk_check_batch_cmd(&data, uc);
if (ret)
@@ -2780,7 +2966,7 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO;
void *ptr;
- int size;
+ int size, i;
spin_lock_init(&ubq->cancel_lock);
ubq->flags = ub->dev_info.flags;
@@ -2792,6 +2978,9 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
if (!ptr)
return -ENOMEM;
+ for (i = 0; i < ubq->q_depth; i++)
+ spin_lock_init(&ubq->ios[i].lock);
+
ubq->io_cmd_buf = ptr;
ubq->dev = ub;
return 0;
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index 01d3af52cfb4..38c8cc10d694 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -102,6 +102,11 @@
_IOWR('u', 0x23, struct ublksrv_io_cmd)
#define UBLK_U_IO_UNREGISTER_IO_BUF \
_IOWR('u', 0x24, struct ublksrv_io_cmd)
+
+/*
+ * return 0 if the command is run successfully, otherwise failure code
+ * is returned
+ */
#define UBLK_U_IO_PREP_IO_CMDS \
_IOWR('u', 0x25, struct ublk_batch_io)
#define UBLK_U_IO_COMMIT_IO_CMDS \
--
2.47.0
* [PATCH 09/23] ublk: handle UBLK_U_IO_COMMIT_IO_CMDS
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (7 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-02 6:19 ` kernel test robot
2025-09-01 10:02 ` [PATCH 10/23] ublk: add io events fifo structure Ming Lei
` (13 subsequent siblings)
22 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Handle UBLK_U_IO_COMMIT_IO_CMDS by walking the uring_cmd fixed buffer:
- read elements into a temporary buffer in batches
- parse each element and apply it to commit the io result
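Since the return value may report partial progress (see the uapi comment
below), the ublk server has to be prepared to re-submit the tail of the
buffer. A hedged sketch of that handling, assuming the CQE result mirrors
the uring_cmd return value:

  #include <liburing.h>
  #include <errno.h>

  /*
   * Sketch: interpret the CQE of UBLK_U_IO_COMMIT_IO_CMDS.  A negative
   * result means nothing was committed; otherwise the result is the number
   * of handled bytes, and any remaining elements must be committed again.
   */
  static int sketch_handle_commit_cqe(const struct io_uring_cqe *cqe,
                                      unsigned elem_bytes, unsigned nr_elem,
                                      unsigned *nr_committed)
  {
          if (cqe->res < 0) {
                  *nr_committed = 0;
                  return cqe->res;        /* whole batch failed */
          }
          *nr_committed = cqe->res / elem_bytes;
          if (*nr_committed < nr_elem)
                  return -EAGAIN;         /* partial: re-submit the rest */
          return 0;
  }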
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 120 ++++++++++++++++++++++++++++++++--
include/uapi/linux/ublk_cmd.h | 8 +++
2 files changed, 124 insertions(+), 4 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index a4bae3d1562a..fae016b67254 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2083,9 +2083,9 @@ static inline int ublk_set_auto_buf_reg(struct ublk_io *io, struct io_uring_cmd
return 0;
}
-static int ublk_handle_auto_buf_reg(struct ublk_io *io,
- struct io_uring_cmd *cmd,
- u16 *buf_idx)
+static void __ublk_handle_auto_buf_reg(struct ublk_io *io,
+ struct io_uring_cmd *cmd,
+ u16 *buf_idx)
{
if (io->flags & UBLK_IO_FLAG_AUTO_BUF_REG) {
io->flags &= ~UBLK_IO_FLAG_AUTO_BUF_REG;
@@ -2103,7 +2103,13 @@ static int ublk_handle_auto_buf_reg(struct ublk_io *io,
if (io->buf_ctx_handle == io_uring_cmd_ctx_handle(cmd))
*buf_idx = io->buf.auto_reg.index;
}
+}
+static int ublk_handle_auto_buf_reg(struct ublk_io *io,
+ struct io_uring_cmd *cmd,
+ u16 *buf_idx)
+{
+ __ublk_handle_auto_buf_reg(io, cmd, buf_idx);
return ublk_set_auto_buf_reg(io, cmd);
}
@@ -2563,6 +2569,17 @@ static inline __u64 ublk_batch_buf_addr(const struct ublk_batch_io *uc,
return -1;
}
+static inline __u64 ublk_batch_zone_lba(const struct ublk_batch_io *uc,
+ const struct ublk_elem_header *elem)
+{
+ const void *buf = (const void *)elem;
+
+ if (uc->flags & UBLK_BATCH_F_HAS_ZONE_LBA)
+ return *(__u64 *)(buf + sizeof(*elem) +
+ 8 * !!(uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR));
+ return -1;
+}
+
static struct ublk_auto_buf_reg
ublk_batch_auto_buf_reg(const struct ublk_batch_io *uc,
const struct ublk_elem_header *elem)
@@ -2718,6 +2735,101 @@ static int ublk_handle_batch_prep_cmd(struct ublk_batch_io_data *data)
return ret;
}
+static int ublk_batch_commit_io_check(const struct ublk_queue *ubq,
+ struct ublk_io *io,
+ union ublk_io_buf *buf)
+{
+ struct request *req = io->req;
+
+ if (!req)
+ return -EINVAL;
+
+ if (io->flags & UBLK_IO_FLAG_ACTIVE)
+ return -EBUSY;
+
+ if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
+ return -EINVAL;
+
+ if (ublk_need_map_io(ubq)) {
+ /*
+ * COMMIT_AND_FETCH_REQ has to provide IO buffer if
+ * NEED GET DATA is not enabled or it is Read IO.
+ */
+ if (!buf->addr && (!ublk_need_get_data(ubq) ||
+ req_op(req) == REQ_OP_READ))
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int ublk_batch_commit_io(struct ublk_io *io,
+ const struct ublk_batch_io_data *data)
+{
+ const struct ublk_batch_io *uc = io_uring_sqe_cmd(data->cmd->sqe);
+ struct ublk_queue *ubq = data->ubq;
+ u16 buf_idx = UBLK_INVALID_BUF_IDX;
+ union ublk_io_buf buf = { 0 };
+ struct request *req = NULL;
+ bool auto_reg = false;
+ bool compl = false;
+ int ret;
+
+ if (ublk_support_auto_buf_reg(data->ubq)) {
+ buf.auto_reg = ublk_batch_auto_buf_reg(uc, data->elem);
+ auto_reg = true;
+ } else if (ublk_need_map_io(data->ubq))
+ buf.addr = ublk_batch_buf_addr(uc, data->elem);
+
+ ublk_io_lock(io);
+ ret = ublk_batch_commit_io_check(ubq, io, &buf);
+ if (!ret) {
+ io->res = data->elem->result;
+ io->buf = buf;
+ req = ublk_fill_io_cmd(io, data->cmd);
+
+ if (auto_reg)
+ __ublk_handle_auto_buf_reg(io, data->cmd, &buf_idx);
+ compl = ublk_need_complete_req(ubq, io);
+ }
+ ublk_io_unlock(io);
+
+ if (unlikely(ret)) {
+ pr_warn("%s: dev %u queue %u io %ld: commit failure %d\n",
+ __func__, ubq->dev->dev_info.dev_id, ubq->q_id,
+ io - ubq->ios, ret);
+ return ret;
+ }
+
+ /* can't touch 'ublk_io' any more */
+ if (buf_idx != UBLK_INVALID_BUF_IDX)
+ io_buffer_unregister_bvec(data->cmd, buf_idx, data->issue_flags);
+ if (req_op(req) == REQ_OP_ZONE_APPEND)
+ req->__sector = ublk_batch_zone_lba(uc, data->elem);
+ if (compl)
+ __ublk_complete_rq(req);
+ return 0;
+}
+
+static int ublk_handle_batch_commit_cmd(struct ublk_batch_io_data *data)
+{
+ struct io_uring_cmd *cmd = data->cmd;
+ const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
+ struct ublk_batch_io_iter iter = {
+ .total = uc->nr_elem * uc->elem_bytes,
+ .elem_bytes = uc->elem_bytes,
+ };
+ int ret;
+
+ ret = io_uring_cmd_import_fixed(cmd->sqe->addr, cmd->sqe->len,
+ WRITE, &iter.iter, cmd, data->issue_flags);
+ if (ret)
+ return ret;
+
+ ret = ublk_walk_cmd_buf(&iter, data, ublk_batch_commit_io);
+
+ return iter.done == 0 ? ret : iter.done;
+}
+
static int ublk_check_batch_cmd_flags(const struct ublk_batch_io *uc)
{
const unsigned short mask = UBLK_BATCH_F_HAS_BUF_ADDR |
@@ -2809,7 +2921,7 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
ret = ublk_check_batch_cmd(&data, uc);
if (ret)
goto out;
- ret = -EOPNOTSUPP;
+ ret = ublk_handle_batch_commit_cmd(&data);
break;
default:
ret = -EOPNOTSUPP;
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index 38c8cc10d694..695b38522995 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -109,6 +109,14 @@
*/
#define UBLK_U_IO_PREP_IO_CMDS \
_IOWR('u', 0x25, struct ublk_batch_io)
+/*
+ * If failure code is returned, nothing in the command buffer is handled.
+ * Otherwise, the returned value means how many bytes in command buffer
+ * are handled actually, then number of handled IOs can be calculated with
+ * `elem_bytes` for each IO. IOs in the remained bytes are not committed,
+ * userspace has to check return value for dealing with partial committing
+ * correctly.
+ */
#define UBLK_U_IO_COMMIT_IO_CMDS \
_IOWR('u', 0x26, struct ublk_batch_io)
--
2.47.0
* [PATCH 10/23] ublk: add io events fifo structure
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (8 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 09/23] ublk: handle UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 11/23] ublk: add batch I/O dispatch infrastructure Ming Lei
` (12 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add the ublk io events fifo structure to prepare for supporting command
batching, which will use an io_uring multishot uring_cmd for fetching one
batch of io commands each time.
One nice property of kfifo is that it allows multiple producers with a
single consumer: only the producer side needs locking, while the single
consumer can stay lockless.
The producers run from ublk_queue_rq() or ublk_queue_rqs(), so lock
contention can be eased by setting a proper blk-mq nr_queues.
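A minimal sketch of the intended producer/consumer pattern, written against
the evts_fifo/evts_lock fields added below; these helpers are illustrative
only, the real users arrive in the following patches:

  /* producers: ->queue_rq()/->queue_rqs() contexts, may run concurrently */
  static void sketch_push_tag(struct ublk_queue *ubq, unsigned short tag)
  {
          kfifo_in_spinlocked_noirqsave(&ubq->evts_fifo, &tag, 1,
                                        &ubq->evts_lock);
  }

  /* single consumer: the task handling the active fetch command */
  static unsigned sketch_pop_tags(struct ublk_queue *ubq,
                                  unsigned short *buf, unsigned nr)
  {
          /* only one reader at a time, so no lock is needed here */
          return kfifo_out(&ubq->evts_fifo, buf, nr);
  }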
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 55 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 54 insertions(+), 1 deletion(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index fae016b67254..0f955592ebd5 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -44,6 +44,7 @@
#include <linux/task_work.h>
#include <linux/namei.h>
#include <linux/kref.h>
+#include <linux/kfifo.h>
#include <uapi/linux/ublk_cmd.h>
#define UBLK_MINORS (1U << MINORBITS)
@@ -220,6 +221,22 @@ struct ublk_queue {
unsigned short nr_io_ready; /* how many ios setup */
spinlock_t cancel_lock;
struct ublk_device *dev;
+
+ /*
+ * Inflight ublk request tag is saved in this fifo
+ *
+ * There are multiple writer from ublk_queue_rq() or ublk_queue_rqs(),
+ * so lock is required for storing request tag to fifo
+ *
+ * Make sure just one reader for fetching request from task work
+ * function to ublk server, so no need to grab the lock in reader
+ * side.
+ */
+ struct {
+ DECLARE_KFIFO_PTR(evts_fifo, unsigned short);
+ spinlock_t evts_lock;
+ }____cacheline_aligned_in_smp;
+
struct ublk_io ios[];
};
@@ -291,6 +308,31 @@ static inline void ublk_io_unlock(struct ublk_io *io)
spin_unlock(&io->lock);
}
+/* Initialize the queue */
+static inline int ublk_io_evts_init(struct ublk_queue *q, unsigned int size)
+{
+ spin_lock_init(&q->evts_lock);
+ return kfifo_alloc(&q->evts_fifo, size, GFP_KERNEL);
+}
+
+/* Check if queue is empty */
+static inline bool ublk_io_evts_empty(const struct ublk_queue *q)
+{
+ return kfifo_is_empty(&q->evts_fifo);
+}
+
+/* Check if queue is full */
+static inline bool ublk_io_evts_full(const struct ublk_queue *q)
+{
+ return kfifo_is_full(&q->evts_fifo);
+}
+
+static inline void ublk_io_evts_deinit(struct ublk_queue *q)
+{
+ WARN_ON_ONCE(!kfifo_is_empty(&q->evts_fifo));
+ kfifo_free(&q->evts_fifo);
+}
+
static inline struct ublksrv_io_desc *
ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
{
@@ -3071,6 +3113,8 @@ static void ublk_deinit_queue(struct ublk_device *ub, int q_id)
if (ubq->io_cmd_buf)
free_pages((unsigned long)ubq->io_cmd_buf, get_order(size));
+ if (ublk_support_batch_io(ubq))
+ ublk_io_evts_deinit(ubq);
}
static int ublk_init_queue(struct ublk_device *ub, int q_id)
@@ -3078,7 +3122,7 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO;
void *ptr;
- int size, i;
+ int size, i, ret = 0;
spin_lock_init(&ubq->cancel_lock);
ubq->flags = ub->dev_info.flags;
@@ -3095,7 +3139,16 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
ubq->io_cmd_buf = ptr;
ubq->dev = ub;
+
+ if (ublk_support_batch_io(ubq)) {
+ ret = ublk_io_evts_init(ubq, ubq->q_depth);
+ if (ret)
+ goto fail;
+ }
return 0;
+fail:
+ ublk_deinit_queue(ub, q_id);
+ return ret;
}
static void ublk_deinit_queues(struct ublk_device *ub)
--
2.47.0
* [PATCH 11/23] ublk: add batch I/O dispatch infrastructure
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (9 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 10/23] ublk: add io events fifo structure Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 12/23] ublk: add UBLK_U_IO_FETCH_IO_CMDS for batch I/O processing Ming Lei
` (11 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add infrastructure for delivering I/O commands to ublk server in batches,
preparing for the upcoming UBLK_U_IO_FETCH_IO_CMDS feature.
Key components:
- struct ublk_batch_fcmd: Represents a batch fetch uring_cmd that will
receive multiple I/O tags in a single operation, using io_uring's
multishot command for efficient ublk IO delivery.
- ublk_batch_dispatch(): Batch version of ublk_dispatch_req() that:
* Pulls multiple request tags from the events FIFO (lock-free reader)
* Prepares each I/O for delivery (including auto buffer registration)
* Delivers tags to userspace via a single uring_cmd notification
* Handles partial failures by restoring undelivered tags to FIFO
The batch approach significantly reduces notification overhead by aggregating
multiple I/O completions into a single uring_cmd, while maintaining the same
I/O processing semantics as individual operations.
Error handling ensures system consistency: if buffer selection or CQE
posting fails, undelivered tags are restored to the FIFO for retry.
This runs in task work context, scheduled via io_uring_cmd_complete_in_task()
or called directly from ->uring_cmd(), enabling efficient batch processing
without blocking the I/O submission path.
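From the ublk server's point of view, the payload delivered by a fetch CQE
is just an array of __u16 tags, with UBLK_BATCH_IO_UNUSED_TAG marking slots
to skip. A hedged sketch of the consumer side, assuming the CQE result
carries the number of copied bytes as set up by ublk_batch_copy_io_tags():

  #include <linux/ublk_cmd.h>

  /*
   * Sketch: walk the provided buffer attached to a fetch CQE.  'buf' holds
   * 'res' bytes of __u16 tags; handle_tag() is an illustrative callback.
   */
  static void sketch_walk_fetched_tags(const void *buf, int res,
                                       void (*handle_tag)(unsigned short tag))
  {
          const unsigned short *tags = buf;
          int i, nr = res / (int)sizeof(*tags);

          for (i = 0; i < nr; i++) {
                  if (tags[i] == UBLK_BATCH_IO_UNUSED_TAG)
                          continue;
                  handle_tag(tags[i]);
          }
  }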
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 123 ++++++++++++++++++++++++++++++++++
include/uapi/linux/ublk_cmd.h | 6 ++
2 files changed, 129 insertions(+)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 0f955592ebd5..2ed2adc93df6 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -91,6 +91,12 @@
UBLK_BATCH_F_HAS_BUF_ADDR | \
UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK)
+/* ublk batch fetch uring_cmd */
+struct ublk_batch_fcmd {
+ struct io_uring_cmd *cmd;
+ unsigned short buf_group;
+};
+
struct ublk_uring_cmd_pdu {
/*
* Store requests in same batch temporarily for queuing them to
@@ -623,6 +629,32 @@ static wait_queue_head_t ublk_idr_wq; /* wait until one idr is freed */
static DEFINE_MUTEX(ublk_ctl_mutex);
+static void ublk_batch_deinit_fetch_buf(struct ublk_batch_io_data *data,
+ struct ublk_batch_fcmd *fcmd,
+ int res)
+{
+ io_uring_cmd_done(fcmd->cmd, res, 0, data->issue_flags);
+ fcmd->cmd = NULL;
+}
+
+static int ublk_batch_fetch_post_cqe(struct ublk_batch_fcmd *fcmd,
+ struct io_br_sel *sel,
+ unsigned int issue_flags)
+{
+ if (io_uring_mshot_cmd_post_cqe(fcmd->cmd, sel, issue_flags))
+ return -ENOBUFS;
+ return 0;
+}
+
+static ssize_t ublk_batch_copy_io_tags(struct ublk_batch_fcmd *fcmd,
+ void __user *buf, const u16 *tag_buf,
+ unsigned int len)
+{
+ if (copy_to_user(buf, tag_buf, len))
+ return -EFAULT;
+ return len;
+}
+
#define UBLK_MAX_UBLKS UBLK_MINORS
/*
@@ -1424,6 +1456,97 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
}
}
+static bool __ublk_batch_prep_dispatch(struct ublk_batch_io_data *data,
+ unsigned short tag)
+{
+ struct ublk_queue *ubq = data->ubq;
+ struct ublk_device *ub = ubq->dev;
+ struct ublk_io *io = &ubq->ios[tag];
+ struct request *req = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], tag);
+ enum auto_buf_reg_res res = AUTO_BUF_REG_FALLBACK;
+ struct io_uring_cmd *cmd = data->cmd;
+
+ if (!ublk_start_io(ubq, req, io))
+ return false;
+
+ if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req))
+ res = __ublk_do_auto_buf_reg(ubq, req, io, cmd,
+ data->issue_flags);
+
+ ublk_io_lock(io);
+ ublk_prep_auto_buf_reg_io(ubq, req, io, cmd, res == AUTO_BUF_REG_OK);
+ ublk_io_unlock(io);
+
+ return res != AUTO_BUF_REG_FAIL;
+}
+
+static void ublk_batch_prep_dispatch(struct ublk_batch_io_data *data,
+ unsigned short *tag_buf,
+ unsigned int len)
+{
+ int i;
+
+ for (i = 0; i < len; i += 1) {
+ unsigned short tag = tag_buf[i];
+
+ if (!__ublk_batch_prep_dispatch(data, tag))
+ tag_buf[i] = UBLK_BATCH_IO_UNUSED_TAG;
+ }
+}
+
+#define MAX_NR_TAG 128
+static int __ublk_batch_dispatch(struct ublk_batch_io_data *data,
+ struct ublk_batch_fcmd *fcmd)
+{
+ unsigned short tag_buf[MAX_NR_TAG];
+ struct io_br_sel sel;
+ size_t len = 0;
+ int ret;
+
+ sel = io_uring_cmd_buffer_select(fcmd->cmd, fcmd->buf_group, &len,
+ data->issue_flags);
+ if (sel.val < 0)
+ return sel.val;
+ if (!sel.addr)
+ return -ENOBUFS;
+
+ /* single reader needn't lock and sizeof(kfifo element) is 2 bytes */
+ len = min(len, sizeof(tag_buf)) / 2;
+ len = kfifo_out(&data->ubq->evts_fifo, tag_buf, len);
+
+ ublk_batch_prep_dispatch(data, tag_buf, len);
+
+ sel.val = ublk_batch_copy_io_tags(fcmd, sel.addr, tag_buf, len * 2);
+ ret = ublk_batch_fetch_post_cqe(fcmd, &sel, data->issue_flags);
+ if (unlikely(ret < 0)) {
+ int res = kfifo_in_spinlocked_noirqsave(&data->ubq->evts_fifo,
+ tag_buf, len, &data->ubq->evts_lock);
+
+ pr_warn("%s: copy tags or post CQE failure, move back "
+ "tags(%d %lu) ret %d\n", __func__, res, len,
+ ret);
+ }
+ return ret;
+}
+
+static __maybe_unused int
+ublk_batch_dispatch(struct ublk_batch_io_data *data,
+ struct ublk_batch_fcmd *fcmd)
+{
+ int ret = 0;
+
+ while (!ublk_io_evts_empty(data->ubq)) {
+ ret = __ublk_batch_dispatch(data, fcmd);
+ if (ret <= 0)
+ break;
+ }
+
+ if (ret < 0)
+ ublk_batch_deinit_fetch_buf(data, fcmd, ret);
+
+ return ret;
+}
+
static void ublk_cmd_tw_cb(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index 695b38522995..fbd3582bc203 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -553,6 +553,12 @@ struct ublk_elem_header {
__u32 result; /* I/O completion result (commit only) */
};
+/*
+ * If this tag value is observed from buffer of `UBLK_U_IO_FETCH_IO_CMDS`
+ * ublk server can simply ignore it
+ */
+#define UBLK_BATCH_IO_UNUSED_TAG (__u16)(-1)
+
/*
* uring_cmd buffer structure
*
--
2.47.0
* [PATCH 12/23] ublk: add UBLK_U_IO_FETCH_IO_CMDS for batch I/O processing
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (10 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 11/23] ublk: add batch I/O dispatch infrastructure Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 13/23] ublk: abort requests filled in event kfifo Ming Lei
` (10 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add UBLK_U_IO_FETCH_IO_CMDS command to enable efficient batch processing
of I/O requests. This multishot uring_cmd allows the ublk server to fetch
multiple I/O commands in a single operation, significantly reducing
submission overhead compared to individual FETCH_REQ* commands.
Key Design Features:
1. Multishot Operation: One UBLK_U_IO_FETCH_IO_CMDS can fetch many I/O
commands, with the batch size limited by the provided buffer length.
2. Dynamic Load Balancing: Multiple fetch commands can be submitted
simultaneously, but only one is active at any time. This enables
efficient load distribution across multiple server task contexts.
3. Implicit State Management: The implementation uses three key variables
to track state:
- evts_fifo: Queue of request tags awaiting processing
- fcmd_head: List of available fetch commands
- active_fcmd: Currently active fetch command (NULL = none active)
States are derived implicitly:
- IDLE: No fetch commands available
- READY: Fetch commands available, none active
- ACTIVE: One fetch command processing events
4. Lockless Reader Optimization: The active fetch command can read from
evts_fifo without locking (single reader guarantee), while writers
(ublk_queue_rq/ublk_queue_rqs) use evts_lock protection.
Implementation Details:
- ublk_queue_rq() and ublk_queue_rqs() save request tags to evts_fifo
- __ublk_pick_active_fcmd() selects an available fetch command when
events arrive and no command is currently active
- ublk_batch_dispatch() moves tags from evts_fifo to the fetch command's
buffer and posts completion via io_uring_mshot_cmd_post_cqe()
- State transitions are coordinated via evts_lock to maintain consistency
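For illustration only (this helper is not part of the patch), the implicit
state can be read off the two fields like this, assuming evts_lock is held
by the caller:

	enum ubq_fetch_state { UBQ_FETCH_IDLE, UBQ_FETCH_READY, UBQ_FETCH_ACTIVE };

	/* sketch: derive the implicit state from fcmd_head/active_fcmd */
	static enum ubq_fetch_state ubq_fetch_state(const struct ublk_queue *ubq)
	{
		if (ubq->active_fcmd)
			return UBQ_FETCH_ACTIVE;
		if (!list_empty(&ubq->fcmd_head))
			return UBQ_FETCH_READY;
		return UBQ_FETCH_IDLE;
	}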
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 379 +++++++++++++++++++++++++++++++---
include/uapi/linux/ublk_cmd.h | 7 +
2 files changed, 359 insertions(+), 27 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 2ed2adc93df6..60e19fe0655e 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -93,6 +93,7 @@
/* ublk batch fetch uring_cmd */
struct ublk_batch_fcmd {
+ struct list_head node;
struct io_uring_cmd *cmd;
unsigned short buf_group;
};
@@ -117,7 +118,10 @@ struct ublk_uring_cmd_pdu {
*/
struct ublk_queue *ubq;
- u16 tag;
+ union {
+ u16 tag;
+ struct ublk_batch_fcmd *fcmd; /* batch io only */
+ };
};
struct ublk_batch_io_data {
@@ -229,18 +233,36 @@ struct ublk_queue {
struct ublk_device *dev;
/*
- * Inflight ublk request tag is saved in this fifo
+ * Batch I/O State Management:
+ *
+ * The batch I/O system uses implicit state management based on the
+ * combination of three key variables below.
+ *
+ * - IDLE: list_empty(&fcmd_head) && !active_fcmd
+ * No fetch commands available, events queue in evts_fifo
+ *
+ * - READY: !list_empty(&fcmd_head) && !active_fcmd
+ * Fetch commands available but none processing events
*
- * There are multiple writer from ublk_queue_rq() or ublk_queue_rqs(),
- * so lock is required for storing request tag to fifo
+ * - ACTIVE: active_fcmd
+ * One fetch command actively processing events from evts_fifo
*
- * Make sure just one reader for fetching request from task work
- * function to ublk server, so no need to grab the lock in reader
- * side.
+ * Key Invariants:
+ * - At most one active_fcmd at any time (single reader)
+ * - active_fcmd is always from fcmd_head list when non-NULL
+ * - evts_fifo can be read locklessly by the single active reader
+ * - All state transitions require evts_lock protection
+ * - Multiple writers to evts_fifo require lock protection
*/
struct {
DECLARE_KFIFO_PTR(evts_fifo, unsigned short);
spinlock_t evts_lock;
+
+ /* List of fetch commands available to process events */
+ struct list_head fcmd_head;
+
+ /* Currently active fetch command (NULL = none active) */
+ struct ublk_batch_fcmd *active_fcmd;
}____cacheline_aligned_in_smp;
struct ublk_io ios[];
@@ -293,6 +315,10 @@ static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
const struct ublk_queue *ubq, struct ublk_io *io,
size_t offset);
static inline unsigned int ublk_req_build_flags(struct request *req);
+static void ublk_batch_dispatch(struct ublk_batch_io_data *data,
+ struct ublk_batch_fcmd *fcmd);
+static struct ublk_batch_fcmd *__ublk_pick_active_fcmd(
+ struct ublk_queue *ubq);
static inline bool ublk_dev_support_batch_io(const struct ublk_device *ub)
{
@@ -628,13 +654,39 @@ static wait_queue_head_t ublk_idr_wq; /* wait until one idr is freed */
static DEFINE_MUTEX(ublk_ctl_mutex);
+static struct ublk_batch_fcmd *
+ublk_batch_alloc_fcmd(struct io_uring_cmd *cmd)
+{
+ struct ublk_batch_fcmd *fcmd = kzalloc(sizeof(*fcmd), GFP_NOIO);
+ if (fcmd) {
+ fcmd->cmd = cmd;
+ fcmd->buf_group = READ_ONCE(cmd->sqe->buf_index);
+ }
+ return fcmd;
+}
+
+static void ublk_batch_free_fcmd(struct ublk_batch_fcmd *fcmd)
+{
+ kfree(fcmd);
+}
+
+/*
+ * Nothing can move on, so clear ->active_fcmd, and the caller should stop
+ * dispatching
+ */
static void ublk_batch_deinit_fetch_buf(struct ublk_batch_io_data *data,
struct ublk_batch_fcmd *fcmd,
int res)
{
+ spin_lock(&data->ubq->evts_lock);
+ list_del(&fcmd->node);
+ WARN_ON_ONCE(fcmd != data->ubq->active_fcmd);
+ data->ubq->active_fcmd = NULL;
+ spin_unlock(&data->ubq->evts_lock);
+
io_uring_cmd_done(fcmd->cmd, res, 0, data->issue_flags);
- fcmd->cmd = NULL;
+ ublk_batch_free_fcmd(fcmd);
}
static int ublk_batch_fetch_post_cqe(struct ublk_batch_fcmd *fcmd,
@@ -1503,6 +1555,8 @@ static int __ublk_batch_dispatch(struct ublk_batch_io_data *data,
size_t len = 0;
int ret;
+ WARN_ON_ONCE(data->cmd != fcmd->cmd);
+
sel = io_uring_cmd_buffer_select(fcmd->cmd, fcmd->buf_group, &len,
data->issue_flags);
if (sel.val < 0)
@@ -1529,22 +1583,79 @@ static int __ublk_batch_dispatch(struct ublk_batch_io_data *data,
return ret;
}
-static __maybe_unused int
-ublk_batch_dispatch(struct ublk_batch_io_data *data,
+static struct ublk_batch_fcmd *__ublk_pick_active_fcmd(
+ struct ublk_queue *ubq)
+{
+ struct ublk_batch_fcmd *fcmd;
+
+ lockdep_assert_held(&ubq->evts_lock);
+
+ if (!ublk_io_evts_empty(ubq) && !ubq->active_fcmd) {
+ smp_mb();
+ fcmd = ubq->active_fcmd = list_first_entry_or_null(
+ &ubq->fcmd_head, struct ublk_batch_fcmd, node);
+ } else {
+ fcmd = NULL;
+ }
+ return fcmd;
+}
+
+static void ublk_batch_tw_cb(struct io_uring_cmd *cmd,
+ unsigned int issue_flags)
+{
+ struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+ struct ublk_batch_fcmd *fcmd = pdu->fcmd;
+ struct ublk_batch_io_data data = {
+ .ubq = pdu->ubq,
+ .cmd = fcmd->cmd,
+ .issue_flags = issue_flags,
+ };
+
+ WARN_ON_ONCE(pdu->ubq->active_fcmd != fcmd);
+
+ ublk_batch_dispatch(&data, fcmd);
+}
+
+static void ublk_batch_dispatch(struct ublk_batch_io_data *data,
struct ublk_batch_fcmd *fcmd)
{
+ struct ublk_queue *ubq = data->ubq;
+ struct ublk_batch_fcmd *new_fcmd;
+ void *handle;
+ bool empty;
int ret = 0;
+again:
while (!ublk_io_evts_empty(data->ubq)) {
ret = __ublk_batch_dispatch(data, fcmd);
if (ret <= 0)
break;
}
- if (ret < 0)
+ if (ret < 0) {
ublk_batch_deinit_fetch_buf(data, fcmd, ret);
+ return;
+ }
- return ret;
+ handle = io_uring_cmd_ctx_handle(fcmd->cmd);
+ ubq->active_fcmd = NULL;
+ smp_mb();
+ empty = ublk_io_evts_empty(ubq);
+ if (likely(empty))
+ return;
+
+ spin_lock(&ubq->evts_lock);
+ new_fcmd = __ublk_pick_active_fcmd(ubq);
+ spin_unlock(&ubq->evts_lock);
+
+ if (!new_fcmd)
+ return;
+ if (handle == io_uring_cmd_ctx_handle(new_fcmd->cmd)) {
+ data->cmd = new_fcmd->cmd;
+ fcmd = new_fcmd;
+ goto again;
+ }
+ io_uring_cmd_complete_in_task(new_fcmd->cmd, ublk_batch_tw_cb);
}
static void ublk_cmd_tw_cb(struct io_uring_cmd *cmd,
@@ -1556,13 +1667,27 @@ static void ublk_cmd_tw_cb(struct io_uring_cmd *cmd,
ublk_dispatch_req(ubq, pdu->req, issue_flags);
}
-static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq)
+static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq, bool last)
{
- struct io_uring_cmd *cmd = ubq->ios[rq->tag].cmd;
- struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+ if (ublk_support_batch_io(ubq)) {
+ unsigned short tag = rq->tag;
+ struct ublk_batch_fcmd *fcmd = NULL;
- pdu->req = rq;
- io_uring_cmd_complete_in_task(cmd, ublk_cmd_tw_cb);
+ spin_lock(&ubq->evts_lock);
+ kfifo_put(&ubq->evts_fifo, tag);
+ if (last)
+ fcmd = __ublk_pick_active_fcmd(ubq);
+ spin_unlock(&ubq->evts_lock);
+
+ if (fcmd)
+ io_uring_cmd_complete_in_task(fcmd->cmd, ublk_batch_tw_cb);
+ } else {
+ struct io_uring_cmd *cmd = ubq->ios[rq->tag].cmd;
+ struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+
+ pdu->req = rq;
+ io_uring_cmd_complete_in_task(cmd, ublk_cmd_tw_cb);
+ }
}
static void ublk_cmd_list_tw_cb(struct io_uring_cmd *cmd,
@@ -1580,14 +1705,44 @@ static void ublk_cmd_list_tw_cb(struct io_uring_cmd *cmd,
} while (rq);
}
-static void ublk_queue_cmd_list(struct ublk_io *io, struct rq_list *l)
+static void ublk_batch_queue_cmd_list(struct ublk_queue *ubq, struct rq_list *l)
{
- struct io_uring_cmd *cmd = io->cmd;
- struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+ unsigned short tags[MAX_NR_TAG];
+ struct ublk_batch_fcmd *fcmd;
+ struct request *rq;
+ unsigned cnt = 0;
+
+ spin_lock(&ubq->evts_lock);
+ rq_list_for_each(l, rq) {
+ tags[cnt++] = (unsigned short)rq->tag;
+ if (cnt >= MAX_NR_TAG) {
+ kfifo_in(&ubq->evts_fifo, tags, cnt);
+ cnt = 0;
+ }
+ }
+ if (cnt)
+ kfifo_in(&ubq->evts_fifo, tags, cnt);
+ fcmd = __ublk_pick_active_fcmd(ubq);
+ spin_unlock(&ubq->evts_lock);
- pdu->req_list = rq_list_peek(l);
rq_list_init(l);
- io_uring_cmd_complete_in_task(cmd, ublk_cmd_list_tw_cb);
+ if (fcmd)
+ io_uring_cmd_complete_in_task(fcmd->cmd, ublk_batch_tw_cb);
+}
+
+static void ublk_queue_cmd_list(struct ublk_queue *ubq, struct ublk_io *io,
+ struct rq_list *l, bool batch)
+{
+ if (batch) {
+ ublk_batch_queue_cmd_list(ubq, l);
+ } else {
+ struct io_uring_cmd *cmd = io->cmd;
+ struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+
+ pdu->req_list = rq_list_peek(l);
+ rq_list_init(l);
+ io_uring_cmd_complete_in_task(cmd, ublk_cmd_list_tw_cb);
+ }
}
static enum blk_eh_timer_return ublk_timeout(struct request *rq)
@@ -1666,7 +1821,7 @@ static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
return BLK_STS_OK;
}
- ublk_queue_cmd(ubq, rq);
+ ublk_queue_cmd(ubq, rq, bd->last);
return BLK_STS_OK;
}
@@ -1678,11 +1833,25 @@ static inline bool ublk_belong_to_same_batch(const struct ublk_io *io,
(io->task == io2->task);
}
-static void ublk_queue_rqs(struct rq_list *rqlist)
+static void ublk_commit_rqs(struct blk_mq_hw_ctx *hctx)
+{
+ struct ublk_queue *ubq = hctx->driver_data;
+ struct ublk_batch_fcmd *fcmd;
+
+ spin_lock(&ubq->evts_lock);
+ fcmd = __ublk_pick_active_fcmd(ubq);
+ spin_unlock(&ubq->evts_lock);
+
+ if (fcmd)
+ io_uring_cmd_complete_in_task(fcmd->cmd, ublk_batch_tw_cb);
+}
+
+static void __ublk_queue_rqs(struct rq_list *rqlist, bool batch)
{
struct rq_list requeue_list = { };
struct rq_list submit_list = { };
struct ublk_io *io = NULL;
+ struct ublk_queue *ubq = NULL;
struct request *req;
while ((req = rq_list_pop(rqlist))) {
@@ -1696,16 +1865,27 @@ static void ublk_queue_rqs(struct rq_list *rqlist)
if (io && !ublk_belong_to_same_batch(io, this_io) &&
!rq_list_empty(&submit_list))
- ublk_queue_cmd_list(io, &submit_list);
+ ublk_queue_cmd_list(ubq, io, &submit_list, batch);
io = this_io;
+ ubq = this_q;
rq_list_add_tail(&submit_list, req);
}
if (!rq_list_empty(&submit_list))
- ublk_queue_cmd_list(io, &submit_list);
+ ublk_queue_cmd_list(ubq, io, &submit_list, batch);
*rqlist = requeue_list;
}
+static void ublk_queue_rqs(struct rq_list *rqlist)
+{
+ __ublk_queue_rqs(rqlist, false);
+}
+
+static void ublk_batch_queue_rqs(struct rq_list *rqlist)
+{
+ __ublk_queue_rqs(rqlist, true);
+}
+
static int ublk_init_hctx(struct blk_mq_hw_ctx *hctx, void *driver_data,
unsigned int hctx_idx)
{
@@ -1723,6 +1903,14 @@ static const struct blk_mq_ops ublk_mq_ops = {
.timeout = ublk_timeout,
};
+static const struct blk_mq_ops ublk_batch_mq_ops = {
+ .commit_rqs = ublk_commit_rqs,
+ .queue_rq = ublk_queue_rq,
+ .queue_rqs = ublk_batch_queue_rqs,
+ .init_hctx = ublk_init_hctx,
+ .timeout = ublk_timeout,
+};
+
static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
{
int i;
@@ -2036,6 +2224,56 @@ static void ublk_cancel_cmd(struct ublk_queue *ubq, unsigned tag,
io_uring_cmd_done(io->cmd, UBLK_IO_RES_ABORT, 0, issue_flags);
}
+static void ublk_batch_cancel_cmd(struct ublk_queue *ubq,
+ struct ublk_batch_fcmd *fcmd,
+ unsigned int issue_flags)
+{
+ bool done;
+
+ spin_lock(&ubq->evts_lock);
+ done = (ubq->active_fcmd != fcmd);
+ if (done)
+ list_del(&fcmd->node);
+ spin_unlock(&ubq->evts_lock);
+
+ if (done) {
+ io_uring_cmd_done(fcmd->cmd, UBLK_IO_RES_ABORT, 0, issue_flags);
+ ublk_batch_free_fcmd(fcmd);
+ }
+}
+
+static void ublk_batch_cancel_queue(struct ublk_queue *ubq)
+{
+ LIST_HEAD(fcmd_list);
+
+ spin_lock(&ubq->evts_lock);
+ ubq->force_abort = true;
+ list_splice_init(&ubq->fcmd_head, &fcmd_list);
+ if (ubq->active_fcmd)
+ list_move(&ubq->active_fcmd->node, &ubq->fcmd_head);
+ spin_unlock(&ubq->evts_lock);
+
+ while (!list_empty(&fcmd_list)) {
+ struct ublk_batch_fcmd *fcmd = list_first_entry(&fcmd_list,
+ struct ublk_batch_fcmd, node);
+
+ ublk_batch_cancel_cmd(ubq, fcmd, IO_URING_F_UNLOCKED);
+ }
+}
+
+static void ublk_batch_cancel_fn(struct io_uring_cmd *cmd,
+ unsigned int issue_flags)
+{
+ struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+ struct ublk_batch_fcmd *fcmd = pdu->fcmd;
+ struct ublk_queue *ubq = pdu->ubq;
+
+ if (!ubq->canceling)
+ ublk_start_cancel(ubq->dev);
+
+ ublk_batch_cancel_cmd(ubq, fcmd, issue_flags);
+}
+
/*
* The ublk char device won't be closed when calling cancel fn, so both
* ublk device and queue are guaranteed to be live
@@ -2087,6 +2325,11 @@ static void ublk_cancel_queue(struct ublk_queue *ubq)
{
int i;
+ if (ublk_support_batch_io(ubq)) {
+ ublk_batch_cancel_queue(ubq);
+ return;
+ }
+
for (i = 0; i < ubq->q_depth; i++)
ublk_cancel_cmd(ubq, i, IO_URING_F_UNLOCKED);
}
@@ -3059,6 +3302,73 @@ static int ublk_check_batch_cmd(const struct ublk_batch_io_data *data,
return ublk_check_batch_cmd_flags(uc);
}
+static int ublk_batch_attach(struct ublk_batch_io_data *data,
+ struct ublk_batch_fcmd *fcmd)
+{
+ struct ublk_queue *ubq = data->ubq;
+ struct ublk_batch_fcmd *new_fcmd = NULL;
+ bool free = false;
+
+ spin_lock(&ubq->evts_lock);
+ if (unlikely(ubq->force_abort || ubq->canceling)) {
+ free = true;
+ } else {
+ list_add_tail(&fcmd->node, &ubq->fcmd_head);
+ new_fcmd = __ublk_pick_active_fcmd(ubq);
+ }
+ spin_unlock(&ubq->evts_lock);
+
+ /*
+ * If the two fetch commands are originated from same io_ring_ctx,
+ * run batch dispatch directly. Otherwise, schedule task work for
+ * doing it.
+ */
+ if (new_fcmd && io_uring_cmd_ctx_handle(new_fcmd->cmd) ==
+ io_uring_cmd_ctx_handle(fcmd->cmd)) {
+ data->cmd = new_fcmd->cmd;
+ ublk_batch_dispatch(data, new_fcmd);
+ } else if (new_fcmd) {
+ io_uring_cmd_complete_in_task(new_fcmd->cmd,
+ ublk_batch_tw_cb);
+ }
+
+ if (free) {
+ ublk_batch_free_fcmd(fcmd);
+ return -ENODEV;
+ }
+ return -EIOCBQUEUED;
+}
+
+static int ublk_handle_batch_fetch_cmd(struct ublk_batch_io_data *data)
+{
+ struct ublk_batch_fcmd *fcmd = ublk_batch_alloc_fcmd(data->cmd);
+ struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(data->cmd);
+
+ if (!fcmd)
+ return -ENOMEM;
+
+ pdu->ubq = data->ubq;
+ pdu->fcmd = fcmd;
+ io_uring_cmd_mark_cancelable(data->cmd, data->issue_flags);
+
+ return ublk_batch_attach(data, fcmd);
+}
+
+static int ublk_validate_batch_fetch_cmd(struct ublk_batch_io_data *data,
+ const struct ublk_batch_io *uc)
+{
+ if (!(data->cmd->flags & IORING_URING_CMD_MULTISHOT))
+ return -EINVAL;
+
+ if (uc->elem_bytes != sizeof(__u16))
+ return -EINVAL;
+
+ if (uc->flags != 0)
+ return -E2BIG;
+
+ return 0;
+}
+
static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
@@ -3075,6 +3385,11 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
goto out;
data.ubq = ublk_get_queue(ub, uc->q_id);
+ if (unlikely(issue_flags & IO_URING_F_CANCEL)) {
+ ublk_batch_cancel_fn(cmd, issue_flags);
+ return 0;
+ }
+
switch (cmd_op) {
case UBLK_U_IO_PREP_IO_CMDS:
ret = ublk_check_batch_cmd(&data, uc);
@@ -3088,6 +3403,12 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
goto out;
ret = ublk_handle_batch_commit_cmd(&data);
break;
+ case UBLK_U_IO_FETCH_IO_CMDS:
+ ret = ublk_validate_batch_fetch_cmd(&data, uc);
+ if (ret)
+ goto out;
+ ret = ublk_handle_batch_fetch_cmd(&data);
+ break;
default:
ret = -EOPNOTSUPP;
}
@@ -3267,6 +3588,7 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
ret = ublk_io_evts_init(ubq, ubq->q_depth);
if (ret)
goto fail;
+ INIT_LIST_HEAD(&ubq->fcmd_head);
}
return 0;
fail:
@@ -3398,7 +3720,10 @@ static void ublk_align_max_io_size(struct ublk_device *ub)
static int ublk_add_tag_set(struct ublk_device *ub)
{
- ub->tag_set.ops = &ublk_mq_ops;
+ if (ublk_dev_support_batch_io(ub))
+ ub->tag_set.ops = &ublk_batch_mq_ops;
+ else
+ ub->tag_set.ops = &ublk_mq_ops;
ub->tag_set.nr_hw_queues = ub->dev_info.nr_hw_queues;
ub->tag_set.queue_depth = ub->dev_info.queue_depth;
ub->tag_set.numa_node = NUMA_NO_NODE;
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index fbd3582bc203..3c1505b7ec41 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -120,6 +120,13 @@
#define UBLK_U_IO_COMMIT_IO_CMDS \
_IOWR('u', 0x26, struct ublk_batch_io)
+/*
+ * Fetch io commands to provided buffer in multishot style,
+ * `IORING_URING_CMD_MULTISHOT` is required for this command.
+ */
+#define UBLK_U_IO_FETCH_IO_CMDS \
+ _IOWR('u', 0x27, struct ublk_batch_io)
+
/* only ABORT means that no re-fetch */
#define UBLK_IO_RES_OK 0
#define UBLK_IO_RES_NEED_GET_DATA 1
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 13/23] ublk: abort requests filled in event kfifo
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (11 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 12/23] ublk: add UBLK_U_IO_FETCH_IO_CMDS for batch I/O processing Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 14/23] ublk: add new feature UBLK_F_BATCH_IO Ming Lei
` (9 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
In case of BATCH_IO, requests whose tags have been filled into the event
kfifo don't get a chance to be dispatched any more once the ublk char
device is released, so they have to be aborted as well.
Add ublk_abort_batch_queue() for aborting such requests.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 60e19fe0655e..cbc5c372c4aa 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2143,6 +2143,26 @@ static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
}
}
+/*
+ * Request tag may just be filled to event kfifo, not get chance to
+ * dispatch, abort these requests too
+ */
+static void ublk_abort_batch_queue(struct ublk_device *ub,
+ struct ublk_queue *ubq)
+{
+ while (true) {
+ struct request *req;
+ short tag;
+
+ if (!kfifo_out(&ubq->evts_fifo, &tag, 1))
+ break;
+
+ req = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], tag);
+ if (req && blk_mq_request_started(req))
+ __ublk_fail_req(ubq, &ubq->ios[tag], req);
+ }
+}
+
/*
* Called from ublk char device release handler, when any uring_cmd is
* done, meantime request queue is "quiesced" since all inflight requests
@@ -2161,6 +2181,9 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
if (io->flags & UBLK_IO_FLAG_OWNED_BY_SRV)
__ublk_fail_req(ubq, io, io->req);
}
+
+ if (ublk_support_batch_io(ubq))
+ ublk_abort_batch_queue(ub, ubq);
}
static void ublk_start_cancel(struct ublk_device *ub)
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 14/23] ublk: add new feature UBLK_F_BATCH_IO
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (12 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 13/23] ublk: abort requests filled in event kfifo Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 15/23] ublk: document " Ming Lei
` (8 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add new feature UBLK_F_BATCH_IO which replaces the following two
per-io commands:
- UBLK_U_IO_FETCH_REQ
- UBLK_U_IO_COMMIT_AND_FETCH_REQ
with three per-queue batch io uring_cmd:
- UBLK_U_IO_PREP_IO_CMDS
- UBLK_U_IO_COMMIT_IO_CMDS
- UBLK_U_IO_FETCH_IO_CMDS
Then ublk can deliver batched io commands to the ublk server in a single
multishot uring_cmd, and multiple commands can be prepared & committed in
batch style via a single uring_cmd, so the communication cost is reduced
a lot.
This feature also no longer restricts the task context for the supported
commands, so any allowed uring_cmd can be issued from any task context,
which makes the ublk server implementation much easier.
Meanwhile, load balancing becomes much easier to support with this feature.
The command `UBLK_U_IO_FETCH_IO_CMDS` can be issued from multiple task
contexts, so each task can adjust the command's buffer length or its number
of inflight commands to control how much load is handled by the current
task.
Later, a priority parameter will be added to `UBLK_U_IO_FETCH_IO_CMDS` to
further improve load-balancing support.
UBLK_U_IO_GET_DATA isn't supported in batch io yet, but it may be enabled
in the future via a batch counterpart.
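As a rough sketch only (not the selftest implementation; the exact sqe
field carrying the multishot flag is an assumption based on existing
uring_cmd conventions), a server task might arm one fetch command like:

	/* hedged sketch: arm UBLK_U_IO_FETCH_IO_CMDS as a multishot uring_cmd */
	static void arm_fetch_cmd(struct io_uring *ring, int ublk_ch_fd,
				  __u16 q_id, __u16 buf_group)
	{
		struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
		struct ublk_batch_io *uc = (struct ublk_batch_io *)sqe->cmd;

		io_uring_prep_rw(IORING_OP_URING_CMD, sqe, ublk_ch_fd, NULL, 0, 0);
		sqe->cmd_op = UBLK_U_IO_FETCH_IO_CMDS;
		/* assumption: multishot flag goes into uring_cmd_flags */
		sqe->uring_cmd_flags = IORING_URING_CMD_MULTISHOT;
		/* tags are delivered into a provided buffer from this group */
		sqe->flags |= IOSQE_BUFFER_SELECT;
		sqe->buf_group = buf_group;

		uc->q_id = q_id;
		uc->flags = 0;
		uc->elem_bytes = sizeof(__u16);	/* each fetched element is a tag */
	}

The size of the buffer picked from `buf_group` bounds how many tags one
completion can carry, which is how a task controls its share of the load.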
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 57 ++++++++++++++++++++++++++++++++---
include/uapi/linux/ublk_cmd.h | 16 ++++++++++
2 files changed, 68 insertions(+), 5 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index cbc5c372c4aa..846215313093 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -74,7 +74,8 @@
| UBLK_F_AUTO_BUF_REG \
| UBLK_F_QUIESCE \
| UBLK_F_PER_IO_DAEMON \
- | UBLK_F_BUF_REG_OFF_DAEMON)
+ | UBLK_F_BUF_REG_OFF_DAEMON \
+ | UBLK_F_BATCH_IO)
#define UBLK_F_ALL_RECOVERY_FLAGS (UBLK_F_USER_RECOVERY \
| UBLK_F_USER_RECOVERY_REISSUE \
@@ -322,12 +323,12 @@ static struct ublk_batch_fcmd *__ublk_pick_active_fcmd(
static inline bool ublk_dev_support_batch_io(const struct ublk_device *ub)
{
- return false;
+ return ub->dev_info.flags & UBLK_F_BATCH_IO;
}
static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
{
- return false;
+ return ubq->flags & UBLK_F_BATCH_IO;
}
static inline void ublk_io_lock(struct ublk_io *io)
@@ -3392,6 +3393,40 @@ static int ublk_validate_batch_fetch_cmd(struct ublk_batch_io_data *data,
return 0;
}
+static int ublk_handle_non_batch_cmd(struct io_uring_cmd *cmd,
+ unsigned int issue_flags)
+{
+ const struct ublksrv_io_cmd *ub_cmd = io_uring_sqe_cmd(cmd->sqe);
+ struct ublk_device *ub = cmd->file->private_data;
+ unsigned tag = ub_cmd->tag;
+ struct ublk_queue *ubq;
+ struct ublk_io *io;
+ int ret = -EINVAL;
+
+ if (!ub)
+ return ret;
+
+ if (ub_cmd->q_id >= ub->dev_info.nr_hw_queues)
+ return ret;
+
+ ubq = ublk_get_queue(ub, ub_cmd->q_id);
+ if (tag >= ubq->q_depth)
+ return ret;
+
+ io = &ubq->ios[tag];
+
+ switch (cmd->cmd_op) {
+ case UBLK_U_IO_REGISTER_IO_BUF:
+ return ublk_register_io_buf(cmd, ubq, io, ub_cmd->addr,
+ issue_flags);
+ case UBLK_U_IO_UNREGISTER_IO_BUF:
+ return ublk_unregister_io_buf(cmd, ub, ub_cmd->addr,
+ issue_flags);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
@@ -3433,7 +3468,8 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
ret = ublk_handle_batch_fetch_cmd(&data);
break;
default:
- ret = -EOPNOTSUPP;
+ ret = ublk_handle_non_batch_cmd(cmd, issue_flags);
+ break;
}
out:
return ret;
@@ -4079,9 +4115,13 @@ static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_cmd *header)
ub->dev_info.flags |= UBLK_F_CMD_IOCTL_ENCODE |
UBLK_F_URING_CMD_COMP_IN_TASK |
- UBLK_F_PER_IO_DAEMON |
+ (ublk_dev_support_batch_io(ub) ? 0 : UBLK_F_PER_IO_DAEMON) |
UBLK_F_BUF_REG_OFF_DAEMON;
+ /* So far, UBLK_F_PER_IO_DAEMON won't be exposed for BATCH_IO */
+ if (ublk_dev_support_batch_io(ub))
+ ub->dev_info.flags &= ~UBLK_F_PER_IO_DAEMON;
+
/* GET_DATA isn't needed any more with USER_COPY or ZERO COPY */
if (ub->dev_info.flags & (UBLK_F_USER_COPY | UBLK_F_SUPPORT_ZERO_COPY |
UBLK_F_AUTO_BUF_REG))
@@ -4434,6 +4474,13 @@ static int ublk_wait_for_idle_io(struct ublk_device *ub,
unsigned int elapsed = 0;
int ret;
+ /*
+ * For UBLK_F_BATCH_IO ublk server can get notified with existing
+ * or new fetch command, so needn't wait any more
+ */
+ if (ublk_dev_support_batch_io(ub))
+ return 0;
+
while (elapsed < timeout_ms && !signal_pending(current)) {
unsigned int queues_cancelable = 0;
int i;
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index 3c1505b7ec41..647cab1cbb97 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -335,6 +335,22 @@
*/
#define UBLK_F_BUF_REG_OFF_DAEMON (1ULL << 14)
+
+/*
+ * Support the following commands for delivering & committing io command
+ * in batch.
+ *
+ * - UBLK_U_IO_PREP_IO_CMDS
+ * - UBLK_U_IO_COMMIT_IO_CMDS
+ * - UBLK_U_IO_FETCH_IO_CMDS
+ * - UBLK_U_IO_REGISTER_IO_BUF
+ * - UBLK_U_IO_UNREGISTER_IO_BUF
+ *
+ * The existing UBLK_U_IO_FETCH_REQ, UBLK_U_IO_COMMIT_AND_FETCH_REQ and
+ * UBLK_U_IO_GET_DATA uring_cmd are not supported for this feature.
+ */
+#define UBLK_F_BATCH_IO (1ULL << 15)
+
/* device state */
#define UBLK_S_DEV_DEAD 0
#define UBLK_S_DEV_LIVE 1
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 15/23] ublk: document feature UBLK_F_BATCH_IO
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (13 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 14/23] ublk: add new feature UBLK_F_BATCH_IO Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 16/23] selftests: ublk: replace assert() with ublk_assert() Ming Lei
` (7 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Document feature UBLK_F_BATCH_IO.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
Documentation/block/ublk.rst | 60 +++++++++++++++++++++++++++++++++---
1 file changed, 56 insertions(+), 4 deletions(-)
diff --git a/Documentation/block/ublk.rst b/Documentation/block/ublk.rst
index 8c4030bcabb6..09a5604f8e10 100644
--- a/Documentation/block/ublk.rst
+++ b/Documentation/block/ublk.rst
@@ -260,9 +260,12 @@ The following IO commands are communicated via io_uring passthrough command,
and each command is only for forwarding the IO and committing the result
with specified IO tag in the command data:
-- ``UBLK_IO_FETCH_REQ``
+Traditional Per-I/O Commands
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Sent from the server IO pthread for fetching future incoming IO requests
+- ``UBLK_U_IO_FETCH_REQ``
+
+ Sent from the server I/O pthread for fetching future incoming I/O requests
destined to ``/dev/ublkb*``. This command is sent only once from the server
IO pthread for ublk driver to setup IO forward environment.
@@ -278,7 +281,7 @@ with specified IO tag in the command data:
supported by the driver, daemons must be per-queue instead - i.e. all I/Os
associated to a single qid must be handled by the same task.
-- ``UBLK_IO_COMMIT_AND_FETCH_REQ``
+- ``UBLK_U_IO_COMMIT_AND_FETCH_REQ``
When an IO request is destined to ``/dev/ublkb*``, the driver stores
the IO's ``ublksrv_io_desc`` to the specified mapped area; then the
@@ -293,7 +296,7 @@ with specified IO tag in the command data:
requests with the same IO tag. That is, ``UBLK_IO_COMMIT_AND_FETCH_REQ``
is reused for both fetching request and committing back IO result.
-- ``UBLK_IO_NEED_GET_DATA``
+- ``UBLK_U_IO_NEED_GET_DATA``
With ``UBLK_F_NEED_GET_DATA`` enabled, the WRITE request will be firstly
issued to ublk server without data copy. Then, IO backend of ublk server
@@ -322,6 +325,55 @@ with specified IO tag in the command data:
``UBLK_IO_COMMIT_AND_FETCH_REQ`` to the server, ublkdrv needs to copy
the server buffer (pages) read to the IO request pages.
+Batch I/O Commands (UBLK_F_BATCH_IO)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``UBLK_F_BATCH_IO`` feature provides an alternative high-performance
+I/O handling model that replaces the traditional per-I/O commands with
+per-queue batch commands. This significantly reduces communication overhead
+and enables better load balancing across multiple server tasks.
+
+Key differences from traditional mode:
+
+- **Per-queue vs Per-I/O**: Commands operate on queues rather than individual I/Os
+- **Batch processing**: Multiple I/Os are handled in single operations
+- **Multishot commands**: Use io_uring multishot for reduced submission overhead
+- **Flexible task assignment**: Any task can handle any I/O (no per-I/O daemons)
+- **Better load balancing**: Tasks can adjust their workload dynamically
+
+Batch I/O Commands:
+
+- ``UBLK_U_IO_PREP_IO_CMDS``
+
+ Prepares multiple I/O commands in batch. The server provides a buffer
+ containing multiple I/O descriptors that will be processed together.
+ This reduces the number of individual command submissions required.
+
+- ``UBLK_U_IO_COMMIT_IO_CMDS``
+
+ Commits results for multiple I/O operations in batch. The server provides
+ a buffer containing the results of multiple completed I/Os, allowing
+ efficient bulk completion of requests.
+
+- ``UBLK_U_IO_FETCH_IO_CMDS``
+
+ **Multishot command** for fetching I/O commands in batch. This is the key
+ command that enables high-performance batch processing:
+
+ * Uses io_uring multishot capability for reduced submission overhead
+ * Single command can fetch multiple I/O requests over time
+ * Buffer size determines maximum batch size per operation
+ * Multiple fetch commands can be submitted for load balancing
+ * Only one fetch command is active at any time per queue
+ * Supports dynamic load balancing across multiple server tasks
+
+ Each task can submit ``UBLK_U_IO_FETCH_IO_CMDS`` with different buffer
+ sizes to control how much work it handles. This enables sophisticated
+ load balancing strategies in multi-threaded servers.
+
+Migration: Applications using traditional commands (``UBLK_U_IO_FETCH_REQ``,
+``UBLK_U_IO_COMMIT_AND_FETCH_REQ``) cannot use batch mode simultaneously.
+
Zero copy
---------
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 16/23] selftests: ublk: replace assert() with ublk_assert()
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (14 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 15/23] ublk: document " Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 17/23] selftests: ublk: add ublk_io_buf_idx() for returning io buffer index Ming Lei
` (6 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Replace assert() with ublk_assert(), since assertions are often triggered
in the daemon, where nothing may show up in the terminal.
Add ublk_assert(), which logs a message (e.g. to syslog) when the
assertion is triggered.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
tools/testing/selftests/ublk/common.c | 2 +-
tools/testing/selftests/ublk/file_backed.c | 2 +-
tools/testing/selftests/ublk/kublk.c | 2 +-
tools/testing/selftests/ublk/kublk.h | 2 +-
tools/testing/selftests/ublk/stripe.c | 10 +++++-----
tools/testing/selftests/ublk/utils.h | 10 ++++++++++
6 files changed, 19 insertions(+), 9 deletions(-)
diff --git a/tools/testing/selftests/ublk/common.c b/tools/testing/selftests/ublk/common.c
index 01580a6f8519..4c07bc37eb6d 100644
--- a/tools/testing/selftests/ublk/common.c
+++ b/tools/testing/selftests/ublk/common.c
@@ -16,7 +16,7 @@ int backing_file_tgt_init(struct ublk_dev *dev)
{
int fd, i;
- assert(dev->nr_fds == 1);
+ ublk_assert(dev->nr_fds == 1);
for (i = 0; i < dev->tgt.nr_backing_files; i++) {
char *file = dev->tgt.backing_file[i];
diff --git a/tools/testing/selftests/ublk/file_backed.c b/tools/testing/selftests/ublk/file_backed.c
index 2d93ac860bd5..99bde88b6ebd 100644
--- a/tools/testing/selftests/ublk/file_backed.c
+++ b/tools/testing/selftests/ublk/file_backed.c
@@ -10,7 +10,7 @@ static enum io_uring_op ublk_to_uring_op(const struct ublksrv_io_desc *iod, int
return zc ? IORING_OP_READ_FIXED : IORING_OP_READ;
else if (ublk_op == UBLK_IO_OP_WRITE)
return zc ? IORING_OP_WRITE_FIXED : IORING_OP_WRITE;
- assert(0);
+ ublk_assert(0);
}
static int loop_queue_flush_io(struct ublk_thread *t, struct ublk_queue *q,
diff --git a/tools/testing/selftests/ublk/kublk.c b/tools/testing/selftests/ublk/kublk.c
index 95188065b2e9..6b25c7e1e6a6 100644
--- a/tools/testing/selftests/ublk/kublk.c
+++ b/tools/testing/selftests/ublk/kublk.c
@@ -733,7 +733,7 @@ static void ublk_handle_uring_cmd(struct ublk_thread *t,
}
if (cqe->res == UBLK_IO_RES_OK) {
- assert(tag < q->q_depth);
+ ublk_assert(tag < q->q_depth);
if (q->tgt_ops->queue_io)
q->tgt_ops->queue_io(t, q, tag);
} else if (cqe->res == UBLK_IO_RES_NEED_GET_DATA) {
diff --git a/tools/testing/selftests/ublk/kublk.h b/tools/testing/selftests/ublk/kublk.h
index 219233f8a053..b54eef96948e 100644
--- a/tools/testing/selftests/ublk/kublk.h
+++ b/tools/testing/selftests/ublk/kublk.h
@@ -218,7 +218,7 @@ static inline __u64 build_user_data(unsigned tag, unsigned op,
{
/* we only have 7 bits to encode q_id */
_Static_assert(UBLK_MAX_QUEUES_SHIFT <= 7);
- assert(!(tag >> 16) && !(op >> 8) && !(tgt_data >> 16) && !(q_id >> 7));
+ ublk_assert(!(tag >> 16) && !(op >> 8) && !(tgt_data >> 16) && !(q_id >> 7));
return tag | (op << 16) | (tgt_data << 24) |
(__u64)q_id << 56 | (__u64)is_target_io << 63;
diff --git a/tools/testing/selftests/ublk/stripe.c b/tools/testing/selftests/ublk/stripe.c
index 1fb9b7cc281b..81dd05214b3f 100644
--- a/tools/testing/selftests/ublk/stripe.c
+++ b/tools/testing/selftests/ublk/stripe.c
@@ -96,12 +96,12 @@ static void calculate_stripe_array(const struct stripe_conf *conf,
this->seq = seq;
s->nr += 1;
} else {
- assert(seq == this->seq);
- assert(this->start + this->nr_sects == stripe_off);
+ ublk_assert(seq == this->seq);
+ ublk_assert(this->start + this->nr_sects == stripe_off);
this->nr_sects += nr_sects;
}
- assert(this->nr_vec < this->cap);
+ ublk_assert(this->nr_vec < this->cap);
this->vec[this->nr_vec].iov_base = (void *)(base + done);
this->vec[this->nr_vec++].iov_len = nr_sects << 9;
@@ -120,7 +120,7 @@ static inline enum io_uring_op stripe_to_uring_op(
return zc ? IORING_OP_READV_FIXED : IORING_OP_READV;
else if (ublk_op == UBLK_IO_OP_WRITE)
return zc ? IORING_OP_WRITEV_FIXED : IORING_OP_WRITEV;
- assert(0);
+ ublk_assert(0);
}
static int stripe_queue_tgt_rw_io(struct ublk_thread *t, struct ublk_queue *q,
@@ -318,7 +318,7 @@ static int ublk_stripe_tgt_init(const struct dev_ctx *ctx, struct ublk_dev *dev)
if (!dev->tgt.nr_backing_files || dev->tgt.nr_backing_files > NR_STRIPE)
return -EINVAL;
- assert(dev->nr_fds == dev->tgt.nr_backing_files + 1);
+ ublk_assert(dev->nr_fds == dev->tgt.nr_backing_files + 1);
for (i = 0; i < dev->tgt.nr_backing_files; i++)
dev->tgt.backing_file_size[i] &= ~((1 << chunk_shift) - 1);
diff --git a/tools/testing/selftests/ublk/utils.h b/tools/testing/selftests/ublk/utils.h
index 36545d1567f1..47e44cccaf6a 100644
--- a/tools/testing/selftests/ublk/utils.h
+++ b/tools/testing/selftests/ublk/utils.h
@@ -45,6 +45,7 @@ static inline void ublk_err(const char *fmt, ...)
va_start(ap, fmt);
vfprintf(stderr, fmt, ap);
+ va_end(ap);
}
static inline void ublk_log(const char *fmt, ...)
@@ -54,6 +55,7 @@ static inline void ublk_log(const char *fmt, ...)
va_start(ap, fmt);
vfprintf(stdout, fmt, ap);
+ va_end(ap);
}
}
@@ -64,7 +66,15 @@ static inline void ublk_dbg(int level, const char *fmt, ...)
va_start(ap, fmt);
vfprintf(stdout, fmt, ap);
+ va_end(ap);
}
}
+#define ublk_assert(x) do { \
+ if (!(x)) { \
+ ublk_err("%s %d: assert!\n", __func__, __LINE__); \
+ assert(x); \
+ } \
+} while (0)
+
#endif
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 17/23] selftests: ublk: add ublk_io_buf_idx() for returning io buffer index
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (15 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 16/23] selftests: ublk: replace assert() with ublk_assert() Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 18/23] selftests: ublk: add batch buffer management infrastructure Ming Lei
` (5 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Since UBLK_F_PER_IO_DAEMON was added, the io buffer index may depend on the
current thread, because the common approach is to use a per-pthread
io_ring_ctx for issuing ublk uring_cmds.
Add a helper that returns the io buffer index, so the buffer index
implementation details are hidden from target code.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
tools/testing/selftests/ublk/file_backed.c | 9 +++++----
tools/testing/selftests/ublk/kublk.c | 9 +++++----
tools/testing/selftests/ublk/kublk.h | 10 +++++++++-
tools/testing/selftests/ublk/null.c | 18 ++++++++++--------
tools/testing/selftests/ublk/stripe.c | 7 ++++---
5 files changed, 33 insertions(+), 20 deletions(-)
diff --git a/tools/testing/selftests/ublk/file_backed.c b/tools/testing/selftests/ublk/file_backed.c
index 99bde88b6ebd..76cd11597cce 100644
--- a/tools/testing/selftests/ublk/file_backed.c
+++ b/tools/testing/selftests/ublk/file_backed.c
@@ -36,6 +36,7 @@ static int loop_queue_tgt_rw_io(struct ublk_thread *t, struct ublk_queue *q,
enum io_uring_op op = ublk_to_uring_op(iod, zc | auto_zc);
struct io_uring_sqe *sqe[3];
void *addr = (zc | auto_zc) ? NULL : (void *)iod->addr;
+ unsigned short buf_idx = ublk_io_buf_idx(t, q, tag);
if (!zc || auto_zc) {
ublk_io_alloc_sqes(t, sqe, 1);
@@ -47,7 +48,7 @@ static int loop_queue_tgt_rw_io(struct ublk_thread *t, struct ublk_queue *q,
iod->nr_sectors << 9,
iod->start_sector << 9);
if (auto_zc)
- sqe[0]->buf_index = tag;
+ sqe[0]->buf_index = buf_idx;
io_uring_sqe_set_flags(sqe[0], IOSQE_FIXED_FILE);
/* bit63 marks us as tgt io */
sqe[0]->user_data = build_user_data(tag, ublk_op, 0, q->q_id, 1);
@@ -56,7 +57,7 @@ static int loop_queue_tgt_rw_io(struct ublk_thread *t, struct ublk_queue *q,
ublk_io_alloc_sqes(t, sqe, 3);
- io_uring_prep_buf_register(sqe[0], 0, tag, q->q_id, ublk_get_io(q, tag)->buf_index);
+ io_uring_prep_buf_register(sqe[0], 0, tag, q->q_id, buf_idx);
sqe[0]->flags |= IOSQE_CQE_SKIP_SUCCESS | IOSQE_IO_HARDLINK;
sqe[0]->user_data = build_user_data(tag,
ublk_cmd_op_nr(sqe[0]->cmd_op), 0, q->q_id, 1);
@@ -64,11 +65,11 @@ static int loop_queue_tgt_rw_io(struct ublk_thread *t, struct ublk_queue *q,
io_uring_prep_rw(op, sqe[1], 1 /*fds[1]*/, 0,
iod->nr_sectors << 9,
iod->start_sector << 9);
- sqe[1]->buf_index = tag;
+ sqe[1]->buf_index = buf_idx;
sqe[1]->flags |= IOSQE_FIXED_FILE | IOSQE_IO_HARDLINK;
sqe[1]->user_data = build_user_data(tag, ublk_op, 0, q->q_id, 1);
- io_uring_prep_buf_unregister(sqe[2], 0, tag, q->q_id, ublk_get_io(q, tag)->buf_index);
+ io_uring_prep_buf_unregister(sqe[2], 0, tag, q->q_id, buf_idx);
sqe[2]->user_data = build_user_data(tag, ublk_cmd_op_nr(sqe[2]->cmd_op), 0, q->q_id, 1);
return 2;
diff --git a/tools/testing/selftests/ublk/kublk.c b/tools/testing/selftests/ublk/kublk.c
index 6b25c7e1e6a6..ac74b0f554fe 100644
--- a/tools/testing/selftests/ublk/kublk.c
+++ b/tools/testing/selftests/ublk/kublk.c
@@ -565,16 +565,17 @@ static void ublk_dev_unprep(struct ublk_dev *dev)
close(dev->fds[0]);
}
-static void ublk_set_auto_buf_reg(const struct ublk_queue *q,
+static void ublk_set_auto_buf_reg(const struct ublk_thread *t,
+ const struct ublk_queue *q,
struct io_uring_sqe *sqe,
unsigned short tag)
{
struct ublk_auto_buf_reg buf = {};
if (q->tgt_ops->buf_index)
- buf.index = q->tgt_ops->buf_index(q, tag);
+ buf.index = q->tgt_ops->buf_index(t, q, tag);
else
- buf.index = q->ios[tag].buf_index;
+ buf.index = ublk_io_buf_idx(t, q, tag);
if (ublk_queue_auto_zc_fallback(q))
buf.flags = UBLK_AUTO_BUF_REG_FALLBACK;
@@ -638,7 +639,7 @@ int ublk_queue_io_cmd(struct ublk_thread *t, struct ublk_io *io)
cmd->addr = 0;
if (ublk_queue_use_auto_zc(q))
- ublk_set_auto_buf_reg(q, sqe[0], io->tag);
+ ublk_set_auto_buf_reg(t, q, sqe[0], io->tag);
user_data = build_user_data(io->tag, _IOC_NR(cmd_op), 0, q->q_id, 0);
io_uring_sqe_set_data64(sqe[0], user_data);
diff --git a/tools/testing/selftests/ublk/kublk.h b/tools/testing/selftests/ublk/kublk.h
index b54eef96948e..5e62a99b65cc 100644
--- a/tools/testing/selftests/ublk/kublk.h
+++ b/tools/testing/selftests/ublk/kublk.h
@@ -142,7 +142,8 @@ struct ublk_tgt_ops {
void (*usage)(const struct ublk_tgt_ops *ops);
/* return buffer index for UBLK_F_AUTO_BUF_REG */
- unsigned short (*buf_index)(const struct ublk_queue *, int tag);
+ unsigned short (*buf_index)(const struct ublk_thread *t,
+ const struct ublk_queue *, int tag);
};
struct ublk_tgt {
@@ -337,6 +338,13 @@ static inline void ublk_set_sqe_cmd_op(struct io_uring_sqe *sqe, __u32 cmd_op)
addr[1] = 0;
}
+static inline unsigned short ublk_io_buf_idx(const struct ublk_thread *t,
+ const struct ublk_queue *q,
+ unsigned tag)
+{
+ return q->ios[tag].buf_index;
+}
+
static inline struct ublk_io *ublk_get_io(struct ublk_queue *q, unsigned tag)
{
return &q->ios[tag];
diff --git a/tools/testing/selftests/ublk/null.c b/tools/testing/selftests/ublk/null.c
index f0e0003a4860..be796063405c 100644
--- a/tools/testing/selftests/ublk/null.c
+++ b/tools/testing/selftests/ublk/null.c
@@ -43,12 +43,12 @@ static int ublk_null_tgt_init(const struct dev_ctx *ctx, struct ublk_dev *dev)
}
static void __setup_nop_io(int tag, const struct ublksrv_io_desc *iod,
- struct io_uring_sqe *sqe, int q_id)
+ struct io_uring_sqe *sqe, int q_id, unsigned buf_idx)
{
unsigned ublk_op = ublksrv_get_op(iod);
io_uring_prep_nop(sqe);
- sqe->buf_index = tag;
+ sqe->buf_index = buf_idx;
sqe->flags |= IOSQE_FIXED_FILE;
sqe->rw_flags = IORING_NOP_FIXED_BUFFER | IORING_NOP_INJECT_RESULT;
sqe->len = iod->nr_sectors << 9; /* injected result */
@@ -60,18 +60,19 @@ static int null_queue_zc_io(struct ublk_thread *t, struct ublk_queue *q,
{
const struct ublksrv_io_desc *iod = ublk_get_iod(q, tag);
struct io_uring_sqe *sqe[3];
+ unsigned short buf_idx = ublk_io_buf_idx(t, q, tag);
ublk_io_alloc_sqes(t, sqe, 3);
- io_uring_prep_buf_register(sqe[0], 0, tag, q->q_id, ublk_get_io(q, tag)->buf_index);
+ io_uring_prep_buf_register(sqe[0], 0, tag, q->q_id, buf_idx);
sqe[0]->user_data = build_user_data(tag,
ublk_cmd_op_nr(sqe[0]->cmd_op), 0, q->q_id, 1);
sqe[0]->flags |= IOSQE_CQE_SKIP_SUCCESS | IOSQE_IO_HARDLINK;
- __setup_nop_io(tag, iod, sqe[1], q->q_id);
+ __setup_nop_io(tag, iod, sqe[1], q->q_id, buf_idx);
sqe[1]->flags |= IOSQE_IO_HARDLINK;
- io_uring_prep_buf_unregister(sqe[2], 0, tag, q->q_id, ublk_get_io(q, tag)->buf_index);
+ io_uring_prep_buf_unregister(sqe[2], 0, tag, q->q_id, buf_idx);
sqe[2]->user_data = build_user_data(tag, ublk_cmd_op_nr(sqe[2]->cmd_op), 0, q->q_id, 1);
// buf register is marked as IOSQE_CQE_SKIP_SUCCESS
@@ -85,7 +86,7 @@ static int null_queue_auto_zc_io(struct ublk_thread *t, struct ublk_queue *q,
struct io_uring_sqe *sqe[1];
ublk_io_alloc_sqes(t, sqe, 1);
- __setup_nop_io(tag, iod, sqe[0], q->q_id);
+ __setup_nop_io(tag, iod, sqe[0], q->q_id, ublk_io_buf_idx(t, q, tag));
return 1;
}
@@ -136,11 +137,12 @@ static int ublk_null_queue_io(struct ublk_thread *t, struct ublk_queue *q,
* return invalid buffer index for triggering auto buffer register failure,
* then UBLK_IO_RES_NEED_REG_BUF handling is covered
*/
-static unsigned short ublk_null_buf_index(const struct ublk_queue *q, int tag)
+static unsigned short ublk_null_buf_index(const struct ublk_thread *t,
+ const struct ublk_queue *q, int tag)
{
if (ublk_queue_auto_zc_fallback(q))
return (unsigned short)-1;
- return q->ios[tag].buf_index;
+ return ublk_io_buf_idx(t, q, tag);
}
const struct ublk_tgt_ops null_tgt_ops = {
diff --git a/tools/testing/selftests/ublk/stripe.c b/tools/testing/selftests/ublk/stripe.c
index 81dd05214b3f..85b7860b0f0f 100644
--- a/tools/testing/selftests/ublk/stripe.c
+++ b/tools/testing/selftests/ublk/stripe.c
@@ -135,6 +135,7 @@ static int stripe_queue_tgt_rw_io(struct ublk_thread *t, struct ublk_queue *q,
struct ublk_io *io = ublk_get_io(q, tag);
int i, extra = zc ? 2 : 0;
void *base = (zc | auto_zc) ? NULL : (void *)iod->addr;
+ unsigned short buf_idx = ublk_io_buf_idx(t, q, tag);
io->private_data = s;
calculate_stripe_array(conf, iod, s, base);
@@ -142,7 +143,7 @@ static int stripe_queue_tgt_rw_io(struct ublk_thread *t, struct ublk_queue *q,
ublk_io_alloc_sqes(t, sqe, s->nr + extra);
if (zc) {
- io_uring_prep_buf_register(sqe[0], 0, tag, q->q_id, io->buf_index);
+ io_uring_prep_buf_register(sqe[0], 0, tag, q->q_id, buf_idx);
sqe[0]->flags |= IOSQE_CQE_SKIP_SUCCESS | IOSQE_IO_HARDLINK;
sqe[0]->user_data = build_user_data(tag,
ublk_cmd_op_nr(sqe[0]->cmd_op), 0, q->q_id, 1);
@@ -158,7 +159,7 @@ static int stripe_queue_tgt_rw_io(struct ublk_thread *t, struct ublk_queue *q,
t->start << 9);
io_uring_sqe_set_flags(sqe[i], IOSQE_FIXED_FILE);
if (auto_zc || zc) {
- sqe[i]->buf_index = tag;
+ sqe[i]->buf_index = buf_idx;
if (zc)
sqe[i]->flags |= IOSQE_IO_HARDLINK;
}
@@ -168,7 +169,7 @@ static int stripe_queue_tgt_rw_io(struct ublk_thread *t, struct ublk_queue *q,
if (zc) {
struct io_uring_sqe *unreg = sqe[s->nr + 1];
- io_uring_prep_buf_unregister(unreg, 0, tag, q->q_id, io->buf_index);
+ io_uring_prep_buf_unregister(unreg, 0, tag, q->q_id, buf_idx);
unreg->user_data = build_user_data(
tag, ublk_cmd_op_nr(unreg->cmd_op), 0, q->q_id, 1);
}
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 18/23] selftests: ublk: add batch buffer management infrastructure
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (16 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 17/23] selftests: ublk: add ublk_io_buf_idx() for returning io buffer index Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 19/23] selftests: ublk: handle UBLK_U_IO_PREP_IO_CMDS Ming Lei
` (4 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add the foundational infrastructure for UBLK_F_BATCH_IO buffer
management including:
- Allocator utility functions for small-sized per-thread allocations
- Batch buffer allocation and deallocation functions
- Buffer index management for commit buffers
- Thread state management for batch I/O mode
- Buffer size calculation based on device features
This prepares the groundwork for handling batch I/O commands by
establishing the buffer management layer needed for UBLK_U_IO_PREP_IO_CMDS
and UBLK_U_IO_COMMIT_IO_CMDS operations.
The allocator uses CPU sets for efficient per-thread buffer tracking,
and commit buffers are pre-allocated with 2 buffers per thread to handle
overlapping command operations.
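The allocator interface added to utils.h is tiny; a minimal usage sketch
(not part of the patch) looks like:

	#include "utils.h"

	/* sketch: track which of the two per-thread commit buffers are in use */
	static void commit_buf_slot_example(void)
	{
		struct allocator a;
		int idx;

		if (allocator_init(&a, 2))
			return;

		idx = allocator_get(&a);	/* -> 0, the first free slot */
		if (idx >= 0)
			allocator_put(&a, idx);	/* slot becomes free again */

		allocator_deinit(&a);
	}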
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
tools/testing/selftests/ublk/Makefile | 2 +-
tools/testing/selftests/ublk/batch.c | 148 ++++++++++++++++++++++++++
tools/testing/selftests/ublk/kublk.c | 26 ++++-
tools/testing/selftests/ublk/kublk.h | 52 +++++++++
tools/testing/selftests/ublk/utils.h | 54 ++++++++++
5 files changed, 278 insertions(+), 4 deletions(-)
create mode 100644 tools/testing/selftests/ublk/batch.c
diff --git a/tools/testing/selftests/ublk/Makefile b/tools/testing/selftests/ublk/Makefile
index 5d7f4ecfb816..19793678f24c 100644
--- a/tools/testing/selftests/ublk/Makefile
+++ b/tools/testing/selftests/ublk/Makefile
@@ -43,7 +43,7 @@ TEST_GEN_PROGS_EXTENDED = kublk
include ../lib.mk
-$(TEST_GEN_PROGS_EXTENDED): kublk.c null.c file_backed.c common.c stripe.c \
+$(TEST_GEN_PROGS_EXTENDED): kublk.c batch.c null.c file_backed.c common.c stripe.c \
fault_inject.c
check:
diff --git a/tools/testing/selftests/ublk/batch.c b/tools/testing/selftests/ublk/batch.c
new file mode 100644
index 000000000000..ee851e27f053
--- /dev/null
+++ b/tools/testing/selftests/ublk/batch.c
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Description: UBLK_F_BATCH_IO buffer management
+ */
+
+#include "kublk.h"
+
+static inline void *ublk_get_commit_buf(struct ublk_thread *t,
+ unsigned short buf_idx)
+{
+ unsigned idx;
+
+ if (buf_idx < t->commit_buf_start ||
+ buf_idx >= t->commit_buf_start + t->nr_commit_buf)
+ return NULL;
+ idx = buf_idx - t->commit_buf_start;
+ return t->commit_buf + idx * t->commit_buf_size;
+}
+
+/*
+ * Allocate one buffer for UBLK_U_IO_PREP_IO_CMDS or UBLK_U_IO_COMMIT_IO_CMDS
+ *
+ * Buffer index is returned.
+ */
+static inline unsigned short ublk_alloc_commit_buf(struct ublk_thread *t)
+{
+ int idx = allocator_get(&t->commit_buf_alloc);
+
+ if (idx >= 0)
+ return idx + t->commit_buf_start;
+ return UBLKS_T_COMMIT_BUF_INV_IDX;
+}
+
+/*
+ * Free one commit buffer which is used by UBLK_U_IO_PREP_IO_CMDS or
+ * UBLK_U_IO_COMMIT_IO_CMDS
+ */
+static inline void ublk_free_commit_buf(struct ublk_thread *t,
+ unsigned short i)
+{
+ unsigned short idx = i - t->commit_buf_start;
+
+ ublk_assert(idx < t->nr_commit_buf);
+ ublk_assert(allocator_get_val(&t->commit_buf_alloc, idx) != 0);
+
+ allocator_put(&t->commit_buf_alloc, idx);
+}
+
+static unsigned char ublk_commit_elem_buf_size(struct ublk_dev *dev)
+{
+ if (dev->dev_info.flags & (UBLK_F_SUPPORT_ZERO_COPY | UBLK_F_USER_COPY |
+ UBLK_F_AUTO_BUF_REG))
+ return 8;
+
+ /* one extra 8bytes for carrying buffer address */
+ return 16;
+}
+
+static unsigned ublk_commit_buf_size(struct ublk_thread *t)
+{
+ struct ublk_dev *dev = t->dev;
+ unsigned elem_size = ublk_commit_elem_buf_size(dev);
+ unsigned int total = elem_size * dev->dev_info.queue_depth;
+ unsigned int page_sz = getpagesize();
+
+ return round_up(total, page_sz);
+}
+
+static void free_batch_commit_buf(struct ublk_thread *t)
+{
+ free(t->commit_buf);
+ allocator_deinit(&t->commit_buf_alloc);
+}
+
+static int alloc_batch_commit_buf(struct ublk_thread *t)
+{
+ unsigned buf_size = ublk_commit_buf_size(t);
+ unsigned int total = buf_size * t->nr_commit_buf;
+ struct iovec iov[t->nr_commit_buf];
+ unsigned int page_sz = getpagesize();
+ void *buf = NULL;
+ int i, ret;
+
+ allocator_init(&t->commit_buf_alloc, t->nr_commit_buf);
+
+ t->commit_buf = NULL;
+ ret = posix_memalign(&buf, page_sz, total);
+ if (ret || !buf)
+ goto fail;
+
+ t->commit_buf = buf;
+ for (i = 0; i < t->nr_commit_buf; i++) {
+ iov[i].iov_base = buf;
+ iov[i].iov_len = buf_size;
+ buf += buf_size;
+ }
+
+ ret = io_uring_register_buffers_update_tag(&t->ring,
+ t->commit_buf_start, iov, NULL,
+ t->nr_commit_buf);
+ if (ret == t->nr_commit_buf)
+ return 0;
+
+ ublk_err("%s: io_uring_register_buffers_update_tag failed ret %d\n",
+ __func__, ret);
+fail:
+ free_batch_commit_buf(t);
+ return ret;
+}
+
+void ublk_batch_prepare(struct ublk_thread *t)
+{
+ /*
+ * We only handle single device in this thread context.
+ *
+ * All queues have same feature flags, so use queue 0's for
+ * calculate uring_cmd flags.
+ *
+ * This way looks not elegant, but it works so far.
+ */
+ struct ublk_queue *q = &t->dev->q[0];
+
+ t->commit_buf_elem_size = ublk_commit_elem_buf_size(t->dev);
+ t->commit_buf_size = ublk_commit_buf_size(t);
+ t->commit_buf_start = t->nr_bufs;
+ t->nr_commit_buf = 2;
+ t->nr_bufs += t->nr_commit_buf;
+
+ t->cmd_flags = 0;
+ if (ublk_queue_use_auto_zc(q)) {
+ if (ublk_queue_auto_zc_fallback(q))
+ t->cmd_flags |= UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK;
+ } else if (!ublk_queue_no_buf(q))
+ t->cmd_flags |= UBLK_BATCH_F_HAS_BUF_ADDR;
+
+ t->state |= UBLKS_T_BATCH_IO;
+}
+
+int ublk_batch_alloc_buf(struct ublk_thread *t)
+{
+ ublk_assert(t->nr_commit_buf < 16);
+ return alloc_batch_commit_buf(t);
+}
+
+void ublk_batch_free_buf(struct ublk_thread *t)
+{
+ free_batch_commit_buf(t);
+}
\ No newline at end of file
diff --git a/tools/testing/selftests/ublk/kublk.c b/tools/testing/selftests/ublk/kublk.c
index ac74b0f554fe..d7cf8d90c1eb 100644
--- a/tools/testing/selftests/ublk/kublk.c
+++ b/tools/testing/selftests/ublk/kublk.c
@@ -423,6 +423,8 @@ static void ublk_thread_deinit(struct ublk_thread *t)
{
io_uring_unregister_buffers(&t->ring);
+ ublk_batch_free_buf(t);
+
io_uring_unregister_ring_fd(&t->ring);
if (t->ring.ring_fd > 0) {
@@ -501,15 +503,33 @@ static int ublk_thread_init(struct ublk_thread *t)
unsigned nr_ios = dev->dev_info.queue_depth * dev->dev_info.nr_hw_queues;
unsigned max_nr_ios_per_thread = nr_ios / dev->nthreads;
max_nr_ios_per_thread += !!(nr_ios % dev->nthreads);
- ret = io_uring_register_buffers_sparse(
- &t->ring, max_nr_ios_per_thread);
+
+ t->nr_bufs = max_nr_ios_per_thread;
+ } else {
+ t->nr_bufs = 0;
+ }
+
+ if (ublk_dev_batch_io(dev))
+ ublk_batch_prepare(t);
+
+ if (t->nr_bufs) {
+ ret = io_uring_register_buffers_sparse(&t->ring, t->nr_bufs);
if (ret) {
- ublk_err("ublk dev %d thread %d register spare buffers failed %d",
+ ublk_err("ublk dev %d thread %d register spare buffers failed %d\n",
dev->dev_info.dev_id, t->idx, ret);
goto fail;
}
}
+ if (ublk_dev_batch_io(dev)) {
+ ret = ublk_batch_alloc_buf(t);
+ if (ret) {
+ ublk_err("ublk dev %d thread %d alloc batch buf failed %d\n",
+ dev->dev_info.dev_id, t->idx, ret);
+ goto fail;
+ }
+ }
+
io_uring_register_ring_fd(&t->ring);
ret = io_uring_register_files(&t->ring, dev->fds, dev->nr_fds);
diff --git a/tools/testing/selftests/ublk/kublk.h b/tools/testing/selftests/ublk/kublk.h
index 5e62a99b65cc..6a10779b830e 100644
--- a/tools/testing/selftests/ublk/kublk.h
+++ b/tools/testing/selftests/ublk/kublk.h
@@ -171,6 +171,14 @@ struct ublk_queue {
struct ublk_io ios[UBLK_QUEUE_DEPTH];
};
+/* align with `ublk_elem_header` */
+struct ublk_batch_elem {
+ __u16 tag;
+ __u16 buf_index;
+ __s32 result;
+ __u64 buf_addr;
+};
+
struct ublk_thread {
struct ublk_dev *dev;
struct io_uring ring;
@@ -182,7 +190,23 @@ struct ublk_thread {
#define UBLKS_T_STOPPING (1U << 0)
#define UBLKS_T_IDLE (1U << 1)
+#define UBLKS_T_BATCH_IO (1U << 31) /* readonly */
unsigned state;
+
+ unsigned short nr_bufs;
+
+ /* followings are for BATCH_IO */
+ unsigned short commit_buf_start;
+ unsigned char commit_buf_elem_size;
+ /*
+ * We just support single device, so pre-calculate commit/prep flags
+ */
+ unsigned short cmd_flags;
+ unsigned int nr_commit_buf;
+ unsigned int commit_buf_size;
+ void *commit_buf;
+#define UBLKS_T_COMMIT_BUF_INV_IDX ((unsigned short)-1)
+ struct allocator commit_buf_alloc;
};
struct ublk_dev {
@@ -203,6 +227,27 @@ struct ublk_dev {
extern int ublk_queue_io_cmd(struct ublk_thread *t, struct ublk_io *io);
+static inline int __ublk_use_batch_io(__u64 flags)
+{
+ return flags & UBLK_F_BATCH_IO;
+}
+
+static inline int ublk_queue_batch_io(const struct ublk_queue *q)
+{
+ return __ublk_use_batch_io(q->flags);
+}
+
+static inline int ublk_dev_batch_io(const struct ublk_dev *dev)
+{
+ return __ublk_use_batch_io(dev->dev_info.flags);
+}
+
+/* only work for handle single device in this pthread context */
+static inline int ublk_thread_batch_io(const struct ublk_thread *t)
+{
+ return t->state & UBLKS_T_BATCH_IO;
+}
+
static inline int ublk_io_auto_zc_fallback(const struct ublksrv_io_desc *iod)
{
@@ -404,6 +449,13 @@ static inline int ublk_queue_no_buf(const struct ublk_queue *q)
return ublk_queue_use_zc(q) || ublk_queue_use_auto_zc(q);
}
+/* Initialize batch I/O state and calculate buffer parameters */
+void ublk_batch_prepare(struct ublk_thread *t);
+/* Allocate and register commit buffers for batch operations */
+int ublk_batch_alloc_buf(struct ublk_thread *t);
+/* Free commit buffers and cleanup batch allocator */
+void ublk_batch_free_buf(struct ublk_thread *t);
+
extern const struct ublk_tgt_ops null_tgt_ops;
extern const struct ublk_tgt_ops loop_tgt_ops;
extern const struct ublk_tgt_ops stripe_tgt_ops;
diff --git a/tools/testing/selftests/ublk/utils.h b/tools/testing/selftests/ublk/utils.h
index 47e44cccaf6a..50fc9503f9f8 100644
--- a/tools/testing/selftests/ublk/utils.h
+++ b/tools/testing/selftests/ublk/utils.h
@@ -23,6 +23,60 @@
#define round_up(val, rnd) \
(((val) + ((rnd) - 1)) & ~((rnd) - 1))
+/* small sized & per-thread allocator */
+struct allocator {
+ unsigned int size;
+ cpu_set_t *set;
+};
+
+static inline int allocator_init(struct allocator *a, unsigned size)
+{
+ a->set = CPU_ALLOC(size);
+ a->size = size;
+
+ if (a->set)
+ return 0;
+ return -ENOMEM;
+}
+
+static inline void allocator_deinit(struct allocator *a)
+{
+ CPU_FREE(a->set);
+ a->set = NULL;
+ a->size = 0;
+}
+
+static inline int allocator_get(struct allocator *a)
+{
+ int i;
+
+ for (i = 0; i < a->size; i += 1) {
+ size_t set_size = CPU_ALLOC_SIZE(a->size);
+
+ if (!CPU_ISSET_S(i, set_size, a->set)) {
+ CPU_SET_S(i, set_size, a->set);
+ return i;
+ }
+ }
+
+ return -1;
+}
+
+static inline void allocator_put(struct allocator *a, int i)
+{
+ size_t set_size = CPU_ALLOC_SIZE(a->size);
+
+ if (i >= 0 && i < a->size)
+ CPU_CLR_S(i, set_size, a->set);
+}
+
+static inline int allocator_get_val(struct allocator *a, int i)
+{
+ size_t set_size = CPU_ALLOC_SIZE(a->size);
+
+ return CPU_ISSET_S(i, set_size, a->set);
+}
+
static inline unsigned int ilog2(unsigned int x)
{
if (x == 0)
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 19/23] selftests: ublk: handle UBLK_U_IO_PREP_IO_CMDS
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (17 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 18/23] selftests: ublk: add batch buffer management infrastructure Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 20/23] selftests: ublk: handle UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
` (3 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Implement support for UBLK_U_IO_PREP_IO_CMDS in the batch I/O framework:
- Add batch command initialization and setup functions
- Implement prep command queueing with proper buffer management
- Add command completion handling for prep and commit commands
- Integrate batch I/O setup into thread initialization
- Update CQE handling to support batch commands
The implementation uses the previously established buffer management
infrastructure to queue UBLK_U_IO_PREP_IO_CMDS commands. Commands are
prepared in the first thread context and use commit buffers for
efficient command batching.
Key changes:
- ublk_batch_queue_prep_io_cmds() prepares I/O command batches
- ublk_batch_compl_cmd() handles batch command completions
- Modified thread setup to use batch operations when enabled
- Enhanced buffer index calculation for batch mode
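For reference, a condensed sketch of the per-queue prep flow added below
(simplified from this patch; commit buffer allocation and error handling
elided):

	/* fill one ublk_batch_elem per tag into a registered commit buffer */
	void *buf = ublk_get_commit_buf(t, buf_idx);

	for (i = 0; i < q->q_depth; i++) {
		struct ublk_batch_elem *elem = buf + i * t->commit_buf_elem_size;

		elem->tag = i;
		elem->result = 0;
		if (!ublk_queue_no_buf(q))
			elem->buf_addr = (__u64)q->ios[i].buf_addr;
	}

	/* one UBLK_U_IO_PREP_IO_CMDS uring_cmd then covers the whole queue */
	sqe->addr = (__u64)buf;
	sqe->len = t->commit_buf_elem_size * q->q_depth;

The commit buffer index is encoded in the SQE's user_data, so the completion
handler can release the right buffer once the prep command completes.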
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
tools/testing/selftests/ublk/batch.c | 116 ++++++++++++++++++++++++++-
tools/testing/selftests/ublk/kublk.c | 46 ++++++++---
tools/testing/selftests/ublk/kublk.h | 22 +++++
3 files changed, 173 insertions(+), 11 deletions(-)
diff --git a/tools/testing/selftests/ublk/batch.c b/tools/testing/selftests/ublk/batch.c
index ee851e27f053..e680c9625de1 100644
--- a/tools/testing/selftests/ublk/batch.c
+++ b/tools/testing/selftests/ublk/batch.c
@@ -145,4 +145,118 @@ int ublk_batch_alloc_buf(struct ublk_thread *t)
void ublk_batch_free_buf(struct ublk_thread *t)
{
free_batch_commit_buf(t);
-}
\ No newline at end of file
+}
+
+static void ublk_init_batch_cmd(struct ublk_thread *t, __u16 q_id,
+ struct io_uring_sqe *sqe, unsigned op,
+ unsigned short elem_bytes,
+ unsigned short nr_elem,
+ unsigned short buf_idx)
+{
+ struct ublk_batch_io *cmd;
+ __u64 user_data;
+
+ cmd = (struct ublk_batch_io *)ublk_get_sqe_cmd(sqe);
+
+ ublk_set_sqe_cmd_op(sqe, op);
+
+ sqe->fd = 0; /* dev->fds[0] */
+ sqe->opcode = IORING_OP_URING_CMD;
+ sqe->flags = IOSQE_FIXED_FILE;
+
+ cmd->q_id = q_id;
+ cmd->flags = 0;
+ cmd->reserved = 0;
+ cmd->elem_bytes = elem_bytes;
+ cmd->nr_elem = nr_elem;
+
+ user_data = build_user_data(buf_idx, _IOC_NR(op), 0, q_id, 0);
+ io_uring_sqe_set_data64(sqe, user_data);
+
+ t->cmd_inflight += 1;
+
+ ublk_dbg(UBLK_DBG_IO_CMD, "%s: thread %u qid %d cmd_op %x data %lx "
+ "nr_elem %u elem_bytes %u buf_size %u buf_idx %d "
+ "cmd_inflight %u\n",
+ __func__, t->idx, q_id, op, user_data,
+ cmd->nr_elem, cmd->elem_bytes,
+ nr_elem * elem_bytes, buf_idx, t->cmd_inflight);
+}
+
+static void ublk_setup_commit_sqe(struct ublk_thread *t,
+ struct io_uring_sqe *sqe,
+ unsigned short buf_idx)
+{
+ struct ublk_batch_io *cmd;
+
+ cmd = (struct ublk_batch_io *)ublk_get_sqe_cmd(sqe);
+
+ sqe->rw_flags = IORING_URING_CMD_FIXED;
+ sqe->buf_index = buf_idx;
+ cmd->flags |= t->cmd_flags;
+}
+
+int ublk_batch_queue_prep_io_cmds(struct ublk_thread *t, struct ublk_queue *q)
+{
+ unsigned short nr_elem = q->q_depth;
+ unsigned short buf_idx = ublk_alloc_commit_buf(t);
+ struct io_uring_sqe *sqe;
+ void *buf;
+ int i;
+
+ ublk_assert(buf_idx != UBLKS_T_COMMIT_BUF_INV_IDX);
+
+ ublk_io_alloc_sqes(t, &sqe, 1);
+
+ ublk_assert(nr_elem == q->q_depth);
+ buf = ublk_get_commit_buf(t, buf_idx);
+ for (i = 0; i < nr_elem; i++) {
+ struct ublk_batch_elem *elem = (struct ublk_batch_elem *)(
+ buf + i * t->commit_buf_elem_size);
+ struct ublk_io *io = &q->ios[i];
+
+ elem->tag = i;
+ elem->result = 0;
+
+ if (ublk_queue_use_auto_zc(q))
+ elem->buf_index = ublk_batch_io_buf_idx(t, q, i);
+ else if (!ublk_queue_no_buf(q))
+ elem->buf_addr = (__u64)io->buf_addr;
+ }
+
+ sqe->addr = (__u64)buf;
+ sqe->len = t->commit_buf_elem_size * nr_elem;
+
+ ublk_init_batch_cmd(t, q->q_id, sqe, UBLK_U_IO_PREP_IO_CMDS,
+ t->commit_buf_elem_size, nr_elem, buf_idx);
+ ublk_setup_commit_sqe(t, sqe, buf_idx);
+ return 0;
+}
+
+static void ublk_batch_compl_commit_cmd(struct ublk_thread *t,
+ const struct io_uring_cqe *cqe,
+ unsigned op)
+{
+ unsigned short buf_idx = user_data_to_tag(cqe->user_data);
+
+ if (op == _IOC_NR(UBLK_U_IO_PREP_IO_CMDS))
+ ublk_assert(cqe->res == 0);
+ else if (op == _IOC_NR(UBLK_U_IO_COMMIT_IO_CMDS))
+ ;//assert(cqe->res == t->commit_buf_size);
+ else
+ ublk_assert(0);
+
+ ublk_free_commit_buf(t, buf_idx);
+}
+
+void ublk_batch_compl_cmd(struct ublk_thread *t,
+ const struct io_uring_cqe *cqe)
+{
+ unsigned op = user_data_to_op(cqe->user_data);
+
+ if (op == _IOC_NR(UBLK_U_IO_PREP_IO_CMDS) ||
+ op == _IOC_NR(UBLK_U_IO_COMMIT_IO_CMDS)) {
+ ublk_batch_compl_commit_cmd(t, cqe, op);
+ return;
+ }
+}
diff --git a/tools/testing/selftests/ublk/kublk.c b/tools/testing/selftests/ublk/kublk.c
index d7cf8d90c1eb..9434c5f0de19 100644
--- a/tools/testing/selftests/ublk/kublk.c
+++ b/tools/testing/selftests/ublk/kublk.c
@@ -778,28 +778,32 @@ static void ublk_handle_cqe(struct ublk_thread *t,
{
struct ublk_dev *dev = t->dev;
unsigned q_id = user_data_to_q_id(cqe->user_data);
- struct ublk_queue *q = &dev->q[q_id];
unsigned cmd_op = user_data_to_op(cqe->user_data);
if (cqe->res < 0 && cqe->res != -ENODEV)
- ublk_err("%s: res %d userdata %llx queue state %x\n", __func__,
- cqe->res, cqe->user_data, q->flags);
+ ublk_err("%s: res %d userdata %llx thread state %x\n", __func__,
+ cqe->res, cqe->user_data, t->state);
- ublk_dbg(UBLK_DBG_IO_CMD, "%s: res %d (qid %d tag %u cmd_op %u target %d/%d) stopping %d\n",
- __func__, cqe->res, q->q_id, user_data_to_tag(cqe->user_data),
- cmd_op, is_target_io(cqe->user_data),
+ ublk_dbg(UBLK_DBG_IO_CMD, "%s: res %d (thread %d qid %d tag %u cmd_op %x "
+ "data %lx target %d/%d) stopping %d\n",
+ __func__, cqe->res, t->idx, q_id,
+ user_data_to_tag(cqe->user_data),
+ cmd_op, cqe->user_data, is_target_io(cqe->user_data),
user_data_to_tgt_data(cqe->user_data),
(t->state & UBLKS_T_STOPPING));
/* Don't retrieve io in case of target io */
if (is_target_io(cqe->user_data)) {
- ublksrv_handle_tgt_cqe(t, q, cqe);
+ ublksrv_handle_tgt_cqe(t, &dev->q[q_id], cqe);
return;
}
t->cmd_inflight--;
- ublk_handle_uring_cmd(t, q, cqe);
+ if (ublk_thread_batch_io(t))
+ ublk_batch_compl_cmd(t, cqe);
+ else
+ ublk_handle_uring_cmd(t, &dev->q[q_id], cqe);
}
static int ublk_reap_events_uring(struct ublk_thread *t)
@@ -855,6 +859,22 @@ struct ublk_thread_info {
cpu_set_t *affinity;
};
+static void ublk_batch_setup_queues(struct ublk_thread *t)
+{
+ int i;
+
+ /* setup all queues in the 1st thread */
+ for (i = 0; i < t->dev->dev_info.nr_hw_queues; i++) {
+ struct ublk_queue *q = &t->dev->q[i];
+ int ret;
+
+ ret = ublk_batch_queue_prep_io_cmds(t, q);
+ ublk_assert(ret == 0);
+ ret = ublk_process_io(t);
+ ublk_assert(ret >= 0);
+ }
+}
+
static void *ublk_io_handler_fn(void *data)
{
struct ublk_thread_info *info = data;
@@ -879,8 +899,14 @@ static void *ublk_io_handler_fn(void *data)
ublk_dbg(UBLK_DBG_THREAD, "tid %d: ublk dev %d thread %u started\n",
gettid(), dev_id, t->idx);
- /* submit all io commands to ublk driver */
- ublk_submit_fetch_commands(t);
+ if (!ublk_thread_batch_io(t)) {
+ /* submit all io commands to ublk driver */
+ ublk_submit_fetch_commands(t);
+ } else if (!t->idx) {
+ /* prepare all io commands in the 1st thread context */
+ ublk_batch_setup_queues(t);
+ }
+
do {
if (ublk_process_io(t) < 0)
break;
diff --git a/tools/testing/selftests/ublk/kublk.h b/tools/testing/selftests/ublk/kublk.h
index 6a10779b830e..158836405a14 100644
--- a/tools/testing/selftests/ublk/kublk.h
+++ b/tools/testing/selftests/ublk/kublk.h
@@ -383,10 +383,16 @@ static inline void ublk_set_sqe_cmd_op(struct io_uring_sqe *sqe, __u32 cmd_op)
addr[1] = 0;
}
+static inline unsigned short ublk_batch_io_buf_idx(
+ const struct ublk_thread *t, const struct ublk_queue *q,
+ unsigned tag);
+
static inline unsigned short ublk_io_buf_idx(const struct ublk_thread *t,
const struct ublk_queue *q,
unsigned tag)
{
+ if (ublk_queue_batch_io(q))
+ return ublk_batch_io_buf_idx(t, q, tag);
return q->ios[tag].buf_index;
}
@@ -449,6 +455,22 @@ static inline int ublk_queue_no_buf(const struct ublk_queue *q)
return ublk_queue_use_zc(q) || ublk_queue_use_auto_zc(q);
}
+/*
+ * Each IO's buffer index has to be calculated by this helper for
+ * UBLKS_T_BATCH_IO
+ */
+static inline unsigned short ublk_batch_io_buf_idx(
+ const struct ublk_thread *t, const struct ublk_queue *q,
+ unsigned tag)
+{
+ return tag;
+}
+
+/* Queue UBLK_U_IO_PREP_IO_CMDS for a specific queue with batch elements */
+int ublk_batch_queue_prep_io_cmds(struct ublk_thread *t, struct ublk_queue *q);
+/* Handle completion of batch I/O commands (prep/commit) */
+void ublk_batch_compl_cmd(struct ublk_thread *t,
+ const struct io_uring_cqe *cqe);
/* Initialize batch I/O state and calculate buffer parameters */
void ublk_batch_prepare(struct ublk_thread *t);
/* Allocate and register commit buffers for batch operations */
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 20/23] selftests: ublk: handle UBLK_U_IO_COMMIT_IO_CMDS
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (18 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 19/23] selftests: ublk: handle UBLK_U_IO_PREP_IO_CMDS Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 21/23] selftests: ublk: handle UBLK_U_IO_FETCH_IO_CMDS Ming Lei
` (2 subsequent siblings)
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Implement UBLK_U_IO_COMMIT_IO_CMDS to enable efficient batched
completion of I/O operations in the batch I/O framework.
This completes the batch I/O infrastructure by adding the commit
phase that notifies the kernel about completed I/O operations:
Key features:
- Batch multiple I/O completions into single UBLK_U_IO_COMMIT_IO_CMDS
- Dynamic commit buffer allocation and management per thread
- Automatic commit buffer preparation before processing events
- Commit buffer submission after processing completed I/Os
- Integration with existing completion workflows
Implementation details:
- ublk_batch_prep_commit() allocates and initializes commit buffers
- ublk_batch_complete_io() adds completed I/Os to current batch
- ublk_batch_commit_io_cmds() submits batched completions to kernel
- Modified ublk_process_io() to handle batch commit lifecycle
- Enhanced ublk_complete_io() to route to batch or legacy completion
The commit buffer stores completion information (tag, result, buffer
details) for multiple I/Os, then submits them all at once, significantly
reducing syscall overhead compared to individual I/O completions.
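Put together, one iteration of the event loop in batch mode looks roughly
like this (sketch of the ublk_process_io() change below):

	io_uring_submit_and_wait(&t->ring, 1);

	/* reserve a commit buffer for this round */
	ublk_batch_prep_commit(t);

	/* target completions call ublk_complete_io(), which appends one
	 * ublk_batch_elem per finished I/O to the commit buffer
	 */
	reapped = ublk_reap_events_uring(t);

	/* submit one COMMIT_IO_CMDS for the whole batch, or release the
	 * buffer if nothing was completed in this round
	 */
	ublk_batch_commit_io_cmds(t);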
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
tools/testing/selftests/ublk/batch.c | 74 ++++++++++++++++++++++++++--
tools/testing/selftests/ublk/kublk.c | 8 ++-
tools/testing/selftests/ublk/kublk.h | 69 +++++++++++++++++---------
3 files changed, 122 insertions(+), 29 deletions(-)
diff --git a/tools/testing/selftests/ublk/batch.c b/tools/testing/selftests/ublk/batch.c
index e680c9625de1..83f6df61fed9 100644
--- a/tools/testing/selftests/ublk/batch.c
+++ b/tools/testing/selftests/ublk/batch.c
@@ -170,7 +170,7 @@ static void ublk_init_batch_cmd(struct ublk_thread *t, __u16 q_id,
cmd->elem_bytes = elem_bytes;
cmd->nr_elem = nr_elem;
- user_data = build_user_data(buf_idx, _IOC_NR(op), 0, q_id, 0);
+ user_data = build_user_data(buf_idx, _IOC_NR(op), nr_elem, q_id, 0);
io_uring_sqe_set_data64(sqe, user_data);
t->cmd_inflight += 1;
@@ -241,9 +241,11 @@ static void ublk_batch_compl_commit_cmd(struct ublk_thread *t,
if (op == _IOC_NR(UBLK_U_IO_PREP_IO_CMDS))
ublk_assert(cqe->res == 0);
- else if (op == _IOC_NR(UBLK_U_IO_COMMIT_IO_CMDS))
- ;//assert(cqe->res == t->commit_buf_size);
- else
+ else if (op == _IOC_NR(UBLK_U_IO_COMMIT_IO_CMDS)) {
+ int nr_elem = user_data_to_tgt_data(cqe->user_data);
+
+ ublk_assert(cqe->res == t->commit_buf_elem_size * nr_elem);
+ } else
ublk_assert(0);
ublk_free_commit_buf(t, buf_idx);
@@ -260,3 +262,67 @@ void ublk_batch_compl_cmd(struct ublk_thread *t,
return;
}
}
+
+void ublk_batch_commit_io_cmds(struct ublk_thread *t)
+{
+ struct io_uring_sqe *sqe;
+ unsigned short buf_idx;
+ unsigned short nr_elem = t->commit.done;
+
+ /* nothing to commit */
+ if (!nr_elem) {
+ ublk_free_commit_buf(t, t->commit.buf_idx);
+ return;
+ }
+
+ ublk_io_alloc_sqes(t, &sqe, 1);
+ buf_idx = t->commit.buf_idx;
+ sqe->addr = (__u64)t->commit.elem;
+ sqe->len = nr_elem * t->commit_buf_elem_size;
+
+ /* commit isn't a per-io command */
+ ublk_init_batch_cmd(t, t->commit.q_id, sqe, UBLK_U_IO_COMMIT_IO_CMDS,
+ t->commit_buf_elem_size, nr_elem, buf_idx);
+ ublk_setup_commit_sqe(t, sqe, buf_idx);
+}
+
+static void ublk_batch_init_commit(struct ublk_thread *t,
+ unsigned short buf_idx)
+{
+ /* so far only support 1:1 queue/thread mapping */
+ t->commit.q_id = t->idx;
+ t->commit.buf_idx = buf_idx;
+ t->commit.elem = ublk_get_commit_buf(t, buf_idx);
+ t->commit.done = 0;
+ t->commit.count = t->commit_buf_size /
+ t->commit_buf_elem_size;
+}
+
+void ublk_batch_prep_commit(struct ublk_thread *t)
+{
+ unsigned short buf_idx = ublk_alloc_commit_buf(t);
+
+ ublk_assert(buf_idx != UBLKS_T_COMMIT_BUF_INV_IDX);
+ ublk_batch_init_commit(t, buf_idx);
+}
+
+void ublk_batch_complete_io(struct ublk_thread *t, struct ublk_queue *q,
+ unsigned tag, int res)
+{
+ struct batch_commit_buf *cb = &t->commit;
+ struct ublk_batch_elem *elem = (struct ublk_batch_elem *)(cb->elem +
+ cb->done * t->commit_buf_elem_size);
+ struct ublk_io *io = &q->ios[tag];
+
+ ublk_assert(q->q_id == t->commit.q_id);
+
+ elem->tag = tag;
+ elem->buf_index = ublk_batch_io_buf_idx(t, q, tag);
+ elem->result = res;
+
+ if (!ublk_queue_no_buf(q))
+ elem->buf_addr = (__u64) (uintptr_t) io->buf_addr;
+
+ cb->done += 1;
+ ublk_assert(cb->done <= cb->count);
+}
diff --git a/tools/testing/selftests/ublk/kublk.c b/tools/testing/selftests/ublk/kublk.c
index 9434c5f0de19..42e9e7fb8d88 100644
--- a/tools/testing/selftests/ublk/kublk.c
+++ b/tools/testing/selftests/ublk/kublk.c
@@ -835,7 +835,13 @@ static int ublk_process_io(struct ublk_thread *t)
return -ENODEV;
ret = io_uring_submit_and_wait(&t->ring, 1);
- reapped = ublk_reap_events_uring(t);
+ if (ublk_thread_batch_io(t)) {
+ ublk_batch_prep_commit(t);
+ reapped = ublk_reap_events_uring(t);
+ ublk_batch_commit_io_cmds(t);
+ } else {
+ reapped = ublk_reap_events_uring(t);
+ }
ublk_dbg(UBLK_DBG_THREAD, "submit result %d, reapped %d stop %d idle %d\n",
ret, reapped, (t->state & UBLKS_T_STOPPING),
diff --git a/tools/testing/selftests/ublk/kublk.h b/tools/testing/selftests/ublk/kublk.h
index 158836405a14..e51ef2f32b5b 100644
--- a/tools/testing/selftests/ublk/kublk.h
+++ b/tools/testing/selftests/ublk/kublk.h
@@ -179,6 +179,14 @@ struct ublk_batch_elem {
__u64 buf_addr;
};
+struct batch_commit_buf {
+ unsigned short q_id;
+ unsigned short buf_idx;
+ void *elem;
+ unsigned short done;
+ unsigned short count;
+};
+
struct ublk_thread {
struct ublk_dev *dev;
struct io_uring ring;
@@ -207,6 +215,7 @@ struct ublk_thread {
void *commit_buf;
#define UBLKS_T_COMMIT_BUF_INV_IDX ((unsigned short)-1)
struct allocator commit_buf_alloc;
+ struct batch_commit_buf commit;
};
struct ublk_dev {
@@ -401,30 +410,6 @@ static inline struct ublk_io *ublk_get_io(struct ublk_queue *q, unsigned tag)
return &q->ios[tag];
}
-static inline int ublk_complete_io(struct ublk_thread *t, struct ublk_queue *q,
- unsigned tag, int res)
-{
- struct ublk_io *io = &q->ios[tag];
-
- ublk_mark_io_done(io, res);
-
- return ublk_queue_io_cmd(t, io);
-}
-
-static inline void ublk_queued_tgt_io(struct ublk_thread *t, struct ublk_queue *q,
- unsigned tag, int queued)
-{
- if (queued < 0)
- ublk_complete_io(t, q, tag, queued);
- else {
- struct ublk_io *io = ublk_get_io(q, tag);
-
- t->io_inflight += queued;
- io->tgt_ios = queued;
- io->result = 0;
- }
-}
-
static inline int ublk_completed_tgt_io(struct ublk_thread *t,
struct ublk_queue *q, unsigned tag)
{
@@ -478,6 +463,42 @@ int ublk_batch_alloc_buf(struct ublk_thread *t);
/* Free commit buffers and cleanup batch allocator */
void ublk_batch_free_buf(struct ublk_thread *t);
+/* Prepare a new commit buffer for batching completed I/O operations */
+void ublk_batch_prep_commit(struct ublk_thread *t);
+/* Submit UBLK_U_IO_COMMIT_IO_CMDS with batched completed I/O operations */
+void ublk_batch_commit_io_cmds(struct ublk_thread *t);
+/* Add a completed I/O operation to the current batch commit buffer */
+void ublk_batch_complete_io(struct ublk_thread *t, struct ublk_queue *q,
+ unsigned tag, int res);
+
+static inline int ublk_complete_io(struct ublk_thread *t, struct ublk_queue *q,
+ unsigned tag, int res)
+{
+ if (ublk_queue_batch_io(q)) {
+ ublk_batch_complete_io(t, q, tag, res);
+ return 0;
+ } else {
+ struct ublk_io *io = &q->ios[tag];
+
+ ublk_mark_io_done(io, res);
+ return ublk_queue_io_cmd(t, io);
+ }
+}
+
+static inline void ublk_queued_tgt_io(struct ublk_thread *t, struct ublk_queue *q,
+ unsigned tag, int queued)
+{
+ if (queued < 0)
+ ublk_complete_io(t, q, tag, queued);
+ else {
+ struct ublk_io *io = ublk_get_io(q, tag);
+
+ t->io_inflight += queued;
+ io->tgt_ios = queued;
+ io->result = 0;
+ }
+}
+
extern const struct ublk_tgt_ops null_tgt_ops;
extern const struct ublk_tgt_ops loop_tgt_ops;
extern const struct ublk_tgt_ops stripe_tgt_ops;
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 21/23] selftests: ublk: handle UBLK_U_IO_FETCH_IO_CMDS
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (19 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 20/23] selftests: ublk: handle UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 22/23] selftests: ublk: add --batch/-b for enabling F_BATCH_IO Ming Lei
2025-09-01 10:02 ` [PATCH 23/23] selftests: ublk: support arbitrary threads/queues combination Ming Lei
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add support for UBLK_U_IO_FETCH_IO_CMDS to enable efficient batch
fetching of I/O commands using multishot io_uring operations.
Key improvements:
- Implement multishot UBLK_U_IO_FETCH_IO_CMDS for continuous command fetching
- Add fetch buffer management with page-aligned, mlocked buffers
- Process fetched I/O command tags from kernel-provided buffers
- Integrate fetch operations with existing batch I/O infrastructure
- Significantly reduce uring_cmd issuing overhead through batching
The implementation uses two fetch buffers per thread with automatic
requeuing to maintain continuous I/O command flow. Each fetch operation
retrieves multiple command tags in a single syscall, dramatically
improving performance compared to individual command fetching.
Technical details:
- Fetch buffers are page-aligned and mlocked for optimal performance
- Uses IORING_URING_CMD_MULTISHOT for continuous operation
- Automatic buffer management and requeuing on completion
- Enhanced CQE handling for fetch command completions
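A fetch CQE just reports how many bytes of tags were written into the
selected buffer; consuming them is a plain scan over 2-byte tags (sketch of
ublk_compl_batch_fetch() below):

	unsigned start = t->fetch[buf_idx].fetch_buf_off;
	unsigned end = start + cqe->res;

	for (i = start; i < end; i += 2) {
		unsigned short tag = *(unsigned short *)(buf + i);

		if (tag == UBLK_BATCH_IO_UNUSED_TAG)
			continue;
		/* dispatch to the target handler */
		q->tgt_ops->queue_io(t, q, tag);
	}
	t->fetch[buf_idx].fetch_buf_off = end;

The buffer is only added back to the buf_ring and a new fetch command issued
when the multishot command terminates (no IORING_CQE_F_MORE) or the kernel
reports -ENOBUFS.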
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
tools/testing/selftests/ublk/batch.c | 142 ++++++++++++++++++++++++++-
tools/testing/selftests/ublk/kublk.c | 14 ++-
tools/testing/selftests/ublk/kublk.h | 14 +++
3 files changed, 166 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/ublk/batch.c b/tools/testing/selftests/ublk/batch.c
index 83f6df61fed9..7f196be8e0e1 100644
--- a/tools/testing/selftests/ublk/batch.c
+++ b/tools/testing/selftests/ublk/batch.c
@@ -136,15 +136,63 @@ void ublk_batch_prepare(struct ublk_thread *t)
t->state |= UBLKS_T_BATCH_IO;
}
+static void free_batch_fetch_buf(struct ublk_thread *t)
+{
+ int i;
+
+ for (i = 0; i < UBLKS_T_NR_FETCH_BUF; i++) {
+ io_uring_free_buf_ring(&t->ring, t->fetch[i].br, 1, i);
+ munlock(t->fetch[i].fetch_buf, t->fetch[i].fetch_buf_size);
+ free(t->fetch[i].fetch_buf);
+ }
+}
+
+static int alloc_batch_fetch_buf(struct ublk_thread *t)
+{
+ /* page-aligned fetch buffer, mlocked to speed up delivery */
+ unsigned pg_sz = getpagesize();
+ unsigned buf_size = round_up(t->dev->dev_info.queue_depth * 2, pg_sz);
+ int ret;
+ int i = 0;
+
+ for (i = 0; i < UBLKS_T_NR_FETCH_BUF; i++) {
+ t->fetch[i].fetch_buf_size = buf_size;
+
+ if (posix_memalign((void **)&t->fetch[i].fetch_buf, pg_sz,
+ t->fetch[i].fetch_buf_size))
+ return -ENOMEM;
+
+ /* lock fetch buffer page for fast fetching */
+ if (mlock(t->fetch[i].fetch_buf, t->fetch[i].fetch_buf_size))
+ ublk_err("%s: can't lock fetch buffer %s\n", __func__,
+ strerror(errno));
+ t->fetch[i].br = io_uring_setup_buf_ring(&t->ring, 1,
+ i, IOU_PBUF_RING_INC, &ret);
+ if (!t->fetch[i].br) {
+ ublk_err("Buffer ring register failed %d\n", ret);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
int ublk_batch_alloc_buf(struct ublk_thread *t)
{
+ int ret;
+
ublk_assert(t->nr_commit_buf < 16);
- return alloc_batch_commit_buf(t);
+
+ ret = alloc_batch_commit_buf(t);
+ if (ret)
+ return ret;
+ return alloc_batch_fetch_buf(t);
}
void ublk_batch_free_buf(struct ublk_thread *t)
{
free_batch_commit_buf(t);
+ free_batch_fetch_buf(t);
}
static void ublk_init_batch_cmd(struct ublk_thread *t, __u16 q_id,
@@ -196,6 +244,84 @@ static void ublk_setup_commit_sqe(struct ublk_thread *t,
cmd->flags |= t->cmd_flags;
}
+static void ublk_batch_queue_fetch(struct ublk_thread *t,
+ struct ublk_queue *q,
+ unsigned short buf_idx)
+{
+ unsigned short nr_elem = t->fetch[buf_idx].fetch_buf_size / 2;
+ struct io_uring_sqe *sqe;
+
+ io_uring_buf_ring_add(t->fetch[buf_idx].br, t->fetch[buf_idx].fetch_buf,
+ t->fetch[buf_idx].fetch_buf_size,
+ 0, 0, 0);
+ io_uring_buf_ring_advance(t->fetch[buf_idx].br, 1);
+
+ ublk_io_alloc_sqes(t, &sqe, 1);
+
+ ublk_init_batch_cmd(t, q->q_id, sqe, UBLK_U_IO_FETCH_IO_CMDS, 2, nr_elem,
+ buf_idx);
+
+ sqe->rw_flags = IORING_URING_CMD_MULTISHOT;
+ sqe->buf_group = buf_idx;
+ sqe->flags |= IOSQE_BUFFER_SELECT;
+
+ t->fetch[buf_idx].fetch_buf_off = 0;
+}
+
+void ublk_batch_start_fetch(struct ublk_thread *t,
+ struct ublk_queue *q)
+{
+ int i;
+
+ for (i = 0; i < UBLKS_T_NR_FETCH_BUF; i++)
+ ublk_batch_queue_fetch(t, q, i);
+}
+
+static unsigned short ublk_compl_batch_fetch(struct ublk_thread *t,
+ struct ublk_queue *q,
+ const struct io_uring_cqe *cqe)
+{
+ unsigned short buf_idx = user_data_to_tag(cqe->user_data);
+ unsigned start = t->fetch[buf_idx].fetch_buf_off;
+ unsigned end = start + cqe->res;
+ void *buf = t->fetch[buf_idx].fetch_buf;
+ int i;
+
+ if (cqe->res < 0) {
+ if (cqe->res == -ENOBUFS) {
+ if (start != t->fetch[buf_idx].fetch_buf_size)
+ ublk_err("%s: maybe cq overflow done %u\n", __func__, start);
+ }
+ return buf_idx;
+ }
+
+ if ((end - start) / 2 > q->q_depth) {
+ ublk_err("%s: fetch duplicated ios offset %u count %u\n", __func__, start, cqe->res);
+
+ for (i = start; i < end; i += 2) {
+ unsigned short tag = *(unsigned short *)(buf + i);
+
+ ublk_err("%u ", tag);
+ }
+ ublk_err("\n");
+ }
+
+ for (i = start; i < end; i += 2) {
+ unsigned short tag = *(unsigned short *)(buf + i);
+
+ if (tag == UBLK_BATCH_IO_UNUSED_TAG)
+ continue;
+
+ if (tag >= q->q_depth)
+ ublk_err("%s: bad tag %u\n", __func__, tag);
+
+ if (q->tgt_ops->queue_io)
+ q->tgt_ops->queue_io(t, q, tag);
+ }
+ t->fetch[buf_idx].fetch_buf_off = end;
+ return buf_idx;
+}
+
int ublk_batch_queue_prep_io_cmds(struct ublk_thread *t, struct ublk_queue *q)
{
unsigned short nr_elem = q->q_depth;
@@ -255,12 +381,26 @@ void ublk_batch_compl_cmd(struct ublk_thread *t,
const struct io_uring_cqe *cqe)
{
unsigned op = user_data_to_op(cqe->user_data);
+ struct ublk_queue *q;
+ unsigned buf_idx;
+ unsigned q_id;
if (op == _IOC_NR(UBLK_U_IO_PREP_IO_CMDS) ||
op == _IOC_NR(UBLK_U_IO_COMMIT_IO_CMDS)) {
ublk_batch_compl_commit_cmd(t, cqe, op);
return;
}
+
+ /* FETCH command is per queue */
+ q_id = user_data_to_q_id(cqe->user_data);
+ q = &t->dev->q[q_id];
+ buf_idx = ublk_compl_batch_fetch(t, q, cqe);
+
+ if (cqe->res < 0 && cqe->res != -ENOBUFS) {
+ t->state |= UBLKS_T_STOPPING;
+ } else if (!(cqe->flags & IORING_CQE_F_MORE) || cqe->res == -ENOBUFS) {
+ ublk_batch_queue_fetch(t, q, buf_idx);
+ }
}
void ublk_batch_commit_io_cmds(struct ublk_thread *t)
diff --git a/tools/testing/selftests/ublk/kublk.c b/tools/testing/selftests/ublk/kublk.c
index 42e9e7fb8d88..ceb2e80304a6 100644
--- a/tools/testing/selftests/ublk/kublk.c
+++ b/tools/testing/selftests/ublk/kublk.c
@@ -489,6 +489,10 @@ static int ublk_thread_init(struct ublk_thread *t)
int ring_depth = dev->tgt.sq_depth, cq_depth = dev->tgt.cq_depth;
int ret;
+ /* FETCH_IO_CMDS is multishot, so increase cq depth for BATCH_IO */
+ if (ublk_dev_batch_io(dev))
+ cq_depth += dev->dev_info.queue_depth;
+
ret = ublk_setup_ring(&t->ring, ring_depth, cq_depth,
IORING_SETUP_COOP_TASKRUN |
IORING_SETUP_SINGLE_ISSUER |
@@ -780,7 +784,7 @@ static void ublk_handle_cqe(struct ublk_thread *t,
unsigned q_id = user_data_to_q_id(cqe->user_data);
unsigned cmd_op = user_data_to_op(cqe->user_data);
- if (cqe->res < 0 && cqe->res != -ENODEV)
+ if (cqe->res < 0 && cqe->res != -ENODEV && cqe->res != -ENOBUFS)
ublk_err("%s: res %d userdata %llx thread state %x\n", __func__,
cqe->res, cqe->user_data, t->state);
@@ -908,9 +912,13 @@ static void *ublk_io_handler_fn(void *data)
if (!ublk_thread_batch_io(t)) {
/* submit all io commands to ublk driver */
ublk_submit_fetch_commands(t);
- } else if (!t->idx) {
+ } else {
+ struct ublk_queue *q = &t->dev->q[t->idx];
+
/* prepare all io commands in the 1st thread context */
- ublk_batch_setup_queues(t);
+ if (!t->idx)
+ ublk_batch_setup_queues(t);
+ ublk_batch_start_fetch(t, q);
}
do {
diff --git a/tools/testing/selftests/ublk/kublk.h b/tools/testing/selftests/ublk/kublk.h
index e51ef2f32b5b..bfc010b66952 100644
--- a/tools/testing/selftests/ublk/kublk.h
+++ b/tools/testing/selftests/ublk/kublk.h
@@ -187,6 +187,13 @@ struct batch_commit_buf {
unsigned short count;
};
+struct batch_fetch_buf {
+ struct io_uring_buf_ring *br;
+ void *fetch_buf;
+ unsigned int fetch_buf_size;
+ unsigned int fetch_buf_off;
+};
+
struct ublk_thread {
struct ublk_dev *dev;
struct io_uring ring;
@@ -216,6 +223,10 @@ struct ublk_thread {
#define UBLKS_T_COMMIT_BUF_INV_IDX ((unsigned short)-1)
struct allocator commit_buf_alloc;
struct batch_commit_buf commit;
+
+ /* FETCH_IO_CMDS buffer */
+#define UBLKS_T_NR_FETCH_BUF 2
+ struct batch_fetch_buf fetch[UBLKS_T_NR_FETCH_BUF];
};
struct ublk_dev {
@@ -453,6 +464,9 @@ static inline unsigned short ublk_batch_io_buf_idx(
/* Queue UBLK_U_IO_PREP_IO_CMDS for a specific queue with batch elements */
int ublk_batch_queue_prep_io_cmds(struct ublk_thread *t, struct ublk_queue *q);
+/* Start fetching I/O commands using multishot UBLK_U_IO_FETCH_IO_CMDS */
+void ublk_batch_start_fetch(struct ublk_thread *t,
+ struct ublk_queue *q);
/* Handle completion of batch I/O commands (prep/commit) */
void ublk_batch_compl_cmd(struct ublk_thread *t,
const struct io_uring_cqe *cqe);
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 22/23] selftests: ublk: add --batch/-b for enabling F_BATCH_IO
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (20 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 21/23] selftests: ublk: handle UBLK_U_IO_FETCH_IO_CMDS Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
2025-09-01 10:02 ` [PATCH 23/23] selftests: ublk: support arbitrary threads/queues combination Ming Lei
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Add --batch/-b for enabling F_BATCH_IO.
Add generic_13 for covering its basic function.
Add stress_06 and stress_07 for covering stress test.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
tools/testing/selftests/ublk/Makefile | 3 ++
tools/testing/selftests/ublk/kublk.c | 13 +++++-
.../testing/selftests/ublk/test_generic_13.sh | 32 +++++++++++++
.../testing/selftests/ublk/test_stress_06.sh | 45 +++++++++++++++++++
.../testing/selftests/ublk/test_stress_07.sh | 44 ++++++++++++++++++
5 files changed, 136 insertions(+), 1 deletion(-)
create mode 100755 tools/testing/selftests/ublk/test_generic_13.sh
create mode 100755 tools/testing/selftests/ublk/test_stress_06.sh
create mode 100755 tools/testing/selftests/ublk/test_stress_07.sh
diff --git a/tools/testing/selftests/ublk/Makefile b/tools/testing/selftests/ublk/Makefile
index 19793678f24c..7141995f1f14 100644
--- a/tools/testing/selftests/ublk/Makefile
+++ b/tools/testing/selftests/ublk/Makefile
@@ -20,6 +20,7 @@ TEST_PROGS += test_generic_09.sh
TEST_PROGS += test_generic_10.sh
TEST_PROGS += test_generic_11.sh
TEST_PROGS += test_generic_12.sh
+TEST_PROGS += test_generic_13.sh
TEST_PROGS += test_null_01.sh
TEST_PROGS += test_null_02.sh
@@ -38,6 +39,8 @@ TEST_PROGS += test_stress_02.sh
TEST_PROGS += test_stress_03.sh
TEST_PROGS += test_stress_04.sh
TEST_PROGS += test_stress_05.sh
+TEST_PROGS += test_stress_06.sh
+TEST_PROGS += test_stress_07.sh
TEST_GEN_PROGS_EXTENDED = kublk
diff --git a/tools/testing/selftests/ublk/kublk.c b/tools/testing/selftests/ublk/kublk.c
index ceb2e80304a6..4b7e9c1c09f4 100644
--- a/tools/testing/selftests/ublk/kublk.c
+++ b/tools/testing/selftests/ublk/kublk.c
@@ -1439,6 +1439,7 @@ static int cmd_dev_get_features(void)
[const_ilog2(UBLK_F_AUTO_BUF_REG)] = "AUTO_BUF_REG",
[const_ilog2(UBLK_F_QUIESCE)] = "QUIESCE",
[const_ilog2(UBLK_F_PER_IO_DAEMON)] = "PER_IO_DAEMON",
+ [const_ilog2(UBLK_F_BATCH_IO)] = "BATCH_IO",
};
struct ublk_dev *dev;
__u64 features = 0;
@@ -1534,6 +1535,7 @@ static void __cmd_create_help(char *exe, bool recovery)
printf("\t[--foreground] [--quiet] [-z] [--auto_zc] [--auto_zc_fallback] [--debug_mask mask] [-r 0|1 ] [-g]\n");
printf("\t[-e 0|1 ] [-i 0|1]\n");
printf("\t[--nthreads threads] [--per_io_tasks]\n");
+ printf("\t[--batch|-b]\n");
printf("\t[target options] [backfile1] [backfile2] ...\n");
printf("\tdefault: nr_queues=2(max 32), depth=128(max 1024), dev_id=-1(auto allocation)\n");
printf("\tdefault: nthreads=nr_queues");
@@ -1595,6 +1597,7 @@ int main(int argc, char *argv[])
{ "size", 1, NULL, 's'},
{ "nthreads", 1, NULL, 0 },
{ "per_io_tasks", 0, NULL, 0 },
+ { "batch", 0, NULL, 'b'},
{ 0, 0, 0, 0 }
};
const struct ublk_tgt_ops *ops = NULL;
@@ -1616,9 +1619,12 @@ int main(int argc, char *argv[])
opterr = 0;
optind = 2;
- while ((opt = getopt_long(argc, argv, "t:n:d:q:r:e:i:s:gaz",
+ while ((opt = getopt_long(argc, argv, "t:n:d:q:r:e:i:s:gazb",
longopts, &option_idx)) != -1) {
switch (opt) {
+ case 'b':
+ ctx.flags |= UBLK_F_BATCH_IO;
+ break;
case 'a':
ctx.all = 1;
break;
@@ -1697,6 +1703,11 @@ int main(int argc, char *argv[])
}
}
+ if (ctx.per_io_tasks && (ctx.flags & UBLK_F_BATCH_IO)) {
+ ublk_err("per_io_task and F_BATCH_IO conflict\n");
+ return -EINVAL;
+ }
+
/* auto_zc_fallback depends on F_AUTO_BUF_REG & F_SUPPORT_ZERO_COPY */
if (ctx.auto_zc_fallback &&
!((ctx.flags & UBLK_F_AUTO_BUF_REG) &&
diff --git a/tools/testing/selftests/ublk/test_generic_13.sh b/tools/testing/selftests/ublk/test_generic_13.sh
new file mode 100755
index 000000000000..ac457b45f439
--- /dev/null
+++ b/tools/testing/selftests/ublk/test_generic_13.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+. "$(cd "$(dirname "$0")" && pwd)"/test_common.sh
+
+TID="generic_13"
+ERR_CODE=0
+
+if ! _have_feature "BATCH_IO"; then
+ exit "$UBLK_SKIP_CODE"
+fi
+
+_prep_test "generic" "test basic function of UBLK_F_BATCH_IO"
+
+_create_backfile 0 256M
+_create_backfile 1 256M
+
+dev_id=$(_add_ublk_dev -t loop -q 2 -b "${UBLK_BACKFILES[0]}")
+_check_add_dev $TID $?
+
+if ! _mkfs_mount_test /dev/ublkb"${dev_id}"; then
+ _cleanup_test "generic"
+ _show_result $TID 255
+fi
+
+dev_id=$(_add_ublk_dev -t stripe -b --auto_zc "${UBLK_BACKFILES[0]}" "${UBLK_BACKFILES[1]}")
+_check_add_dev $TID $?
+_mkfs_mount_test /dev/ublkb"${dev_id}"
+ERR_CODE=$?
+
+_cleanup_test "generic"
+_show_result $TID $ERR_CODE
diff --git a/tools/testing/selftests/ublk/test_stress_06.sh b/tools/testing/selftests/ublk/test_stress_06.sh
new file mode 100755
index 000000000000..190db0b4f2ad
--- /dev/null
+++ b/tools/testing/selftests/ublk/test_stress_06.sh
@@ -0,0 +1,45 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+. "$(cd "$(dirname "$0")" && pwd)"/test_common.sh
+TID="stress_06"
+ERR_CODE=0
+
+ublk_io_and_remove()
+{
+ run_io_and_remove "$@"
+ ERR_CODE=$?
+ if [ ${ERR_CODE} -ne 0 ]; then
+ echo "$TID failure: $*"
+ _show_result $TID $ERR_CODE
+ fi
+}
+
+if ! _have_program fio; then
+ exit "$UBLK_SKIP_CODE"
+fi
+
+if ! _have_feature "ZERO_COPY"; then
+ exit "$UBLK_SKIP_CODE"
+fi
+if ! _have_feature "AUTO_BUF_REG"; then
+ exit "$UBLK_SKIP_CODE"
+fi
+if ! _have_feature "BATCH_IO"; then
+ exit "$UBLK_SKIP_CODE"
+fi
+
+_prep_test "stress" "run IO and remove device(zero copy)"
+
+_create_backfile 0 256M
+_create_backfile 1 128M
+_create_backfile 2 128M
+
+ublk_io_and_remove 8G -t null -q 4 -b &
+ublk_io_and_remove 256M -t loop -q 4 --auto_zc -b "${UBLK_BACKFILES[0]}" &
+ublk_io_and_remove 256M -t stripe -q 4 --auto_zc -b "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" &
+ublk_io_and_remove 8G -t null -q 4 -z --auto_zc --auto_zc_fallback -b &
+wait
+
+_cleanup_test "stress"
+_show_result $TID $ERR_CODE
diff --git a/tools/testing/selftests/ublk/test_stress_07.sh b/tools/testing/selftests/ublk/test_stress_07.sh
new file mode 100755
index 000000000000..1b6bdb31da03
--- /dev/null
+++ b/tools/testing/selftests/ublk/test_stress_07.sh
@@ -0,0 +1,44 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+. "$(cd "$(dirname "$0")" && pwd)"/test_common.sh
+TID="stress_07"
+ERR_CODE=0
+
+ublk_io_and_kill_daemon()
+{
+ run_io_and_kill_daemon "$@"
+ ERR_CODE=$?
+ if [ ${ERR_CODE} -ne 0 ]; then
+ echo "$TID failure: $*"
+ _show_result $TID $ERR_CODE
+ fi
+}
+
+if ! _have_program fio; then
+ exit "$UBLK_SKIP_CODE"
+fi
+if ! _have_feature "ZERO_COPY"; then
+ exit "$UBLK_SKIP_CODE"
+fi
+if ! _have_feature "AUTO_BUF_REG"; then
+ exit "$UBLK_SKIP_CODE"
+fi
+if ! _have_feature "BATCH_IO"; then
+ exit "$UBLK_SKIP_CODE"
+fi
+
+_prep_test "stress" "run IO and kill ublk server(zero copy)"
+
+_create_backfile 0 256M
+_create_backfile 1 128M
+_create_backfile 2 128M
+
+ublk_io_and_kill_daemon 8G -t null -q 4 -z -b &
+ublk_io_and_kill_daemon 256M -t loop -q 4 --auto_zc -b "${UBLK_BACKFILES[0]}" &
+ublk_io_and_kill_daemon 256M -t stripe -q 4 -b "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" &
+ublk_io_and_kill_daemon 8G -t null -q 4 -z --auto_zc --auto_zc_fallback -b &
+wait
+
+_cleanup_test "stress"
+_show_result $TID $ERR_CODE
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 23/23] selftests: ublk: support arbitrary threads/queues combination
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
` (21 preceding siblings ...)
2025-09-01 10:02 ` [PATCH 22/23] selftests: ublk: add --batch/-b for enabling F_BATCH_IO Ming Lei
@ 2025-09-01 10:02 ` Ming Lei
22 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-01 10:02 UTC (permalink / raw)
To: Jens Axboe, linux-block; +Cc: Uday Shankar, Caleb Sander Mateos, Ming Lei
Enable flexible thread-to-queue mapping in batch I/O mode to support
arbitrary combinations of threads and queues, improving resource
utilization and scalability.
Key improvements:
- Support N:M thread-to-queue mapping (previously limited to 1:1)
- Dynamic buffer allocation based on actual queue assignment per thread
- Thread-safe queue preparation with spinlock protection
- Intelligent buffer index calculation for multi-queue scenarios
- Enhanced validation for thread/queue combination constraints
Implementation details:
- Add q_thread_map matrix to track queue-to-thread assignments
- Dynamic allocation of commit and fetch buffers per thread
- Round-robin queue assignment algorithm for load balancing
- Per-queue spinlock to prevent race conditions during prep
- Updated buffer index calculation using queue position within thread
This enables efficient configurations like:
- 4 threads serving 1 queue, or 1 thread serving 4 queues
- Any other N:M combination for optimal resource matching
Testing:
- Added test_generic_14.sh: 4 threads vs 1 queue
- Added test_generic_15.sh: 1 thread vs 4 queues
- Validates correctness across different mapping scenarios
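With N:M mapping the fixed buffer index of an I/O is no longer just its tag;
it is derived from the queue's local position within the handling thread
(sketch of ublk_queue_idx_in_thread()/ublk_batch_io_buf_idx() below):

	/* q_thread_map[thread][queue] holds the 1-based position of the
	 * queue inside this thread, or 0 if the queue isn't mapped here
	 */
	q_idx = t->dev->q_thread_map[t->idx][q->q_id] - 1;

	/* queues handled by one thread get q_depth buffer slots each,
	 * laid out back to back
	 */
	buf_idx = q_idx * q->q_depth + tag;

This is also why I/O commands must be prepared in a mapped thread context:
the buffer index registered at prep time has to match the thread that later
completes the I/O.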
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
tools/testing/selftests/ublk/Makefile | 2 +
tools/testing/selftests/ublk/batch.c | 200 +++++++++++++++---
tools/testing/selftests/ublk/kublk.c | 34 ++-
tools/testing/selftests/ublk/kublk.h | 37 +++-
.../testing/selftests/ublk/test_generic_14.sh | 30 +++
.../testing/selftests/ublk/test_generic_15.sh | 30 +++
6 files changed, 285 insertions(+), 48 deletions(-)
create mode 100755 tools/testing/selftests/ublk/test_generic_14.sh
create mode 100755 tools/testing/selftests/ublk/test_generic_15.sh
diff --git a/tools/testing/selftests/ublk/Makefile b/tools/testing/selftests/ublk/Makefile
index 7141995f1f14..e19dec410a52 100644
--- a/tools/testing/selftests/ublk/Makefile
+++ b/tools/testing/selftests/ublk/Makefile
@@ -21,6 +21,8 @@ TEST_PROGS += test_generic_10.sh
TEST_PROGS += test_generic_11.sh
TEST_PROGS += test_generic_12.sh
TEST_PROGS += test_generic_13.sh
+TEST_PROGS += test_generic_14.sh
+TEST_PROGS += test_generic_15.sh
TEST_PROGS += test_null_01.sh
TEST_PROGS += test_null_02.sh
diff --git a/tools/testing/selftests/ublk/batch.c b/tools/testing/selftests/ublk/batch.c
index 7f196be8e0e1..1c9e4d2dcde2 100644
--- a/tools/testing/selftests/ublk/batch.c
+++ b/tools/testing/selftests/ublk/batch.c
@@ -70,6 +70,7 @@ static void free_batch_commit_buf(struct ublk_thread *t)
{
free(t->commit_buf);
allocator_deinit(&t->commit_buf_alloc);
+ free(t->commit);
}
static int alloc_batch_commit_buf(struct ublk_thread *t)
@@ -79,7 +80,13 @@ static int alloc_batch_commit_buf(struct ublk_thread *t)
struct iovec iov[t->nr_commit_buf];
unsigned int page_sz = getpagesize();
void *buf = NULL;
- int i, ret;
+ int i, ret, j = 0;
+
+ t->commit = calloc(t->nr_queues, sizeof(*t->commit));
+ for (i = 0; i < t->dev->dev_info.nr_hw_queues; i++) {
+ if (t->dev->q_thread_map[t->idx][i])
+ t->commit[j++].q_id = i;
+ }
allocator_init(&t->commit_buf_alloc, t->nr_commit_buf);
@@ -108,6 +115,17 @@ static int alloc_batch_commit_buf(struct ublk_thread *t)
return ret;
}
+static unsigned int ublk_thread_nr_queues(const struct ublk_thread *t)
+{
+ int i;
+ int ret = 0;
+
+ for (i = 0; i < t->dev->dev_info.nr_hw_queues; i++)
+ ret += !!t->dev->q_thread_map[t->idx][i];
+
+ return ret;
+}
+
void ublk_batch_prepare(struct ublk_thread *t)
{
/*
@@ -120,10 +138,13 @@ void ublk_batch_prepare(struct ublk_thread *t)
*/
struct ublk_queue *q = &t->dev->q[0];
+ /* cache nr_queues because we don't support dynamic load-balance yet */
+ t->nr_queues = ublk_thread_nr_queues(t);
+
t->commit_buf_elem_size = ublk_commit_elem_buf_size(t->dev);
t->commit_buf_size = ublk_commit_buf_size(t);
t->commit_buf_start = t->nr_bufs;
- t->nr_commit_buf = 2;
+ t->nr_commit_buf = 2 * t->nr_queues;
t->nr_bufs += t->nr_commit_buf;
t->cmd_flags = 0;
@@ -140,11 +161,12 @@ static void free_batch_fetch_buf(struct ublk_thread *t)
{
int i;
- for (i = 0; i < UBLKS_T_NR_FETCH_BUF; i++) {
+ for (i = 0; i < t->nr_fetch_bufs; i++) {
io_uring_free_buf_ring(&t->ring, t->fetch[i].br, 1, i);
munlock(t->fetch[i].fetch_buf, t->fetch[i].fetch_buf_size);
free(t->fetch[i].fetch_buf);
}
+ free(t->fetch);
}
static int alloc_batch_fetch_buf(struct ublk_thread *t)
@@ -155,7 +177,12 @@ static int alloc_batch_fetch_buf(struct ublk_thread *t)
int ret;
int i = 0;
- for (i = 0; i < UBLKS_T_NR_FETCH_BUF; i++) {
+ /* double fetch buffer for each queue */
+ t->nr_fetch_bufs = t->nr_queues * 2;
+ t->fetch = calloc(t->nr_fetch_bufs, sizeof(*t->fetch));
+
+ /* allocate the fetch buffers, two per queue */
+ for (i = 0; i < t->nr_fetch_bufs; i++) {
t->fetch[i].fetch_buf_size = buf_size;
if (posix_memalign((void **)&t->fetch[i].fetch_buf, pg_sz,
@@ -181,7 +208,7 @@ int ublk_batch_alloc_buf(struct ublk_thread *t)
{
int ret;
- ublk_assert(t->nr_commit_buf < 16);
+ ublk_assert(t->nr_commit_buf < 2 * UBLK_MAX_QUEUES);
ret = alloc_batch_commit_buf(t);
if (ret)
@@ -268,13 +295,20 @@ static void ublk_batch_queue_fetch(struct ublk_thread *t,
t->fetch[buf_idx].fetch_buf_off = 0;
}
-void ublk_batch_start_fetch(struct ublk_thread *t,
- struct ublk_queue *q)
+void ublk_batch_start_fetch(struct ublk_thread *t)
{
int i;
+ int j = 0;
- for (i = 0; i < UBLKS_T_NR_FETCH_BUF; i++)
- ublk_batch_queue_fetch(t, q, i);
+ for (i = 0; i < t->dev->dev_info.nr_hw_queues; i++) {
+ if (t->dev->q_thread_map[t->idx][i]) {
+ struct ublk_queue *q = &t->dev->q[i];
+
+ /* submit two fetch commands for each queue */
+ ublk_batch_queue_fetch(t, q, j++);
+ ublk_batch_queue_fetch(t, q, j++);
+ }
+ }
}
static unsigned short ublk_compl_batch_fetch(struct ublk_thread *t,
@@ -322,7 +356,7 @@ static unsigned short ublk_compl_batch_fetch(struct ublk_thread *t,
return buf_idx;
}
-int ublk_batch_queue_prep_io_cmds(struct ublk_thread *t, struct ublk_queue *q)
+static int __ublk_batch_queue_prep_io_cmds(struct ublk_thread *t, struct ublk_queue *q)
{
unsigned short nr_elem = q->q_depth;
unsigned short buf_idx = ublk_alloc_commit_buf(t);
@@ -359,6 +393,22 @@ int ublk_batch_queue_prep_io_cmds(struct ublk_thread *t, struct ublk_queue *q)
return 0;
}
+int ublk_batch_queue_prep_io_cmds(struct ublk_thread *t, struct ublk_queue *q)
+{
+ int ret = 0;
+
+ pthread_spin_lock(&q->lock);
+ if (q->flags & UBLKS_Q_PREPARED)
+ goto unlock;
+ ret = __ublk_batch_queue_prep_io_cmds(t, q);
+ if (!ret)
+ q->flags |= UBLKS_Q_PREPARED;
+unlock:
+ pthread_spin_unlock(&q->lock);
+
+ return ret;
+}
+
static void ublk_batch_compl_commit_cmd(struct ublk_thread *t,
const struct io_uring_cqe *cqe,
unsigned op)
@@ -403,59 +453,89 @@ void ublk_batch_compl_cmd(struct ublk_thread *t,
}
}
-void ublk_batch_commit_io_cmds(struct ublk_thread *t)
+static void __ublk_batch_commit_io_cmds(struct ublk_thread *t,
+ struct batch_commit_buf *cb)
{
struct io_uring_sqe *sqe;
unsigned short buf_idx;
- unsigned short nr_elem = t->commit.done;
+ unsigned short nr_elem = cb->done;
/* nothing to commit */
if (!nr_elem) {
- ublk_free_commit_buf(t, t->commit.buf_idx);
+ ublk_free_commit_buf(t, cb->buf_idx);
return;
}
ublk_io_alloc_sqes(t, &sqe, 1);
- buf_idx = t->commit.buf_idx;
- sqe->addr = (__u64)t->commit.elem;
+ buf_idx = cb->buf_idx;
+ sqe->addr = (__u64)cb->elem;
sqe->len = nr_elem * t->commit_buf_elem_size;
	/* commit isn't a per-io command */
- ublk_init_batch_cmd(t, t->commit.q_id, sqe, UBLK_U_IO_COMMIT_IO_CMDS,
+ ublk_init_batch_cmd(t, cb->q_id, sqe, UBLK_U_IO_COMMIT_IO_CMDS,
t->commit_buf_elem_size, nr_elem, buf_idx);
ublk_setup_commit_sqe(t, sqe, buf_idx);
}
-static void ublk_batch_init_commit(struct ublk_thread *t,
- unsigned short buf_idx)
+void ublk_batch_commit_io_cmds(struct ublk_thread *t)
+{
+ int i;
+
+ for (i = 0; i < t->nr_queues; i++) {
+ struct batch_commit_buf *cb = &t->commit[i];
+
+ if (cb->buf_idx != UBLKS_T_COMMIT_BUF_INV_IDX)
+ __ublk_batch_commit_io_cmds(t, cb);
+ }
+
+}
+
+static void __ublk_batch_init_commit(struct ublk_thread *t,
+ struct batch_commit_buf *cb,
+ unsigned short buf_idx)
{
/* so far only support 1:1 queue/thread mapping */
- t->commit.q_id = t->idx;
- t->commit.buf_idx = buf_idx;
- t->commit.elem = ublk_get_commit_buf(t, buf_idx);
- t->commit.done = 0;
- t->commit.count = t->commit_buf_size /
+ cb->buf_idx = buf_idx;
+ cb->elem = ublk_get_commit_buf(t, buf_idx);
+ cb->done = 0;
+ cb->count = t->commit_buf_size /
t->commit_buf_elem_size;
}
-void ublk_batch_prep_commit(struct ublk_thread *t)
+/* COMMIT_IO_CMDS is per-queue command, so use its own commit buffer */
+static void ublk_batch_init_commit(struct ublk_thread *t,
+ struct batch_commit_buf *cb)
{
unsigned short buf_idx = ublk_alloc_commit_buf(t);
ublk_assert(buf_idx != UBLKS_T_COMMIT_BUF_INV_IDX);
- ublk_batch_init_commit(t, buf_idx);
+ ublk_assert(!ublk_batch_commit_prepared(cb));
+
+ __ublk_batch_init_commit(t, cb, buf_idx);
+}
+
+void ublk_batch_prep_commit(struct ublk_thread *t)
+{
+ int i;
+
+ for (i = 0; i < t->nr_queues; i++)
+ t->commit[i].buf_idx = UBLKS_T_COMMIT_BUF_INV_IDX;
}
void ublk_batch_complete_io(struct ublk_thread *t, struct ublk_queue *q,
unsigned tag, int res)
{
- struct batch_commit_buf *cb = &t->commit;
- struct ublk_batch_elem *elem = (struct ublk_batch_elem *)(cb->elem +
- cb->done * t->commit_buf_elem_size);
+ unsigned q_t_idx = ublk_queue_idx_in_thread(t, q);
+ struct batch_commit_buf *cb = &t->commit[q_t_idx];
+ struct ublk_batch_elem *elem;
struct ublk_io *io = &q->ios[tag];
- ublk_assert(q->q_id == t->commit.q_id);
+ if (!ublk_batch_commit_prepared(cb))
+ ublk_batch_init_commit(t, cb);
+ ublk_assert(q->q_id == cb->q_id);
+
+ elem = (struct ublk_batch_elem *)(cb->elem + cb->done * t->commit_buf_elem_size);
elem->tag = tag;
elem->buf_index = ublk_batch_io_buf_idx(t, q, tag);
elem->result = res;
@@ -466,3 +546,65 @@ void ublk_batch_complete_io(struct ublk_thread *t, struct ublk_queue *q,
cb->done += 1;
ublk_assert(cb->done <= cb->count);
}
+
+void ublk_batch_setup_map(struct ublk_dev *dev)
+{
+ int i, j;
+ int nthreads = dev->nthreads;
+ int queues = dev->dev_info.nr_hw_queues;
+
+ /*
+ * Setup round-robin queue-to-thread mapping for arbitrary N:M combinations.
+ *
+ * This algorithm distributes queues across threads (and threads across queues)
+ * in a balanced round-robin fashion to ensure even load distribution.
+ *
+ * Examples:
+ * - 2 threads, 4 queues: T0=[Q0,Q2], T1=[Q1,Q3]
+ * - 4 threads, 2 queues: T0=[Q0], T1=[Q1], T2=[Q0], T3=[Q1]
+ * - 3 threads, 3 queues: T0=[Q0], T1=[Q1], T2=[Q2] (1:1 mapping)
+ *
+ * Phase 1: Mark which queues each thread handles (boolean mapping)
+ */
+ for (i = 0, j = 0; i < queues || j < nthreads; i++, j++) {
+ dev->q_thread_map[j % nthreads][i % queues] = 1;
+ }
+
+ /*
+ * Phase 2: Convert boolean mapping to sequential indices within each thread.
+ *
+ * Transform from: q_thread_map[thread][queue] = 1 (handles queue)
+ * To: q_thread_map[thread][queue] = N (queue index within thread)
+ *
+ * This allows each thread to know the local index of each queue it handles,
+ * which is essential for buffer allocation and management. For example:
+ * - Thread 0 handling queues [0,2] becomes: q_thread_map[0][0]=1, q_thread_map[0][2]=2
+ * - Thread 1 handling queues [1,3] becomes: q_thread_map[1][1]=1, q_thread_map[1][3]=2
+ */
+ for (j = 0; j < nthreads; j++) {
+ unsigned char seq = 1;
+
+ for (i = 0; i < queues; i++) {
+ if (dev->q_thread_map[j][i])
+ dev->q_thread_map[j][i] = seq++;
+ }
+ }
+
+#if 0
+ for (j = 0; j < dev->nthreads; j++) {
+ printf("thread %0d: ", j);
+ for (i = 0; i < dev->dev_info.nr_hw_queues; i++) {
+ if (dev->q_thread_map[j][i])
+ printf("%03u ", i);
+ }
+ printf("\n");
+ }
+ printf("\n");
+ for (j = 0; j < dev->nthreads; j++) {
+ for (i = 0; i < dev->dev_info.nr_hw_queues; i++) {
+ printf("%03u ", dev->q_thread_map[j][i]);
+ }
+ printf("\n");
+ }
+#endif
+}
diff --git a/tools/testing/selftests/ublk/kublk.c b/tools/testing/selftests/ublk/kublk.c
index 4b7e9c1c09f4..8ec97ddb861b 100644
--- a/tools/testing/selftests/ublk/kublk.c
+++ b/tools/testing/selftests/ublk/kublk.c
@@ -442,6 +442,7 @@ static int ublk_queue_init(struct ublk_queue *q, unsigned extra_flags)
int cmd_buf_size, io_buf_size;
unsigned long off;
+ pthread_spin_init(&q->lock, PTHREAD_PROCESS_PRIVATE);
q->tgt_ops = dev->tgt.ops;
q->flags = 0;
q->q_depth = depth;
@@ -491,7 +492,7 @@ static int ublk_thread_init(struct ublk_thread *t)
/* FETCH_IO_CMDS is multishot, so increase cq depth for BATCH_IO */
if (ublk_dev_batch_io(dev))
- cq_depth += dev->dev_info.queue_depth;
+ cq_depth += dev->dev_info.queue_depth * 2;
ret = ublk_setup_ring(&t->ring, ring_depth, cq_depth,
IORING_SETUP_COOP_TASKRUN |
@@ -574,6 +575,9 @@ static int ublk_dev_prep(const struct dev_ctx *ctx, struct ublk_dev *dev)
return -1;
}
+ if (ublk_dev_batch_io(dev))
+ ublk_batch_setup_map(dev);
+
dev->fds[0] = fd;
if (dev->tgt.ops->init_tgt)
ret = dev->tgt.ops->init_tgt(ctx, dev);
@@ -873,14 +877,18 @@ static void ublk_batch_setup_queues(struct ublk_thread *t)
{
int i;
- /* setup all queues in the 1st thread */
for (i = 0; i < t->dev->dev_info.nr_hw_queues; i++) {
struct ublk_queue *q = &t->dev->q[i];
int ret;
+ /*
+ * Only prepare io commands in the mapped thread context,
+ * otherwise io command buffer index may not work as expected
+ */
+ if (t->dev->q_thread_map[t->idx][i] == 0)
+ continue;
+
ret = ublk_batch_queue_prep_io_cmds(t, q);
- ublk_assert(ret == 0);
- ret = ublk_process_io(t);
ublk_assert(ret >= 0);
}
}
@@ -913,12 +921,8 @@ static void *ublk_io_handler_fn(void *data)
/* submit all io commands to ublk driver */
ublk_submit_fetch_commands(t);
} else {
- struct ublk_queue *q = &t->dev->q[t->idx];
-
- /* prepare all io commands in the 1st thread context */
- if (!t->idx)
- ublk_batch_setup_queues(t);
- ublk_batch_start_fetch(t, q);
+ ublk_batch_setup_queues(t);
+ ublk_batch_start_fetch(t);
}
do {
@@ -1199,7 +1203,8 @@ static int __cmd_dev_add(const struct dev_ctx *ctx)
goto fail;
}
- if (nthreads != nr_queues && !ctx->per_io_tasks) {
+ if (nthreads != nr_queues && (!ctx->per_io_tasks &&
+ !(ctx->flags & UBLK_F_BATCH_IO))) {
ublk_err("%s: threads %u must be same as queues %u if "
"not using per_io_tasks\n",
__func__, nthreads, nr_queues);
@@ -1718,6 +1723,13 @@ int main(int argc, char *argv[])
return -EINVAL;
}
+ if ((ctx.flags & UBLK_F_AUTO_BUF_REG) &&
+ (ctx.flags & UBLK_F_BATCH_IO) &&
+ (ctx.nthreads > ctx.nr_hw_queues)) {
+ ublk_err("too many threads for F_AUTO_BUF_REG & F_BATCH_IO\n");
+ return -EINVAL;
+ }
+
i = optind;
while (i < argc && ctx.nr_files < MAX_BACK_FILES) {
ctx.files[ctx.nr_files++] = argv[i++];
diff --git a/tools/testing/selftests/ublk/kublk.h b/tools/testing/selftests/ublk/kublk.h
index bfc010b66952..03648ec44623 100644
--- a/tools/testing/selftests/ublk/kublk.h
+++ b/tools/testing/selftests/ublk/kublk.h
@@ -165,10 +165,14 @@ struct ublk_queue {
const struct ublk_tgt_ops *tgt_ops;
struct ublksrv_io_desc *io_cmd_buf;
-/* borrow one bit of ublk uapi flags, which may never be used */
+/* borrow two bits of ublk uapi flags, which may never be used */
#define UBLKS_Q_AUTO_BUF_REG_FALLBACK (1ULL << 63)
+#define UBLKS_Q_PREPARED (1ULL << 62)
__u64 flags;
struct ublk_io ios[UBLK_QUEUE_DEPTH];
+
+ /* used for prep io commands */
+ pthread_spinlock_t lock;
};
/* align with `ublk_elem_header` */
@@ -201,7 +205,8 @@ struct ublk_thread {
unsigned int io_inflight;
pthread_t thread;
- unsigned idx;
+ unsigned short idx;
+ unsigned short nr_queues;
#define UBLKS_T_STOPPING (1U << 0)
#define UBLKS_T_IDLE (1U << 1)
@@ -222,11 +227,11 @@ struct ublk_thread {
void *commit_buf;
#define UBLKS_T_COMMIT_BUF_INV_IDX ((unsigned short)-1)
struct allocator commit_buf_alloc;
- struct batch_commit_buf commit;
+ struct batch_commit_buf *commit;
/* FETCH_IO_CMDS buffer */
-#define UBLKS_T_NR_FETCH_BUF 2
- struct batch_fetch_buf fetch[UBLKS_T_NR_FETCH_BUF];
+ unsigned short nr_fetch_bufs;
+ struct batch_fetch_buf *fetch;
};
struct ublk_dev {
@@ -236,6 +241,7 @@ struct ublk_dev {
struct ublk_thread threads[UBLK_MAX_THREADS];
unsigned nthreads;
unsigned per_io_tasks;
+ unsigned char q_thread_map[UBLK_MAX_THREADS][UBLK_MAX_QUEUES];
int fds[MAX_BACK_FILES + 1]; /* fds[0] points to /dev/ublkcN */
int nr_fds;
@@ -451,6 +457,21 @@ static inline int ublk_queue_no_buf(const struct ublk_queue *q)
return ublk_queue_use_zc(q) || ublk_queue_use_auto_zc(q);
}
+static inline int ublk_batch_commit_prepared(struct batch_commit_buf *cb)
+{
+ return cb->buf_idx != UBLKS_T_COMMIT_BUF_INV_IDX;
+}
+
+static inline unsigned ublk_queue_idx_in_thread(const struct ublk_thread *t,
+ const struct ublk_queue *q)
+{
+ unsigned char idx;
+
+ idx = t->dev->q_thread_map[t->idx][q->q_id];
+ ublk_assert(idx != 0);
+ return idx - 1;
+}
+
/*
* Each IO's buffer index has to be calculated by this helper for
* UBLKS_T_BATCH_IO
@@ -459,14 +480,13 @@ static inline unsigned short ublk_batch_io_buf_idx(
const struct ublk_thread *t, const struct ublk_queue *q,
unsigned tag)
{
- return tag;
+ return ublk_queue_idx_in_thread(t, q) * q->q_depth + tag;
}
/* Queue UBLK_U_IO_PREP_IO_CMDS for a specific queue with batch elements */
int ublk_batch_queue_prep_io_cmds(struct ublk_thread *t, struct ublk_queue *q);
/* Start fetching I/O commands using multishot UBLK_U_IO_FETCH_IO_CMDS */
-void ublk_batch_start_fetch(struct ublk_thread *t,
- struct ublk_queue *q);
+void ublk_batch_start_fetch(struct ublk_thread *t);
/* Handle completion of batch I/O commands (prep/commit) */
void ublk_batch_compl_cmd(struct ublk_thread *t,
const struct io_uring_cqe *cqe);
@@ -484,6 +504,7 @@ void ublk_batch_commit_io_cmds(struct ublk_thread *t);
/* Add a completed I/O operation to the current batch commit buffer */
void ublk_batch_complete_io(struct ublk_thread *t, struct ublk_queue *q,
unsigned tag, int res);
+void ublk_batch_setup_map(struct ublk_dev *dev);
static inline int ublk_complete_io(struct ublk_thread *t, struct ublk_queue *q,
unsigned tag, int res)
diff --git a/tools/testing/selftests/ublk/test_generic_14.sh b/tools/testing/selftests/ublk/test_generic_14.sh
new file mode 100755
index 000000000000..16a41fd16428
--- /dev/null
+++ b/tools/testing/selftests/ublk/test_generic_14.sh
@@ -0,0 +1,30 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+. "$(cd "$(dirname "$0")" && pwd)"/test_common.sh
+
+TID="generic_14"
+ERR_CODE=0
+
+if ! _have_feature "BATCH_IO"; then
+ exit "$UBLK_SKIP_CODE"
+fi
+
+if ! _have_program fio; then
+ exit "$UBLK_SKIP_CODE"
+fi
+
+_prep_test "generic" "test UBLK_F_BATCH_IO with 4_threads vs. 1_queues"
+
+_create_backfile 0 512M
+
+dev_id=$(_add_ublk_dev -t loop -q 1 --nthreads 4 -b "${UBLK_BACKFILES[0]}")
+_check_add_dev $TID $?
+
+# run fio over the ublk disk
+fio --name=job1 --filename=/dev/ublkb"${dev_id}" --ioengine=libaio --rw=readwrite \
+ --iodepth=32 --size=100M --numjobs=4 > /dev/null 2>&1
+ERR_CODE=$?
+
+_cleanup_test "generic"
+_show_result $TID $ERR_CODE
diff --git a/tools/testing/selftests/ublk/test_generic_15.sh b/tools/testing/selftests/ublk/test_generic_15.sh
new file mode 100755
index 000000000000..6b7000b34a9d
--- /dev/null
+++ b/tools/testing/selftests/ublk/test_generic_15.sh
@@ -0,0 +1,30 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+. "$(cd "$(dirname "$0")" && pwd)"/test_common.sh
+
+TID="generic_15"
+ERR_CODE=0
+
+if ! _have_feature "BATCH_IO"; then
+ exit "$UBLK_SKIP_CODE"
+fi
+
+if ! _have_program fio; then
+ exit "$UBLK_SKIP_CODE"
+fi
+
+_prep_test "generic" "test UBLK_F_BATCH_IO with 1_threads vs. 4_queues"
+
+_create_backfile 0 512M
+
+dev_id=$(_add_ublk_dev -t loop -q 4 --nthreads 1 -b "${UBLK_BACKFILES[0]}")
+_check_add_dev $TID $?
+
+# run fio over the ublk disk
+fio --name=job1 --filename=/dev/ublkb"${dev_id}" --ioengine=libaio --rw=readwrite \
+ --iodepth=32 --size=100M --numjobs=4 > /dev/null 2>&1
+ERR_CODE=$?
+
+_cleanup_test "generic"
+_show_result $TID $ERR_CODE
--
2.47.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* Re: [PATCH 09/23] ublk: handle UBLK_U_IO_COMMIT_IO_CMDS
2025-09-01 10:02 ` [PATCH 09/23] ublk: handle UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
@ 2025-09-02 6:19 ` kernel test robot
0 siblings, 0 replies; 43+ messages in thread
From: kernel test robot @ 2025-09-02 6:19 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, linux-block
Cc: llvm, oe-kbuild-all, Uday Shankar, Caleb Sander Mateos, Ming Lei
Hi Ming,
kernel test robot noticed the following build warnings:
[auto build test WARNING on shuah-kselftest/next]
[also build test WARNING on shuah-kselftest/fixes axboe-block/for-next linus/master v6.17-rc4 next-20250901]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Ming-Lei/ublk-add-parameter-struct-io_uring_cmd-to-ublk_prep_auto_buf_reg/20250901-181042
base: https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git next
patch link: https://lore.kernel.org/r/20250901100242.3231000-10-ming.lei%40redhat.com
patch subject: [PATCH 09/23] ublk: handle UBLK_U_IO_COMMIT_IO_CMDS
config: i386-buildonly-randconfig-004-20250902 (https://download.01.org/0day-ci/archive/20250902/202509021316.7z4i5qbA-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250902/202509021316.7z4i5qbA-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202509021316.7z4i5qbA-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/block/ublk_drv.c:2803:4: warning: format specifies type 'long' but the argument has type 'int' [-Wformat]
2801 | pr_warn("%s: dev %u queue %u io %ld: commit failure %d\n",
| ~~~
| %d
2802 | __func__, ubq->dev->dev_info.dev_id, ubq->q_id,
2803 | io - ubq->ios, ret);
| ^~~~~~~~~~~~~
include/linux/printk.h:567:37: note: expanded from macro 'pr_warn'
567 | printk(KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__)
| ~~~ ^~~~~~~~~~~
include/linux/printk.h:514:60: note: expanded from macro 'printk'
514 | #define printk(fmt, ...) printk_index_wrap(_printk, fmt, ##__VA_ARGS__)
| ~~~ ^~~~~~~~~~~
include/linux/printk.h:486:19: note: expanded from macro 'printk_index_wrap'
486 | _p_func(_fmt, ##__VA_ARGS__); \
| ~~~~ ^~~~~~~~~~~
1 warning generated.
vim +2803 drivers/block/ublk_drv.c
2768
2769 static int ublk_batch_commit_io(struct ublk_io *io,
2770 const struct ublk_batch_io_data *data)
2771 {
2772 const struct ublk_batch_io *uc = io_uring_sqe_cmd(data->cmd->sqe);
2773 struct ublk_queue *ubq = data->ubq;
2774 u16 buf_idx = UBLK_INVALID_BUF_IDX;
2775 union ublk_io_buf buf = { 0 };
2776 struct request *req = NULL;
2777 bool auto_reg = false;
2778 bool compl = false;
2779 int ret;
2780
2781 if (ublk_support_auto_buf_reg(data->ubq)) {
2782 buf.auto_reg = ublk_batch_auto_buf_reg(uc, data->elem);
2783 auto_reg = true;
2784 } else if (ublk_need_map_io(data->ubq))
2785 buf.addr = ublk_batch_buf_addr(uc, data->elem);
2786
2787 ublk_io_lock(io);
2788 ret = ublk_batch_commit_io_check(ubq, io, &buf);
2789 if (!ret) {
2790 io->res = data->elem->result;
2791 io->buf = buf;
2792 req = ublk_fill_io_cmd(io, data->cmd);
2793
2794 if (auto_reg)
2795 __ublk_handle_auto_buf_reg(io, data->cmd, &buf_idx);
2796 compl = ublk_need_complete_req(ubq, io);
2797 }
2798 ublk_io_unlock(io);
2799
2800 if (unlikely(ret)) {
2801 pr_warn("%s: dev %u queue %u io %ld: commit failure %d\n",
2802 __func__, ubq->dev->dev_info.dev_id, ubq->q_id,
> 2803 io - ubq->ios, ret);
2804 return ret;
2805 }
2806
2807 /* can't touch 'ublk_io' any more */
2808 if (buf_idx != UBLK_INVALID_BUF_IDX)
2809 io_buffer_unregister_bvec(data->cmd, buf_idx, data->issue_flags);
2810 if (req_op(req) == REQ_OP_ZONE_APPEND)
2811 req->__sector = ublk_batch_zone_lba(uc, data->elem);
2812 if (compl)
2813 __ublk_complete_rq(req);
2814 return 0;
2815 }
2816
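A minimal way to silence this -Wformat warning on 32-bit builds (just a sketch of
one option, not necessarily the fix that will land) is to cast the pointer
difference so it matches a plain %d specifier:

	pr_warn("%s: dev %u queue %u io %d: commit failure %d\n",
		__func__, ubq->dev->dev_info.dev_id, ubq->q_id,
		(int)(io - ubq->ios), ret);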
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 01/23] ublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg()
2025-09-01 10:02 ` [PATCH 01/23] ublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg() Ming Lei
@ 2025-09-03 3:47 ` Caleb Sander Mateos
0 siblings, 0 replies; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-09-03 3:47 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> Add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg() and
> prepare for reusing this helper for the coming UBLK_BATCH_IO feature,
> which can fetch & commit one batch of io commands via single uring_cmd.
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 02/23] ublk: add `union ublk_io_buf` with improved naming
2025-09-01 10:02 ` [PATCH 02/23] ublk: add `union ublk_io_buf` with improved naming Ming Lei
@ 2025-09-03 4:01 ` Caleb Sander Mateos
0 siblings, 0 replies; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-09-03 4:01 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> Add `union ublk_io_buf`, meantime apply it to `struct ublk_io` for
> storing either ublk auto buffer register data or ublk server io buffer
> address.
The commit message could be a bit clearer that this is naming the
previously anonymous union of struct ublk_io's addr and buf fields. My
initial impression was that it was introducing a new union type.
Other than that,
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
>
> The union uses clear field names:
> - `addr`: for regular ublk server io buffer addresses
> - `auto_reg`: for ublk auto buffer registration data
>
> This eliminates confusing access patterns and improves code readability.
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> drivers/block/ublk_drv.c | 40 ++++++++++++++++++++++------------------
> 1 file changed, 22 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 040528ad5d30..9185978abeb7 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -155,12 +155,13 @@ struct ublk_uring_cmd_pdu {
> */
> #define UBLK_REFCOUNT_INIT (REFCOUNT_MAX / 2)
>
> +union ublk_io_buf {
> + __u64 addr;
> + struct ublk_auto_buf_reg auto_reg;
> +};
> +
> struct ublk_io {
> - /* userspace buffer address from io cmd */
> - union {
> - __u64 addr;
> - struct ublk_auto_buf_reg buf;
> - };
> + union ublk_io_buf buf;
> unsigned int flags;
> int res;
>
> @@ -500,7 +501,7 @@ static blk_status_t ublk_setup_iod_zoned(struct ublk_queue *ubq,
> iod->op_flags = ublk_op | ublk_req_build_flags(req);
> iod->nr_sectors = blk_rq_sectors(req);
> iod->start_sector = blk_rq_pos(req);
> - iod->addr = io->addr;
> + iod->addr = io->buf.addr;
>
> return BLK_STS_OK;
> }
> @@ -1012,7 +1013,7 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
> struct iov_iter iter;
> const int dir = ITER_DEST;
>
> - import_ubuf(dir, u64_to_user_ptr(io->addr), rq_bytes, &iter);
> + import_ubuf(dir, u64_to_user_ptr(io->buf.addr), rq_bytes, &iter);
> return ublk_copy_user_pages(req, 0, &iter, dir);
> }
> return rq_bytes;
> @@ -1033,7 +1034,7 @@ static int ublk_unmap_io(const struct ublk_queue *ubq,
>
> WARN_ON_ONCE(io->res > rq_bytes);
>
> - import_ubuf(dir, u64_to_user_ptr(io->addr), io->res, &iter);
> + import_ubuf(dir, u64_to_user_ptr(io->buf.addr), io->res, &iter);
> return ublk_copy_user_pages(req, 0, &iter, dir);
> }
> return rq_bytes;
> @@ -1104,7 +1105,7 @@ static blk_status_t ublk_setup_iod(struct ublk_queue *ubq, struct request *req)
> iod->op_flags = ublk_op | ublk_req_build_flags(req);
> iod->nr_sectors = blk_rq_sectors(req);
> iod->start_sector = blk_rq_pos(req);
> - iod->addr = io->addr;
> + iod->addr = io->buf.addr;
>
> return BLK_STS_OK;
> }
> @@ -1219,9 +1220,9 @@ static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> int ret;
>
> ret = io_buffer_register_bvec(cmd, req, ublk_io_release,
> - io->buf.index, issue_flags);
> + io->buf.auto_reg.index, issue_flags);
> if (ret) {
> - if (io->buf.flags & UBLK_AUTO_BUF_REG_FALLBACK) {
> + if (io->buf.auto_reg.flags & UBLK_AUTO_BUF_REG_FALLBACK) {
> ublk_auto_buf_reg_fallback(ubq, io);
> return true;
> }
> @@ -1513,7 +1514,7 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
> */
> io->flags &= UBLK_IO_FLAG_CANCELED;
> io->cmd = NULL;
> - io->addr = 0;
> + io->buf.addr = 0;
>
> /*
> * old task is PF_EXITING, put it now
> @@ -2007,13 +2008,16 @@ static inline int ublk_check_cmd_op(u32 cmd_op)
>
> static inline int ublk_set_auto_buf_reg(struct ublk_io *io, struct io_uring_cmd *cmd)
> {
> - io->buf = ublk_sqe_addr_to_auto_buf_reg(READ_ONCE(cmd->sqe->addr));
> + struct ublk_auto_buf_reg buf;
> +
> + buf = ublk_sqe_addr_to_auto_buf_reg(READ_ONCE(cmd->sqe->addr));
>
> - if (io->buf.reserved0 || io->buf.reserved1)
> + if (buf.reserved0 || buf.reserved1)
> return -EINVAL;
>
> - if (io->buf.flags & ~UBLK_AUTO_BUF_REG_F_MASK)
> + if (buf.flags & ~UBLK_AUTO_BUF_REG_F_MASK)
> return -EINVAL;
> + io->buf.auto_reg = buf;
> return 0;
> }
>
> @@ -2035,7 +2039,7 @@ static int ublk_handle_auto_buf_reg(struct ublk_io *io,
> * this ublk request gets stuck.
> */
> if (io->buf_ctx_handle == io_uring_cmd_ctx_handle(cmd))
> - *buf_idx = io->buf.index;
> + *buf_idx = io->buf.auto_reg.index;
> }
>
> return ublk_set_auto_buf_reg(io, cmd);
> @@ -2063,7 +2067,7 @@ ublk_config_io_buf(const struct ublk_queue *ubq, struct ublk_io *io,
> if (ublk_support_auto_buf_reg(ubq))
> return ublk_handle_auto_buf_reg(io, cmd, buf_idx);
>
> - io->addr = buf_addr;
> + io->buf.addr = buf_addr;
> return 0;
> }
>
> @@ -2259,7 +2263,7 @@ static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io,
> */
> io->flags &= ~UBLK_IO_FLAG_NEED_GET_DATA;
> /* update iod->addr because ublksrv may have passed a new io buffer */
> - ublk_get_iod(ubq, req->tag)->addr = io->addr;
> + ublk_get_iod(ubq, req->tag)->addr = io->buf.addr;
> pr_devel("%s: update iod->addr: qid %d tag %d io_flags %x addr %llx\n",
> __func__, ubq->q_id, req->tag, io->flags,
> ublk_get_iod(ubq, req->tag)->addr);
> --
> 2.47.0
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 03/23] ublk: refactor auto buffer register in ublk_dispatch_req()
2025-09-01 10:02 ` [PATCH 03/23] ublk: refactor auto buffer register in ublk_dispatch_req() Ming Lei
@ 2025-09-03 4:41 ` Caleb Sander Mateos
2025-09-10 2:23 ` Ming Lei
0 siblings, 1 reply; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-09-03 4:41 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> Refactor auto buffer register code and prepare for supporting batch IO
> feature, and the main motivation is to put 'ublk_io' operation code
> together, so that per-io lock can be applied for the code block.
>
> The key changes are:
> - Rename ublk_auto_buf_reg() as ublk_do_auto_buf_reg()
Thanks, the type and the function having the same name was a minor annoyance.
> - Introduce an enum `auto_buf_reg_res` to represent the result of
> the buffer registration attempt (FAIL, FALLBACK, OK).
> - Split the existing `ublk_do_auto_buf_reg` function into two:
> - `__ublk_do_auto_buf_reg`: Performs the actual buffer registration
> and returns the `auto_buf_reg_res` status.
> - `ublk_do_auto_buf_reg`: A wrapper that calls the internal function
> and handles the I/O preparation based on the result.
> - Introduce `ublk_prep_auto_buf_reg_io` to encapsulate the logic for
> preparing the I/O for completion after buffer registration.
> - Pass the `tag` directly to `ublk_auto_buf_reg_fallback` to avoid
> recalculating it.
>
> This refactoring makes the control flow clearer and isolates the different
> stages of the auto buffer registration process.
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> drivers/block/ublk_drv.c | 65 +++++++++++++++++++++++++++-------------
> 1 file changed, 44 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 9185978abeb7..e53f623b0efe 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -1205,17 +1205,36 @@ static inline void __ublk_abort_rq(struct ublk_queue *ubq,
> }
>
> static void
> -ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, struct ublk_io *io)
> +ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, unsigned tag)
> {
> - unsigned tag = io - ubq->ios;
The reason to calculate the tag like this was to avoid the pointer
dereference in req->tag. But req->tag is already accessed just prior
in ublk_dispatch_req(), so it should be cached and not too expensive
to load again.
> struct ublksrv_io_desc *iod = ublk_get_iod(ubq, tag);
>
> iod->op_flags |= UBLK_IO_F_NEED_REG_BUF;
> }
>
> -static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> - struct ublk_io *io, struct io_uring_cmd *cmd,
> - unsigned int issue_flags)
> +enum auto_buf_reg_res {
> + AUTO_BUF_REG_FAIL,
> + AUTO_BUF_REG_FALLBACK,
> + AUTO_BUF_REG_OK,
> +};
nit: move this enum definition next to the function that returns it?
> +
> +static void ublk_prep_auto_buf_reg_io(const struct ublk_queue *ubq,
> + struct request *req, struct ublk_io *io,
> + struct io_uring_cmd *cmd, bool registered)
How about passing enum auto_buf_reg_res instead of bool registered to
avoid the duplicated == AUTO_BUF_REG_OK in the callers?
> +{
> + if (registered) {
> + io->task_registered_buffers = 1;
> + io->buf_ctx_handle = io_uring_cmd_ctx_handle(cmd);
> + io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG;
> + }
> + ublk_init_req_ref(ubq, io);
> + __ublk_prep_compl_io_cmd(io, req);
> +}
> +
> +static enum auto_buf_reg_res
> +__ublk_do_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> + struct ublk_io *io, struct io_uring_cmd *cmd,
> + unsigned int issue_flags)
> {
> int ret;
>
> @@ -1223,29 +1242,27 @@ static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> io->buf.auto_reg.index, issue_flags);
> if (ret) {
> if (io->buf.auto_reg.flags & UBLK_AUTO_BUF_REG_FALLBACK) {
> - ublk_auto_buf_reg_fallback(ubq, io);
> - return true;
> + ublk_auto_buf_reg_fallback(ubq, req->tag);
> + return AUTO_BUF_REG_FALLBACK;
> }
> blk_mq_end_request(req, BLK_STS_IOERR);
> - return false;
> + return AUTO_BUF_REG_FAIL;
> }
>
> - io->task_registered_buffers = 1;
> - io->buf_ctx_handle = io_uring_cmd_ctx_handle(cmd);
> - io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG;
> - return true;
> + return AUTO_BUF_REG_OK;
> }
>
> -static bool ublk_prep_auto_buf_reg(struct ublk_queue *ubq,
> - struct request *req, struct ublk_io *io,
> - struct io_uring_cmd *cmd,
> - unsigned int issue_flags)
> +static void ublk_do_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> + struct ublk_io *io, struct io_uring_cmd *cmd,
> + unsigned int issue_flags)
> {
> - ublk_init_req_ref(ubq, io);
> - if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req))
> - return ublk_auto_buf_reg(ubq, req, io, cmd, issue_flags);
> + enum auto_buf_reg_res res = __ublk_do_auto_buf_reg(ubq, req, io, cmd,
> + issue_flags);
>
> - return true;
> + if (res != AUTO_BUF_REG_FAIL) {
> + ublk_prep_auto_buf_reg_io(ubq, req, io, cmd, res == AUTO_BUF_REG_OK);
> + io_uring_cmd_done(cmd, UBLK_IO_RES_OK, 0, issue_flags);
> + }
> }
>
> static bool ublk_start_io(const struct ublk_queue *ubq, struct request *req,
> @@ -1318,8 +1335,14 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
> if (!ublk_start_io(ubq, req, io))
> return;
>
> - if (ublk_prep_auto_buf_reg(ubq, req, io, io->cmd, issue_flags))
> + if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req)) {
> + struct io_uring_cmd *cmd = io->cmd;
Don't really see the need for this intermediate variable
Best,
Caleb
> +
> + ublk_do_auto_buf_reg(ubq, req, io, cmd, issue_flags);
> + } else {
> + ublk_init_req_ref(ubq, io);
> ublk_complete_io_cmd(io, req, UBLK_IO_RES_OK, issue_flags);
> + }
> }
>
> static void ublk_cmd_tw_cb(struct io_uring_cmd *cmd,
> --
> 2.47.0
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 04/23] ublk: add helper of __ublk_fetch()
2025-09-01 10:02 ` [PATCH 04/23] ublk: add helper of __ublk_fetch() Ming Lei
@ 2025-09-03 4:42 ` Caleb Sander Mateos
2025-09-10 2:30 ` Ming Lei
0 siblings, 1 reply; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-09-03 4:42 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> Add helper __ublk_fetch() for the coming batch io feature.
>
> Meantime move ublk_config_io_buf() out of __ublk_fetch() because batch
> io has new interface for configuring buffer.
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> drivers/block/ublk_drv.c | 31 ++++++++++++++++++++-----------
> 1 file changed, 20 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index e53f623b0efe..f265795a8d57 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -2206,18 +2206,12 @@ static int ublk_check_fetch_buf(const struct ublk_queue *ubq, __u64 buf_addr)
> return 0;
> }
>
> -static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> - struct ublk_io *io, __u64 buf_addr)
> +static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> + struct ublk_io *io)
> {
> struct ublk_device *ub = ubq->dev;
> int ret = 0;
>
> - /*
> - * When handling FETCH command for setting up ublk uring queue,
> - * ub->mutex is the innermost lock, and we won't block for handling
> - * FETCH, so it is fine even for IO_URING_F_NONBLOCK.
> - */
> - mutex_lock(&ub->mutex);
> /* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
> if (ublk_queue_ready(ubq)) {
> ret = -EBUSY;
> @@ -2233,13 +2227,28 @@ static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV);
>
> ublk_fill_io_cmd(io, cmd);
> - ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
> - if (ret)
> - goto out;
>
> WRITE_ONCE(io->task, get_task_struct(current));
> ublk_mark_io_ready(ub, ubq);
> out:
> + return ret;
If the out: section no longer releases any resources, can we replace
the "goto out" with just "return ret"?
> +}
> +
> +static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> + struct ublk_io *io, __u64 buf_addr)
> +{
> + struct ublk_device *ub = ubq->dev;
> + int ret;
> +
> + /*
> + * When handling FETCH command for setting up ublk uring queue,
> + * ub->mutex is the innermost lock, and we won't block for handling
> + * FETCH, so it is fine even for IO_URING_F_NONBLOCK.
> + */
> + mutex_lock(&ub->mutex);
> + ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
> + if (!ret)
> + ret = __ublk_fetch(cmd, ubq, io);
How come the order of operations was switched here? ublk_fetch()
previously checked ublk_queue_ready(ubq) and io->flags &
UBLK_IO_FLAG_ACTIVE first, which seems necessary to prevent
overwriting a ublk_io that has already been fetched.
Best,
Caleb
> mutex_unlock(&ub->mutex);
> return ret;
> }
> --
> 2.47.0
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 05/23] ublk: define ublk_ch_batch_io_fops for the coming feature F_BATCH_IO
2025-09-01 10:02 ` [PATCH 05/23] ublk: define ublk_ch_batch_io_fops for the coming feature F_BATCH_IO Ming Lei
@ 2025-09-06 18:47 ` Caleb Sander Mateos
0 siblings, 0 replies; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-09-06 18:47 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> Introduces the basic structure for a batched I/O feature in the ublk driver.
> It adds placeholder functions and a new file operations structure,
> ublk_ch_batch_io_fops, which will be used for fetching and committing I/O
> commands in batches. Currently, the feature is disabled and returns
> -EOPNOTSUPP.
Technically the "return -EOPNOTSUPP" isn't even reachable since
ublk_ch_batch_io_fops isn't used yet. I think saying "the feature is
disabled" would be sufficient.
Other than that,
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> drivers/block/ublk_drv.c | 26 +++++++++++++++++++++++++-
> 1 file changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index f265795a8d57..a0dfad8a56f0 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -256,6 +256,11 @@ static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
> size_t offset);
> static inline unsigned int ublk_req_build_flags(struct request *req);
>
> +static inline bool ublk_dev_support_batch_io(const struct ublk_device *ub)
> +{
> + return false;
> +}
> +
> static inline struct ublksrv_io_desc *
> ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
> {
> @@ -2509,6 +2514,12 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
> return ublk_ch_uring_cmd_local(cmd, issue_flags);
> }
>
> +static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
> + unsigned int issue_flags)
> +{
> + return -EOPNOTSUPP;
> +}
> +
> static inline bool ublk_check_ubuf_dir(const struct request *req,
> int ubuf_dir)
> {
> @@ -2624,6 +2635,16 @@ static const struct file_operations ublk_ch_fops = {
> .mmap = ublk_ch_mmap,
> };
>
> +static const struct file_operations ublk_ch_batch_io_fops = {
> + .owner = THIS_MODULE,
> + .open = ublk_ch_open,
> + .release = ublk_ch_release,
> + .read_iter = ublk_ch_read_iter,
> + .write_iter = ublk_ch_write_iter,
> + .uring_cmd = ublk_ch_batch_io_uring_cmd,
> + .mmap = ublk_ch_mmap,
> +};
> +
> static void ublk_deinit_queue(struct ublk_device *ub, int q_id)
> {
> int size = ublk_queue_cmd_buf_size(ub, q_id);
> @@ -2761,7 +2782,10 @@ static int ublk_add_chdev(struct ublk_device *ub)
> if (ret)
> goto fail;
>
> - cdev_init(&ub->cdev, &ublk_ch_fops);
> + if (ublk_dev_support_batch_io(ub))
> + cdev_init(&ub->cdev, &ublk_ch_batch_io_fops);
> + else
> + cdev_init(&ub->cdev, &ublk_ch_fops);
> ret = cdev_device_add(&ub->cdev, dev);
> if (ret)
> goto fail;
> --
> 2.47.0
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 06/23] ublk: prepare for not tracking task context for command batch
2025-09-01 10:02 ` [PATCH 06/23] ublk: prepare for not tracking task context for command batch Ming Lei
@ 2025-09-06 18:48 ` Caleb Sander Mateos
2025-09-10 2:35 ` Ming Lei
0 siblings, 1 reply; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-09-06 18:48 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> batch io is designed to be independent of task context, and we will not
> track task context for batch io feature.
>
> So warn on non-batch-io code paths.
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> drivers/block/ublk_drv.c | 16 +++++++++++++++-
> 1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index a0dfad8a56f0..46be5b656f22 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -261,6 +261,11 @@ static inline bool ublk_dev_support_batch_io(const struct ublk_device *ub)
> return false;
> }
>
> +static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
> +{
> + return false;
> +}
> +
> static inline struct ublksrv_io_desc *
> ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
> {
> @@ -1309,6 +1314,8 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
> __func__, ubq->q_id, req->tag, io->flags,
> ublk_get_iod(ubq, req->tag)->addr);
>
> + WARN_ON_ONCE(ublk_support_batch_io(ubq));
Hmm, not a huge fan of extra checks in the I/O path. It seems fairly
easy to verify from the code that these functions won't be called for
batch commands. Do we really need the assertion?
> +
> /*
> * Task is exiting if either:
> *
> @@ -1868,6 +1875,8 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
> if (WARN_ON_ONCE(pdu->tag >= ubq->q_depth))
> return;
>
> + WARN_ON_ONCE(ublk_support_batch_io(ubq));
> +
> task = io_uring_cmd_get_task(cmd);
> io = &ubq->ios[pdu->tag];
> if (WARN_ON_ONCE(task && task != io->task))
> @@ -2233,7 +2242,10 @@ static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
>
> ublk_fill_io_cmd(io, cmd);
>
> - WRITE_ONCE(io->task, get_task_struct(current));
> + if (ublk_support_batch_io(ubq))
> + WRITE_ONCE(io->task, NULL);
Don't see a need to explicitly write NULL here since the ublk_io
memory is zero-initialized.
Best,
Caleb
> + else
> + WRITE_ONCE(io->task, get_task_struct(current));
> ublk_mark_io_ready(ub, ubq);
> out:
> return ret;
> @@ -2347,6 +2359,8 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
> if (tag >= ubq->q_depth)
> goto out;
>
> + WARN_ON_ONCE(ublk_support_batch_io(ubq));
> +
> io = &ubq->ios[tag];
> /* UBLK_IO_FETCH_REQ can be handled on any task, which sets io->task */
> if (unlikely(_IOC_NR(cmd_op) == UBLK_IO_FETCH_REQ)) {
> --
> 2.47.0
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 07/23] ublk: add new batch command UBLK_U_IO_PREP_IO_CMDS & UBLK_U_IO_COMMIT_IO_CMDS
2025-09-01 10:02 ` [PATCH 07/23] ublk: add new batch command UBLK_U_IO_PREP_IO_CMDS & UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
@ 2025-09-06 18:50 ` Caleb Sander Mateos
2025-09-10 3:05 ` Ming Lei
0 siblings, 1 reply; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-09-06 18:50 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> Add new command UBLK_U_IO_PREP_IO_CMDS, which is the batch version of
> UBLK_IO_FETCH_REQ.
>
> Add new command UBLK_U_IO_COMMIT_IO_CMDS, which is for committing io command
> result only, still the batch version.
>
> The new command header type is `struct ublk_batch_io`, and fixed buffer is
> required for these two uring_cmd.
The commit message could be clearer that it doesn't actually implement
these commands yet, just validates the SQE fields.
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> drivers/block/ublk_drv.c | 102 +++++++++++++++++++++++++++++++++-
> include/uapi/linux/ublk_cmd.h | 49 ++++++++++++++++
> 2 files changed, 149 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 46be5b656f22..4da0dbbd7e16 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -85,6 +85,11 @@
> UBLK_PARAM_TYPE_DEVT | UBLK_PARAM_TYPE_ZONED | \
> UBLK_PARAM_TYPE_DMA_ALIGN | UBLK_PARAM_TYPE_SEGMENT)
>
> +#define UBLK_BATCH_F_ALL \
> + (UBLK_BATCH_F_HAS_ZONE_LBA | \
> + UBLK_BATCH_F_HAS_BUF_ADDR | \
> + UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK)
> +
> struct ublk_uring_cmd_pdu {
> /*
> * Store requests in same batch temporarily for queuing them to
> @@ -108,6 +113,11 @@ struct ublk_uring_cmd_pdu {
> u16 tag;
> };
>
> +struct ublk_batch_io_data {
> + struct ublk_queue *ubq;
> + struct io_uring_cmd *cmd;
> +};
> +
> /*
> * io command is active: sqe cmd is received, and its cqe isn't done
> *
> @@ -277,7 +287,7 @@ static inline bool ublk_dev_is_zoned(const struct ublk_device *ub)
> return ub->dev_info.flags & UBLK_F_ZONED;
> }
>
> -static inline bool ublk_queue_is_zoned(struct ublk_queue *ubq)
> +static inline bool ublk_queue_is_zoned(const struct ublk_queue *ubq)
This change could go in a separate commit.
> {
> return ubq->flags & UBLK_F_ZONED;
> }
> @@ -2528,10 +2538,98 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
> return ublk_ch_uring_cmd_local(cmd, issue_flags);
> }
>
> +static int ublk_check_batch_cmd_flags(const struct ublk_batch_io *uc)
> +{
> + const unsigned short mask = UBLK_BATCH_F_HAS_BUF_ADDR |
> + UBLK_BATCH_F_HAS_ZONE_LBA;
Can we use a fixed-size integer type, i.e. u16?
> +
> + if (uc->flags & ~UBLK_BATCH_F_ALL)
> + return -EINVAL;
> +
> + /* UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK requires buffer index */
> + if ((uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) &&
> + (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR))
> + return -EINVAL;
> +
> + switch (uc->flags & mask) {
> + case 0:
> + if (uc->elem_bytes != 8)
sizeof(struct ublk_elem_header)?
> + return -EINVAL;
> + break;
> + case UBLK_BATCH_F_HAS_ZONE_LBA:
> + case UBLK_BATCH_F_HAS_BUF_ADDR:
> + if (uc->elem_bytes != 8 + 8)
sizeof(u64)?
> + return -EINVAL;
> + break;
> + case UBLK_BATCH_F_HAS_ZONE_LBA | UBLK_BATCH_F_HAS_BUF_ADDR:
> + if (uc->elem_bytes != 8 + 8 + 8)
> + return -EINVAL;
So elem_bytes is redundant with flags? Do we really need a separate
field then?
> + break;
> + default:
> + return -EINVAL;
default case is unreachable?
> + }
> +
> + return 0;
> +}
> +
> +static int ublk_check_batch_cmd(const struct ublk_batch_io_data *data,
> + const struct ublk_batch_io *uc)
> +{
> + if (!(data->cmd->flags & IORING_URING_CMD_FIXED))
> + return -EINVAL;
> +
> + if (uc->nr_elem * uc->elem_bytes > data->cmd->sqe->len)
Cast nr_elem and/or elem_bytes to u32 to avoid overflow concerns?
Should also use READ_ONCE() to read the userspace-mapped sqe->len.
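i.e. something along these lines (a sketch of the suggestion, not tested):

	if ((u32)uc->nr_elem * uc->elem_bytes > READ_ONCE(data->cmd->sqe->len))
		return -E2BIG;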
> + return -E2BIG;
> +
> + if (uc->nr_elem > data->ubq->q_depth)
> + return -E2BIG;
> +
> + if ((uc->flags & UBLK_BATCH_F_HAS_ZONE_LBA) &&
> + !ublk_queue_is_zoned(data->ubq))
> + return -EINVAL;
> +
> + if ((uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR) &&
> + !ublk_need_map_io(data->ubq))
> + return -EINVAL;
> +
> + if ((uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) &&
> + !ublk_support_auto_buf_reg(data->ubq))
> + return -EINVAL;
> +
> + if (uc->reserved || uc->reserved2)
> + return -EINVAL;
> +
> + return ublk_check_batch_cmd_flags(uc);
> +}
> +
> static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
> unsigned int issue_flags)
> {
> - return -EOPNOTSUPP;
> + const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
> + struct ublk_device *ub = cmd->file->private_data;
> + struct ublk_batch_io_data data = {
> + .cmd = cmd,
> + };
> + u32 cmd_op = cmd->cmd_op;
> + int ret = -EINVAL;
> +
> + if (uc->q_id >= ub->dev_info.nr_hw_queues)
> + goto out;
> + data.ubq = ublk_get_queue(ub, uc->q_id);
Should be using READ_ONCE() to read from userspace-mapped memory.
> +
> + switch (cmd_op) {
> + case UBLK_U_IO_PREP_IO_CMDS:
> + case UBLK_U_IO_COMMIT_IO_CMDS:
> + ret = ublk_check_batch_cmd(&data, uc);
> + if (ret)
> + goto out;
> + ret = -EOPNOTSUPP;
> + break;
> + default:
> + ret = -EOPNOTSUPP;
> + }
> +out:
> + return ret;
> }
>
> static inline bool ublk_check_ubuf_dir(const struct request *req,
> diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
> index ec77dabba45b..01d3af52cfb4 100644
> --- a/include/uapi/linux/ublk_cmd.h
> +++ b/include/uapi/linux/ublk_cmd.h
> @@ -102,6 +102,10 @@
> _IOWR('u', 0x23, struct ublksrv_io_cmd)
> #define UBLK_U_IO_UNREGISTER_IO_BUF \
> _IOWR('u', 0x24, struct ublksrv_io_cmd)
> +#define UBLK_U_IO_PREP_IO_CMDS \
> + _IOWR('u', 0x25, struct ublk_batch_io)
> +#define UBLK_U_IO_COMMIT_IO_CMDS \
> + _IOWR('u', 0x26, struct ublk_batch_io)
>
> /* only ABORT means that no re-fetch */
> #define UBLK_IO_RES_OK 0
> @@ -525,6 +529,51 @@ struct ublksrv_io_cmd {
> };
> };
>
> +struct ublk_elem_header {
> + __u16 tag; /* IO tag */
> +
> + /*
> + * Buffer index for incoming io command, only valid iff
> + * UBLK_F_AUTO_BUF_REG is set
> + */
> + __u16 buf_index;
> + __u32 result; /* I/O completion result (commit only) */
The result is unsigned? So there's no way to specify a request failure?
> +};
> +
> +/*
> + * uring_cmd buffer structure
Add "for batch commands"?
> + *
> + * buffer includes multiple elements, which number is specified by
> + * `nr_elem`. Each element buffer is organized in the following order:
> + *
> + * struct ublk_elem_buffer {
> + * // Mandatory fields (8 bytes)
> + * struct ublk_elem_header header;
> + *
> + * // Optional fields (8 bytes each, included based on flags)
> + *
> + * // Buffer address (if UBLK_BATCH_F_HAS_BUF_ADDR) for copying data
> + * // between ublk request and ublk server buffer
> + * __u64 buf_addr;
> + *
> + * // returned Zone append LBA (if UBLK_BATCH_F_HAS_ZONE_LBA)
> + * __u64 zone_lba;
> + * }
> + *
> + * Used for `UBLK_U_IO_PREP_IO_CMDS` and `UBLK_U_IO_COMMIT_IO_CMDS`
> + */
> +struct ublk_batch_io {
> + __u16 q_id;
So this doesn't allow batching completions across ublk queues? That
seems like it significantly limits the usefulness of this feature. A
ublk server thread may be handling ublk requests from a number of
client threads which are submitting to different ublk queues.
Best,
Caleb
> +#define UBLK_BATCH_F_HAS_ZONE_LBA (1 << 0)
> +#define UBLK_BATCH_F_HAS_BUF_ADDR (1 << 1)
> +#define UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK (1 << 2)
> + __u16 flags;
> + __u16 nr_elem;
> + __u8 elem_bytes;
> + __u8 reserved;
> + __u64 reserved2;
> +};
> +
> struct ublk_param_basic {
> #define UBLK_ATTR_READ_ONLY (1 << 0)
> #define UBLK_ATTR_ROTATIONAL (1 << 1)
> --
> 2.47.0
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS
2025-09-01 10:02 ` [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS Ming Lei
@ 2025-09-06 19:48 ` Caleb Sander Mateos
2025-09-10 3:56 ` Ming Lei
0 siblings, 1 reply; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-09-06 19:48 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> This commit implements the handling of the UBLK_U_IO_PREP_IO_CMDS command,
> which allows userspace to prepare a batch of I/O requests.
>
> The core of this change is the `ublk_walk_cmd_buf` function, which iterates
> over the elements in the uring_cmd fixed buffer. For each element, it parses
> the I/O details, finds the corresponding `ublk_io` structure, and prepares it
> for future dispatch.
>
> Add per-io lock for protecting concurrent delivery and committing.
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> drivers/block/ublk_drv.c | 191 +++++++++++++++++++++++++++++++++-
> include/uapi/linux/ublk_cmd.h | 5 +
> 2 files changed, 195 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 4da0dbbd7e16..a4bae3d1562a 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -116,6 +116,10 @@ struct ublk_uring_cmd_pdu {
> struct ublk_batch_io_data {
> struct ublk_queue *ubq;
> struct io_uring_cmd *cmd;
> + unsigned int issue_flags;
> +
> + /* set when walking the element buffer */
> + const struct ublk_elem_header *elem;
> };
>
> /*
> @@ -200,6 +204,7 @@ struct ublk_io {
> unsigned task_registered_buffers;
>
> void *buf_ctx_handle;
> + spinlock_t lock;
From our experience writing a high-throughput ublk server, the
spinlocks and mutexes in the kernel are some of the largest CPU
hotspots. We have spent a lot of effort working to avoid locking where
possible or shard data structures to reduce contention on the locks.
Even uncontended locks are still very expensive to acquire and release
on machines with many CPUs due to the cache coherency overhead. ublk's
per-io daemon architecture is great for performance by removing the
need for locks in the I/O path. I can't really see us adopting this
ublk batching feature; adding a spin_lock() + spin_unlock() to every
ublk commit operation is not worth the reduction in io_uring SQEs and
uring_cmds.
> } ____cacheline_aligned_in_smp;
>
> struct ublk_queue {
> @@ -276,6 +281,16 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
> return false;
> }
>
> +static inline void ublk_io_lock(struct ublk_io *io)
> +{
> + spin_lock(&io->lock);
> +}
> +
> +static inline void ublk_io_unlock(struct ublk_io *io)
> +{
> + spin_unlock(&io->lock);
> +}
> +
> static inline struct ublksrv_io_desc *
> ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
> {
> @@ -2538,6 +2553,171 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
> return ublk_ch_uring_cmd_local(cmd, issue_flags);
> }
>
> +static inline __u64 ublk_batch_buf_addr(const struct ublk_batch_io *uc,
> + const struct ublk_elem_header *elem)
> +{
> + const void *buf = (const void *)elem;
Don't need an explicit cast in order to cast to void *.
> +
> + if (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR)
> + return *(__u64 *)(buf + sizeof(*elem));
> + return -1;
Why -1 and not 0? ublk_check_fetch_buf() is expecting a 0 buf_addr to
indicate the lack of a buffer address.
> +}
> +
> +static struct ublk_auto_buf_reg
> +ublk_batch_auto_buf_reg(const struct ublk_batch_io *uc,
> + const struct ublk_elem_header *elem)
> +{
> + struct ublk_auto_buf_reg reg = {
> + .index = elem->buf_index,
> + .flags = (uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) ?
> + UBLK_AUTO_BUF_REG_FALLBACK : 0,
> + };
> +
> + return reg;
> +}
> +
> +/* 48 can cover any type of buffer element(8, 16 and 24 bytes) */
"can cover" is a bit vague. Can you be explicit that the buffer size
needs to be a multiple of any possible buffer element size?
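(For what it's worth, 48 = lcm(8, 16, 24), so any multiple of 48 bytes holds a
whole number of elements for every supported element size.)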
> +#define UBLK_CMD_BATCH_TMP_BUF_SZ (48 * 10)
> +struct ublk_batch_io_iter {
> + /* copy to this buffer from iterator first */
> + unsigned char buf[UBLK_CMD_BATCH_TMP_BUF_SZ];
> + struct iov_iter iter;
> + unsigned done, total;
> + unsigned char elem_bytes;
> +};
> +
> +static int __ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> + struct ublk_batch_io_data *data,
> + unsigned bytes,
> + int (*cb)(struct ublk_io *io,
> + const struct ublk_batch_io_data *data))
> +{
> + int i, ret = 0;
> +
> + for (i = 0; i < bytes; i += iter->elem_bytes) {
> + const struct ublk_elem_header *elem =
> + (const struct ublk_elem_header *)&iter->buf[i];
> + struct ublk_io *io;
> +
> + if (unlikely(elem->tag >= data->ubq->q_depth)) {
> + ret = -EINVAL;
> + break;
> + }
> +
> + io = &data->ubq->ios[elem->tag];
> + data->elem = elem;
> + ret = cb(io, data);
Why not just pass elem as a separate argument to the callback?
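i.e. a callback signature along these lines (sketch):

	int (*cb)(struct ublk_io *io, const struct ublk_batch_io_data *data,
		  const struct ublk_elem_header *elem)

so data->elem would no longer need to be updated on every iteration.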
> + if (unlikely(ret))
> + break;
> + }
> + iter->done += i;
> + return ret;
> +}
> +
> +static int ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> + struct ublk_batch_io_data *data,
> + int (*cb)(struct ublk_io *io,
> + const struct ublk_batch_io_data *data))
> +{
> + int ret = 0;
> +
> + while (iter->done < iter->total) {
> + unsigned int len = min(sizeof(iter->buf), iter->total - iter->done);
> +
> + ret = copy_from_iter(iter->buf, len, &iter->iter);
> + if (ret != len) {
How would this be possible? The iterator comes from an io_uring
registered buffer with at least the requested length, so the user
addresses should have been validated when the buffer was registered.
Should this just be a WARN_ON()?
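e.g. (sketch):

	if (WARN_ON_ONCE(copy_from_iter(iter->buf, len, &iter->iter) != len))
		return -EFAULT;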
> + pr_warn("ublk%d: read batch cmd buffer failed %u/%u\n",
> + data->ubq->dev->dev_info.dev_id, ret, len);
> + ret = -EINVAL;
> + break;
> + }
> +
> + ret = __ublk_walk_cmd_buf(iter, data, len, cb);
> + if (ret)
> + break;
> + }
> + return ret;
> +}
> +
> +static int ublk_batch_unprep_io(struct ublk_io *io,
> + const struct ublk_batch_io_data *data)
> +{
> + if (ublk_queue_ready(data->ubq))
> + data->ubq->dev->nr_queues_ready--;
> +
> + ublk_io_lock(io);
> + io->flags = 0;
> + ublk_io_unlock(io);
> + data->ubq->nr_io_ready--;
> + return 0;
This "unprep" looks very subtle and fairly complicated. Is it really
necessary? What's wrong with leaving the I/Os that were successfully
prepped? It also looks racy to clear io->flags after the queue is
ready, as the io may already be in use by some I/O request.
> +}
> +
> +static void ublk_batch_revert_prep_cmd(struct ublk_batch_io_iter *iter,
> + struct ublk_batch_io_data *data)
> +{
> + int ret;
> +
> + if (!iter->done)
> + return;
> +
> + iov_iter_revert(&iter->iter, iter->done);
Shouldn't the iterator be reverted by the total number of bytes
copied, which may be more than iter->done?
> + iter->total = iter->done;
> + iter->done = 0;
> +
> + ret = ublk_walk_cmd_buf(iter, data, ublk_batch_unprep_io);
> + WARN_ON_ONCE(ret);
> +}
> +
> +static int ublk_batch_prep_io(struct ublk_io *io,
> + const struct ublk_batch_io_data *data)
> +{
> + const struct ublk_batch_io *uc = io_uring_sqe_cmd(data->cmd->sqe);
> + union ublk_io_buf buf = { 0 };
> + int ret;
> +
> + if (ublk_support_auto_buf_reg(data->ubq))
> + buf.auto_reg = ublk_batch_auto_buf_reg(uc, data->elem);
> + else if (ublk_need_map_io(data->ubq)) {
> + buf.addr = ublk_batch_buf_addr(uc, data->elem);
> +
> + ret = ublk_check_fetch_buf(data->ubq, buf.addr);
> + if (ret)
> + return ret;
> + }
> +
> + ublk_io_lock(io);
> + ret = __ublk_fetch(data->cmd, data->ubq, io);
> + if (!ret)
> + io->buf = buf;
> + ublk_io_unlock(io);
> +
> + return ret;
> +}
> +
> +static int ublk_handle_batch_prep_cmd(struct ublk_batch_io_data *data)
> +{
> + struct io_uring_cmd *cmd = data->cmd;
> + const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
> + struct ublk_batch_io_iter iter = {
> + .total = uc->nr_elem * uc->elem_bytes,
> + .elem_bytes = uc->elem_bytes,
> + };
> + int ret;
> +
> + ret = io_uring_cmd_import_fixed(cmd->sqe->addr, cmd->sqe->len,
Could iter.total be used in place of cmd->sqe->len? That way userspace
wouldn't have to specify a redundant value in the SQE len field.
> + WRITE, &iter.iter, cmd, data->issue_flags);
> + if (ret)
> + return ret;
> +
> + mutex_lock(&data->ubq->dev->mutex);
> + ret = ublk_walk_cmd_buf(&iter, data, ublk_batch_prep_io);
> +
> + if (ret && iter.done)
> + ublk_batch_revert_prep_cmd(&iter, data);
The iter.done check is duplicated in ublk_batch_revert_prep_cmd().
> + mutex_unlock(&data->ubq->dev->mutex);
> + return ret;
> +}
> +
> static int ublk_check_batch_cmd_flags(const struct ublk_batch_io *uc)
> {
> const unsigned short mask = UBLK_BATCH_F_HAS_BUF_ADDR |
> @@ -2609,6 +2789,7 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
> struct ublk_device *ub = cmd->file->private_data;
> struct ublk_batch_io_data data = {
> .cmd = cmd,
> + .issue_flags = issue_flags,
> };
> u32 cmd_op = cmd->cmd_op;
> int ret = -EINVAL;
> @@ -2619,6 +2800,11 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
>
> switch (cmd_op) {
> case UBLK_U_IO_PREP_IO_CMDS:
> + ret = ublk_check_batch_cmd(&data, uc);
> + if (ret)
> + goto out;
> + ret = ublk_handle_batch_prep_cmd(&data);
> + break;
> case UBLK_U_IO_COMMIT_IO_CMDS:
> ret = ublk_check_batch_cmd(&data, uc);
> if (ret)
> @@ -2780,7 +2966,7 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
> struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
> gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO;
> void *ptr;
> - int size;
> + int size, i;
>
> spin_lock_init(&ubq->cancel_lock);
> ubq->flags = ub->dev_info.flags;
> @@ -2792,6 +2978,9 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
> if (!ptr)
> return -ENOMEM;
>
> + for (i = 0; i < ubq->q_depth; i++)
> + spin_lock_init(&ubq->ios[i].lock);
> +
> ubq->io_cmd_buf = ptr;
> ubq->dev = ub;
> return 0;
> diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
> index 01d3af52cfb4..38c8cc10d694 100644
> --- a/include/uapi/linux/ublk_cmd.h
> +++ b/include/uapi/linux/ublk_cmd.h
> @@ -102,6 +102,11 @@
> _IOWR('u', 0x23, struct ublksrv_io_cmd)
> #define UBLK_U_IO_UNREGISTER_IO_BUF \
> _IOWR('u', 0x24, struct ublksrv_io_cmd)
> +
> +/*
> + * return 0 if the command is run successfully, otherwise failure code
> + * is returned
> + */
Not sure this is really necessary to comment, that's pretty standard
for syscalls and uring_cmds.
Best,
Caleb
> #define UBLK_U_IO_PREP_IO_CMDS \
> _IOWR('u', 0x25, struct ublk_batch_io)
> #define UBLK_U_IO_COMMIT_IO_CMDS \
> --
> 2.47.0
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 03/23] ublk: refactor auto buffer register in ublk_dispatch_req()
2025-09-03 4:41 ` Caleb Sander Mateos
@ 2025-09-10 2:23 ` Ming Lei
2025-09-11 18:13 ` Caleb Sander Mateos
0 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-10 2:23 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, Uday Shankar
On Tue, Sep 02, 2025 at 09:41:55PM -0700, Caleb Sander Mateos wrote:
> On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > Refactor auto buffer register code and prepare for supporting batch IO
> > feature, and the main motivation is to put 'ublk_io' operation code
> > together, so that per-io lock can be applied for the code block.
> >
> > The key changes are:
> > - Rename ublk_auto_buf_reg() as ublk_do_auto_buf_reg()
>
> Thanks, the type and the function having the same name was a minor annoyance.
>
> > - Introduce an enum `auto_buf_reg_res` to represent the result of
> > the buffer registration attempt (FAIL, FALLBACK, OK).
> > - Split the existing `ublk_do_auto_buf_reg` function into two:
> > - `__ublk_do_auto_buf_reg`: Performs the actual buffer registration
> > and returns the `auto_buf_reg_res` status.
> > - `ublk_do_auto_buf_reg`: A wrapper that calls the internal function
> > and handles the I/O preparation based on the result.
> > - Introduce `ublk_prep_auto_buf_reg_io` to encapsulate the logic for
> > preparing the I/O for completion after buffer registration.
> > - Pass the `tag` directly to `ublk_auto_buf_reg_fallback` to avoid
> > recalculating it.
> >
> > This refactoring makes the control flow clearer and isolates the different
> > stages of the auto buffer registration process.
> >
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> > drivers/block/ublk_drv.c | 65 +++++++++++++++++++++++++++-------------
> > 1 file changed, 44 insertions(+), 21 deletions(-)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index 9185978abeb7..e53f623b0efe 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -1205,17 +1205,36 @@ static inline void __ublk_abort_rq(struct ublk_queue *ubq,
> > }
> >
> > static void
> > -ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, struct ublk_io *io)
> > +ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, unsigned tag)
> > {
> > - unsigned tag = io - ubq->ios;
>
> The reason to calculate the tag like this was to avoid the pointer
> dereference in req->tag. But req->tag is already accessed just prior
> in ublk_dispatch_req(), so it should be cached and not too expensive
> to load again.
Ok. One thing to note is that ublk_auto_buf_reg_fallback() should only be called
in the slow path anyway...
>
> > struct ublksrv_io_desc *iod = ublk_get_iod(ubq, tag);
> >
> > iod->op_flags |= UBLK_IO_F_NEED_REG_BUF;
> > }
> >
> > -static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> > - struct ublk_io *io, struct io_uring_cmd *cmd,
> > - unsigned int issue_flags)
> > +enum auto_buf_reg_res {
> > + AUTO_BUF_REG_FAIL,
> > + AUTO_BUF_REG_FALLBACK,
> > + AUTO_BUF_REG_OK,
> > +};
>
> nit: move this enum definition next to the function that returns it?
Yeah, good point.
>
> > +
> > +static void ublk_prep_auto_buf_reg_io(const struct ublk_queue *ubq,
> > + struct request *req, struct ublk_io *io,
> > + struct io_uring_cmd *cmd, bool registered)
>
> How about passing enum auto_buf_reg_res instead of bool registered to
> avoid the duplicated == AUTO_BUF_REG_OK in the callers?
OK, either way is fine for me.
>
> > +{
> > + if (registered) {
> > + io->task_registered_buffers = 1;
> > + io->buf_ctx_handle = io_uring_cmd_ctx_handle(cmd);
> > + io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG;
> > + }
> > + ublk_init_req_ref(ubq, io);
> > + __ublk_prep_compl_io_cmd(io, req);
> > +}
> > +
> > +static enum auto_buf_reg_res
> > +__ublk_do_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> > + struct ublk_io *io, struct io_uring_cmd *cmd,
> > + unsigned int issue_flags)
> > {
> > int ret;
> >
> > @@ -1223,29 +1242,27 @@ static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> > io->buf.auto_reg.index, issue_flags);
> > if (ret) {
> > if (io->buf.auto_reg.flags & UBLK_AUTO_BUF_REG_FALLBACK) {
> > - ublk_auto_buf_reg_fallback(ubq, io);
> > - return true;
> > + ublk_auto_buf_reg_fallback(ubq, req->tag);
> > + return AUTO_BUF_REG_FALLBACK;
> > }
> > blk_mq_end_request(req, BLK_STS_IOERR);
> > - return false;
> > + return AUTO_BUF_REG_FAIL;
> > }
> >
> > - io->task_registered_buffers = 1;
> > - io->buf_ctx_handle = io_uring_cmd_ctx_handle(cmd);
> > - io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG;
> > - return true;
> > + return AUTO_BUF_REG_OK;
> > }
> >
> > -static bool ublk_prep_auto_buf_reg(struct ublk_queue *ubq,
> > - struct request *req, struct ublk_io *io,
> > - struct io_uring_cmd *cmd,
> > - unsigned int issue_flags)
> > +static void ublk_do_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> > + struct ublk_io *io, struct io_uring_cmd *cmd,
> > + unsigned int issue_flags)
> > {
> > - ublk_init_req_ref(ubq, io);
> > - if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req))
> > - return ublk_auto_buf_reg(ubq, req, io, cmd, issue_flags);
> > + enum auto_buf_reg_res res = __ublk_do_auto_buf_reg(ubq, req, io, cmd,
> > + issue_flags);
> >
> > - return true;
> > + if (res != AUTO_BUF_REG_FAIL) {
> > + ublk_prep_auto_buf_reg_io(ubq, req, io, cmd, res == AUTO_BUF_REG_OK);
> > + io_uring_cmd_done(cmd, UBLK_IO_RES_OK, 0, issue_flags);
> > + }
> > }
> >
> > static bool ublk_start_io(const struct ublk_queue *ubq, struct request *req,
> > @@ -1318,8 +1335,14 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
> > if (!ublk_start_io(ubq, req, io))
> > return;
> >
> > - if (ublk_prep_auto_buf_reg(ubq, req, io, io->cmd, issue_flags))
> > + if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req)) {
> > + struct io_uring_cmd *cmd = io->cmd;
>
> Don't really see the need for this intermediate variable
Yes, will remove it, but the bigger point is that io->cmd no longer exists for
BATCH_IO.
Thanks,
Ming
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 04/23] ublk: add helper of __ublk_fetch()
2025-09-03 4:42 ` Caleb Sander Mateos
@ 2025-09-10 2:30 ` Ming Lei
0 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-10 2:30 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, Uday Shankar
On Tue, Sep 02, 2025 at 09:42:37PM -0700, Caleb Sander Mateos wrote:
> On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > Add helper __ublk_fetch() for the coming batch io feature.
> >
> > Meantime move ublk_config_io_buf() out of __ublk_fetch() because batch
> > io has new interface for configuring buffer.
> >
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> > drivers/block/ublk_drv.c | 31 ++++++++++++++++++++-----------
> > 1 file changed, 20 insertions(+), 11 deletions(-)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index e53f623b0efe..f265795a8d57 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -2206,18 +2206,12 @@ static int ublk_check_fetch_buf(const struct ublk_queue *ubq, __u64 buf_addr)
> > return 0;
> > }
> >
> > -static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> > - struct ublk_io *io, __u64 buf_addr)
> > +static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> > + struct ublk_io *io)
> > {
> > struct ublk_device *ub = ubq->dev;
> > int ret = 0;
> >
> > - /*
> > - * When handling FETCH command for setting up ublk uring queue,
> > - * ub->mutex is the innermost lock, and we won't block for handling
> > - * FETCH, so it is fine even for IO_URING_F_NONBLOCK.
> > - */
> > - mutex_lock(&ub->mutex);
> > /* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
> > if (ublk_queue_ready(ubq)) {
> > ret = -EBUSY;
> > @@ -2233,13 +2227,28 @@ static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> > WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV);
> >
> > ublk_fill_io_cmd(io, cmd);
> > - ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
> > - if (ret)
> > - goto out;
> >
> > WRITE_ONCE(io->task, get_task_struct(current));
> > ublk_mark_io_ready(ub, ubq);
> > out:
> > + return ret;
>
> If the out: section no longer releases any resources, can we replace
> the "goto out" with just "return ret"?
OK.
>
> > +}
> > +
> > +static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> > + struct ublk_io *io, __u64 buf_addr)
> > +{
> > + struct ublk_device *ub = ubq->dev;
> > + int ret;
> > +
> > + /*
> > + * When handling FETCH command for setting up ublk uring queue,
> > + * ub->mutex is the innermost lock, and we won't block for handling
> > + * FETCH, so it is fine even for IO_URING_F_NONBLOCK.
> > + */
> > + mutex_lock(&ub->mutex);
> > + ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
> > + if (!ret)
> > + ret = __ublk_fetch(cmd, ubq, io);
>
> How come the order of operations was switched here? ublk_fetch()
> previously checked ublk_queue_ready(ubq) and io->flags &
> UBLK_IO_FLAG_ACTIVE first, which seems necessary to prevent
> overwriting a ublk_io that has already been fetched.
Good point, and that is actually what ublk_batch_prep_io() does: commit the
buffer descriptor into the io slot only after __ublk_fetch() runs successfully.
I will fix the order.
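Roughly (just a sketch of the intended ordering; the error path where
ublk_config_io_buf() fails after the io is marked ready still needs care):

	mutex_lock(&ub->mutex);
	ret = __ublk_fetch(cmd, ubq, io);
	if (!ret)
		ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
	mutex_unlock(&ub->mutex);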
Thanks,
Ming
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 06/23] ublk: prepare for not tracking task context for command batch
2025-09-06 18:48 ` Caleb Sander Mateos
@ 2025-09-10 2:35 ` Ming Lei
0 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-10 2:35 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, Uday Shankar
On Sat, Sep 06, 2025 at 11:48:08AM -0700, Caleb Sander Mateos wrote:
> On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > batch io is designed to be independent of task context, and we will not
> > track task context for batch io feature.
> >
> > So warn on non-batch-io code paths.
> >
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> > drivers/block/ublk_drv.c | 16 +++++++++++++++-
> > 1 file changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index a0dfad8a56f0..46be5b656f22 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -261,6 +261,11 @@ static inline bool ublk_dev_support_batch_io(const struct ublk_device *ub)
> > return false;
> > }
> >
> > +static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
> > +{
> > + return false;
> > +}
> > +
> > static inline struct ublksrv_io_desc *
> > ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
> > {
> > @@ -1309,6 +1314,8 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
> > __func__, ubq->q_id, req->tag, io->flags,
> > ublk_get_iod(ubq, req->tag)->addr);
> >
> > + WARN_ON_ONCE(ublk_support_batch_io(ubq));
>
> Hmm, not a huge fan of extra checks in the I/O path. It seems fairly
> easy to verify from the code that these functions won't be called for
> batch commands. Do we really need the assertion?
It is just a safety guard and can be removed, but the check is cheap since
ubq->flags is already cache-hot here.
>
> > +
> > /*
> > * Task is exiting if either:
> > *
> > @@ -1868,6 +1875,8 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
> > if (WARN_ON_ONCE(pdu->tag >= ubq->q_depth))
> > return;
> >
> > + WARN_ON_ONCE(ublk_support_batch_io(ubq));
> > +
> > task = io_uring_cmd_get_task(cmd);
> > io = &ubq->ios[pdu->tag];
> > if (WARN_ON_ONCE(task && task != io->task))
> > @@ -2233,7 +2242,10 @@ static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
> >
> > ublk_fill_io_cmd(io, cmd);
> >
> > - WRITE_ONCE(io->task, get_task_struct(current));
> > + if (ublk_support_batch_io(ubq))
> > + WRITE_ONCE(io->task, NULL);
>
> Don't see a need to explicitly write NULL here since the ublk_io
> memory is zero-initialized.
You are right, but ublk_fetch() is in the slow path, so the explicit write costs
nothing.
Thanks,
Ming
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 07/23] ublk: add new batch command UBLK_U_IO_PREP_IO_CMDS & UBLK_U_IO_COMMIT_IO_CMDS
2025-09-06 18:50 ` Caleb Sander Mateos
@ 2025-09-10 3:05 ` Ming Lei
0 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-09-10 3:05 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, Uday Shankar
On Sat, Sep 06, 2025 at 11:50:55AM -0700, Caleb Sander Mateos wrote:
> On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > Add new command UBLK_U_IO_PREP_IO_CMDS, which is the batch version of
> > UBLK_IO_FETCH_REQ.
> >
> > Add new command UBLK_U_IO_COMMIT_IO_CMDS, which is for committing io command
> > result only, still the batch version.
> >
> > The new command header type is `struct ublk_batch_io`, and fixed buffer is
> > required for these two uring_cmd.
>
> The commit message could be clearer that it doesn't actually implement
> these commands yet, just validates the SQE fields.
OK.
>
> >
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> > drivers/block/ublk_drv.c | 102 +++++++++++++++++++++++++++++++++-
> > include/uapi/linux/ublk_cmd.h | 49 ++++++++++++++++
> > 2 files changed, 149 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index 46be5b656f22..4da0dbbd7e16 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -85,6 +85,11 @@
> > UBLK_PARAM_TYPE_DEVT | UBLK_PARAM_TYPE_ZONED | \
> > UBLK_PARAM_TYPE_DMA_ALIGN | UBLK_PARAM_TYPE_SEGMENT)
> >
> > +#define UBLK_BATCH_F_ALL \
> > + (UBLK_BATCH_F_HAS_ZONE_LBA | \
> > + UBLK_BATCH_F_HAS_BUF_ADDR | \
> > + UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK)
> > +
> > struct ublk_uring_cmd_pdu {
> > /*
> > * Store requests in same batch temporarily for queuing them to
> > @@ -108,6 +113,11 @@ struct ublk_uring_cmd_pdu {
> > u16 tag;
> > };
> >
> > +struct ublk_batch_io_data {
> > + struct ublk_queue *ubq;
> > + struct io_uring_cmd *cmd;
> > +};
> > +
> > /*
> > * io command is active: sqe cmd is received, and its cqe isn't done
> > *
> > @@ -277,7 +287,7 @@ static inline bool ublk_dev_is_zoned(const struct ublk_device *ub)
> > return ub->dev_info.flags & UBLK_F_ZONED;
> > }
> >
> > -static inline bool ublk_queue_is_zoned(struct ublk_queue *ubq)
> > +static inline bool ublk_queue_is_zoned(const struct ublk_queue *ubq)
>
> This change could go in a separate commit.
Fair enough.
>
> > {
> > return ubq->flags & UBLK_F_ZONED;
> > }
> > @@ -2528,10 +2538,98 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
> > return ublk_ch_uring_cmd_local(cmd, issue_flags);
> > }
> >
> > +static int ublk_check_batch_cmd_flags(const struct ublk_batch_io *uc)
> > +{
> > + const unsigned short mask = UBLK_BATCH_F_HAS_BUF_ADDR |
> > + UBLK_BATCH_F_HAS_ZONE_LBA;
>
> Can we use a fixed-size integer type, i.e. u16?
Good point, especially since the flags here are defined in the uapi header.
>
> > +
> > + if (uc->flags & ~UBLK_BATCH_F_ALL)
> > + return -EINVAL;
> > +
> > + /* UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK requires buffer index */
> > + if ((uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) &&
> > + (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR))
> > + return -EINVAL;
> > +
> > + switch (uc->flags & mask) {
> > + case 0:
> > + if (uc->elem_bytes != 8)
>
> sizeof(struct ublk_elem_header)?
Yes.
>
> > + return -EINVAL;
> > + break;
> > + case UBLK_BATCH_F_HAS_ZONE_LBA:
> > + case UBLK_BATCH_F_HAS_BUF_ADDR:
> > + if (uc->elem_bytes != 8 + 8)
>
> sizeof(u64)?
OK
>
> > + return -EINVAL;
> > + break;
> > + case UBLK_BATCH_F_HAS_ZONE_LBA | UBLK_BATCH_F_HAS_BUF_ADDR:
> > + if (uc->elem_bytes != 8 + 8 + 8)
> > + return -EINVAL;
>
> So elem_bytes is redundant with flags? Do we really need a separate a
> separate field then?
It is used for cross-verification purposes, especially since the command
header has enough space.
>
> > + break;
> > + default:
> > + return -EINVAL;
>
> default case is unreachable?
Right.
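Putting those points together, a sketch of the check with the suggested
sizeof()s substituted (an illustration of the review feedback, not the final
patch; the default case is dropped since the two-bit mask makes it unreachable):
```
switch (uc->flags & mask) {
case 0:
	if (uc->elem_bytes != sizeof(struct ublk_elem_header))
		return -EINVAL;
	break;
case UBLK_BATCH_F_HAS_ZONE_LBA:
case UBLK_BATCH_F_HAS_BUF_ADDR:
	if (uc->elem_bytes != sizeof(struct ublk_elem_header) + sizeof(__u64))
		return -EINVAL;
	break;
case UBLK_BATCH_F_HAS_ZONE_LBA | UBLK_BATCH_F_HAS_BUF_ADDR:
	if (uc->elem_bytes != sizeof(struct ublk_elem_header) + 2 * sizeof(__u64))
		return -EINVAL;
	break;
}
```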
>
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int ublk_check_batch_cmd(const struct ublk_batch_io_data *data,
> > + const struct ublk_batch_io *uc)
> > +{
> > + if (!(data->cmd->flags & IORING_URING_CMD_FIXED))
> > + return -EINVAL;
> > +
> > + if (uc->nr_elem * uc->elem_bytes > data->cmd->sqe->len)
>
> Cast nr_elem and/or elem_bytes to u32 to avoid overflow concerns?
`u16` * `u8` can't overflow: both operands are promoted to `int`, and 65535 * 255 = 16711425 fits easily.
>
> Should also use READ_ONCE() to read the userspace-mapped sqe->len.
Yes.
When I wrote the patch, the SQE was always copied, but that isn't true any more.
>
> > + return -E2BIG;
> > +
> > + if (uc->nr_elem > data->ubq->q_depth)
> > + return -E2BIG;
> > +
> > + if ((uc->flags & UBLK_BATCH_F_HAS_ZONE_LBA) &&
> > + !ublk_queue_is_zoned(data->ubq))
> > + return -EINVAL;
> > +
> > + if ((uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR) &&
> > + !ublk_need_map_io(data->ubq))
> > + return -EINVAL;
> > +
> > + if ((uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) &&
> > + !ublk_support_auto_buf_reg(data->ubq))
> > + return -EINVAL;
> > +
> > + if (uc->reserved || uc->reserved2)
> > + return -EINVAL;
> > +
> > + return ublk_check_batch_cmd_flags(uc);
> > +}
> > +
> > static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
> > unsigned int issue_flags)
> > {
> > - return -EOPNOTSUPP;
> > + const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
> > + struct ublk_device *ub = cmd->file->private_data;
> > + struct ublk_batch_io_data data = {
> > + .cmd = cmd,
> > + };
> > + u32 cmd_op = cmd->cmd_op;
> > + int ret = -EINVAL;
> > +
> > + if (uc->q_id >= ub->dev_info.nr_hw_queues)
> > + goto out;
> > + data.ubq = ublk_get_queue(ub, uc->q_id);
>
> Should be using READ_ONCE() to read from userspace-mapped memory.
Indeed, it can be copied into `ublk_batch_io_data`; it is just 64 bits.
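A minimal sketch of that idea (illustration only, not the posted patch):
snapshot the userspace-mapped field once with READ_ONCE() before it is used
for validation and lookup:
```
const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
u16 q_id = READ_ONCE(uc->q_id);	/* read the shared SQE field exactly once */

if (q_id >= ub->dev_info.nr_hw_queues)
	goto out;
data.ubq = ublk_get_queue(ub, q_id);
```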
>
>
>
> > +
> > + switch (cmd_op) {
> > + case UBLK_U_IO_PREP_IO_CMDS:
> > + case UBLK_U_IO_COMMIT_IO_CMDS:
> > + ret = ublk_check_batch_cmd(&data, uc);
> > + if (ret)
> > + goto out;
> > + ret = -EOPNOTSUPP;
> > + break;
> > + default:
> > + ret = -EOPNOTSUPP;
> > + }
> > +out:
> > + return ret;
> > }
> >
> > static inline bool ublk_check_ubuf_dir(const struct request *req,
> > diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
> > index ec77dabba45b..01d3af52cfb4 100644
> > --- a/include/uapi/linux/ublk_cmd.h
> > +++ b/include/uapi/linux/ublk_cmd.h
> > @@ -102,6 +102,10 @@
> > _IOWR('u', 0x23, struct ublksrv_io_cmd)
> > #define UBLK_U_IO_UNREGISTER_IO_BUF \
> > _IOWR('u', 0x24, struct ublksrv_io_cmd)
> > +#define UBLK_U_IO_PREP_IO_CMDS \
> > + _IOWR('u', 0x25, struct ublk_batch_io)
> > +#define UBLK_U_IO_COMMIT_IO_CMDS \
> > + _IOWR('u', 0x26, struct ublk_batch_io)
> >
> > /* only ABORT means that no re-fetch */
> > #define UBLK_IO_RES_OK 0
> > @@ -525,6 +529,51 @@ struct ublksrv_io_cmd {
> > };
> > };
> >
> > +struct ublk_elem_header {
> > + __u16 tag; /* IO tag */
> > +
> > + /*
> > + * Buffer index for incoming io command, only valid iff
> > + * UBLK_F_AUTO_BUF_REG is set
> > + */
> > + __u16 buf_index;
> > + __u32 result; /* I/O completion result (commit only) */
>
> The result is unsigned? So there's no way to specify a request failure?
oops, definitely it should be __s32.
>
> > +};
> > +
> > +/*
> > + * uring_cmd buffer structure
>
> Add "for batch commands"?
Sure.
>
> > + *
> > + * buffer includes multiple elements, which number is specified by
> > + * `nr_elem`. Each element buffer is organized in the following order:
> > + *
> > + * struct ublk_elem_buffer {
> > + * // Mandatory fields (8 bytes)
> > + * struct ublk_elem_header header;
> > + *
> > + * // Optional fields (8 bytes each, included based on flags)
> > + *
> > + * // Buffer address (if UBLK_BATCH_F_HAS_BUF_ADDR) for copying data
> > + * // between ublk request and ublk server buffer
> > + * __u64 buf_addr;
> > + *
> > + * // returned Zone append LBA (if UBLK_BATCH_F_HAS_ZONE_LBA)
> > + * __u64 zone_lba;
> > + * }
> > + *
> > + * Used for `UBLK_U_IO_PREP_IO_CMDS` and `UBLK_U_IO_COMMIT_IO_CMDS`
> > + */
> > +struct ublk_batch_io {
> > + __u16 q_id;
>
> So this doesn't allow batching completions across ublk queues? That
Yes.
I tried a device-wide batch; it introduces some complexity into
ublk_handle_batch_commit_cmd(), and in particular it becomes hard to apply
some future optimizations, such as batched request completion.
It isn't hard to add one variant covering cross-queue commit:
`qid` just needs to be added to 'ublk_elem_buffer', which then is no longer
naturally aligned.
> seems like it significantly limits the usefulness of this feature. A
> ublk server thread may be handling ublk requests from a number of
> client threads which are submitting to different ublk queues.
But the ublk server can still support cross-queue completion.
The selftest code does support arbitrary queue/thread combinations;
it only needs a one-shot per-queue commit buffer.
It is a trade-off between the driver and the ublk server implementation.
Thanks,
Ming
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS
2025-09-06 19:48 ` Caleb Sander Mateos
@ 2025-09-10 3:56 ` Ming Lei
2025-09-18 18:12 ` Caleb Sander Mateos
0 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-09-10 3:56 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, Uday Shankar
On Sat, Sep 06, 2025 at 12:48:41PM -0700, Caleb Sander Mateos wrote:
> On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > This commit implements the handling of the UBLK_U_IO_PREP_IO_CMDS command,
> > which allows userspace to prepare a batch of I/O requests.
> >
> > The core of this change is the `ublk_walk_cmd_buf` function, which iterates
> > over the elements in the uring_cmd fixed buffer. For each element, it parses
> > the I/O details, finds the corresponding `ublk_io` structure, and prepares it
> > for future dispatch.
> >
> > Add per-io lock for protecting concurrent delivery and committing.
> >
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> > drivers/block/ublk_drv.c | 191 +++++++++++++++++++++++++++++++++-
> > include/uapi/linux/ublk_cmd.h | 5 +
> > 2 files changed, 195 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index 4da0dbbd7e16..a4bae3d1562a 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -116,6 +116,10 @@ struct ublk_uring_cmd_pdu {
> > struct ublk_batch_io_data {
> > struct ublk_queue *ubq;
> > struct io_uring_cmd *cmd;
> > + unsigned int issue_flags;
> > +
> > + /* set when walking the element buffer */
> > + const struct ublk_elem_header *elem;
> > };
> >
> > /*
> > @@ -200,6 +204,7 @@ struct ublk_io {
> > unsigned task_registered_buffers;
> >
> > void *buf_ctx_handle;
> > + spinlock_t lock;
>
> From our experience writing a high-throughput ublk server, the
> spinlocks and mutexes in the kernel are some of the largest CPU
> hotspots. We have spent a lot of effort working to avoid locking where
> possible or shard data structures to reduce contention on the locks.
> Even uncontended locks are still very expensive to acquire and release
> on machines with many CPUs due to the cache coherency overhead. ublk's
> per-io daemon architecture is great for performance by removing the
io_uring depends heavily on batched submission and completion, but per-io
daemons can easily break that batching, because they don't guarantee that one
batch of I/Os is forwarded within a single io task/io_uring when a static tag
mapping policy is used, for example:
```
[root@ktest-40 ublk]# ./kublk add -t null --nthreads 8 -q 4 --per_io_tasks
dev id 0: nr_hw_queues 4 queue_depth 128 block size 512 dev_capacity 524288000
max rq size 1048576 daemon pid 89975 flags 0x6042 state LIVE
queue 0: affinity(0 )
queue 1: affinity(4 )
queue 2: affinity(8 )
queue 3: affinity(12 )
[root@ktest-40 ublk]#
[root@ktest-40 ublk]# ./kublk add -t null -q 4
dev id 1: nr_hw_queues 4 queue_depth 128 block size 512 dev_capacity 524288000
max rq size 1048576 daemon pid 90002 flags 0x6042 state LIVE
queue 0: affinity(0 )
queue 1: affinity(4 )
queue 2: affinity(8 )
queue 3: affinity(12 )
[root@ktest-40 ublk]#
[root@ktest-40 ublk]# ~/git/fio/t/io_uring -p0 /dev/ublkb0
submitter=0, tid=90024, file=/dev/ublkb0, nfiles=1, node=-1
polled=0, fixedbufs=1, register_files=1, buffered=0, QD=128
Engine=io_uring, sq_ring=128, cq_ring=128
IOPS=188.54K, BW=736MiB/s, IOS/call=32/31
IOPS=187.90K, BW=734MiB/s, IOS/call=32/32
IOPS=195.39K, BW=763MiB/s, IOS/call=32/32
^CExiting on signal
Maximum IOPS=195.39K
[root@ktest-40 ublk]# ~/git/fio/t/io_uring -p0 /dev/ublkb1
submitter=0, tid=90026, file=/dev/ublkb1, nfiles=1, node=-1
polled=0, fixedbufs=1, register_files=1, buffered=0, QD=128
Engine=io_uring, sq_ring=128, cq_ring=128
IOPS=608.26K, BW=2.38GiB/s, IOS/call=32/31
IOPS=586.59K, BW=2.29GiB/s, IOS/call=32/31
IOPS=599.62K, BW=2.34GiB/s, IOS/call=32/32
^CExiting on signal
Maximum IOPS=608.26K
```
> need for locks in the I/O path. I can't really see us adopting this
> ublk batching feature; adding a spin_lock() + spin_unlock() to every
> ublk commit operation is not worth the reduction in io_uring SQEs and
> uring_cmds.
As I mentioned in the cover letter, the per-io lock can be avoided for
UBLK_F_PER_IO_DAEMON as a follow-up, since io->task is still there to help
track the task context.
I just want to avoid adding too many features in the enablement stage; that is
also why the spin lock is wrapped in a helper.
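Roughly something like this (a sketch only; the extra 'ubq' parameter and the
ublk_need_io_lock() helper are assumptions for illustration, not code from
this series):
```
static inline void ublk_io_lock(const struct ublk_queue *ubq, struct ublk_io *io)
{
	/* per-io daemon mode already serializes access via io->task */
	if (ublk_need_io_lock(ubq))
		spin_lock(&io->lock);
}

static inline void ublk_io_unlock(const struct ublk_queue *ubq, struct ublk_io *io)
{
	if (ublk_need_io_lock(ubq))
		spin_unlock(&io->lock);
}
```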
>
> > } ____cacheline_aligned_in_smp;
> >
> > struct ublk_queue {
> > @@ -276,6 +281,16 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
> > return false;
> > }
> >
> > +static inline void ublk_io_lock(struct ublk_io *io)
> > +{
> > + spin_lock(&io->lock);
> > +}
> > +
> > +static inline void ublk_io_unlock(struct ublk_io *io)
> > +{
> > + spin_unlock(&io->lock);
> > +}
> > +
> > static inline struct ublksrv_io_desc *
> > ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
> > {
> > @@ -2538,6 +2553,171 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
> > return ublk_ch_uring_cmd_local(cmd, issue_flags);
> > }
> >
> > +static inline __u64 ublk_batch_buf_addr(const struct ublk_batch_io *uc,
> > + const struct ublk_elem_header *elem)
> > +{
> > + const void *buf = (const void *)elem;
>
> Don't need an explicit cast in order to cast to void *.
OK.
>
>
> > +
> > + if (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR)
> > + return *(__u64 *)(buf + sizeof(*elem));
> > + return -1;
>
> Why -1 and not 0? ublk_check_fetch_buf() is expecting a 0 buf_addr to
> indicate the lack
Good catch, it needs to return 0.
>
> > +}
> > +
> > +static struct ublk_auto_buf_reg
> > +ublk_batch_auto_buf_reg(const struct ublk_batch_io *uc,
> > + const struct ublk_elem_header *elem)
> > +{
> > + struct ublk_auto_buf_reg reg = {
> > + .index = elem->buf_index,
> > + .flags = (uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) ?
> > + UBLK_AUTO_BUF_REG_FALLBACK : 0,
> > + };
> > +
> > + return reg;
> > +}
> > +
> > +/* 48 can cover any type of buffer element(8, 16 and 24 bytes) */
>
> "can cover" is a bit vague. Can you be explicit that the buffer size
> needs to be a multiple of any possible buffer element size?
I should have documented that 48 is the least common multiple (LCM) of 8, 16
and 24 (2^3, 2^4 and 2^3 * 3, so LCM = 2^4 * 3 = 48).
>
> > +#define UBLK_CMD_BATCH_TMP_BUF_SZ (48 * 10)
> > +struct ublk_batch_io_iter {
> > + /* copy to this buffer from iterator first */
> > + unsigned char buf[UBLK_CMD_BATCH_TMP_BUF_SZ];
> > + struct iov_iter iter;
> > + unsigned done, total;
> > + unsigned char elem_bytes;
> > +};
> > +
> > +static int __ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> > + struct ublk_batch_io_data *data,
> > + unsigned bytes,
> > + int (*cb)(struct ublk_io *io,
> > + const struct ublk_batch_io_data *data))
> > +{
> > + int i, ret = 0;
> > +
> > + for (i = 0; i < bytes; i += iter->elem_bytes) {
> > + const struct ublk_elem_header *elem =
> > + (const struct ublk_elem_header *)&iter->buf[i];
> > + struct ublk_io *io;
> > +
> > + if (unlikely(elem->tag >= data->ubq->q_depth)) {
> > + ret = -EINVAL;
> > + break;
> > + }
> > +
> > + io = &data->ubq->ios[elem->tag];
> > + data->elem = elem;
> > + ret = cb(io, data);
>
> Why not just pas elem as a separate argument to the callback?
One reason is that we don't have a complete type for 'elem', since its size
is variable.
>
> > + if (unlikely(ret))
> > + break;
> > + }
> > + iter->done += i;
> > + return ret;
> > +}
> > +
> > +static int ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> > + struct ublk_batch_io_data *data,
> > + int (*cb)(struct ublk_io *io,
> > + const struct ublk_batch_io_data *data))
> > +{
> > + int ret = 0;
> > +
> > + while (iter->done < iter->total) {
> > + unsigned int len = min(sizeof(iter->buf), iter->total - iter->done);
> > +
> > + ret = copy_from_iter(iter->buf, len, &iter->iter);
> > + if (ret != len) {
>
> How would this be possible? The iterator comes from an io_uring
> registered buffer with at least the requested length, so the user
> addresses should have been validated when the buffer was registered.
> Should this just be a WARN_ON()?
Yes, that is why pr_warn() is used; I remember that WARN_ON() isn't
encouraged in paths reachable from userspace.
>
> > + pr_warn("ublk%d: read batch cmd buffer failed %u/%u\n",
> > + data->ubq->dev->dev_info.dev_id, ret, len);
> > + ret = -EINVAL;
> > + break;
> > + }
> > +
> > + ret = __ublk_walk_cmd_buf(iter, data, len, cb);
> > + if (ret)
> > + break;
> > + }
> > + return ret;
> > +}
> > +
> > +static int ublk_batch_unprep_io(struct ublk_io *io,
> > + const struct ublk_batch_io_data *data)
> > +{
> > + if (ublk_queue_ready(data->ubq))
> > + data->ubq->dev->nr_queues_ready--;
> > +
> > + ublk_io_lock(io);
> > + io->flags = 0;
> > + ublk_io_unlock(io);
> > + data->ubq->nr_io_ready--;
> > + return 0;
>
> This "unprep" looks very subtle and fairly complicated. Is it really
> necessary? What's wrong with leaving the I/Os that were successfully
> prepped? It also looks racy to clear io->flags after the queue is
> ready, as the io may already be in use by some I/O request.
ublk_batch_unprep_io() is only called on partial completion of UBLK_U_IO_PREP_IO_CMDS,
in which case START_DEV can't have succeeded, so there can't be any in-flight I/O.
>
> > +}
> > +
> > +static void ublk_batch_revert_prep_cmd(struct ublk_batch_io_iter *iter,
> > + struct ublk_batch_io_data *data)
> > +{
> > + int ret;
> > +
> > + if (!iter->done)
> > + return;
> > +
> > + iov_iter_revert(&iter->iter, iter->done);
>
> Shouldn't the iterator be reverted by the total number of bytes
> copied, which may be more than iter->done?
->done is exactly the total bytes handled.
>
> > + iter->total = iter->done;
> > + iter->done = 0;
> > +
> > + ret = ublk_walk_cmd_buf(iter, data, ublk_batch_unprep_io);
> > + WARN_ON_ONCE(ret);
> > +}
> > +
> > +static int ublk_batch_prep_io(struct ublk_io *io,
> > + const struct ublk_batch_io_data *data)
> > +{
> > + const struct ublk_batch_io *uc = io_uring_sqe_cmd(data->cmd->sqe);
> > + union ublk_io_buf buf = { 0 };
> > + int ret;
> > +
> > + if (ublk_support_auto_buf_reg(data->ubq))
> > + buf.auto_reg = ublk_batch_auto_buf_reg(uc, data->elem);
> > + else if (ublk_need_map_io(data->ubq)) {
> > + buf.addr = ublk_batch_buf_addr(uc, data->elem);
> > +
> > + ret = ublk_check_fetch_buf(data->ubq, buf.addr);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + ublk_io_lock(io);
> > + ret = __ublk_fetch(data->cmd, data->ubq, io);
> > + if (!ret)
> > + io->buf = buf;
> > + ublk_io_unlock(io);
> > +
> > + return ret;
> > +}
> > +
> > +static int ublk_handle_batch_prep_cmd(struct ublk_batch_io_data *data)
> > +{
> > + struct io_uring_cmd *cmd = data->cmd;
> > + const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
> > + struct ublk_batch_io_iter iter = {
> > + .total = uc->nr_elem * uc->elem_bytes,
> > + .elem_bytes = uc->elem_bytes,
> > + };
> > + int ret;
> > +
> > + ret = io_uring_cmd_import_fixed(cmd->sqe->addr, cmd->sqe->len,
>
> Could iter.total be used in place of cmd->sqe->len? That way userspace
> wouldn't have to specify a redundant value in the SQE len field.
This follows how the buffer is used in io_uring/rw.c, but it looks like the
field can be dropped. The benefit is cross-verification, because the io_uring
SQE user interface is complicated.
>
> > + WRITE, &iter.iter, cmd, data->issue_flags);
> > + if (ret)
> > + return ret;
> > +
> > + mutex_lock(&data->ubq->dev->mutex);
> > + ret = ublk_walk_cmd_buf(&iter, data, ublk_batch_prep_io);
> > +
> > + if (ret && iter.done)
> > + ublk_batch_revert_prep_cmd(&iter, data);
>
> The iter.done check is duplicated in ublk_batch_revert_prep_cmd().
OK, we can remove the check in ublk_batch_revert_prep_cmd().
>
> > + mutex_unlock(&data->ubq->dev->mutex);
> > + return ret;
> > +}
> > +
> > static int ublk_check_batch_cmd_flags(const struct ublk_batch_io *uc)
> > {
> > const unsigned short mask = UBLK_BATCH_F_HAS_BUF_ADDR |
> > @@ -2609,6 +2789,7 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
> > struct ublk_device *ub = cmd->file->private_data;
> > struct ublk_batch_io_data data = {
> > .cmd = cmd,
> > + .issue_flags = issue_flags,
> > };
> > u32 cmd_op = cmd->cmd_op;
> > int ret = -EINVAL;
> > @@ -2619,6 +2800,11 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
> >
> > switch (cmd_op) {
> > case UBLK_U_IO_PREP_IO_CMDS:
> > + ret = ublk_check_batch_cmd(&data, uc);
> > + if (ret)
> > + goto out;
> > + ret = ublk_handle_batch_prep_cmd(&data);
> > + break;
> > case UBLK_U_IO_COMMIT_IO_CMDS:
> > ret = ublk_check_batch_cmd(&data, uc);
> > if (ret)
> > @@ -2780,7 +2966,7 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
> > struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
> > gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO;
> > void *ptr;
> > - int size;
> > + int size, i;
> >
> > spin_lock_init(&ubq->cancel_lock);
> > ubq->flags = ub->dev_info.flags;
> > @@ -2792,6 +2978,9 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
> > if (!ptr)
> > return -ENOMEM;
> >
> > + for (i = 0; i < ubq->q_depth; i++)
> > + spin_lock_init(&ubq->ios[i].lock);
> > +
> > ubq->io_cmd_buf = ptr;
> > ubq->dev = ub;
> > return 0;
> > diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
> > index 01d3af52cfb4..38c8cc10d694 100644
> > --- a/include/uapi/linux/ublk_cmd.h
> > +++ b/include/uapi/linux/ublk_cmd.h
> > @@ -102,6 +102,11 @@
> > _IOWR('u', 0x23, struct ublksrv_io_cmd)
> > #define UBLK_U_IO_UNREGISTER_IO_BUF \
> > _IOWR('u', 0x24, struct ublksrv_io_cmd)
> > +
> > +/*
> > + * return 0 if the command is run successfully, otherwise failure code
> > + * is returned
> > + */
>
> Not sure this is really necessary to comment, that's pretty standard
> for syscalls and uring_cmds.
OK; I think it is for showing the difference from UBLK_U_IO_COMMIT_IO_CMDS,
which has to support partial commit, whereas UBLK_U_IO_PREP_IO_CMDS
needs to be all or nothing.
Thanks,
Ming
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 03/23] ublk: refactor auto buffer register in ublk_dispatch_req()
2025-09-10 2:23 ` Ming Lei
@ 2025-09-11 18:13 ` Caleb Sander Mateos
0 siblings, 0 replies; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-09-11 18:13 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Tue, Sep 9, 2025 at 7:23 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Tue, Sep 02, 2025 at 09:41:55PM -0700, Caleb Sander Mateos wrote:
> > On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
> > >
> > > Refactor auto buffer register code and prepare for supporting batch IO
> > > feature, and the main motivation is to put 'ublk_io' operation code
> > > together, so that per-io lock can be applied for the code block.
> > >
> > > The key changes are:
> > > - Rename ublk_auto_buf_reg() as ublk_do_auto_buf_reg()
> >
> > Thanks, the type and the function having the same name was a minor annoyance.
> >
> > > - Introduce an enum `auto_buf_reg_res` to represent the result of
> > > the buffer registration attempt (FAIL, FALLBACK, OK).
> > > - Split the existing `ublk_do_auto_buf_reg` function into two:
> > > - `__ublk_do_auto_buf_reg`: Performs the actual buffer registration
> > > and returns the `auto_buf_reg_res` status.
> > > - `ublk_do_auto_buf_reg`: A wrapper that calls the internal function
> > > and handles the I/O preparation based on the result.
> > > - Introduce `ublk_prep_auto_buf_reg_io` to encapsulate the logic for
> > > preparing the I/O for completion after buffer registration.
> > > - Pass the `tag` directly to `ublk_auto_buf_reg_fallback` to avoid
> > > recalculating it.
> > >
> > > This refactoring makes the control flow clearer and isolates the different
> > > stages of the auto buffer registration process.
> > >
> > > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > > ---
> > > drivers/block/ublk_drv.c | 65 +++++++++++++++++++++++++++-------------
> > > 1 file changed, 44 insertions(+), 21 deletions(-)
> > >
> > > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > > index 9185978abeb7..e53f623b0efe 100644
> > > --- a/drivers/block/ublk_drv.c
> > > +++ b/drivers/block/ublk_drv.c
> > > @@ -1205,17 +1205,36 @@ static inline void __ublk_abort_rq(struct ublk_queue *ubq,
> > > }
> > >
> > > static void
> > > -ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, struct ublk_io *io)
> > > +ublk_auto_buf_reg_fallback(const struct ublk_queue *ubq, unsigned tag)
> > > {
> > > - unsigned tag = io - ubq->ios;
> >
> > The reason to calculate the tag like this was to avoid the pointer
> > dereference in req->tag. But req->tag is already accessed just prior
> > in ublk_dispatch_req(), so it should be cached and not too expensive
> > to load again.
>
> Ok, one thing is that ublk_auto_buf_reg_fallback() should be called in slow
> path...
What you have seems fine. Just providing some background on why I
wrote it like this.
Best,
Caleb
>
> >
> > > struct ublksrv_io_desc *iod = ublk_get_iod(ubq, tag);
> > >
> > > iod->op_flags |= UBLK_IO_F_NEED_REG_BUF;
> > > }
> > >
> > > -static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> > > - struct ublk_io *io, struct io_uring_cmd *cmd,
> > > - unsigned int issue_flags)
> > > +enum auto_buf_reg_res {
> > > + AUTO_BUF_REG_FAIL,
> > > + AUTO_BUF_REG_FALLBACK,
> > > + AUTO_BUF_REG_OK,
> > > +};
> >
> > nit: move this enum definition next to the function that returns it?
>
> Yeah, good point.
>
> >
> > > +
> > > +static void ublk_prep_auto_buf_reg_io(const struct ublk_queue *ubq,
> > > + struct request *req, struct ublk_io *io,
> > > + struct io_uring_cmd *cmd, bool registered)
> >
> > How about passing enum auto_buf_reg_res instead of bool registered to
> > avoid the duplicated == AUTO_BUF_REG_OK in the callers?
>
> OK, either way is fine for me.
>
> >
> > > +{
> > > + if (registered) {
> > > + io->task_registered_buffers = 1;
> > > + io->buf_ctx_handle = io_uring_cmd_ctx_handle(cmd);
> > > + io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG;
> > > + }
> > > + ublk_init_req_ref(ubq, io);
> > > + __ublk_prep_compl_io_cmd(io, req);
> > > +}
> > > +
> > > +static enum auto_buf_reg_res
> > > +__ublk_do_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> > > + struct ublk_io *io, struct io_uring_cmd *cmd,
> > > + unsigned int issue_flags)
> > > {
> > > int ret;
> > >
> > > @@ -1223,29 +1242,27 @@ static bool ublk_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> > > io->buf.auto_reg.index, issue_flags);
> > > if (ret) {
> > > if (io->buf.auto_reg.flags & UBLK_AUTO_BUF_REG_FALLBACK) {
> > > - ublk_auto_buf_reg_fallback(ubq, io);
> > > - return true;
> > > + ublk_auto_buf_reg_fallback(ubq, req->tag);
> > > + return AUTO_BUF_REG_FALLBACK;
> > > }
> > > blk_mq_end_request(req, BLK_STS_IOERR);
> > > - return false;
> > > + return AUTO_BUF_REG_FAIL;
> > > }
> > >
> > > - io->task_registered_buffers = 1;
> > > - io->buf_ctx_handle = io_uring_cmd_ctx_handle(cmd);
> > > - io->flags |= UBLK_IO_FLAG_AUTO_BUF_REG;
> > > - return true;
> > > + return AUTO_BUF_REG_OK;
> > > }
> > >
> > > -static bool ublk_prep_auto_buf_reg(struct ublk_queue *ubq,
> > > - struct request *req, struct ublk_io *io,
> > > - struct io_uring_cmd *cmd,
> > > - unsigned int issue_flags)
> > > +static void ublk_do_auto_buf_reg(const struct ublk_queue *ubq, struct request *req,
> > > + struct ublk_io *io, struct io_uring_cmd *cmd,
> > > + unsigned int issue_flags)
> > > {
> > > - ublk_init_req_ref(ubq, io);
> > > - if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req))
> > > - return ublk_auto_buf_reg(ubq, req, io, cmd, issue_flags);
> > > + enum auto_buf_reg_res res = __ublk_do_auto_buf_reg(ubq, req, io, cmd,
> > > + issue_flags);
> > >
> > > - return true;
> > > + if (res != AUTO_BUF_REG_FAIL) {
> > > + ublk_prep_auto_buf_reg_io(ubq, req, io, cmd, res == AUTO_BUF_REG_OK);
> > > + io_uring_cmd_done(cmd, UBLK_IO_RES_OK, 0, issue_flags);
> > > + }
> > > }
> > >
> > > static bool ublk_start_io(const struct ublk_queue *ubq, struct request *req,
> > > @@ -1318,8 +1335,14 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
> > > if (!ublk_start_io(ubq, req, io))
> > > return;
> > >
> > > - if (ublk_prep_auto_buf_reg(ubq, req, io, io->cmd, issue_flags))
> > > + if (ublk_support_auto_buf_reg(ubq) && ublk_rq_has_data(req)) {
> > > + struct io_uring_cmd *cmd = io->cmd;
> >
> > Don't really see the need for this intermediate variable
>
> Yes, will remove it, but the big thing is that there isn't io->cmd for BATCH_IO
> any more.
>
>
> Thanks,
> Ming
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS
2025-09-10 3:56 ` Ming Lei
@ 2025-09-18 18:12 ` Caleb Sander Mateos
2025-10-16 10:08 ` Ming Lei
0 siblings, 1 reply; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 18:12 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Tue, Sep 9, 2025 at 8:56 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Sat, Sep 06, 2025 at 12:48:41PM -0700, Caleb Sander Mateos wrote:
> > On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
> > >
> > > This commit implements the handling of the UBLK_U_IO_PREP_IO_CMDS command,
> > > which allows userspace to prepare a batch of I/O requests.
> > >
> > > The core of this change is the `ublk_walk_cmd_buf` function, which iterates
> > > over the elements in the uring_cmd fixed buffer. For each element, it parses
> > > the I/O details, finds the corresponding `ublk_io` structure, and prepares it
> > > for future dispatch.
> > >
> > > Add per-io lock for protecting concurrent delivery and committing.
> > >
> > > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > > ---
> > > drivers/block/ublk_drv.c | 191 +++++++++++++++++++++++++++++++++-
> > > include/uapi/linux/ublk_cmd.h | 5 +
> > > 2 files changed, 195 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > > index 4da0dbbd7e16..a4bae3d1562a 100644
> > > --- a/drivers/block/ublk_drv.c
> > > +++ b/drivers/block/ublk_drv.c
> > > @@ -116,6 +116,10 @@ struct ublk_uring_cmd_pdu {
> > > struct ublk_batch_io_data {
> > > struct ublk_queue *ubq;
> > > struct io_uring_cmd *cmd;
> > > + unsigned int issue_flags;
> > > +
> > > + /* set when walking the element buffer */
> > > + const struct ublk_elem_header *elem;
> > > };
> > >
> > > /*
> > > @@ -200,6 +204,7 @@ struct ublk_io {
> > > unsigned task_registered_buffers;
> > >
> > > void *buf_ctx_handle;
> > > + spinlock_t lock;
> >
> > From our experience writing a high-throughput ublk server, the
> > spinlocks and mutexes in the kernel are some of the largest CPU
> > hotspots. We have spent a lot of effort working to avoid locking where
> > possible or shard data structures to reduce contention on the locks.
> > Even uncontended locks are still very expensive to acquire and release
> > on machines with many CPUs due to the cache coherency overhead. ublk's
> > per-io daemon architecture is great for performance by removing the
>
> io-uring highly depends on batch submission and completion, but per-io daemon
> may break the batch easily, because it doesn't guarantee that one batch IOs
> can be forwarded in single io task/io_uring when static tag mapping policy is
> taken, for example:
That's a good point. We've mainly focused on optimizing the ublk
server side, but it's true that distributing incoming ublk I/Os to
more ublk server threads adds overhead on the submitting side. One
idea we had but haven't experimented with much is for the ublk server
to perform the round-robin assignment of tags within each queue to
threads in larger chunks. For example, with a chunk size of 4, tags 0
to 3 would be assigned to thread 0, tags 4 to 7 would be assigned to
thread 1, etc. That would improve the batching of ublk I/Os when
dispatching them from the submitting CPU to the ublk server thread.
There's an inherent tradeoff where distributing tags to ublk server
threads in larger chunks makes the distribution less balanced for
small numbers of I/Os, but it will be balanced when averaged over
large numbers of I/Os.
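For illustration, a hypothetical mapping for that chunked assignment (the
helper name, chunk_size and nr_threads are made-up parameters, not kublk
code):
```
/* Chunked round-robin: tags 0..chunk_size-1 go to thread 0, the next
 * chunk to thread 1, and so on, wrapping around nr_threads.
 */
static unsigned int tag_to_thread(unsigned int tag, unsigned int chunk_size,
				  unsigned int nr_threads)
{
	return (tag / chunk_size) % nr_threads;
}
```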
>
> ```
> [root@ktest-40 ublk]# ./kublk add -t null --nthreads 8 -q 4 --per_io_tasks
> dev id 0: nr_hw_queues 4 queue_depth 128 block size 512 dev_capacity 524288000
> max rq size 1048576 daemon pid 89975 flags 0x6042 state LIVE
> queue 0: affinity(0 )
> queue 1: affinity(4 )
> queue 2: affinity(8 )
> queue 3: affinity(12 )
> [root@ktest-40 ublk]#
> [root@ktest-40 ublk]# ./kublk add -t null -q 4
> dev id 1: nr_hw_queues 4 queue_depth 128 block size 512 dev_capacity 524288000
> max rq size 1048576 daemon pid 90002 flags 0x6042 state LIVE
> queue 0: affinity(0 )
> queue 1: affinity(4 )
> queue 2: affinity(8 )
> queue 3: affinity(12 )
> [root@ktest-40 ublk]#
> [root@ktest-40 ublk]# ~/git/fio/t/io_uring -p0 /dev/ublkb0
> submitter=0, tid=90024, file=/dev/ublkb0, nfiles=1, node=-1
> polled=0, fixedbufs=1, register_files=1, buffered=0, QD=128
> Engine=io_uring, sq_ring=128, cq_ring=128
> IOPS=188.54K, BW=736MiB/s, IOS/call=32/31
> IOPS=187.90K, BW=734MiB/s, IOS/call=32/32
> IOPS=195.39K, BW=763MiB/s, IOS/call=32/32
> ^CExiting on signal
> Maximum IOPS=195.39K
>
> [root@ktest-40 ublk]# ~/git/fio/t/io_uring -p0 /dev/ublkb1
> submitter=0, tid=90026, file=/dev/ublkb1, nfiles=1, node=-1
> polled=0, fixedbufs=1, register_files=1, buffered=0, QD=128
> Engine=io_uring, sq_ring=128, cq_ring=128
> IOPS=608.26K, BW=2.38GiB/s, IOS/call=32/31
> IOPS=586.59K, BW=2.29GiB/s, IOS/call=32/31
> IOPS=599.62K, BW=2.34GiB/s, IOS/call=32/32
> ^CExiting on signal
> Maximum IOPS=608.26K
>
> ```
>
>
> > need for locks in the I/O path. I can't really see us adopting this
> > ublk batching feature; adding a spin_lock() + spin_unlock() to every
> > ublk commit operation is not worth the reduction in io_uring SQEs and
> > uring_cmds.
>
> As I mentioned in cover letter, the per-io lock can be avoided for UBLK_F_PER_IO_DAEMON
> as one follow-up, since io->task is still there for helping to track task context.
>
> Just want to avoid too much features in enablement stage, that is also
> why the spin lock is wrapped in helper.
Okay, good to know there's at least an idea for how to avoid the
spinlock. Makes sense to defer it to follow-on work.
>
> >
> > > } ____cacheline_aligned_in_smp;
> > >
> > > struct ublk_queue {
> > > @@ -276,6 +281,16 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
> > > return false;
> > > }
> > >
> > > +static inline void ublk_io_lock(struct ublk_io *io)
> > > +{
> > > + spin_lock(&io->lock);
> > > +}
> > > +
> > > +static inline void ublk_io_unlock(struct ublk_io *io)
> > > +{
> > > + spin_unlock(&io->lock);
> > > +}
> > > +
> > > static inline struct ublksrv_io_desc *
> > > ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
> > > {
> > > @@ -2538,6 +2553,171 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
> > > return ublk_ch_uring_cmd_local(cmd, issue_flags);
> > > }
> > >
> > > +static inline __u64 ublk_batch_buf_addr(const struct ublk_batch_io *uc,
> > > + const struct ublk_elem_header *elem)
> > > +{
> > > + const void *buf = (const void *)elem;
> >
> > Don't need an explicit cast in order to cast to void *.
>
> OK.
>
> >
> >
> > > +
> > > + if (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR)
> > > + return *(__u64 *)(buf + sizeof(*elem));
> > > + return -1;
> >
> > Why -1 and not 0? ublk_check_fetch_buf() is expecting a 0 buf_addr to
> > indicate the lack
>
> Good catch, it needs to return 0.
>
> >
> > > +}
> > > +
> > > +static struct ublk_auto_buf_reg
> > > +ublk_batch_auto_buf_reg(const struct ublk_batch_io *uc,
> > > + const struct ublk_elem_header *elem)
> > > +{
> > > + struct ublk_auto_buf_reg reg = {
> > > + .index = elem->buf_index,
> > > + .flags = (uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) ?
> > > + UBLK_AUTO_BUF_REG_FALLBACK : 0,
> > > + };
> > > +
> > > + return reg;
> > > +}
> > > +
> > > +/* 48 can cover any type of buffer element(8, 16 and 24 bytes) */
> >
> > "can cover" is a bit vague. Can you be explicit that the buffer size
> > needs to be a multiple of any possible buffer element size?
>
> I should have documented that 48 is least common multiple(LCM) of (8, 16 and
> 24)
>
> >
> > > +#define UBLK_CMD_BATCH_TMP_BUF_SZ (48 * 10)
> > > +struct ublk_batch_io_iter {
> > > + /* copy to this buffer from iterator first */
> > > + unsigned char buf[UBLK_CMD_BATCH_TMP_BUF_SZ];
> > > + struct iov_iter iter;
> > > + unsigned done, total;
> > > + unsigned char elem_bytes;
> > > +};
> > > +
> > > +static int __ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> > > + struct ublk_batch_io_data *data,
> > > + unsigned bytes,
> > > + int (*cb)(struct ublk_io *io,
> > > + const struct ublk_batch_io_data *data))
> > > +{
> > > + int i, ret = 0;
> > > +
> > > + for (i = 0; i < bytes; i += iter->elem_bytes) {
> > > + const struct ublk_elem_header *elem =
> > > + (const struct ublk_elem_header *)&iter->buf[i];
> > > + struct ublk_io *io;
> > > +
> > > + if (unlikely(elem->tag >= data->ubq->q_depth)) {
> > > + ret = -EINVAL;
> > > + break;
> > > + }
> > > +
> > > + io = &data->ubq->ios[elem->tag];
> > > + data->elem = elem;
> > > + ret = cb(io, data);
> >
> > Why not just pas elem as a separate argument to the callback?
>
> One reason is that we don't have complete type for 'elem' since its size
> is a variable.
I didn't mean to pass ublk_elem_header by value, still by pointer.
Just that you could pass const struct ublk_elem_header *elem as an
additional parameter to the callback. I think that would make the code
a bit easier to follow than passing it via data->elem.
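Something like the following callback shape, as a sketch of that suggestion
(illustration only):
```
/* Pass the current element pointer explicitly instead of stashing it in
 * data->elem.
 */
typedef int (*ublk_batch_elem_cb)(struct ublk_io *io,
				  const struct ublk_batch_io_data *data,
				  const struct ublk_elem_header *elem);
```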
>
> >
> > > + if (unlikely(ret))
> > > + break;
> > > + }
> > > + iter->done += i;
> > > + return ret;
> > > +}
> > > +
> > > +static int ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> > > + struct ublk_batch_io_data *data,
> > > + int (*cb)(struct ublk_io *io,
> > > + const struct ublk_batch_io_data *data))
> > > +{
> > > + int ret = 0;
> > > +
> > > + while (iter->done < iter->total) {
> > > + unsigned int len = min(sizeof(iter->buf), iter->total - iter->done);
> > > +
> > > + ret = copy_from_iter(iter->buf, len, &iter->iter);
> > > + if (ret != len) {
> >
> > How would this be possible? The iterator comes from an io_uring
> > registered buffer with at least the requested length, so the user
> > addresses should have been validated when the buffer was registered.
> > Should this just be a WARN_ON()?
>
> yes, that is why pr_warn() is used, I remember that WARN_ON() isn't
> encouraged in user code path.
>
> >
> > > + pr_warn("ublk%d: read batch cmd buffer failed %u/%u\n",
> > > + data->ubq->dev->dev_info.dev_id, ret, len);
> > > + ret = -EINVAL;
> > > + break;
> > > + }
> > > +
> > > + ret = __ublk_walk_cmd_buf(iter, data, len, cb);
> > > + if (ret)
> > > + break;
> > > + }
> > > + return ret;
> > > +}
> > > +
> > > +static int ublk_batch_unprep_io(struct ublk_io *io,
> > > + const struct ublk_batch_io_data *data)
> > > +{
> > > + if (ublk_queue_ready(data->ubq))
> > > + data->ubq->dev->nr_queues_ready--;
> > > +
> > > + ublk_io_lock(io);
> > > + io->flags = 0;
> > > + ublk_io_unlock(io);
> > > + data->ubq->nr_io_ready--;
> > > + return 0;
> >
> > This "unprep" looks very subtle and fairly complicated. Is it really
> > necessary? What's wrong with leaving the I/Os that were successfully
> > prepped? It also looks racy to clear io->flags after the queue is
> > ready, as the io may already be in use by some I/O request.
>
> ublk_batch_unprep_io() is called in partial completion of UBLK_U_IO_PREP_IO_CMDS,
> when START_DEV can't succeed, so there can't be any IO.
Isn't it possible that the UBLK_U_IO_PREP_IO_CMDS batch contains all
the I/Os not yet prepped followed by some duplicates? Then the device
could be started following the successful completion of all the newly
prepped I/Os, but the batch would fail on the following duplicate
I/Os, causing the successfully prepped I/Os to be unprepped?
>
> >
> > > +}
> > > +
> > > +static void ublk_batch_revert_prep_cmd(struct ublk_batch_io_iter *iter,
> > > + struct ublk_batch_io_data *data)
> > > +{
> > > + int ret;
> > > +
> > > + if (!iter->done)
> > > + return;
> > > +
> > > + iov_iter_revert(&iter->iter, iter->done);
> >
> > Shouldn't the iterator be reverted by the total number of bytes
> > copied, which may be more than iter->done?
>
> ->done is exactly the total bytes handled.
But the number of bytes "handled" is not the same as the number of
bytes the iterator was advanced by, right? The copy_from_iter() is
responsible for advancing the iterator, but __ublk_walk_cmd_buf() may
break early before processing all those elements. iter->done would
only be set to the number of bytes processed by __ublk_walk_cmd_buf(),
which may be less than the bytes obtained from the iterator.
>
> >
> > > + iter->total = iter->done;
> > > + iter->done = 0;
> > > +
> > > + ret = ublk_walk_cmd_buf(iter, data, ublk_batch_unprep_io);
> > > + WARN_ON_ONCE(ret);
> > > +}
> > > +
> > > +static int ublk_batch_prep_io(struct ublk_io *io,
> > > + const struct ublk_batch_io_data *data)
> > > +{
> > > + const struct ublk_batch_io *uc = io_uring_sqe_cmd(data->cmd->sqe);
> > > + union ublk_io_buf buf = { 0 };
> > > + int ret;
> > > +
> > > + if (ublk_support_auto_buf_reg(data->ubq))
> > > + buf.auto_reg = ublk_batch_auto_buf_reg(uc, data->elem);
> > > + else if (ublk_need_map_io(data->ubq)) {
> > > + buf.addr = ublk_batch_buf_addr(uc, data->elem);
> > > +
> > > + ret = ublk_check_fetch_buf(data->ubq, buf.addr);
> > > + if (ret)
> > > + return ret;
> > > + }
> > > +
> > > + ublk_io_lock(io);
> > > + ret = __ublk_fetch(data->cmd, data->ubq, io);
> > > + if (!ret)
> > > + io->buf = buf;
> > > + ublk_io_unlock(io);
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static int ublk_handle_batch_prep_cmd(struct ublk_batch_io_data *data)
> > > +{
> > > + struct io_uring_cmd *cmd = data->cmd;
> > > + const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
> > > + struct ublk_batch_io_iter iter = {
> > > + .total = uc->nr_elem * uc->elem_bytes,
> > > + .elem_bytes = uc->elem_bytes,
> > > + };
> > > + int ret;
> > > +
> > > + ret = io_uring_cmd_import_fixed(cmd->sqe->addr, cmd->sqe->len,
> >
> > Could iter.total be used in place of cmd->sqe->len? That way userspace
> > wouldn't have to specify a redundant value in the SQE len field.
>
> This way follows how buffer is used in io_uring/rw.c, but looks it can be saved.
> But benefit is cross-verify, cause io-uring sqe user interface is complicated.
In an IORING_OP_{READ,WRITE}{,V} operation, there aren't other fields
that can be used to determine the length of data that will be
accessed. I would rather not require userspace to pass a redundant
value; that makes the UAPI even more complicated.
Best,
Caleb
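A sketch of that change (an assumption, not a posted patch): derive the import
length from nr_elem * elem_bytes and drop the dependency on sqe->len:
```
/* iter.total == uc->nr_elem * uc->elem_bytes, computed from the command
 * header, so userspace no longer has to duplicate it in sqe->len.
 */
ret = io_uring_cmd_import_fixed(READ_ONCE(cmd->sqe->addr), iter.total,
				WRITE, &iter.iter, cmd, data->issue_flags);
```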
>
> >
> > > + WRITE, &iter.iter, cmd, data->issue_flags);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + mutex_lock(&data->ubq->dev->mutex);
> > > + ret = ublk_walk_cmd_buf(&iter, data, ublk_batch_prep_io);
> > > +
> > > + if (ret && iter.done)
> > > + ublk_batch_revert_prep_cmd(&iter, data);
> >
> > The iter.done check is duplicated in ublk_batch_revert_prep_cmd().
>
> OK, we can remove the check in ublk_batch_revert_prep_cmd().
>
> >
> > > + mutex_unlock(&data->ubq->dev->mutex);
> > > + return ret;
> > > +}
> > > +
> > > static int ublk_check_batch_cmd_flags(const struct ublk_batch_io *uc)
> > > {
> > > const unsigned short mask = UBLK_BATCH_F_HAS_BUF_ADDR |
> > > @@ -2609,6 +2789,7 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
> > > struct ublk_device *ub = cmd->file->private_data;
> > > struct ublk_batch_io_data data = {
> > > .cmd = cmd,
> > > + .issue_flags = issue_flags,
> > > };
> > > u32 cmd_op = cmd->cmd_op;
> > > int ret = -EINVAL;
> > > @@ -2619,6 +2800,11 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
> > >
> > > switch (cmd_op) {
> > > case UBLK_U_IO_PREP_IO_CMDS:
> > > + ret = ublk_check_batch_cmd(&data, uc);
> > > + if (ret)
> > > + goto out;
> > > + ret = ublk_handle_batch_prep_cmd(&data);
> > > + break;
> > > case UBLK_U_IO_COMMIT_IO_CMDS:
> > > ret = ublk_check_batch_cmd(&data, uc);
> > > if (ret)
> > > @@ -2780,7 +2966,7 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
> > > struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
> > > gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO;
> > > void *ptr;
> > > - int size;
> > > + int size, i;
> > >
> > > spin_lock_init(&ubq->cancel_lock);
> > > ubq->flags = ub->dev_info.flags;
> > > @@ -2792,6 +2978,9 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
> > > if (!ptr)
> > > return -ENOMEM;
> > >
> > > + for (i = 0; i < ubq->q_depth; i++)
> > > + spin_lock_init(&ubq->ios[i].lock);
> > > +
> > > ubq->io_cmd_buf = ptr;
> > > ubq->dev = ub;
> > > return 0;
> > > diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
> > > index 01d3af52cfb4..38c8cc10d694 100644
> > > --- a/include/uapi/linux/ublk_cmd.h
> > > +++ b/include/uapi/linux/ublk_cmd.h
> > > @@ -102,6 +102,11 @@
> > > _IOWR('u', 0x23, struct ublksrv_io_cmd)
> > > #define UBLK_U_IO_UNREGISTER_IO_BUF \
> > > _IOWR('u', 0x24, struct ublksrv_io_cmd)
> > > +
> > > +/*
> > > + * return 0 if the command is run successfully, otherwise failure code
> > > + * is returned
> > > + */
> >
> > Not sure this is really necessary to comment, that's pretty standard
> > for syscalls and uring_cmds.
>
> OK, I think it is for showing the difference with UBLK_U_IO_COMMIT_IO_CMDS,
> which has to support partial commit, however UBLK_U_IO_PREP_IO_CMDS
> need to be all or nothing.
>
>
> Thanks,
> Ming
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS
2025-09-18 18:12 ` Caleb Sander Mateos
@ 2025-10-16 10:08 ` Ming Lei
2025-10-22 8:00 ` Caleb Sander Mateos
0 siblings, 1 reply; 43+ messages in thread
From: Ming Lei @ 2025-10-16 10:08 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, Uday Shankar
On Thu, Sep 18, 2025 at 11:12:00AM -0700, Caleb Sander Mateos wrote:
> On Tue, Sep 9, 2025 at 8:56 PM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > On Sat, Sep 06, 2025 at 12:48:41PM -0700, Caleb Sander Mateos wrote:
> > > On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
> > > >
> > > > This commit implements the handling of the UBLK_U_IO_PREP_IO_CMDS command,
> > > > which allows userspace to prepare a batch of I/O requests.
> > > >
> > > > The core of this change is the `ublk_walk_cmd_buf` function, which iterates
> > > > over the elements in the uring_cmd fixed buffer. For each element, it parses
> > > > the I/O details, finds the corresponding `ublk_io` structure, and prepares it
> > > > for future dispatch.
> > > >
> > > > Add per-io lock for protecting concurrent delivery and committing.
> > > >
> > > > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > > > ---
> > > > drivers/block/ublk_drv.c | 191 +++++++++++++++++++++++++++++++++-
> > > > include/uapi/linux/ublk_cmd.h | 5 +
> > > > 2 files changed, 195 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > > > index 4da0dbbd7e16..a4bae3d1562a 100644
> > > > --- a/drivers/block/ublk_drv.c
> > > > +++ b/drivers/block/ublk_drv.c
> > > > @@ -116,6 +116,10 @@ struct ublk_uring_cmd_pdu {
> > > > struct ublk_batch_io_data {
> > > > struct ublk_queue *ubq;
> > > > struct io_uring_cmd *cmd;
> > > > + unsigned int issue_flags;
> > > > +
> > > > + /* set when walking the element buffer */
> > > > + const struct ublk_elem_header *elem;
> > > > };
> > > >
> > > > /*
> > > > @@ -200,6 +204,7 @@ struct ublk_io {
> > > > unsigned task_registered_buffers;
> > > >
> > > > void *buf_ctx_handle;
> > > > + spinlock_t lock;
> > >
> > > From our experience writing a high-throughput ublk server, the
> > > spinlocks and mutexes in the kernel are some of the largest CPU
> > > hotspots. We have spent a lot of effort working to avoid locking where
> > > possible or shard data structures to reduce contention on the locks.
> > > Even uncontended locks are still very expensive to acquire and release
> > > on machines with many CPUs due to the cache coherency overhead. ublk's
> > > per-io daemon architecture is great for performance by removing the
> >
> > io-uring highly depends on batch submission and completion, but per-io daemon
> > may break the batch easily, because it doesn't guarantee that one batch IOs
> > can be forwarded in single io task/io_uring when static tag mapping policy is
> > taken, for example:
>
> That's a good point. We've mainly focused on optimizing the ublk
> server side, but it's true that distributing incoming ublk I/Os to
> more ublk server threads adds overhead on the submitting side. One
> idea we had but haven't experimented with much is for the ublk server
> to perform the round-robin assignment of tags within each queue to
round-robin often hurts perf, and it isn't enabled yet.
> threads in larger chunks. For example, with a chunk size of 4, tags 0
> to 3 would be assigned to thread 0, tags 4 to 7 would be assigned to
> thread 1, etc. That would improve the batching of ublk I/Os when
> dispatching them from the submitting CPU to the ublk server thread.
> There's an inherent tradeoff where distributing tags to ublk server
> threads in larger chunks makes the distribution less balanced for
> small numbers of I/Os, but it will be balanced when averaged over
> large numbers of I/Os.
How can a fixed chunk size work generically? It depends on the workload's batch
size on /dev/ublkbN, and different workloads have different batch sizes.
>
> >
> > ```
> > [root@ktest-40 ublk]# ./kublk add -t null --nthreads 8 -q 4 --per_io_tasks
> > dev id 0: nr_hw_queues 4 queue_depth 128 block size 512 dev_capacity 524288000
> > max rq size 1048576 daemon pid 89975 flags 0x6042 state LIVE
> > queue 0: affinity(0 )
> > queue 1: affinity(4 )
> > queue 2: affinity(8 )
> > queue 3: affinity(12 )
> > [root@ktest-40 ublk]#
> > [root@ktest-40 ublk]# ./kublk add -t null -q 4
> > dev id 1: nr_hw_queues 4 queue_depth 128 block size 512 dev_capacity 524288000
> > max rq size 1048576 daemon pid 90002 flags 0x6042 state LIVE
> > queue 0: affinity(0 )
> > queue 1: affinity(4 )
> > queue 2: affinity(8 )
> > queue 3: affinity(12 )
> > [root@ktest-40 ublk]#
> > [root@ktest-40 ublk]# ~/git/fio/t/io_uring -p0 /dev/ublkb0
> > submitter=0, tid=90024, file=/dev/ublkb0, nfiles=1, node=-1
> > polled=0, fixedbufs=1, register_files=1, buffered=0, QD=128
> > Engine=io_uring, sq_ring=128, cq_ring=128
> > IOPS=188.54K, BW=736MiB/s, IOS/call=32/31
> > IOPS=187.90K, BW=734MiB/s, IOS/call=32/32
> > IOPS=195.39K, BW=763MiB/s, IOS/call=32/32
> > ^CExiting on signal
> > Maximum IOPS=195.39K
> >
> > [root@ktest-40 ublk]# ~/git/fio/t/io_uring -p0 /dev/ublkb1
> > submitter=0, tid=90026, file=/dev/ublkb1, nfiles=1, node=-1
> > polled=0, fixedbufs=1, register_files=1, buffered=0, QD=128
> > Engine=io_uring, sq_ring=128, cq_ring=128
> > IOPS=608.26K, BW=2.38GiB/s, IOS/call=32/31
> > IOPS=586.59K, BW=2.29GiB/s, IOS/call=32/31
> > IOPS=599.62K, BW=2.34GiB/s, IOS/call=32/32
> > ^CExiting on signal
> > Maximum IOPS=608.26K
> >
> > ```
> >
> >
> > > need for locks in the I/O path. I can't really see us adopting this
> > > ublk batching feature; adding a spin_lock() + spin_unlock() to every
> > > ublk commit operation is not worth the reduction in io_uring SQEs and
> > > uring_cmds.
> >
> > As I mentioned in cover letter, the per-io lock can be avoided for UBLK_F_PER_IO_DAEMON
> > as one follow-up, since io->task is still there for helping to track task context.
> >
> > Just want to avoid too much features in enablement stage, that is also
> > why the spin lock is wrapped in helper.
>
> Okay, good to know there's at least an idea for how to avoid the
> spinlock. Makes sense to defer it to follow-on work.
>
> >
> > >
> > > > } ____cacheline_aligned_in_smp;
> > > >
> > > > struct ublk_queue {
> > > > @@ -276,6 +281,16 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
> > > > return false;
> > > > }
> > > >
> > > > +static inline void ublk_io_lock(struct ublk_io *io)
> > > > +{
> > > > + spin_lock(&io->lock);
> > > > +}
> > > > +
> > > > +static inline void ublk_io_unlock(struct ublk_io *io)
> > > > +{
> > > > + spin_unlock(&io->lock);
> > > > +}
> > > > +
> > > > static inline struct ublksrv_io_desc *
> > > > ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
> > > > {
> > > > @@ -2538,6 +2553,171 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
> > > > return ublk_ch_uring_cmd_local(cmd, issue_flags);
> > > > }
> > > >
> > > > +static inline __u64 ublk_batch_buf_addr(const struct ublk_batch_io *uc,
> > > > + const struct ublk_elem_header *elem)
> > > > +{
> > > > + const void *buf = (const void *)elem;
> > >
> > > Don't need an explicit cast in order to cast to void *.
> >
> > OK.
> >
> > >
> > >
> > > > +
> > > > + if (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR)
> > > > + return *(__u64 *)(buf + sizeof(*elem));
> > > > + return -1;
> > >
> > > Why -1 and not 0? ublk_check_fetch_buf() is expecting a 0 buf_addr to
> > > indicate the lack
> >
> > Good catch, it needs to return 0.
> >
> > >
> > > > +}
> > > > +
> > > > +static struct ublk_auto_buf_reg
> > > > +ublk_batch_auto_buf_reg(const struct ublk_batch_io *uc,
> > > > + const struct ublk_elem_header *elem)
> > > > +{
> > > > + struct ublk_auto_buf_reg reg = {
> > > > + .index = elem->buf_index,
> > > > + .flags = (uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) ?
> > > > + UBLK_AUTO_BUF_REG_FALLBACK : 0,
> > > > + };
> > > > +
> > > > + return reg;
> > > > +}
> > > > +
> > > > +/* 48 can cover any type of buffer element(8, 16 and 24 bytes) */
> > >
> > > "can cover" is a bit vague. Can you be explicit that the buffer size
> > > needs to be a multiple of any possible buffer element size?
> >
> > I should have documented that 48 is least common multiple(LCM) of (8, 16 and
> > 24)
> >
> > >
> > > > +#define UBLK_CMD_BATCH_TMP_BUF_SZ (48 * 10)
> > > > +struct ublk_batch_io_iter {
> > > > + /* copy to this buffer from iterator first */
> > > > + unsigned char buf[UBLK_CMD_BATCH_TMP_BUF_SZ];
> > > > + struct iov_iter iter;
> > > > + unsigned done, total;
> > > > + unsigned char elem_bytes;
> > > > +};
> > > > +
> > > > +static int __ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> > > > + struct ublk_batch_io_data *data,
> > > > + unsigned bytes,
> > > > + int (*cb)(struct ublk_io *io,
> > > > + const struct ublk_batch_io_data *data))
> > > > +{
> > > > + int i, ret = 0;
> > > > +
> > > > + for (i = 0; i < bytes; i += iter->elem_bytes) {
> > > > + const struct ublk_elem_header *elem =
> > > > + (const struct ublk_elem_header *)&iter->buf[i];
> > > > + struct ublk_io *io;
> > > > +
> > > > + if (unlikely(elem->tag >= data->ubq->q_depth)) {
> > > > + ret = -EINVAL;
> > > > + break;
> > > > + }
> > > > +
> > > > + io = &data->ubq->ios[elem->tag];
> > > > + data->elem = elem;
> > > > + ret = cb(io, data);
> > >
> > > Why not just pas elem as a separate argument to the callback?
> >
> > One reason is that we don't have complete type for 'elem' since its size
> > is a variable.
>
> I didn't mean to pass ublk_elem_header by value, still by pointer.
> Just that you could pass const struct ublk_elem_header *elem as an
> additional parameter to the callback. I think that would make the code
> a bit easier to follow than passing it via data->elem.
OK.
>
> >
> > >
> > > > + if (unlikely(ret))
> > > > + break;
> > > > + }
> > > > + iter->done += i;
> > > > + return ret;
> > > > +}
> > > > +
> > > > +static int ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> > > > + struct ublk_batch_io_data *data,
> > > > + int (*cb)(struct ublk_io *io,
> > > > + const struct ublk_batch_io_data *data))
> > > > +{
> > > > + int ret = 0;
> > > > +
> > > > + while (iter->done < iter->total) {
> > > > + unsigned int len = min(sizeof(iter->buf), iter->total - iter->done);
> > > > +
> > > > + ret = copy_from_iter(iter->buf, len, &iter->iter);
> > > > + if (ret != len) {
> > >
> > > How would this be possible? The iterator comes from an io_uring
> > > registered buffer with at least the requested length, so the user
> > > addresses should have been validated when the buffer was registered.
> > > Should this just be a WARN_ON()?
> >
> > yes, that is why pr_warn() is used, I remember that WARN_ON() isn't
> > encouraged in user code path.
> >
> > >
> > > > + pr_warn("ublk%d: read batch cmd buffer failed %u/%u\n",
> > > > + data->ubq->dev->dev_info.dev_id, ret, len);
> > > > + ret = -EINVAL;
> > > > + break;
> > > > + }
> > > > +
> > > > + ret = __ublk_walk_cmd_buf(iter, data, len, cb);
> > > > + if (ret)
> > > > + break;
> > > > + }
> > > > + return ret;
> > > > +}
> > > > +
> > > > +static int ublk_batch_unprep_io(struct ublk_io *io,
> > > > + const struct ublk_batch_io_data *data)
> > > > +{
> > > > + if (ublk_queue_ready(data->ubq))
> > > > + data->ubq->dev->nr_queues_ready--;
> > > > +
> > > > + ublk_io_lock(io);
> > > > + io->flags = 0;
> > > > + ublk_io_unlock(io);
> > > > + data->ubq->nr_io_ready--;
> > > > + return 0;
> > >
> > > This "unprep" looks very subtle and fairly complicated. Is it really
> > > necessary? What's wrong with leaving the I/Os that were successfully
> > > prepped? It also looks racy to clear io->flags after the queue is
> > > ready, as the io may already be in use by some I/O request.
> >
> > ublk_batch_unprep_io() is called on partial completion of UBLK_U_IO_PREP_IO_CMDS,
> > in which case START_DEV can't have succeeded, so there can't be any I/O in flight.
>
> Isn't it possible that the UBLK_U_IO_PREP_IO_CMDS batch contains all
> the I/Os not yet prepped followed by some duplicates? Then the device
> could be started following the successful completion of all the newly
> prepped I/Os, but the batch would fail on the following duplicate
> I/Os, causing the successfully prepped I/Os to be unprepped?
It can be avoided easily because ub->mutex is required for UBLK_U_IO_PREP_IO_CMDS;
for example, ub->dev_info.state can be set to UBLK_S_DEV_DEAD in case of any failure.
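For illustration only, a minimal sketch of the kind of handling I mean
(assuming the batch-prep failure is still visible while ub->mutex is held;
this is not the actual patch):
```c
/*
 * sketch: ub->mutex is already held while handling UBLK_U_IO_PREP_IO_CMDS,
 * so a failed batch can simply mark the device dead before returning
 */
if (ret)
	ub->dev_info.state = UBLK_S_DEV_DEAD;
```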
>
> >
> > >
> > > > +}
> > > > +
> > > > +static void ublk_batch_revert_prep_cmd(struct ublk_batch_io_iter *iter,
> > > > + struct ublk_batch_io_data *data)
> > > > +{
> > > > + int ret;
> > > > +
> > > > + if (!iter->done)
> > > > + return;
> > > > +
> > > > + iov_iter_revert(&iter->iter, iter->done);
> > >
> > > Shouldn't the iterator be reverted by the total number of bytes
> > > copied, which may be more than iter->done?
> >
> > ->done is exactly the total bytes handled.
>
> But the number of bytes "handled" is not the same as the number of
> bytes the iterator was advanced by, right? The copy_from_iter() is
> responsible for advancing the iterator, but __ublk_walk_cmd_buf() may
> break early before processing all those elements. iter->done would
> only be set to the number of bytes processed by __ublk_walk_cmd_buf(),
> which may be less than the bytes obtained from the iterator.
Good catch, it could be handled by reverting the unhandled bytes manually
in __ublk_walk_cmd_buf().
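To make that concrete, a rough sketch of the idea (the per-element loop body
is elided; only the revert at the end is new, and the exact placement is up
to the next revision):
```c
static int __ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
			       struct ublk_batch_io_data *data,
			       unsigned bytes,
			       int (*cb)(struct ublk_io *io,
					 const struct ublk_batch_io_data *data))
{
	int i, ret = 0;

	for (i = 0; i < bytes; i += iter->elem_bytes) {
		/*
		 * ... existing per-element handling: tag check and callback;
		 * sets ret and breaks out early on error ...
		 */
	}

	/* give back what copy_from_iter() pulled in but the walk didn't handle */
	if (i < bytes)
		iov_iter_revert(&iter->iter, bytes - i);

	iter->done += i;
	return ret;
}
```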
>
> >
> > >
> > > > + iter->total = iter->done;
> > > > + iter->done = 0;
> > > > +
> > > > + ret = ublk_walk_cmd_buf(iter, data, ublk_batch_unprep_io);
> > > > + WARN_ON_ONCE(ret);
> > > > +}
> > > > +
> > > > +static int ublk_batch_prep_io(struct ublk_io *io,
> > > > + const struct ublk_batch_io_data *data)
> > > > +{
> > > > + const struct ublk_batch_io *uc = io_uring_sqe_cmd(data->cmd->sqe);
> > > > + union ublk_io_buf buf = { 0 };
> > > > + int ret;
> > > > +
> > > > + if (ublk_support_auto_buf_reg(data->ubq))
> > > > + buf.auto_reg = ublk_batch_auto_buf_reg(uc, data->elem);
> > > > + else if (ublk_need_map_io(data->ubq)) {
> > > > + buf.addr = ublk_batch_buf_addr(uc, data->elem);
> > > > +
> > > > + ret = ublk_check_fetch_buf(data->ubq, buf.addr);
> > > > + if (ret)
> > > > + return ret;
> > > > + }
> > > > +
> > > > + ublk_io_lock(io);
> > > > + ret = __ublk_fetch(data->cmd, data->ubq, io);
> > > > + if (!ret)
> > > > + io->buf = buf;
> > > > + ublk_io_unlock(io);
> > > > +
> > > > + return ret;
> > > > +}
> > > > +
> > > > +static int ublk_handle_batch_prep_cmd(struct ublk_batch_io_data *data)
> > > > +{
> > > > + struct io_uring_cmd *cmd = data->cmd;
> > > > + const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
> > > > + struct ublk_batch_io_iter iter = {
> > > > + .total = uc->nr_elem * uc->elem_bytes,
> > > > + .elem_bytes = uc->elem_bytes,
> > > > + };
> > > > + int ret;
> > > > +
> > > > + ret = io_uring_cmd_import_fixed(cmd->sqe->addr, cmd->sqe->len,
> > >
> > > Could iter.total be used in place of cmd->sqe->len? That way userspace
> > > wouldn't have to specify a redundant value in the SQE len field.
> >
> > This follows how the buffer is used in io_uring/rw.c, but it looks like it can be dropped.
> > The benefit was cross-verification, since the io_uring SQE user interface is complicated.
>
> In an IORING_OP_{READ,WRITE}{,V} operation, there aren't other fields
> that can be used to determine the length of data that will be
> accessed. I would rather not require userspace to pass a redundant
> value; this makes the UAPI even more complicated.
Fair enough, will drop the sqe->len use.
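For reference, something along these lines (a sketch only; I'm assuming the
trailing arguments stay as in the current patch, i.e. the source direction,
the temp-buffer iterator, the command and its issue_flags):
```c
/* sketch: derive the import length from the batch header, not sqe->len */
ret = io_uring_cmd_import_fixed(cmd->sqe->addr, iter.total,
				ITER_SOURCE, &iter.iter, cmd,
				data->issue_flags);
```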
Thanks,
Ming
* Re: [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS
2025-10-16 10:08 ` Ming Lei
@ 2025-10-22 8:00 ` Caleb Sander Mateos
2025-10-22 10:15 ` Ming Lei
0 siblings, 1 reply; 43+ messages in thread
From: Caleb Sander Mateos @ 2025-10-22 8:00 UTC (permalink / raw)
To: Ming Lei; +Cc: Jens Axboe, linux-block, Uday Shankar
On Thu, Oct 16, 2025 at 3:08 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Thu, Sep 18, 2025 at 11:12:00AM -0700, Caleb Sander Mateos wrote:
> > On Tue, Sep 9, 2025 at 8:56 PM Ming Lei <ming.lei@redhat.com> wrote:
> > >
> > > On Sat, Sep 06, 2025 at 12:48:41PM -0700, Caleb Sander Mateos wrote:
> > > > On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
> > > > >
> > > > > This commit implements the handling of the UBLK_U_IO_PREP_IO_CMDS command,
> > > > > which allows userspace to prepare a batch of I/O requests.
> > > > >
> > > > > The core of this change is the `ublk_walk_cmd_buf` function, which iterates
> > > > > over the elements in the uring_cmd fixed buffer. For each element, it parses
> > > > > the I/O details, finds the corresponding `ublk_io` structure, and prepares it
> > > > > for future dispatch.
> > > > >
> > > > > Add per-io lock for protecting concurrent delivery and committing.
> > > > >
> > > > > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > > > > ---
> > > > > drivers/block/ublk_drv.c | 191 +++++++++++++++++++++++++++++++++-
> > > > > include/uapi/linux/ublk_cmd.h | 5 +
> > > > > 2 files changed, 195 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > > > > index 4da0dbbd7e16..a4bae3d1562a 100644
> > > > > --- a/drivers/block/ublk_drv.c
> > > > > +++ b/drivers/block/ublk_drv.c
> > > > > @@ -116,6 +116,10 @@ struct ublk_uring_cmd_pdu {
> > > > > struct ublk_batch_io_data {
> > > > > struct ublk_queue *ubq;
> > > > > struct io_uring_cmd *cmd;
> > > > > + unsigned int issue_flags;
> > > > > +
> > > > > + /* set when walking the element buffer */
> > > > > + const struct ublk_elem_header *elem;
> > > > > };
> > > > >
> > > > > /*
> > > > > @@ -200,6 +204,7 @@ struct ublk_io {
> > > > > unsigned task_registered_buffers;
> > > > >
> > > > > void *buf_ctx_handle;
> > > > > + spinlock_t lock;
> > > >
> > > > From our experience writing a high-throughput ublk server, the
> > > > spinlocks and mutexes in the kernel are some of the largest CPU
> > > > hotspots. We have spent a lot of effort working to avoid locking where
> > > > possible or shard data structures to reduce contention on the locks.
> > > > Even uncontended locks are still very expensive to acquire and release
> > > > on machines with many CPUs due to the cache coherency overhead. ublk's
> > > > per-io daemon architecture is great for performance by removing the
> > >
> > > io_uring highly depends on batched submission and completion, but a per-io daemon
> > > may easily break the batch, because it doesn't guarantee that one batch of I/Os
> > > can be forwarded in a single io task/io_uring when a static tag mapping policy is
> > > used, for example:
> >
> > That's a good point. We've mainly focused on optimizing the ublk
> > server side, but it's true that distributing incoming ublk I/Os to
> > more ublk server threads adds overhead on the submitting side. One
> > idea we had but haven't experimented with much is for the ublk server
> > to perform the round-robin assignment of tags within each queue to
>
> round-robin often hurts perf, and it isn't enabled yet.
I don't mean BLK_MQ_F_TAG_RR. I thought even the default tag
allocation scheme resulted in approximately round-robin tag
allocation, right? __sbitmap_queue_get_batch() will attempt to
allocate contiguous bits from the map, so a batch of queued requests
will likely be assigned sequential tags (or a couple sequential runs
of tags) in the queue. I guess that's only true if the queue is mostly
empty; if many tags are in use, it will be harder to allocate
contiguous sets of tags.
>
> > threads in larger chunks. For example, with a chunk size of 4, tags 0
> > to 3 would be assigned to thread 0, tags 4 to 7 would be assigned to
> > thread 1, etc. That would improve the batching of ublk I/Os when
> > dispatching them from the submitting CPU to the ublk server thread.
> > There's an inherent tradeoff where distributing tags to ublk server
> > threads in larger chunks makes the distribution less balanced for
> > small numbers of I/Os, but it will be balanced when averaged over
> > large numbers of I/Os.
>
> > How can a fixed chunk size work generically? It depends on the workload's batch
> > size on /dev/ublkbN, and different workloads have different batch sizes.
Yes, that's a good point. It requires pretty specific knowledge of the
workload to optimize the tag assignment to ublk server threads like
this.
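Just to spell out the chunked assignment being discussed (purely an
illustration; tag_to_thread(), chunk_size and nthreads are made-up names,
not part of this series):
```c
/* chunk_size = 4: tags 0-3 -> thread 0, tags 4-7 -> thread 1, ... */
static inline unsigned int tag_to_thread(unsigned int tag,
					 unsigned int chunk_size,
					 unsigned int nthreads)
{
	return (tag / chunk_size) % nthreads;
}
```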
>
> >
> > >
> > > ```
> > > [root@ktest-40 ublk]# ./kublk add -t null --nthreads 8 -q 4 --per_io_tasks
> > > dev id 0: nr_hw_queues 4 queue_depth 128 block size 512 dev_capacity 524288000
> > > max rq size 1048576 daemon pid 89975 flags 0x6042 state LIVE
> > > queue 0: affinity(0 )
> > > queue 1: affinity(4 )
> > > queue 2: affinity(8 )
> > > queue 3: affinity(12 )
> > > [root@ktest-40 ublk]#
> > > [root@ktest-40 ublk]# ./kublk add -t null -q 4
> > > dev id 1: nr_hw_queues 4 queue_depth 128 block size 512 dev_capacity 524288000
> > > max rq size 1048576 daemon pid 90002 flags 0x6042 state LIVE
> > > queue 0: affinity(0 )
> > > queue 1: affinity(4 )
> > > queue 2: affinity(8 )
> > > queue 3: affinity(12 )
> > > [root@ktest-40 ublk]#
> > > [root@ktest-40 ublk]# ~/git/fio/t/io_uring -p0 /dev/ublkb0
> > > submitter=0, tid=90024, file=/dev/ublkb0, nfiles=1, node=-1
> > > polled=0, fixedbufs=1, register_files=1, buffered=0, QD=128
> > > Engine=io_uring, sq_ring=128, cq_ring=128
> > > IOPS=188.54K, BW=736MiB/s, IOS/call=32/31
> > > IOPS=187.90K, BW=734MiB/s, IOS/call=32/32
> > > IOPS=195.39K, BW=763MiB/s, IOS/call=32/32
> > > ^CExiting on signal
> > > Maximum IOPS=195.39K
> > >
> > > [root@ktest-40 ublk]# ~/git/fio/t/io_uring -p0 /dev/ublkb1
> > > submitter=0, tid=90026, file=/dev/ublkb1, nfiles=1, node=-1
> > > polled=0, fixedbufs=1, register_files=1, buffered=0, QD=128
> > > Engine=io_uring, sq_ring=128, cq_ring=128
> > > IOPS=608.26K, BW=2.38GiB/s, IOS/call=32/31
> > > IOPS=586.59K, BW=2.29GiB/s, IOS/call=32/31
> > > IOPS=599.62K, BW=2.34GiB/s, IOS/call=32/32
> > > ^CExiting on signal
> > > Maximum IOPS=608.26K
> > >
> > > ```
> > >
> > >
> > > > need for locks in the I/O path. I can't really see us adopting this
> > > > ublk batching feature; adding a spin_lock() + spin_unlock() to every
> > > > ublk commit operation is not worth the reduction in io_uring SQEs and
> > > > uring_cmds.
> > >
> > > As I mentioned in the cover letter, the per-io lock can be avoided for UBLK_F_PER_IO_DAEMON
> > > as a follow-up, since io->task is still there to help track the task context.
> > >
> > > I just want to avoid too many features in the enablement stage; that is also
> > > why the spin lock is wrapped in a helper.
> >
> > Okay, good to know there's at least an idea for how to avoid the
> > spinlock. Makes sense to defer it to follow-on work.
> >
> > >
> > > >
> > > > > } ____cacheline_aligned_in_smp;
> > > > >
> > > > > struct ublk_queue {
> > > > > @@ -276,6 +281,16 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
> > > > > return false;
> > > > > }
> > > > >
> > > > > +static inline void ublk_io_lock(struct ublk_io *io)
> > > > > +{
> > > > > + spin_lock(&io->lock);
> > > > > +}
> > > > > +
> > > > > +static inline void ublk_io_unlock(struct ublk_io *io)
> > > > > +{
> > > > > + spin_unlock(&io->lock);
> > > > > +}
> > > > > +
> > > > > static inline struct ublksrv_io_desc *
> > > > > ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
> > > > > {
> > > > > @@ -2538,6 +2553,171 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
> > > > > return ublk_ch_uring_cmd_local(cmd, issue_flags);
> > > > > }
> > > > >
> > > > > +static inline __u64 ublk_batch_buf_addr(const struct ublk_batch_io *uc,
> > > > > + const struct ublk_elem_header *elem)
> > > > > +{
> > > > > + const void *buf = (const void *)elem;
> > > >
> > > > Don't need an explicit cast in order to cast to void *.
> > >
> > > OK.
> > >
> > > >
> > > >
> > > > > +
> > > > > + if (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR)
> > > > > + return *(__u64 *)(buf + sizeof(*elem));
> > > > > + return -1;
> > > >
> > > > Why -1 and not 0? ublk_check_fetch_buf() is expecting a 0 buf_addr to
> > > > indicate the lack
> > >
> > > Good catch, it needs to return 0.
> > >
> > > >
> > > > > +}
> > > > > +
> > > > > +static struct ublk_auto_buf_reg
> > > > > +ublk_batch_auto_buf_reg(const struct ublk_batch_io *uc,
> > > > > + const struct ublk_elem_header *elem)
> > > > > +{
> > > > > + struct ublk_auto_buf_reg reg = {
> > > > > + .index = elem->buf_index,
> > > > > + .flags = (uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) ?
> > > > > + UBLK_AUTO_BUF_REG_FALLBACK : 0,
> > > > > + };
> > > > > +
> > > > > + return reg;
> > > > > +}
> > > > > +
> > > > > +/* 48 can cover any type of buffer element(8, 16 and 24 bytes) */
> > > >
> > > > "can cover" is a bit vague. Can you be explicit that the buffer size
> > > > needs to be a multiple of any possible buffer element size?
> > >
> > > I should have documented that 48 is the least common multiple (LCM) of
> > > 8, 16 and 24.
> > >
> > > >
> > > > > +#define UBLK_CMD_BATCH_TMP_BUF_SZ (48 * 10)
> > > > > +struct ublk_batch_io_iter {
> > > > > + /* copy to this buffer from iterator first */
> > > > > + unsigned char buf[UBLK_CMD_BATCH_TMP_BUF_SZ];
> > > > > + struct iov_iter iter;
> > > > > + unsigned done, total;
> > > > > + unsigned char elem_bytes;
> > > > > +};
> > > > > +
> > > > > +static int __ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> > > > > + struct ublk_batch_io_data *data,
> > > > > + unsigned bytes,
> > > > > + int (*cb)(struct ublk_io *io,
> > > > > + const struct ublk_batch_io_data *data))
> > > > > +{
> > > > > + int i, ret = 0;
> > > > > +
> > > > > + for (i = 0; i < bytes; i += iter->elem_bytes) {
> > > > > + const struct ublk_elem_header *elem =
> > > > > + (const struct ublk_elem_header *)&iter->buf[i];
> > > > > + struct ublk_io *io;
> > > > > +
> > > > > + if (unlikely(elem->tag >= data->ubq->q_depth)) {
> > > > > + ret = -EINVAL;
> > > > > + break;
> > > > > + }
> > > > > +
> > > > > + io = &data->ubq->ios[elem->tag];
> > > > > + data->elem = elem;
> > > > > + ret = cb(io, data);
> > > >
> > > > Why not just pass elem as a separate argument to the callback?
> > >
> > > One reason is that we don't have a complete type for 'elem', since its
> > > size is variable.
> >
> > I didn't mean to pass ublk_elem_header by value, still by pointer.
> > Just that you could pass const struct ublk_elem_header *elem as an
> > additional parameter to the callback. I think that would make the code
> > a bit easier to follow than passing it via data->elem.
>
> OK.
>
> >
> > >
> > > >
> > > > > + if (unlikely(ret))
> > > > > + break;
> > > > > + }
> > > > > + iter->done += i;
> > > > > + return ret;
> > > > > +}
> > > > > +
> > > > > +static int ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> > > > > + struct ublk_batch_io_data *data,
> > > > > + int (*cb)(struct ublk_io *io,
> > > > > + const struct ublk_batch_io_data *data))
> > > > > +{
> > > > > + int ret = 0;
> > > > > +
> > > > > + while (iter->done < iter->total) {
> > > > > + unsigned int len = min(sizeof(iter->buf), iter->total - iter->done);
> > > > > +
> > > > > + ret = copy_from_iter(iter->buf, len, &iter->iter);
> > > > > + if (ret != len) {
> > > >
> > > > How would this be possible? The iterator comes from an io_uring
> > > > registered buffer with at least the requested length, so the user
> > > > addresses should have been validated when the buffer was registered.
> > > > Should this just be a WARN_ON()?
> > >
> > > Yes, that is why pr_warn() is used; I recall that WARN_ON() isn't
> > > encouraged in code paths that userspace can trigger.
> > >
> > > >
> > > > > + pr_warn("ublk%d: read batch cmd buffer failed %u/%u\n",
> > > > > + data->ubq->dev->dev_info.dev_id, ret, len);
> > > > > + ret = -EINVAL;
> > > > > + break;
> > > > > + }
> > > > > +
> > > > > + ret = __ublk_walk_cmd_buf(iter, data, len, cb);
> > > > > + if (ret)
> > > > > + break;
> > > > > + }
> > > > > + return ret;
> > > > > +}
> > > > > +
> > > > > +static int ublk_batch_unprep_io(struct ublk_io *io,
> > > > > + const struct ublk_batch_io_data *data)
> > > > > +{
> > > > > + if (ublk_queue_ready(data->ubq))
> > > > > + data->ubq->dev->nr_queues_ready--;
> > > > > +
> > > > > + ublk_io_lock(io);
> > > > > + io->flags = 0;
> > > > > + ublk_io_unlock(io);
> > > > > + data->ubq->nr_io_ready--;
> > > > > + return 0;
> > > >
> > > > This "unprep" looks very subtle and fairly complicated. Is it really
> > > > necessary? What's wrong with leaving the I/Os that were successfully
> > > > prepped? It also looks racy to clear io->flags after the queue is
> > > > ready, as the io may already be in use by some I/O request.
> > >
> > > ublk_batch_unprep_io() is called on partial completion of UBLK_U_IO_PREP_IO_CMDS,
> > > in which case START_DEV can't have succeeded, so there can't be any I/O in flight.
> >
> > Isn't it possible that the UBLK_U_IO_PREP_IO_CMDS batch contains all
> > the I/Os not yet prepped followed by some duplicates? Then the device
> > could be started following the successful completion of all the newly
> > prepped I/Os, but the batch would fail on the following duplicate
> > I/Os, causing the successfully prepped I/Os to be unprepped?
>
> It can be avoided easily because ub->mutex is required for UBLK_U_IO_PREP_IO_CMDS;
> for example, ub->dev_info.state can be set to UBLK_S_DEV_DEAD in case of any failure.
Are you saying that the situation I described isn't possible, or that
it can be prevented with an additional state check?
I don't think the mutex alone prevents this situation. The mutex
guards against concurrent UBLK_U_IO_PREP_IO_CMDS, but it doesn't
prevent requests from being queued concurrently to the ublk device
once it's ready. And __ublk_fetch() will mark the ublk device as ready
as soon as all the tags have been fetched/prepped, when there could
still be more commands in the UBLK_U_IO_PREP_IO_CMDS batch.
I think to fix the issue, you'd need to wait to mark the ublk device
ready until the end of the UBLK_U_IO_PREP_IO_CMDS batch.
Best,
Caleb
>
> >
> > >
> > > >
> > > > > +}
> > > > > +
> > > > > +static void ublk_batch_revert_prep_cmd(struct ublk_batch_io_iter *iter,
> > > > > + struct ublk_batch_io_data *data)
> > > > > +{
> > > > > + int ret;
> > > > > +
> > > > > + if (!iter->done)
> > > > > + return;
> > > > > +
> > > > > + iov_iter_revert(&iter->iter, iter->done);
> > > >
> > > > Shouldn't the iterator be reverted by the total number of bytes
> > > > copied, which may be more than iter->done?
> > >
> > > ->done is exactly the total bytes handled.
> >
> > But the number of bytes "handled" is not the same as the number of
> > bytes the iterator was advanced by, right? The copy_from_iter() is
> > responsible for advancing the iterator, but __ublk_walk_cmd_buf() may
> > break early before processing all those elements. iter->done would
> > only be set to the number of bytes processed by __ublk_walk_cmd_buf(),
> > which may be less than the bytes obtained from the iterator.
>
> Good catch, it could be handled by reverting the unhandled bytes manually
> in __ublk_walk_cmd_buf().
>
> >
> > >
> > > >
> > > > > + iter->total = iter->done;
> > > > > + iter->done = 0;
> > > > > +
> > > > > + ret = ublk_walk_cmd_buf(iter, data, ublk_batch_unprep_io);
> > > > > + WARN_ON_ONCE(ret);
> > > > > +}
> > > > > +
> > > > > +static int ublk_batch_prep_io(struct ublk_io *io,
> > > > > + const struct ublk_batch_io_data *data)
> > > > > +{
> > > > > + const struct ublk_batch_io *uc = io_uring_sqe_cmd(data->cmd->sqe);
> > > > > + union ublk_io_buf buf = { 0 };
> > > > > + int ret;
> > > > > +
> > > > > + if (ublk_support_auto_buf_reg(data->ubq))
> > > > > + buf.auto_reg = ublk_batch_auto_buf_reg(uc, data->elem);
> > > > > + else if (ublk_need_map_io(data->ubq)) {
> > > > > + buf.addr = ublk_batch_buf_addr(uc, data->elem);
> > > > > +
> > > > > + ret = ublk_check_fetch_buf(data->ubq, buf.addr);
> > > > > + if (ret)
> > > > > + return ret;
> > > > > + }
> > > > > +
> > > > > + ublk_io_lock(io);
> > > > > + ret = __ublk_fetch(data->cmd, data->ubq, io);
> > > > > + if (!ret)
> > > > > + io->buf = buf;
> > > > > + ublk_io_unlock(io);
> > > > > +
> > > > > + return ret;
> > > > > +}
> > > > > +
> > > > > +static int ublk_handle_batch_prep_cmd(struct ublk_batch_io_data *data)
> > > > > +{
> > > > > + struct io_uring_cmd *cmd = data->cmd;
> > > > > + const struct ublk_batch_io *uc = io_uring_sqe_cmd(cmd->sqe);
> > > > > + struct ublk_batch_io_iter iter = {
> > > > > + .total = uc->nr_elem * uc->elem_bytes,
> > > > > + .elem_bytes = uc->elem_bytes,
> > > > > + };
> > > > > + int ret;
> > > > > +
> > > > > + ret = io_uring_cmd_import_fixed(cmd->sqe->addr, cmd->sqe->len,
> > > >
> > > > Could iter.total be used in place of cmd->sqe->len? That way userspace
> > > > wouldn't have to specify a redundant value in the SQE len field.
> > >
> > > This follows how the buffer is used in io_uring/rw.c, but it looks like it can be dropped.
> > > The benefit was cross-verification, since the io_uring SQE user interface is complicated.
> >
> > In an IORING_OP_{READ,WRITE}{,V} operation, there aren't other fields
> > that can be used to determine the length of data that will be
> > accessed. I would rather not require userspace to pass a redundant
> > value; this makes the UAPI even more complicated.
>
> Fair enough, will drop the sqe->len use.
>
>
> Thanks,
> Ming
>
* Re: [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS
2025-10-22 8:00 ` Caleb Sander Mateos
@ 2025-10-22 10:15 ` Ming Lei
0 siblings, 0 replies; 43+ messages in thread
From: Ming Lei @ 2025-10-22 10:15 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, Uday Shankar
On Wed, Oct 22, 2025 at 01:00:53AM -0700, Caleb Sander Mateos wrote:
> On Thu, Oct 16, 2025 at 3:08 AM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > On Thu, Sep 18, 2025 at 11:12:00AM -0700, Caleb Sander Mateos wrote:
> > > On Tue, Sep 9, 2025 at 8:56 PM Ming Lei <ming.lei@redhat.com> wrote:
> > > >
> > > > On Sat, Sep 06, 2025 at 12:48:41PM -0700, Caleb Sander Mateos wrote:
> > > > > On Mon, Sep 1, 2025 at 3:03 AM Ming Lei <ming.lei@redhat.com> wrote:
> > > > > >
> > > > > > This commit implements the handling of the UBLK_U_IO_PREP_IO_CMDS command,
> > > > > > which allows userspace to prepare a batch of I/O requests.
> > > > > >
> > > > > > The core of this change is the `ublk_walk_cmd_buf` function, which iterates
> > > > > > over the elements in the uring_cmd fixed buffer. For each element, it parses
> > > > > > the I/O details, finds the corresponding `ublk_io` structure, and prepares it
> > > > > > for future dispatch.
> > > > > >
> > > > > > Add per-io lock for protecting concurrent delivery and committing.
> > > > > >
> > > > > > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > > > > > ---
> > > > > > drivers/block/ublk_drv.c | 191 +++++++++++++++++++++++++++++++++-
> > > > > > include/uapi/linux/ublk_cmd.h | 5 +
> > > > > > 2 files changed, 195 insertions(+), 1 deletion(-)
> > > > > >
> > > > > > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > > > > > index 4da0dbbd7e16..a4bae3d1562a 100644
> > > > > > --- a/drivers/block/ublk_drv.c
> > > > > > +++ b/drivers/block/ublk_drv.c
> > > > > > @@ -116,6 +116,10 @@ struct ublk_uring_cmd_pdu {
> > > > > > struct ublk_batch_io_data {
> > > > > > struct ublk_queue *ubq;
> > > > > > struct io_uring_cmd *cmd;
> > > > > > + unsigned int issue_flags;
> > > > > > +
> > > > > > + /* set when walking the element buffer */
> > > > > > + const struct ublk_elem_header *elem;
> > > > > > };
> > > > > >
> > > > > > /*
> > > > > > @@ -200,6 +204,7 @@ struct ublk_io {
> > > > > > unsigned task_registered_buffers;
> > > > > >
> > > > > > void *buf_ctx_handle;
> > > > > > + spinlock_t lock;
> > > > >
> > > > > From our experience writing a high-throughput ublk server, the
> > > > > spinlocks and mutexes in the kernel are some of the largest CPU
> > > > > hotspots. We have spent a lot of effort working to avoid locking where
> > > > > possible or shard data structures to reduce contention on the locks.
> > > > > Even uncontended locks are still very expensive to acquire and release
> > > > > on machines with many CPUs due to the cache coherency overhead. ublk's
> > > > > per-io daemon architecture is great for performance by removing the
> > > >
> > > > io_uring highly depends on batched submission and completion, but a per-io daemon
> > > > may easily break the batch, because it doesn't guarantee that one batch of I/Os
> > > > can be forwarded in a single io task/io_uring when a static tag mapping policy is
> > > > used, for example:
> > >
> > > That's a good point. We've mainly focused on optimizing the ublk
> > > server side, but it's true that distributing incoming ublk I/Os to
> > > more ublk server threads adds overhead on the submitting side. One
> > > idea we had but haven't experimented with much is for the ublk server
> > > to perform the round-robin assignment of tags within each queue to
> >
> > round-robin often hurts perf, and it isn't enabled yet.
>
> I don't mean BLK_MQ_F_TAG_RR. I thought even the default tag
> allocation scheme resulted in approximately round-robin tag
> allocation, right? __sbitmap_queue_get_batch() will attempt to
> allocate contiguous bits from the map, so a batch of queued requests
> will likely be assigned sequential tags (or a couple sequential runs
> of tags) in the queue. I guess that's only true if the queue is mostly
> empty; if many tags are in use, it will be harder to allocate
> contiguous sets of tags.
Yes, __sbitmap_queue_get_batch() may fail and fall back to single-bit
allocation, so you need to set up a big queue depth to avoid batch
allocation failure. But it is still hard to avoid with very
high I/O depth.
>
> >
> > > threads in larger chunks. For example, with a chunk size of 4, tags 0
> > > to 3 would be assigned to thread 0, tags 4 to 7 would be assigned to
> > > thread 1, etc. That would improve the batching of ublk I/Os when
> > > dispatching them from the submitting CPU to the ublk server thread.
> > > There's an inherent tradeoff where distributing tags to ublk server
> > > threads in larger chunks makes the distribution less balanced for
> > > small numbers of I/Os, but it will be balanced when averaged over
> > > large numbers of I/Os.
> >
> > > How can a fixed chunk size work generically? It depends on the workload's batch
> > > size on /dev/ublkbN, and different workloads have different batch sizes.
>
> Yes, that's a good point. It requires pretty specific knowledge of the
> workload to optimize the tag assignment to ublk server threads like
> this.
>
> >
> > >
> > > >
> > > > ```
> > > > [root@ktest-40 ublk]# ./kublk add -t null --nthreads 8 -q 4 --per_io_tasks
> > > > dev id 0: nr_hw_queues 4 queue_depth 128 block size 512 dev_capacity 524288000
> > > > max rq size 1048576 daemon pid 89975 flags 0x6042 state LIVE
> > > > queue 0: affinity(0 )
> > > > queue 1: affinity(4 )
> > > > queue 2: affinity(8 )
> > > > queue 3: affinity(12 )
> > > > [root@ktest-40 ublk]#
> > > > [root@ktest-40 ublk]# ./kublk add -t null -q 4
> > > > dev id 1: nr_hw_queues 4 queue_depth 128 block size 512 dev_capacity 524288000
> > > > max rq size 1048576 daemon pid 90002 flags 0x6042 state LIVE
> > > > queue 0: affinity(0 )
> > > > queue 1: affinity(4 )
> > > > queue 2: affinity(8 )
> > > > queue 3: affinity(12 )
> > > > [root@ktest-40 ublk]#
> > > > [root@ktest-40 ublk]# ~/git/fio/t/io_uring -p0 /dev/ublkb0
> > > > submitter=0, tid=90024, file=/dev/ublkb0, nfiles=1, node=-1
> > > > polled=0, fixedbufs=1, register_files=1, buffered=0, QD=128
> > > > Engine=io_uring, sq_ring=128, cq_ring=128
> > > > IOPS=188.54K, BW=736MiB/s, IOS/call=32/31
> > > > IOPS=187.90K, BW=734MiB/s, IOS/call=32/32
> > > > IOPS=195.39K, BW=763MiB/s, IOS/call=32/32
> > > > ^CExiting on signal
> > > > Maximum IOPS=195.39K
> > > >
> > > > [root@ktest-40 ublk]# ~/git/fio/t/io_uring -p0 /dev/ublkb1
> > > > submitter=0, tid=90026, file=/dev/ublkb1, nfiles=1, node=-1
> > > > polled=0, fixedbufs=1, register_files=1, buffered=0, QD=128
> > > > Engine=io_uring, sq_ring=128, cq_ring=128
> > > > IOPS=608.26K, BW=2.38GiB/s, IOS/call=32/31
> > > > IOPS=586.59K, BW=2.29GiB/s, IOS/call=32/31
> > > > IOPS=599.62K, BW=2.34GiB/s, IOS/call=32/32
> > > > ^CExiting on signal
> > > > Maximum IOPS=608.26K
> > > >
> > > > ```
> > > >
> > > >
> > > > > need for locks in the I/O path. I can't really see us adopting this
> > > > > ublk batching feature; adding a spin_lock() + spin_unlock() to every
> > > > > ublk commit operation is not worth the reduction in io_uring SQEs and
> > > > > uring_cmds.
> > > >
> > > > As I mentioned in the cover letter, the per-io lock can be avoided for UBLK_F_PER_IO_DAEMON
> > > > as a follow-up, since io->task is still there to help track the task context.
> > > >
> > > > I just want to avoid too many features in the enablement stage; that is also
> > > > why the spin lock is wrapped in a helper.
> > >
> > > Okay, good to know there's at least an idea for how to avoid the
> > > spinlock. Makes sense to defer it to follow-on work.
> > >
> > > >
> > > > >
> > > > > > } ____cacheline_aligned_in_smp;
> > > > > >
> > > > > > struct ublk_queue {
> > > > > > @@ -276,6 +281,16 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
> > > > > > return false;
> > > > > > }
> > > > > >
> > > > > > +static inline void ublk_io_lock(struct ublk_io *io)
> > > > > > +{
> > > > > > + spin_lock(&io->lock);
> > > > > > +}
> > > > > > +
> > > > > > +static inline void ublk_io_unlock(struct ublk_io *io)
> > > > > > +{
> > > > > > + spin_unlock(&io->lock);
> > > > > > +}
> > > > > > +
> > > > > > static inline struct ublksrv_io_desc *
> > > > > > ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
> > > > > > {
> > > > > > @@ -2538,6 +2553,171 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
> > > > > > return ublk_ch_uring_cmd_local(cmd, issue_flags);
> > > > > > }
> > > > > >
> > > > > > +static inline __u64 ublk_batch_buf_addr(const struct ublk_batch_io *uc,
> > > > > > + const struct ublk_elem_header *elem)
> > > > > > +{
> > > > > > + const void *buf = (const void *)elem;
> > > > >
> > > > > Don't need an explicit cast in order to cast to void *.
> > > >
> > > > OK.
> > > >
> > > > >
> > > > >
> > > > > > +
> > > > > > + if (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR)
> > > > > > + return *(__u64 *)(buf + sizeof(*elem));
> > > > > > + return -1;
> > > > >
> > > > > Why -1 and not 0? ublk_check_fetch_buf() is expecting a 0 buf_addr to
> > > > > indicate the lack
> > > >
> > > > Good catch, it needs to return 0.
> > > >
> > > > >
> > > > > > +}
> > > > > > +
> > > > > > +static struct ublk_auto_buf_reg
> > > > > > +ublk_batch_auto_buf_reg(const struct ublk_batch_io *uc,
> > > > > > + const struct ublk_elem_header *elem)
> > > > > > +{
> > > > > > + struct ublk_auto_buf_reg reg = {
> > > > > > + .index = elem->buf_index,
> > > > > > + .flags = (uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) ?
> > > > > > + UBLK_AUTO_BUF_REG_FALLBACK : 0,
> > > > > > + };
> > > > > > +
> > > > > > + return reg;
> > > > > > +}
> > > > > > +
> > > > > > +/* 48 can cover any type of buffer element(8, 16 and 24 bytes) */
> > > > >
> > > > > "can cover" is a bit vague. Can you be explicit that the buffer size
> > > > > needs to be a multiple of any possible buffer element size?
> > > >
> > > > I should have documented that 48 is the least common multiple (LCM) of
> > > > 8, 16 and 24.
> > > >
> > > > >
> > > > > > +#define UBLK_CMD_BATCH_TMP_BUF_SZ (48 * 10)
> > > > > > +struct ublk_batch_io_iter {
> > > > > > + /* copy to this buffer from iterator first */
> > > > > > + unsigned char buf[UBLK_CMD_BATCH_TMP_BUF_SZ];
> > > > > > + struct iov_iter iter;
> > > > > > + unsigned done, total;
> > > > > > + unsigned char elem_bytes;
> > > > > > +};
> > > > > > +
> > > > > > +static int __ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> > > > > > + struct ublk_batch_io_data *data,
> > > > > > + unsigned bytes,
> > > > > > + int (*cb)(struct ublk_io *io,
> > > > > > + const struct ublk_batch_io_data *data))
> > > > > > +{
> > > > > > + int i, ret = 0;
> > > > > > +
> > > > > > + for (i = 0; i < bytes; i += iter->elem_bytes) {
> > > > > > + const struct ublk_elem_header *elem =
> > > > > > + (const struct ublk_elem_header *)&iter->buf[i];
> > > > > > + struct ublk_io *io;
> > > > > > +
> > > > > > + if (unlikely(elem->tag >= data->ubq->q_depth)) {
> > > > > > + ret = -EINVAL;
> > > > > > + break;
> > > > > > + }
> > > > > > +
> > > > > > + io = &data->ubq->ios[elem->tag];
> > > > > > + data->elem = elem;
> > > > > > + ret = cb(io, data);
> > > > >
> > > > > Why not just pass elem as a separate argument to the callback?
> > > >
> > > > One reason is that we don't have a complete type for 'elem', since its
> > > > size is variable.
> > >
> > > I didn't mean to pass ublk_elem_header by value, still by pointer.
> > > Just that you could pass const struct ublk_elem_header *elem as an
> > > additional parameter to the callback. I think that would make the code
> > > a bit easier to follow than passing it via data->elem.
> >
> > OK.
> >
> > >
> > > >
> > > > >
> > > > > > + if (unlikely(ret))
> > > > > > + break;
> > > > > > + }
> > > > > > + iter->done += i;
> > > > > > + return ret;
> > > > > > +}
> > > > > > +
> > > > > > +static int ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
> > > > > > + struct ublk_batch_io_data *data,
> > > > > > + int (*cb)(struct ublk_io *io,
> > > > > > + const struct ublk_batch_io_data *data))
> > > > > > +{
> > > > > > + int ret = 0;
> > > > > > +
> > > > > > + while (iter->done < iter->total) {
> > > > > > + unsigned int len = min(sizeof(iter->buf), iter->total - iter->done);
> > > > > > +
> > > > > > + ret = copy_from_iter(iter->buf, len, &iter->iter);
> > > > > > + if (ret != len) {
> > > > >
> > > > > How would this be possible? The iterator comes from an io_uring
> > > > > registered buffer with at least the requested length, so the user
> > > > > addresses should have been validated when the buffer was registered.
> > > > > Should this just be a WARN_ON()?
> > > >
> > > > Yes, that is why pr_warn() is used; I recall that WARN_ON() isn't
> > > > encouraged in code paths that userspace can trigger.
> > > >
> > > > >
> > > > > > + pr_warn("ublk%d: read batch cmd buffer failed %u/%u\n",
> > > > > > + data->ubq->dev->dev_info.dev_id, ret, len);
> > > > > > + ret = -EINVAL;
> > > > > > + break;
> > > > > > + }
> > > > > > +
> > > > > > + ret = __ublk_walk_cmd_buf(iter, data, len, cb);
> > > > > > + if (ret)
> > > > > > + break;
> > > > > > + }
> > > > > > + return ret;
> > > > > > +}
> > > > > > +
> > > > > > +static int ublk_batch_unprep_io(struct ublk_io *io,
> > > > > > + const struct ublk_batch_io_data *data)
> > > > > > +{
> > > > > > + if (ublk_queue_ready(data->ubq))
> > > > > > + data->ubq->dev->nr_queues_ready--;
> > > > > > +
> > > > > > + ublk_io_lock(io);
> > > > > > + io->flags = 0;
> > > > > > + ublk_io_unlock(io);
> > > > > > + data->ubq->nr_io_ready--;
> > > > > > + return 0;
> > > > >
> > > > > This "unprep" looks very subtle and fairly complicated. Is it really
> > > > > necessary? What's wrong with leaving the I/Os that were successfully
> > > > > prepped? It also looks racy to clear io->flags after the queue is
> > > > > ready, as the io may already be in use by some I/O request.
> > > >
> > > > ublk_batch_unprep_io() is called on partial completion of UBLK_U_IO_PREP_IO_CMDS,
> > > > in which case START_DEV can't have succeeded, so there can't be any I/O in flight.
> > >
> > > Isn't it possible that the UBLK_U_IO_PREP_IO_CMDS batch contains all
> > > the I/Os not yet prepped followed by some duplicates? Then the device
> > > could be started following the successful completion of all the newly
> > > prepped I/Os, but the batch would fail on the following duplicate
> > > I/Os, causing the successfully prepped I/Os to be unprepped?
> >
> > It can be avoided easily because ub->mutex is required for UBLK_U_IO_PREP_IO_CMDS;
> > for example, ub->dev_info.state can be set to UBLK_S_DEV_DEAD in case of any failure.
>
> Are you saying that the situation I described isn't possible, or that
> it can be prevented with an additional state check?
I meant that it can be avoided easily, for example by adding a ublk_dev_ready()
check in ublk_ctrl_start_dev() after ub->mutex is acquired.
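Roughly like this (a sketch only, using the ublk_dev_ready() name from above;
the exact form would follow ublk_ctrl_start_dev()'s existing error handling):
```c
/*
 * sketch: in ublk_ctrl_start_dev(), after taking ub->mutex, refuse to
 * start the device unless every I/O has actually been prepped
 */
mutex_lock(&ub->mutex);
if (!ublk_dev_ready(ub)) {
	mutex_unlock(&ub->mutex);
	return -EINVAL;
}
```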
> I don't think the mutex alone prevents this situation. The mutex
> guards against concurrent UBLK_U_IO_PREP_IO_CMDS, but it doesn't
> prevent requests from being queued concurrently to the ublk device
> once it's ready. And __ublk_fetch() will mark the ublk device as ready
> as soon as all the tags have been fetched/prepped, when there could
> still be more commands in the UBLK_U_IO_PREP_IO_CMDS batch.
> I think to fix the issue, you'd need to wait to mark the ublk device
> ready until the end of the UBLK_U_IO_PREP_IO_CMDS batch.
Thanks,
Ming
Thread overview: 43+ messages
2025-09-01 10:02 [PATCH 00/23] ublk: add UBLK_F_BATCH_IO Ming Lei
2025-09-01 10:02 ` [PATCH 01/23] ublk: add parameter `struct io_uring_cmd *` to ublk_prep_auto_buf_reg() Ming Lei
2025-09-03 3:47 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 02/23] ublk: add `union ublk_io_buf` with improved naming Ming Lei
2025-09-03 4:01 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 03/23] ublk: refactor auto buffer register in ublk_dispatch_req() Ming Lei
2025-09-03 4:41 ` Caleb Sander Mateos
2025-09-10 2:23 ` Ming Lei
2025-09-11 18:13 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 04/23] ublk: add helper of __ublk_fetch() Ming Lei
2025-09-03 4:42 ` Caleb Sander Mateos
2025-09-10 2:30 ` Ming Lei
2025-09-01 10:02 ` [PATCH 05/23] ublk: define ublk_ch_batch_io_fops for the coming feature F_BATCH_IO Ming Lei
2025-09-06 18:47 ` Caleb Sander Mateos
2025-09-01 10:02 ` [PATCH 06/23] ublk: prepare for not tracking task context for command batch Ming Lei
2025-09-06 18:48 ` Caleb Sander Mateos
2025-09-10 2:35 ` Ming Lei
2025-09-01 10:02 ` [PATCH 07/23] ublk: add new batch command UBLK_U_IO_PREP_IO_CMDS & UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
2025-09-06 18:50 ` Caleb Sander Mateos
2025-09-10 3:05 ` Ming Lei
2025-09-01 10:02 ` [PATCH 08/23] ublk: handle UBLK_U_IO_PREP_IO_CMDS Ming Lei
2025-09-06 19:48 ` Caleb Sander Mateos
2025-09-10 3:56 ` Ming Lei
2025-09-18 18:12 ` Caleb Sander Mateos
2025-10-16 10:08 ` Ming Lei
2025-10-22 8:00 ` Caleb Sander Mateos
2025-10-22 10:15 ` Ming Lei
2025-09-01 10:02 ` [PATCH 09/23] ublk: handle UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
2025-09-02 6:19 ` kernel test robot
2025-09-01 10:02 ` [PATCH 10/23] ublk: add io events fifo structure Ming Lei
2025-09-01 10:02 ` [PATCH 11/23] ublk: add batch I/O dispatch infrastructure Ming Lei
2025-09-01 10:02 ` [PATCH 12/23] ublk: add UBLK_U_IO_FETCH_IO_CMDS for batch I/O processing Ming Lei
2025-09-01 10:02 ` [PATCH 13/23] ublk: abort requests filled in event kfifo Ming Lei
2025-09-01 10:02 ` [PATCH 14/23] ublk: add new feature UBLK_F_BATCH_IO Ming Lei
2025-09-01 10:02 ` [PATCH 15/23] ublk: document " Ming Lei
2025-09-01 10:02 ` [PATCH 16/23] selftests: ublk: replace assert() with ublk_assert() Ming Lei
2025-09-01 10:02 ` [PATCH 17/23] selftests: ublk: add ublk_io_buf_idx() for returning io buffer index Ming Lei
2025-09-01 10:02 ` [PATCH 18/23] selftests: ublk: add batch buffer management infrastructure Ming Lei
2025-09-01 10:02 ` [PATCH 19/23] selftests: ublk: handle UBLK_U_IO_PREP_IO_CMDS Ming Lei
2025-09-01 10:02 ` [PATCH 20/23] selftests: ublk: handle UBLK_U_IO_COMMIT_IO_CMDS Ming Lei
2025-09-01 10:02 ` [PATCH 21/23] selftests: ublk: handle UBLK_U_IO_FETCH_IO_CMDS Ming Lei
2025-09-01 10:02 ` [PATCH 22/23] selftests: ublk: add --batch/-b for enabling F_BATCH_IO Ming Lei
2025-09-01 10:02 ` [PATCH 23/23] selftests: ublk: support arbitrary threads/queues combination Ming Lei