* [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd
@ 2025-09-18 1:49 Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 01/17] ublk: remove ubq check in ublk_check_and_get_req() Caleb Sander Mateos
` (18 more replies)
0 siblings, 19 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
For ublk servers with many ublk queues, accessing the ublk_queue in
ublk_ch_uring_cmd_local() and the functions it calls is a frequent cache miss.
The ublk_queue is only accessed for its q_depth and flags, which are also
available on ublk_device. And ublk_device is already accessed for nr_hw_queues,
so it will already be cached. Unfortunately, the UBLK_IO_NEED_GET_DATA path
still needs to access the ublk_queue for io_cmd_buf, so it's not possible to
avoid accessing the ublk_queue there. (Allocating a single io_cmd_buf for all of
a ublk_device's I/Os could be done in the future.) At least we can optimize
UBLK_IO_FETCH_REQ, UBLK_IO_COMMIT_AND_FETCH_REQ, UBLK_IO_REGISTER_IO_BUF, and
UBLK_IO_UNREGISTER_IO_BUF.
Using only the ublk_device and not the ublk_queue in ublk_dispatch_req() is also
possible, but left for a future change.
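The change is mechanical throughout the series: wherever a helper only needs a
feature flag or the queue depth, read it from ub->dev_info instead of chasing the
per-queue pointer. A minimal sketch of the shape of the change (excerpted from the
flag helpers added in patch 04, not a complete listing):

static inline bool ublk_support_user_copy(const struct ublk_queue *ubq)
{
	/* before: needs the ublk_queue, often a cache miss on hot paths */
	return ubq->flags & UBLK_F_USER_COPY;
}

static inline bool ublk_dev_support_user_copy(const struct ublk_device *ub)
{
	/* after: the same flag lives on the already-accessed ublk_device */
	return ub->dev_info.flags & UBLK_F_USER_COPY;
}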
Caleb Sander Mateos (17):
ublk: remove ubq check in ublk_check_and_get_req()
ublk: don't pass q_id to ublk_queue_cmd_buf_size()
ublk: don't pass ublk_queue to __ublk_fail_req()
ublk: add helpers to check ublk_device flags
ublk: don't dereference ublk_queue in ublk_ch_uring_cmd_local()
ublk: don't dereference ublk_queue in ublk_check_and_get_req()
ublk: pass ublk_device to ublk_register_io_buf()
ublk: don't access ublk_queue in ublk_register_io_buf()
ublk: don't access ublk_queue in ublk_daemon_register_io_buf()
ublk: pass q_id and tag to __ublk_check_and_get_req()
ublk: don't access ublk_queue in ublk_check_fetch_buf()
ublk: don't access ublk_queue in ublk_config_io_buf()
ublk: don't pass ublk_queue to ublk_fetch()
ublk: don't access ublk_queue in ublk_check_commit_and_fetch()
ublk: don't access ublk_queue in ublk_need_complete_req()
ublk: pass ublk_io to __ublk_complete_rq()
ublk: don't access ublk_queue in ublk_unmap_io()
drivers/block/ublk_drv.c | 155 +++++++++++++++++++++++----------------
1 file changed, 93 insertions(+), 62 deletions(-)
--
2.45.2
* [PATCH 01/17] ublk: remove ubq check in ublk_check_and_get_req()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-19 3:53 ` Ming Lei
2025-09-18 1:49 ` [PATCH 02/17] ublk: don't pass q_id to ublk_queue_cmd_buf_size() Caleb Sander Mateos
` (17 subsequent siblings)
18 siblings, 1 reply; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
ublk_get_queue() never returns a NULL pointer, so there's no need to
check its return value in ublk_check_and_get_req(). Drop the check.
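For reference, ublk_get_queue() is plain pointer arithmetic into the device's queue
array, so it cannot return NULL for a valid q_id. A simplified sketch of its current
form (not an exact excerpt):

static inline struct ublk_queue *ublk_get_queue(struct ublk_device *dev, int qid)
{
	/* constant-offset indexing into the device's queue array; no failure path */
	return (struct ublk_queue *)&(dev->__queues[qid * dev->queue_size]);
}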
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index aa64f530d5e9..9f2db91af481 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2493,13 +2493,10 @@ static struct request *ublk_check_and_get_req(struct kiocb *iocb,
if (q_id >= ub->dev_info.nr_hw_queues)
return ERR_PTR(-EINVAL);
ubq = ublk_get_queue(ub, q_id);
- if (!ubq)
- return ERR_PTR(-EINVAL);
-
if (!ublk_support_user_copy(ubq))
return ERR_PTR(-EACCES);
if (tag >= ubq->q_depth)
return ERR_PTR(-EINVAL);
--
2.45.2
* [PATCH 02/17] ublk: don't pass q_id to ublk_queue_cmd_buf_size()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 01/17] ublk: remove ubq check in ublk_check_and_get_req() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-19 4:04 ` Ming Lei
2025-09-18 1:49 ` [PATCH 03/17] ublk: don't pass ublk_queue to __ublk_fail_req() Caleb Sander Mateos
` (16 subsequent siblings)
18 siblings, 1 reply; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
ublk_queue_cmd_buf_size() only needs the queue depth, which is the same
for all queues. Get the queue depth from the ublk_device instead so the
q_id parameter can be dropped.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 9f2db91af481..bac16ec3151c 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -760,15 +760,13 @@ ublk_queue_cmd_buf(struct ublk_device *ub, int q_id)
static inline int __ublk_queue_cmd_buf_size(int depth)
{
return round_up(depth * sizeof(struct ublksrv_io_desc), PAGE_SIZE);
}
-static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id)
+static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub)
{
- struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
-
- return __ublk_queue_cmd_buf_size(ubq->q_depth);
+ return __ublk_queue_cmd_buf_size(ub->dev_info.queue_depth);
}
static int ublk_max_cmd_buf_size(void)
{
return __ublk_queue_cmd_buf_size(UBLK_MAX_QUEUE_DEPTH);
@@ -1701,11 +1699,11 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
q_id = (phys_off - UBLKSRV_CMD_BUF_OFFSET) / max_sz;
pr_devel("%s: qid %d, pid %d, addr %lx pg_off %lx sz %lu\n",
__func__, q_id, current->pid, vma->vm_start,
phys_off, (unsigned long)sz);
- if (sz != ublk_queue_cmd_buf_size(ub, q_id))
+ if (sz != ublk_queue_cmd_buf_size(ub))
return -EINVAL;
pfn = virt_to_phys(ublk_queue_cmd_buf(ub, q_id)) >> PAGE_SHIFT;
return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
}
@@ -2563,11 +2561,11 @@ static const struct file_operations ublk_ch_fops = {
.mmap = ublk_ch_mmap,
};
static void ublk_deinit_queue(struct ublk_device *ub, int q_id)
{
- int size = ublk_queue_cmd_buf_size(ub, q_id);
+ int size = ublk_queue_cmd_buf_size(ub);
struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
int i;
for (i = 0; i < ubq->q_depth; i++) {
struct ublk_io *io = &ubq->ios[i];
@@ -2590,11 +2588,11 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
spin_lock_init(&ubq->cancel_lock);
ubq->flags = ub->dev_info.flags;
ubq->q_id = q_id;
ubq->q_depth = ub->dev_info.queue_depth;
- size = ublk_queue_cmd_buf_size(ub, q_id);
+ size = ublk_queue_cmd_buf_size(ub);
ptr = (void *) __get_free_pages(gfp_flags, get_order(size));
if (!ptr)
return -ENOMEM;
--
2.45.2
* [PATCH 03/17] ublk: don't pass ublk_queue to __ublk_fail_req()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 01/17] ublk: remove ubq check in ublk_check_and_get_req() Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 02/17] ublk: don't pass q_id to ublk_queue_cmd_buf_size() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-20 8:52 ` Ming Lei
2025-09-18 1:49 ` [PATCH 04/17] ublk: add helpers to check ublk_device flags Caleb Sander Mateos
` (15 subsequent siblings)
18 siblings, 1 reply; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
__ublk_fail_req() only uses the ublk_queue to get the ublk_device, which
its caller already has. So just pass the ublk_device directly.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index bac16ec3151c..4cb023d26593 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1706,16 +1706,16 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
pfn = virt_to_phys(ublk_queue_cmd_buf(ub, q_id)) >> PAGE_SHIFT;
return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
}
-static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
+static void __ublk_fail_req(struct ublk_device *ub, struct ublk_io *io,
struct request *req)
{
WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
- if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
+ if (ublk_nosrv_should_reissue_outstanding(ub))
blk_mq_requeue_request(req, false);
else {
io->res = -EIO;
__ublk_complete_rq(req);
}
@@ -1735,11 +1735,11 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
for (i = 0; i < ubq->q_depth; i++) {
struct ublk_io *io = &ubq->ios[i];
if (io->flags & UBLK_IO_FLAG_OWNED_BY_SRV)
- __ublk_fail_req(ubq, io, io->req);
+ __ublk_fail_req(ub, io, io->req);
}
}
static void ublk_start_cancel(struct ublk_device *ub)
{
--
2.45.2
* [PATCH 04/17] ublk: add helpers to check ublk_device flags
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (2 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 03/17] ublk: don't pass ublk_queue to __ublk_fail_req() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-20 8:54 ` Ming Lei
2025-09-18 1:49 ` [PATCH 05/17] ublk: don't dereference ublk_queue in ublk_ch_uring_cmd_local() Caleb Sander Mateos
` (14 subsequent siblings)
18 siblings, 1 reply; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
Introduce ublk_device analogues of the ublk_queue flag helpers:
- ublk_support_zero_copy() -> ublk_dev_support_zero_copy()
- ublk_support_auto_buf_reg() -> ublk_dev_support_auto_buf_reg()
- ublk_support_user_copy() -> ublk_dev_support_user_copy()
- ublk_need_map_io() -> ublk_dev_need_map_io()
- ublk_need_req_ref() -> ublk_dev_need_req_ref()
- ublk_need_get_data() -> ublk_dev_need_get_data()
These will be used in subsequent changes to avoid accessing the
ublk_queue just for the flags, and instead use the ublk_device.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 4cb023d26593..04b8613ce623 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -660,26 +660,48 @@ static void ublk_apply_params(struct ublk_device *ub)
static inline bool ublk_support_zero_copy(const struct ublk_queue *ubq)
{
return ubq->flags & UBLK_F_SUPPORT_ZERO_COPY;
}
+static inline bool ublk_dev_support_zero_copy(const struct ublk_device *ub)
+{
+ return ub->dev_info.flags & UBLK_F_SUPPORT_ZERO_COPY;
+}
+
static inline bool ublk_support_auto_buf_reg(const struct ublk_queue *ubq)
{
return ubq->flags & UBLK_F_AUTO_BUF_REG;
}
+static inline bool ublk_dev_support_auto_buf_reg(const struct ublk_device *ub)
+{
+ return ub->dev_info.flags & UBLK_F_AUTO_BUF_REG;
+}
+
static inline bool ublk_support_user_copy(const struct ublk_queue *ubq)
{
return ubq->flags & UBLK_F_USER_COPY;
}
+static inline bool ublk_dev_support_user_copy(const struct ublk_device *ub)
+{
+ return ub->dev_info.flags & UBLK_F_USER_COPY;
+}
+
static inline bool ublk_need_map_io(const struct ublk_queue *ubq)
{
return !ublk_support_user_copy(ubq) && !ublk_support_zero_copy(ubq) &&
!ublk_support_auto_buf_reg(ubq);
}
+static inline bool ublk_dev_need_map_io(const struct ublk_device *ub)
+{
+ return !ublk_dev_support_user_copy(ub) &&
+ !ublk_dev_support_zero_copy(ub) &&
+ !ublk_dev_support_auto_buf_reg(ub);
+}
+
static inline bool ublk_need_req_ref(const struct ublk_queue *ubq)
{
/*
* read()/write() is involved in user copy, so request reference
* has to be grabbed
@@ -693,10 +715,17 @@ static inline bool ublk_need_req_ref(const struct ublk_queue *ubq)
*/
return ublk_support_user_copy(ubq) || ublk_support_zero_copy(ubq) ||
ublk_support_auto_buf_reg(ubq);
}
+static inline bool ublk_dev_need_req_ref(const struct ublk_device *ub)
+{
+ return ublk_dev_support_user_copy(ub) ||
+ ublk_dev_support_zero_copy(ub) ||
+ ublk_dev_support_auto_buf_reg(ub);
+}
+
static inline void ublk_init_req_ref(const struct ublk_queue *ubq,
struct ublk_io *io)
{
if (ublk_need_req_ref(ubq))
refcount_set(&io->ref, UBLK_REFCOUNT_INIT);
@@ -724,10 +753,15 @@ static inline bool ublk_sub_req_ref(struct ublk_io *io)
static inline bool ublk_need_get_data(const struct ublk_queue *ubq)
{
return ubq->flags & UBLK_F_NEED_GET_DATA;
}
+static inline bool ublk_dev_need_get_data(const struct ublk_device *ub)
+{
+ return ub->dev_info.flags & UBLK_F_NEED_GET_DATA;
+}
+
/* Called in slow path only, keep it noinline for trace purpose */
static noinline struct ublk_device *ublk_get_device(struct ublk_device *ub)
{
if (kobject_get_unless_zero(&ub->cdev_dev.kobj))
return ub;
--
2.45.2
* [PATCH 05/17] ublk: don't dereference ublk_queue in ublk_ch_uring_cmd_local()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (3 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 04/17] ublk: add helpers to check ublk_device flags Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 06/17] ublk: don't dereference ublk_queue in ublk_check_and_get_req() Caleb Sander Mateos
` (13 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
For ublk servers with many ublk queues, accessing the ublk_queue to
handle a ublk command is a frequent cache miss. Get the queue depth from
the ublk_device instead, which is accessed just before.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 04b8613ce623..58f688eac742 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2331,11 +2331,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
if (q_id >= ub->dev_info.nr_hw_queues)
goto out;
ubq = ublk_get_queue(ub, q_id);
- if (tag >= ubq->q_depth)
+ if (tag >= ub->dev_info.queue_depth)
goto out;
io = &ubq->ios[tag];
/* UBLK_IO_FETCH_REQ can be handled on any task, which sets io->task */
if (unlikely(_IOC_NR(cmd_op) == UBLK_IO_FETCH_REQ)) {
--
2.45.2
* [PATCH 06/17] ublk: don't dereference ublk_queue in ublk_check_and_get_req()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (4 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 05/17] ublk: don't dereference ublk_queue in ublk_ch_uring_cmd_local() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 07/17] ublk: pass ublk_device to ublk_register_io_buf() Caleb Sander Mateos
` (12 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
For ublk servers with many ublk queues, accessing the ublk_queue in
ublk_ch_{read,write}_iter() is a frequent cache miss. Get the flags and
queue depth from the ublk_device instead, which is accessed just before.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 58f688eac742..d6d8dcb72e4b 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2525,14 +2525,14 @@ static struct request *ublk_check_and_get_req(struct kiocb *iocb,
if (q_id >= ub->dev_info.nr_hw_queues)
return ERR_PTR(-EINVAL);
ubq = ublk_get_queue(ub, q_id);
- if (!ublk_support_user_copy(ubq))
+ if (!ublk_dev_support_user_copy(ub))
return ERR_PTR(-EACCES);
- if (tag >= ubq->q_depth)
+ if (tag >= ub->dev_info.queue_depth)
return ERR_PTR(-EINVAL);
*io = &ubq->ios[tag];
req = __ublk_check_and_get_req(ub, ubq, *io, buf_off);
if (!req)
--
2.45.2
* [PATCH 07/17] ublk: pass ublk_device to ublk_register_io_buf()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (5 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 06/17] ublk: don't dereference ublk_queue in ublk_check_and_get_req() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 08/17] ublk: don't access ublk_queue in ublk_register_io_buf() Caleb Sander Mateos
` (11 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
Avoid repeating the two dereferences (cmd->file->private_data) needed to get the
ublk_device from the io_uring_cmd by passing it from ublk_ch_uring_cmd_local() to
ublk_register_io_buf().
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index d6d8dcb72e4b..cb51f3f3cd33 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2123,15 +2123,15 @@ static void ublk_io_release(void *priv)
else
ublk_put_req_ref(io, rq);
}
static int ublk_register_io_buf(struct io_uring_cmd *cmd,
+ struct ublk_device *ub,
const struct ublk_queue *ubq,
struct ublk_io *io,
unsigned int index, unsigned int issue_flags)
{
- struct ublk_device *ub = cmd->file->private_data;
struct request *req;
int ret;
if (!ublk_support_zero_copy(ubq))
return -EINVAL;
@@ -2150,10 +2150,11 @@ static int ublk_register_io_buf(struct io_uring_cmd *cmd,
return 0;
}
static int
ublk_daemon_register_io_buf(struct io_uring_cmd *cmd,
+ struct ublk_device *ub,
const struct ublk_queue *ubq, struct ublk_io *io,
unsigned index, unsigned issue_flags)
{
unsigned new_registered_buffers;
struct request *req = io->req;
@@ -2163,11 +2164,12 @@ ublk_daemon_register_io_buf(struct io_uring_cmd *cmd,
* Ensure there are still references for ublk_sub_req_ref() to release.
* If not, fall back on the thread-safe buffer registration.
*/
new_registered_buffers = io->task_registered_buffers + 1;
if (unlikely(new_registered_buffers >= UBLK_REFCOUNT_INIT))
- return ublk_register_io_buf(cmd, ubq, io, index, issue_flags);
+ return ublk_register_io_buf(cmd, ub, ubq, io, index,
+ issue_flags);
if (!ublk_support_zero_copy(ubq) || !ublk_rq_has_data(req))
return -EINVAL;
ret = io_buffer_register_bvec(cmd, req, ublk_io_release, index,
@@ -2354,11 +2356,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
/*
* ublk_register_io_buf() accesses only the io's refcount,
* so can be handled on any task
*/
if (_IOC_NR(cmd_op) == UBLK_IO_REGISTER_IO_BUF)
- return ublk_register_io_buf(cmd, ubq, io, addr,
+ return ublk_register_io_buf(cmd, ub, ubq, io, addr,
issue_flags);
goto out;
}
@@ -2376,11 +2378,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
^ (_IOC_NR(cmd_op) == UBLK_IO_NEED_GET_DATA))
goto out;
switch (_IOC_NR(cmd_op)) {
case UBLK_IO_REGISTER_IO_BUF:
- return ublk_daemon_register_io_buf(cmd, ubq, io, addr,
+ return ublk_daemon_register_io_buf(cmd, ub, ubq, io, addr,
issue_flags);
case UBLK_IO_COMMIT_AND_FETCH_REQ:
ret = ublk_check_commit_and_fetch(ubq, io, addr);
if (ret)
goto out;
--
2.45.2
* [PATCH 08/17] ublk: don't access ublk_queue in ublk_register_io_buf()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (6 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 07/17] ublk: pass ublk_device to ublk_register_io_buf() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 09/17] ublk: don't access ublk_queue in ublk_daemon_register_io_buf() Caleb Sander Mateos
` (10 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
For ublk servers with many ublk queues, accessing the ublk_queue in
ublk_register_io_buf() is a frequent cache miss. Get the flags from the
ublk_device instead, which is accessed earlier in
ublk_ch_uring_cmd_local().
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index cb51f3f3cd33..751ec62655f8 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2131,11 +2131,11 @@ static int ublk_register_io_buf(struct io_uring_cmd *cmd,
unsigned int index, unsigned int issue_flags)
{
struct request *req;
int ret;
- if (!ublk_support_zero_copy(ubq))
+ if (!ublk_dev_support_zero_copy(ub))
return -EINVAL;
req = __ublk_check_and_get_req(ub, ubq, io, 0);
if (!req)
return -EINVAL;
--
2.45.2
* [PATCH 09/17] ublk: don't access ublk_queue in ublk_daemon_register_io_buf()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (7 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 08/17] ublk: don't access ublk_queue in ublk_register_io_buf() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 10/17] ublk: pass q_id and tag to __ublk_check_and_get_req() Caleb Sander Mateos
` (9 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
For ublk servers with many ublk queues, accessing the ublk_queue in
ublk_daemon_register_io_buf() is a frequent cache miss. Get the flags
from the ublk_device instead, which is accessed earlier in
ublk_ch_uring_cmd_local().
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 751ec62655f8..266b46d40886 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2167,11 +2167,11 @@ ublk_daemon_register_io_buf(struct io_uring_cmd *cmd,
new_registered_buffers = io->task_registered_buffers + 1;
if (unlikely(new_registered_buffers >= UBLK_REFCOUNT_INIT))
return ublk_register_io_buf(cmd, ub, ubq, io, index,
issue_flags);
- if (!ublk_support_zero_copy(ubq) || !ublk_rq_has_data(req))
+ if (!ublk_dev_support_zero_copy(ub) || !ublk_rq_has_data(req))
return -EINVAL;
ret = io_buffer_register_bvec(cmd, req, ublk_io_release, index,
issue_flags);
if (ret)
--
2.45.2
* [PATCH 10/17] ublk: pass q_id and tag to __ublk_check_and_get_req()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (8 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 09/17] ublk: don't access ublk_queue in ublk_daemon_register_io_buf() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 11/17] ublk: don't access ublk_queue in ublk_check_fetch_buf() Caleb Sander Mateos
` (8 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
__ublk_check_and_get_req() only uses its ublk_queue argument to get the
q_id and tag. Pass those arguments explicitly to save an access to the
ublk_queue.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 266b46d40886..cb61f6213962 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -248,12 +248,11 @@ struct ublk_params_header {
static void ublk_io_release(void *priv);
static void ublk_stop_dev_unlocked(struct ublk_device *ub);
static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq);
static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
- const struct ublk_queue *ubq, struct ublk_io *io,
- size_t offset);
+ u16 q_id, u16 tag, struct ublk_io *io, size_t offset);
static inline unsigned int ublk_req_build_flags(struct request *req);
static inline struct ublksrv_io_desc *
ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
{
@@ -2124,21 +2123,21 @@ static void ublk_io_release(void *priv)
ublk_put_req_ref(io, rq);
}
static int ublk_register_io_buf(struct io_uring_cmd *cmd,
struct ublk_device *ub,
- const struct ublk_queue *ubq,
+ u16 q_id, u16 tag,
struct ublk_io *io,
unsigned int index, unsigned int issue_flags)
{
struct request *req;
int ret;
if (!ublk_dev_support_zero_copy(ub))
return -EINVAL;
- req = __ublk_check_and_get_req(ub, ubq, io, 0);
+ req = __ublk_check_and_get_req(ub, q_id, tag, io, 0);
if (!req)
return -EINVAL;
ret = io_buffer_register_bvec(cmd, req, ublk_io_release, index,
issue_flags);
@@ -2151,11 +2150,11 @@ static int ublk_register_io_buf(struct io_uring_cmd *cmd,
}
static int
ublk_daemon_register_io_buf(struct io_uring_cmd *cmd,
struct ublk_device *ub,
- const struct ublk_queue *ubq, struct ublk_io *io,
+ u16 q_id, u16 tag, struct ublk_io *io,
unsigned index, unsigned issue_flags)
{
unsigned new_registered_buffers;
struct request *req = io->req;
int ret;
@@ -2164,11 +2163,11 @@ ublk_daemon_register_io_buf(struct io_uring_cmd *cmd,
* Ensure there are still references for ublk_sub_req_ref() to release.
* If not, fall back on the thread-safe buffer registration.
*/
new_registered_buffers = io->task_registered_buffers + 1;
if (unlikely(new_registered_buffers >= UBLK_REFCOUNT_INIT))
- return ublk_register_io_buf(cmd, ub, ubq, io, index,
+ return ublk_register_io_buf(cmd, ub, q_id, tag, io, index,
issue_flags);
if (!ublk_dev_support_zero_copy(ub) || !ublk_rq_has_data(req))
return -EINVAL;
@@ -2356,12 +2355,12 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
/*
* ublk_register_io_buf() accesses only the io's refcount,
* so can be handled on any task
*/
if (_IOC_NR(cmd_op) == UBLK_IO_REGISTER_IO_BUF)
- return ublk_register_io_buf(cmd, ub, ubq, io, addr,
- issue_flags);
+ return ublk_register_io_buf(cmd, ub, q_id, tag, io,
+ addr, issue_flags);
goto out;
}
/* there is pending io cmd, something must be wrong */
@@ -2378,11 +2377,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
^ (_IOC_NR(cmd_op) == UBLK_IO_NEED_GET_DATA))
goto out;
switch (_IOC_NR(cmd_op)) {
case UBLK_IO_REGISTER_IO_BUF:
- return ublk_daemon_register_io_buf(cmd, ub, ubq, io, addr,
+ return ublk_daemon_register_io_buf(cmd, ub, q_id, tag, io, addr,
issue_flags);
case UBLK_IO_COMMIT_AND_FETCH_REQ:
ret = ublk_check_commit_and_fetch(ubq, io, addr);
if (ret)
goto out;
@@ -2427,20 +2426,19 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
__func__, cmd_op, tag, ret, io->flags);
return ret;
}
static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
- const struct ublk_queue *ubq, struct ublk_io *io, size_t offset)
+ u16 q_id, u16 tag, struct ublk_io *io, size_t offset)
{
- unsigned tag = io - ubq->ios;
struct request *req;
/*
* can't use io->req in case of concurrent UBLK_IO_COMMIT_AND_FETCH_REQ,
* which would overwrite it with io->cmd
*/
- req = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], tag);
+ req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag);
if (!req)
return NULL;
if (!ublk_get_req_ref(io))
return NULL;
@@ -2534,11 +2532,11 @@ static struct request *ublk_check_and_get_req(struct kiocb *iocb,
if (tag >= ub->dev_info.queue_depth)
return ERR_PTR(-EINVAL);
*io = &ubq->ios[tag];
- req = __ublk_check_and_get_req(ub, ubq, *io, buf_off);
+ req = __ublk_check_and_get_req(ub, q_id, tag, *io, buf_off);
if (!req)
return ERR_PTR(-EINVAL);
if (!req->mq_hctx || !req->mq_hctx->driver_data)
goto fail;
--
2.45.2
* [PATCH 11/17] ublk: don't access ublk_queue in ublk_check_fetch_buf()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (9 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 10/17] ublk: pass q_id and tag to __ublk_check_and_get_req() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 12/17] ublk: don't access ublk_queue in ublk_config_io_buf() Caleb Sander Mateos
` (7 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
Obtain the ublk device flags from ublk_device to avoid needing to access
the ublk_queue, which may be a cache miss.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index cb61f6213962..9c6045e6d03b 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2188,18 +2188,18 @@ static int ublk_unregister_io_buf(struct io_uring_cmd *cmd,
return -EINVAL;
return io_buffer_unregister_bvec(cmd, index, issue_flags);
}
-static int ublk_check_fetch_buf(const struct ublk_queue *ubq, __u64 buf_addr)
+static int ublk_check_fetch_buf(const struct ublk_device *ub, __u64 buf_addr)
{
- if (ublk_need_map_io(ubq)) {
+ if (ublk_dev_need_map_io(ub)) {
/*
* FETCH_RQ has to provide IO buffer if NEED GET
* DATA is not enabled
*/
- if (!buf_addr && !ublk_need_get_data(ubq))
+ if (!buf_addr && !ublk_dev_need_get_data(ub))
return -EINVAL;
} else if (buf_addr) {
/* User copy requires addr to be unset */
return -EINVAL;
}
@@ -2338,11 +2338,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
goto out;
io = &ubq->ios[tag];
/* UBLK_IO_FETCH_REQ can be handled on any task, which sets io->task */
if (unlikely(_IOC_NR(cmd_op) == UBLK_IO_FETCH_REQ)) {
- ret = ublk_check_fetch_buf(ubq, addr);
+ ret = ublk_check_fetch_buf(ub, addr);
if (ret)
goto out;
ret = ublk_fetch(cmd, ubq, io, addr);
if (ret)
goto out;
--
2.45.2
* [PATCH 12/17] ublk: don't access ublk_queue in ublk_config_io_buf()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (10 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 11/17] ublk: don't access ublk_queue in ublk_check_fetch_buf() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 13/17] ublk: don't pass ublk_queue to ublk_fetch() Caleb Sander Mateos
` (6 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
For ublk servers with many ublk queues, accessing the ublk_queue in
ublk_config_io_buf() is a frequent cache miss. Get the flags
from the ublk_device instead, which is accessed earlier in
ublk_ch_uring_cmd_local().
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 9c6045e6d03b..9535382f9f8e 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2079,15 +2079,15 @@ ublk_fill_io_cmd(struct ublk_io *io, struct io_uring_cmd *cmd)
return req;
}
static inline int
-ublk_config_io_buf(const struct ublk_queue *ubq, struct ublk_io *io,
+ublk_config_io_buf(const struct ublk_device *ub, struct ublk_io *io,
struct io_uring_cmd *cmd, unsigned long buf_addr,
u16 *buf_idx)
{
- if (ublk_support_auto_buf_reg(ubq))
+ if (ublk_dev_support_auto_buf_reg(ub))
return ublk_handle_auto_buf_reg(io, cmd, buf_idx);
io->addr = buf_addr;
return 0;
}
@@ -2231,11 +2231,11 @@ static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
}
WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV);
ublk_fill_io_cmd(io, cmd);
- ret = ublk_config_io_buf(ubq, io, cmd, buf_addr, NULL);
+ ret = ublk_config_io_buf(ub, io, cmd, buf_addr, NULL);
if (ret)
goto out;
WRITE_ONCE(io->task, get_task_struct(current));
ublk_mark_io_ready(ub);
@@ -2385,11 +2385,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
ret = ublk_check_commit_and_fetch(ubq, io, addr);
if (ret)
goto out;
io->res = result;
req = ublk_fill_io_cmd(io, cmd);
- ret = ublk_config_io_buf(ubq, io, cmd, addr, &buf_idx);
+ ret = ublk_config_io_buf(ub, io, cmd, addr, &buf_idx);
compl = ublk_need_complete_req(ubq, io);
/* can't touch 'ublk_io' any more */
if (buf_idx != UBLK_INVALID_BUF_IDX)
io_buffer_unregister_bvec(cmd, buf_idx, issue_flags);
@@ -2406,11 +2406,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
* ublk_get_data() may fail and fallback to requeue, so keep
* uring_cmd active first and prepare for handling new requeued
* request
*/
req = ublk_fill_io_cmd(io, cmd);
- ret = ublk_config_io_buf(ubq, io, cmd, addr, NULL);
+ ret = ublk_config_io_buf(ub, io, cmd, addr, NULL);
WARN_ON_ONCE(ret);
if (likely(ublk_get_data(ubq, io, req))) {
__ublk_prep_compl_io_cmd(io, req);
return UBLK_IO_RES_OK;
}
--
2.45.2
* [PATCH 13/17] ublk: don't pass ublk_queue to ublk_fetch()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (11 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 12/17] ublk: don't access ublk_queue in ublk_config_io_buf() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 14/17] ublk: don't access ublk_queue in ublk_check_commit_and_fetch() Caleb Sander Mateos
` (5 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
ublk_fetch() only uses the ublk_queue to get the ublk_device, which its
caller already has. So just pass the ublk_device directly.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 9535382f9f8e..9a726d048703 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2204,14 +2204,13 @@ static int ublk_check_fetch_buf(const struct ublk_device *ub, __u64 buf_addr)
return -EINVAL;
}
return 0;
}
-static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
+static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub,
struct ublk_io *io, __u64 buf_addr)
{
- struct ublk_device *ub = ubq->dev;
int ret = 0;
/*
* When handling FETCH command for setting up ublk uring queue,
* ub->mutex is the innermost lock, and we won't block for handling
@@ -2341,11 +2340,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
/* UBLK_IO_FETCH_REQ can be handled on any task, which sets io->task */
if (unlikely(_IOC_NR(cmd_op) == UBLK_IO_FETCH_REQ)) {
ret = ublk_check_fetch_buf(ub, addr);
if (ret)
goto out;
- ret = ublk_fetch(cmd, ubq, io, addr);
+ ret = ublk_fetch(cmd, ub, io, addr);
if (ret)
goto out;
ublk_prep_cancel(cmd, issue_flags, ubq, tag);
return -EIOCBQUEUED;
--
2.45.2
* [PATCH 14/17] ublk: don't access ublk_queue in ublk_check_commit_and_fetch()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (12 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 13/17] ublk: don't pass ublk_queue to ublk_fetch() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 15/17] ublk: don't access ublk_queue in ublk_need_complete_req() Caleb Sander Mateos
` (4 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
For ublk servers with many ublk queues, accessing the ublk_queue in
ublk_check_commit_and_fetch() is a frequent cache miss. Get the flags
from the ublk_device instead, which is accessed earlier in
ublk_ch_uring_cmd_local().
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 9a726d048703..b92b7823005d 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2241,21 +2241,21 @@ static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub,
out:
mutex_unlock(&ub->mutex);
return ret;
}
-static int ublk_check_commit_and_fetch(const struct ublk_queue *ubq,
+static int ublk_check_commit_and_fetch(const struct ublk_device *ub,
struct ublk_io *io, __u64 buf_addr)
{
struct request *req = io->req;
- if (ublk_need_map_io(ubq)) {
+ if (ublk_dev_need_map_io(ub)) {
/*
* COMMIT_AND_FETCH_REQ has to provide IO buffer if
* NEED GET DATA is not enabled or it is Read IO.
*/
- if (!buf_addr && (!ublk_need_get_data(ubq) ||
+ if (!buf_addr && (!ublk_dev_need_get_data(ub) ||
req_op(req) == REQ_OP_READ))
return -EINVAL;
} else if (req_op(req) != REQ_OP_ZONE_APPEND && buf_addr) {
/*
* User copy requires addr to be unset when command is
@@ -2379,11 +2379,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
switch (_IOC_NR(cmd_op)) {
case UBLK_IO_REGISTER_IO_BUF:
return ublk_daemon_register_io_buf(cmd, ub, q_id, tag, io, addr,
issue_flags);
case UBLK_IO_COMMIT_AND_FETCH_REQ:
- ret = ublk_check_commit_and_fetch(ubq, io, addr);
+ ret = ublk_check_commit_and_fetch(ub, io, addr);
if (ret)
goto out;
io->res = result;
req = ublk_fill_io_cmd(io, cmd);
ret = ublk_config_io_buf(ub, io, cmd, addr, &buf_idx);
--
2.45.2
* [PATCH 15/17] ublk: don't access ublk_queue in ublk_need_complete_req()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (13 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 14/17] ublk: don't access ublk_queue in ublk_check_commit_and_fetch() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 16/17] ublk: pass ublk_io to __ublk_complete_rq() Caleb Sander Mateos
` (3 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
For ublk servers with many ublk queues, accessing the ublk_queue in
ublk_need_complete_req() is a frequent cache miss. Get the flags from
the ublk_device instead, which is accessed earlier in
ublk_ch_uring_cmd_local().
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index b92b7823005d..750d0a332685 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2265,14 +2265,14 @@ static int ublk_check_commit_and_fetch(const struct ublk_device *ub,
}
return 0;
}
-static bool ublk_need_complete_req(const struct ublk_queue *ubq,
+static bool ublk_need_complete_req(const struct ublk_device *ub,
struct ublk_io *io)
{
- if (ublk_need_req_ref(ubq))
+ if (ublk_dev_need_req_ref(ub))
return ublk_sub_req_ref(io);
return true;
}
static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io,
@@ -2385,11 +2385,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
if (ret)
goto out;
io->res = result;
req = ublk_fill_io_cmd(io, cmd);
ret = ublk_config_io_buf(ub, io, cmd, addr, &buf_idx);
- compl = ublk_need_complete_req(ubq, io);
+ compl = ublk_need_complete_req(ub, io);
/* can't touch 'ublk_io' any more */
if (buf_idx != UBLK_INVALID_BUF_IDX)
io_buffer_unregister_bvec(cmd, buf_idx, issue_flags);
if (req_op(req) == REQ_OP_ZONE_APPEND)
--
2.45.2
* [PATCH 16/17] ublk: pass ublk_io to __ublk_complete_rq()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (14 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 15/17] ublk: don't access ublk_queue in ublk_need_complete_req() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-18 1:49 ` [PATCH 17/17] ublk: don't access ublk_queue in ublk_unmap_io() Caleb Sander Mateos
` (2 subsequent siblings)
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
All callers of __ublk_complete_rq() already know the ublk_io. Pass it in
to avoid looking it up again.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 750d0a332685..a677eca1ee86 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -527,11 +527,11 @@ static blk_status_t ublk_setup_iod_zoned(struct ublk_queue *ubq,
return BLK_STS_NOTSUPP;
}
#endif
-static inline void __ublk_complete_rq(struct request *req);
+static inline void __ublk_complete_rq(struct request *req, struct ublk_io *io);
static dev_t ublk_chr_devt;
static const struct class ublk_chr_class = {
.name = "ublk-char",
};
@@ -736,11 +736,11 @@ static inline bool ublk_get_req_ref(struct ublk_io *io)
}
static inline void ublk_put_req_ref(struct ublk_io *io, struct request *req)
{
if (refcount_dec_and_test(&io->ref))
- __ublk_complete_rq(req);
+ __ublk_complete_rq(req, io);
}
static inline bool ublk_sub_req_ref(struct ublk_io *io)
{
unsigned sub_refs = UBLK_REFCOUNT_INIT - io->task_registered_buffers;
@@ -1144,14 +1144,13 @@ static inline struct ublk_uring_cmd_pdu *ublk_get_uring_cmd_pdu(
{
return io_uring_cmd_to_pdu(ioucmd, struct ublk_uring_cmd_pdu);
}
/* todo: handle partial completion */
-static inline void __ublk_complete_rq(struct request *req)
+static inline void __ublk_complete_rq(struct request *req, struct ublk_io *io)
{
struct ublk_queue *ubq = req->mq_hctx->driver_data;
- struct ublk_io *io = &ubq->ios[req->tag];
unsigned int unmapped_bytes;
blk_status_t res = BLK_STS_OK;
/* failed read IO if nothing is read */
if (!io->res && req_op(req) == REQ_OP_READ)
@@ -1748,11 +1747,11 @@ static void __ublk_fail_req(struct ublk_device *ub, struct ublk_io *io,
if (ublk_nosrv_should_reissue_outstanding(ub))
blk_mq_requeue_request(req, false);
else {
io->res = -EIO;
- __ublk_complete_rq(req);
+ __ublk_complete_rq(req, io);
}
}
/*
* Called from ublk char device release handler, when any uring_cmd is
@@ -2393,11 +2392,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
if (buf_idx != UBLK_INVALID_BUF_IDX)
io_buffer_unregister_bvec(cmd, buf_idx, issue_flags);
if (req_op(req) == REQ_OP_ZONE_APPEND)
req->__sector = addr;
if (compl)
- __ublk_complete_rq(req);
+ __ublk_complete_rq(req, io);
if (ret)
goto out;
break;
case UBLK_IO_NEED_GET_DATA:
--
2.45.2
* [PATCH 17/17] ublk: don't access ublk_queue in ublk_unmap_io()
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (15 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 16/17] ublk: pass ublk_io to __ublk_complete_rq() Caleb Sander Mateos
@ 2025-09-18 1:49 ` Caleb Sander Mateos
2025-09-20 9:32 ` [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Ming Lei
2025-09-20 12:39 ` Jens Axboe
18 siblings, 0 replies; 24+ messages in thread
From: Caleb Sander Mateos @ 2025-09-18 1:49 UTC (permalink / raw)
To: Ming Lei, Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
For ublk servers with many ublk queues, accessing the ublk_queue in
ublk_unmap_io() is a frequent cache miss. Pass to __ublk_complete_rq()
whether the ublk server's data buffer needs to be copied to the request.
In the callers __ublk_fail_req() and ublk_ch_uring_cmd_local(), get the
flags from the ublk_device instead, as its flags have just been read.
In ublk_put_req_ref(), pass false since all the features that require
reference counting disable copying of the data buffer upon completion.
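Why passing false is safe follows from the helpers in patch 04: ublk_need_map_io()
is true only when none of user copy, zero copy, or auto buffer registration is
enabled, while ublk_need_req_ref() is true when any of them is, so the two are exact
negations. A sketch restating that relationship with the device-level helpers,
purely for illustration (the driver defines them independently):

static inline bool ublk_dev_need_req_ref_sketch(const struct ublk_device *ub)
{
	/*
	 * need_map_io  == !user_copy && !zero_copy && !auto_buf_reg
	 * need_req_ref ==  user_copy ||  zero_copy ||  auto_buf_reg
	 * By De Morgan's law, need_req_ref == !need_map_io, so a
	 * reference-counted io never needs its buffer copied back.
	 */
	return !ublk_dev_need_map_io(ub);
}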
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
drivers/block/ublk_drv.c | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index a677eca1ee86..5ab7ff5f03f4 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -527,11 +527,12 @@ static blk_status_t ublk_setup_iod_zoned(struct ublk_queue *ubq,
return BLK_STS_NOTSUPP;
}
#endif
-static inline void __ublk_complete_rq(struct request *req, struct ublk_io *io);
+static inline void __ublk_complete_rq(struct request *req, struct ublk_io *io,
+ bool need_map);
static dev_t ublk_chr_devt;
static const struct class ublk_chr_class = {
.name = "ublk-char",
};
@@ -735,12 +736,15 @@ static inline bool ublk_get_req_ref(struct ublk_io *io)
return refcount_inc_not_zero(&io->ref);
}
static inline void ublk_put_req_ref(struct ublk_io *io, struct request *req)
{
- if (refcount_dec_and_test(&io->ref))
- __ublk_complete_rq(req, io);
+ if (!refcount_dec_and_test(&io->ref))
+ return;
+
+ /* ublk_need_map_io() and ublk_need_req_ref() are mutually exclusive */
+ __ublk_complete_rq(req, io, false);
}
static inline bool ublk_sub_req_ref(struct ublk_io *io)
{
unsigned sub_refs = UBLK_REFCOUNT_INIT - io->task_registered_buffers;
@@ -1046,17 +1050,17 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
return ublk_copy_user_pages(req, 0, &iter, dir);
}
return rq_bytes;
}
-static int ublk_unmap_io(const struct ublk_queue *ubq,
+static int ublk_unmap_io(bool need_map,
const struct request *req,
const struct ublk_io *io)
{
const unsigned int rq_bytes = blk_rq_bytes(req);
- if (!ublk_need_map_io(ubq))
+ if (!need_map)
return rq_bytes;
if (ublk_need_unmap_req(req)) {
struct iov_iter iter;
const int dir = ITER_SOURCE;
@@ -1144,13 +1148,13 @@ static inline struct ublk_uring_cmd_pdu *ublk_get_uring_cmd_pdu(
{
return io_uring_cmd_to_pdu(ioucmd, struct ublk_uring_cmd_pdu);
}
/* todo: handle partial completion */
-static inline void __ublk_complete_rq(struct request *req, struct ublk_io *io)
+static inline void __ublk_complete_rq(struct request *req, struct ublk_io *io,
+ bool need_map)
{
- struct ublk_queue *ubq = req->mq_hctx->driver_data;
unsigned int unmapped_bytes;
blk_status_t res = BLK_STS_OK;
/* failed read IO if nothing is read */
if (!io->res && req_op(req) == REQ_OP_READ)
@@ -1170,11 +1174,11 @@ static inline void __ublk_complete_rq(struct request *req, struct ublk_io *io)
if (req_op(req) != REQ_OP_READ && req_op(req) != REQ_OP_WRITE &&
req_op(req) != REQ_OP_DRV_IN)
goto exit;
/* for READ request, writing data in iod->addr to rq buffers */
- unmapped_bytes = ublk_unmap_io(ubq, req, io);
+ unmapped_bytes = ublk_unmap_io(need_map, req, io);
/*
* Extremely impossible since we got data filled in just before
*
* Re-read simply for this unlikely case.
@@ -1747,11 +1751,11 @@ static void __ublk_fail_req(struct ublk_device *ub, struct ublk_io *io,
if (ublk_nosrv_should_reissue_outstanding(ub))
blk_mq_requeue_request(req, false);
else {
io->res = -EIO;
- __ublk_complete_rq(req, io);
+ __ublk_complete_rq(req, io, ublk_dev_need_map_io(ub));
}
}
/*
* Called from ublk char device release handler, when any uring_cmd is
@@ -2392,11 +2396,11 @@ static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
if (buf_idx != UBLK_INVALID_BUF_IDX)
io_buffer_unregister_bvec(cmd, buf_idx, issue_flags);
if (req_op(req) == REQ_OP_ZONE_APPEND)
req->__sector = addr;
if (compl)
- __ublk_complete_rq(req, io);
+ __ublk_complete_rq(req, io, ublk_dev_need_map_io(ub));
if (ret)
goto out;
break;
case UBLK_IO_NEED_GET_DATA:
--
2.45.2
* Re: [PATCH 01/17] ublk: remove ubq check in ublk_check_and_get_req()
2025-09-18 1:49 ` [PATCH 01/17] ublk: remove ubq check in ublk_check_and_get_req() Caleb Sander Mateos
@ 2025-09-19 3:53 ` Ming Lei
0 siblings, 0 replies; 24+ messages in thread
From: Ming Lei @ 2025-09-19 3:53 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, linux-kernel
On Wed, Sep 17, 2025 at 07:49:37PM -0600, Caleb Sander Mateos wrote:
> ublk_get_queue() never returns a NULL pointer, so there's no need to
> check its return value in ublk_check_and_get_req(). Drop the check.
>
> Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Thanks,
Ming
* Re: [PATCH 02/17] ublk: don't pass q_id to ublk_queue_cmd_buf_size()
2025-09-18 1:49 ` [PATCH 02/17] ublk: don't pass q_id to ublk_queue_cmd_buf_size() Caleb Sander Mateos
@ 2025-09-19 4:04 ` Ming Lei
0 siblings, 0 replies; 24+ messages in thread
From: Ming Lei @ 2025-09-19 4:04 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, linux-kernel
On Wed, Sep 17, 2025 at 07:49:38PM -0600, Caleb Sander Mateos wrote:
> ublk_queue_cmd_buf_size() only needs the queue depth, which is the same
> for all queues. Get the queue depth from the ublk_device instead so the
> q_id parameter can be dropped.
>
> Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Thanks,
Ming
* Re: [PATCH 03/17] ublk: don't pass ublk_queue to __ublk_fail_req()
2025-09-18 1:49 ` [PATCH 03/17] ublk: don't pass ublk_queue to __ublk_fail_req() Caleb Sander Mateos
@ 2025-09-20 8:52 ` Ming Lei
0 siblings, 0 replies; 24+ messages in thread
From: Ming Lei @ 2025-09-20 8:52 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, linux-kernel
On Wed, Sep 17, 2025 at 07:49:39PM -0600, Caleb Sander Mateos wrote:
> __ublk_fail_req() only uses the ublk_queue to get the ublk_device, which
> its caller already has. So just pass the ublk_device directly.
>
> Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Thanks,
Ming
* Re: [PATCH 04/17] ublk: add helpers to check ublk_device flags
2025-09-18 1:49 ` [PATCH 04/17] ublk: add helpers to check ublk_device flags Caleb Sander Mateos
@ 2025-09-20 8:54 ` Ming Lei
0 siblings, 0 replies; 24+ messages in thread
From: Ming Lei @ 2025-09-20 8:54 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, linux-kernel
On Wed, Sep 17, 2025 at 07:49:40PM -0600, Caleb Sander Mateos wrote:
> Introduce ublk_device analogues of the ublk_queue flag helpers:
> - ublk_support_zero_copy() -> ublk_dev_support_zero_copy()
> - ublk_support_auto_buf_reg() -> ublk_dev_support_auto_buf_reg()
> - ublk_support_user_copy() -> ublk_dev_support_user_copy()
> - ublk_need_map_io() -> ublk_dev_need_map_io()
> - ublk_need_req_ref() -> ublk_dev_need_req_ref()
> - ublk_need_get_data() -> ublk_dev_need_get_data()
>
> These will be used in subsequent changes to avoid accessing the
> ublk_queue just for the flags, and instead use the ublk_device.
>
> Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Thanks,
Ming
* Re: [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (16 preceding siblings ...)
2025-09-18 1:49 ` [PATCH 17/17] ublk: don't access ublk_queue in ublk_unmap_io() Caleb Sander Mateos
@ 2025-09-20 9:32 ` Ming Lei
2025-09-20 12:39 ` Jens Axboe
18 siblings, 0 replies; 24+ messages in thread
From: Ming Lei @ 2025-09-20 9:32 UTC (permalink / raw)
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, linux-kernel
On Wed, Sep 17, 2025 at 07:49:36PM -0600, Caleb Sander Mateos wrote:
> For ublk servers with many ublk queues, accessing the ublk_queue in
> ublk_ch_uring_cmd_local() and the functions it calls is a frequent cache miss.
> The ublk_queue is only accessed for its q_depth and flags, which are also
> available on ublk_device. And ublk_device is already accessed for nr_hw_queues,
> so it will already be cached. Unfortunately, the UBLK_IO_NEED_GET_DATA path
> still needs to access the ublk_queue for io_cmd_buf, so it's not possible to
> avoid accessing the ublk_queue there. (Allocating a single io_cmd_buf for all of
> a ublk_device's I/Os could be done in the future.) At least we can optimize
> UBLK_IO_FETCH_REQ, UBLK_IO_COMMIT_AND_FETCH_REQ, UBLK_IO_REGISTER_IO_BUF, and
> UBLK_IO_UNREGISTER_IO_BUF.
> Using only the ublk_device and not the ublk_queue in ublk_dispatch_req() is also
> possible, but left for a future change.
The idea looks good: avoid reading the ublk_queue, since querying the ublk_device
is unavoidable and sufficient.
For the series,
Reviewed-by: Ming Lei <ming.lei@redhat.com>
BTW, 'const struct ublk_device *' could be passed to several of these helpers;
that can be a follow-up.
Thanks,
Ming
* Re: [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd
2025-09-18 1:49 [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Caleb Sander Mateos
` (17 preceding siblings ...)
2025-09-20 9:32 ` [PATCH 00/17] ublk: avoid accessing ublk_queue to handle ublksrv_io_cmd Ming Lei
@ 2025-09-20 12:39 ` Jens Axboe
18 siblings, 0 replies; 24+ messages in thread
From: Jens Axboe @ 2025-09-20 12:39 UTC (permalink / raw)
To: Ming Lei, Caleb Sander Mateos; +Cc: linux-block, linux-kernel
On Wed, 17 Sep 2025 19:49:36 -0600, Caleb Sander Mateos wrote:
> For ublk servers with many ublk queues, accessing the ublk_queue in
> ublk_ch_uring_cmd_local() and the functions it calls is a frequent cache miss.
> The ublk_queue is only accessed for its q_depth and flags, which are also
> available on ublk_device. And ublk_device is already accessed for nr_hw_queues,
> so it will already be cached. Unfortunately, the UBLK_IO_NEED_GET_DATA path
> still needs to access the ublk_queue for io_cmd_buf, so it's not possible to
> avoid accessing the ublk_queue there. (Allocating a single io_cmd_buf for all of
> a ublk_device's I/Os could be done in the future.) At least we can optimize
> UBLK_IO_FETCH_REQ, UBLK_IO_COMMIT_AND_FETCH_REQ, UBLK_IO_REGISTER_IO_BUF, and
> UBLK_IO_UNREGISTER_IO_BUF.
> Using only the ublk_device and not the ublk_queue in ublk_dispatch_req() is also
> possible, but left for a future change.
>
> [...]
Applied, thanks!
[01/17] ublk: remove ubq check in ublk_check_and_get_req()
commit: 163f80dabf4f0c4d9e6c39e0dba474814dac78f8
[02/17] ublk: don't pass q_id to ublk_queue_cmd_buf_size()
commit: b7e255b0340b5319fca4fe076119d0f929a24305
[03/17] ublk: don't pass ublk_queue to __ublk_fail_req()
commit: 0265595002b989db8e0c32dc33624fa61a974b20
[04/17] ublk: add helpers to check ublk_device flags
commit: d74a383ec70de33ae6577af889556747d6693269
[05/17] ublk: don't dereference ublk_queue in ublk_ch_uring_cmd_local()
commit: 5125535f90564117506d926d0de92c4c2622b720
[06/17] ublk: don't dereference ublk_queue in ublk_check_and_get_req()
commit: b40dcdf8235d536072b9f61eb6d291f0f3720768
[07/17] ublk: pass ublk_device to ublk_register_io_buf()
commit: 8a81926e45670c6d9b6e73e0482485d5c9a627e6
[08/17] ublk: don't access ublk_queue in ublk_register_io_buf()
commit: 692cf47e1af39f86f28069db5ca6b00a7d2daddc
[09/17] ublk: don't access ublk_queue in ublk_daemon_register_io_buf()
commit: ce88e3ef33d35c740d26342be5d8f65972fd5597
[10/17] ublk: pass q_id and tag to __ublk_check_and_get_req()
commit: 25c028aa791503fe0876c20bfd67b2676e6e24d0
[11/17] ublk: don't access ublk_queue in ublk_check_fetch_buf()
commit: a689efd5fde7b39cfbcf43267bccf0e56295cc16
[12/17] ublk: don't access ublk_queue in ublk_config_io_buf()
commit: 23c014448e97d4b59c54816f545ab963bf8dd644
[13/17] ublk: don't pass ublk_queue to ublk_fetch()
commit: 3576e60a33c7f6be024b80f8c87312032fd27892
[14/17] ublk: don't access ublk_queue in ublk_check_commit_and_fetch()
commit: be7962d7e3d9dd9ff5b6bcd3faccb3b0f76a9734
[15/17] ublk: don't access ublk_queue in ublk_need_complete_req()
commit: 122f6387e845dfbfcf1ed795734a1ec779e987f0
[16/17] ublk: pass ublk_io to __ublk_complete_rq()
commit: 97a02be6303646e19e9092042928b0e13543305c
[17/17] ublk: don't access ublk_queue in ublk_unmap_io()
commit: 755a18469ca4eca3b5bb4f52704b7708a9106db9
Best regards,
--
Jens Axboe