* [PATCH v3 0/5] blk-mq-sched: support request batch dispatching for sq elevator
@ 2025-08-06 8:57 Yu Kuai
2025-08-06 8:57 ` [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock Yu Kuai
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Yu Kuai @ 2025-08-06 8:57 UTC (permalink / raw)
To: dlemoal, hare, jack, bvanassche, tj, josef, axboe, yukuai3
Cc: cgroups, linux-block, linux-kernel, yukuai1, yi.zhang, yangerkun,
johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
Changes from v2:
- add elevator lock/unlock macros in patch 1;
- improve coding style and commit messages;
- retest with a new environment;
- add test for scsi HDD and nvme;
Changes from v1:
- the ioc changes are sent separately;
- change the patch 1-3 order as suggested by Damien;
Currently, both mq-deadline and bfq have a global spin lock that is
grabbed inside elevator methods like dispatch_request, insert_requests,
and bio_merge. This global lock is the main reason mq-deadline and
bfq can't scale very well.
For the dispatch_request method, the current behavior is to dispatch one
request at a time. In the case of multiple dispatching contexts, this
behavior, on the one hand, introduces intense lock contention:
t1:                  t2:                  t3:
lock                 lock                 lock
// grab lock
ops.dispatch_request
unlock
                     // grab lock
                     ops.dispatch_request
                     unlock
                                          // grab lock
                                          ops.dispatch_request
                                          unlock
on the other hand, it messes up the request dispatching order:
t1:
lock
rq1 = ops.dispatch_request
unlock
                            t2:
                            lock
                            rq2 = ops.dispatch_request
                            unlock
lock
rq3 = ops.dispatch_request
unlock
                            lock
                            rq4 = ops.dispatch_request
                            unlock
// rq1, rq3 issue to disk
                            // rq2, rq4 issue to disk
In this case, the elevator dispatch order is rq 1-2-3-4; however, the order
seen by the disk is rq 1-3-2-4, i.e. the order of rq2 and rq3 is inverted.
While dispatching a request, blk_mq_get_dispatch_budget() and
blk_mq_get_driver_tag() must be called, and they are not ready to be
called inside elevator methods; hence introducing a new method like
dispatch_requests is not possible.
In conclusion, this set factors the global lock out of the dispatch_request
method, and supports request batch dispatching by calling the method
multiple times while holding the lock.
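As a rough sketch (simplified from patch 5 in this set; budget handling and
error paths trimmed), the batched path looks like:

	elevator_lock_irq(e);
	for (i = 0; i < count; i++) {
		rq = e->type->ops.dispatch_request(hctx);
		if (!rq)
			break;
		list_add_tail(&rq->queuelist, &rq_list);
		if (!blk_mq_get_driver_tag(rq))
			break;
	}
	elevator_unlock_irq(e);
	/* rq_list is then issued via blk_mq_dispatch_rq_list() */

This way the requests collected under one lock acquisition leave the
elevator in order and are issued together.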
Test Environment:
arm64 Kunpeng-920, with 4 nodes 128 cores
nvme: HWE52P431T6M005N
scsi HDD: MG04ACA600E attached to hisi_sas_v3
null_blk set up:
modprobe null_blk nr_devices=0 &&
udevadm settle &&
cd /sys/kernel/config/nullb &&
mkdir nullb0 &&
cd nullb0 &&
echo 0 > completion_nsec &&
echo 512 > blocksize &&
echo 0 > home_node &&
echo 0 > irqmode &&
echo 128 > submit_queues &&
echo 1024 > hw_queue_depth &&
echo 1024 > size &&
echo 0 > memory_backed &&
echo 2 > queue_mode &&
echo 1 > power ||
exit $?
null_blk and nvme test script:
[global]
filename=/dev/{nullb0,nvme0n1}
rw=randwrite
bs=4k
iodepth=32
iodepth_batch_submit=8
iodepth_batch_complete=8
direct=1
ioengine=io_uring
time_based
[write]
numjobs=16
runtime=60
scsi HDD test script: note that this test aims to check whether batch
dispatching affects IO merging.
[global]
filename=/dev/sda
rw=write
bs=4k
iodepth=32
iodepth_batch_submit=1
direct=1
ioengine=libaio
[write]
offset_increment=1g
numjobs=128
Test Result:
1) null_blk: iops test with high IO pressure
| | deadline | bfq |
| --------------- | -------- | -------- |
| before this set | 256k | 153k |
| after this set | 594k | 283k |
2) nvme: iops test with high IO pressure
| | deadline | bfq |
| --------------- | -------- | -------- |
| before this set | 258k | 142k |
| after this set | 568k | 214k |
3) scsi HDD: io merge test, elevator is deadline
| | w/s | %wrqm | wareq-sz | aqu-sz |
| --------------- | ----- | ----- | -------- | ------ |
| before this set | 92.25 | 96.88 | 128 | 129 |
| after this set | 92.63 | 96.88 | 128 | 129 |
Yu Kuai (5):
blk-mq-sched: introduce high level elevator lock
mq-deadline: switch to use elevator lock
block, bfq: switch to use elevator lock
blk-mq-sched: refactor __blk_mq_do_dispatch_sched()
blk-mq-sched: support request batch dispatching for sq elevator
block/bfq-cgroup.c | 6 +-
block/bfq-iosched.c | 53 +++++-----
block/bfq-iosched.h | 2 -
block/blk-mq-sched.c | 246 ++++++++++++++++++++++++++++++-------------
block/blk-mq.h | 21 ++++
block/elevator.c | 1 +
block/elevator.h | 14 ++-
block/mq-deadline.c | 60 +++++------
8 files changed, 263 insertions(+), 140 deletions(-)
--
2.39.2
* [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock
2025-08-06 8:57 [PATCH v3 0/5] blk-mq-sched: support request batch dispatching for sq elevator Yu Kuai
@ 2025-08-06 8:57 ` Yu Kuai
2025-08-11 0:44 ` Damien Le Moal
2025-08-06 8:57 ` [PATCH v3 2/5] mq-deadline: switch to use " Yu Kuai
` (3 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Yu Kuai @ 2025-08-06 8:57 UTC (permalink / raw)
To: dlemoal, hare, jack, bvanassche, tj, josef, axboe, yukuai3
Cc: cgroups, linux-block, linux-kernel, yukuai1, yi.zhang, yangerkun,
johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
Currently, both mq-deadline and bfq have a global spin lock that is
grabbed inside elevator methods like dispatch_request, insert_requests,
and bio_merge. This global lock is the main reason mq-deadline and
bfq can't scale very well.
While dispatching a request, blk_mq_get_dispatch_budget() and
blk_mq_get_driver_tag() must be called, and they are not ready to be called
inside elevator methods; hence introducing a new method like
dispatch_requests is not possible.
Hence introduce a new high-level elevator lock; currently it protects
dispatch_request only. Following patches will convert mq-deadline and bfq
to use this lock and finally support request batch dispatching by calling
the method multiple times while holding the lock.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
block/blk-mq-sched.c | 9 ++++++++-
block/elevator.c | 1 +
block/elevator.h | 14 ++++++++++++--
3 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 55a0fd105147..1a2da5edbe13 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -113,7 +113,14 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
if (budget_token < 0)
break;
- rq = e->type->ops.dispatch_request(hctx);
+ if (blk_queue_sq_sched(q)) {
+ elevator_lock(e);
+ rq = e->type->ops.dispatch_request(hctx);
+ elevator_unlock(e);
+ } else {
+ rq = e->type->ops.dispatch_request(hctx);
+ }
+
if (!rq) {
blk_mq_put_dispatch_budget(q, budget_token);
/*
diff --git a/block/elevator.c b/block/elevator.c
index 88f8f36bed98..45303af0ca73 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -144,6 +144,7 @@ struct elevator_queue *elevator_alloc(struct request_queue *q,
eq->type = e;
kobject_init(&eq->kobj, &elv_ktype);
mutex_init(&eq->sysfs_lock);
+ spin_lock_init(&eq->lock);
hash_init(eq->hash);
return eq;
diff --git a/block/elevator.h b/block/elevator.h
index a07ce773a38f..81f7700b0339 100644
--- a/block/elevator.h
+++ b/block/elevator.h
@@ -110,12 +110,12 @@ struct request *elv_rqhash_find(struct request_queue *q, sector_t offset);
/*
* each queue has an elevator_queue associated with it
*/
-struct elevator_queue
-{
+struct elevator_queue {
struct elevator_type *type;
void *elevator_data;
struct kobject kobj;
struct mutex sysfs_lock;
+ spinlock_t lock;
unsigned long flags;
DECLARE_HASHTABLE(hash, ELV_HASH_BITS);
};
@@ -124,6 +124,16 @@ struct elevator_queue
#define ELEVATOR_FLAG_DYING 1
#define ELEVATOR_FLAG_ENABLE_WBT_ON_EXIT 2
+#define elevator_lock(e) spin_lock(&(e)->lock)
+#define elevator_unlock(e) spin_unlock(&(e)->lock)
+#define elevator_lock_irq(e) spin_lock_irq(&(e)->lock)
+#define elevator_unlock_irq(e) spin_unlock_irq(&(e)->lock)
+#define elevator_lock_irqsave(e, flags) \
+ spin_lock_irqsave(&(e)->lock, flags)
+#define elevator_unlock_irqrestore(e, flags) \
+ spin_unlock_irqrestore(&(e)->lock, flags)
+#define elevator_lock_assert_held(e) lockdep_assert_held(&(e)->lock)
+
/*
* block elevator interface
*/
--
2.39.2
* [PATCH v3 2/5] mq-deadline: switch to use elevator lock
2025-08-06 8:57 [PATCH v3 0/5] blk-mq-sched: support request batch dispatching for sq elevator Yu Kuai
2025-08-06 8:57 ` [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock Yu Kuai
@ 2025-08-06 8:57 ` Yu Kuai
2025-08-06 8:57 ` [PATCH v3 3/5] block, bfq: " Yu Kuai
` (2 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Yu Kuai @ 2025-08-06 8:57 UTC (permalink / raw)
To: dlemoal, hare, jack, bvanassche, tj, josef, axboe, yukuai3
Cc: cgroups, linux-block, linux-kernel, yukuai1, yi.zhang, yangerkun,
johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
Replace the internal spinlock 'dd->lock' with the new spinlock in
elevator_queue; things will keep working the same way as before.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
block/mq-deadline.c | 60 ++++++++++++++++++++-------------------------
1 file changed, 27 insertions(+), 33 deletions(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 2edf1cac06d5..33a839671378 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -101,7 +101,7 @@ struct deadline_data {
u32 async_depth;
int prio_aging_expire;
- spinlock_t lock;
+ struct elevator_queue *e;
};
/* Maps an I/O priority class to a deadline scheduler priority. */
@@ -213,7 +213,7 @@ static void dd_merged_requests(struct request_queue *q, struct request *req,
const u8 ioprio_class = dd_rq_ioclass(next);
const enum dd_prio prio = ioprio_class_to_prio[ioprio_class];
- lockdep_assert_held(&dd->lock);
+ elevator_lock_assert_held(dd->e);
dd->per_prio[prio].stats.merged++;
@@ -253,7 +253,7 @@ static u32 dd_queued(struct deadline_data *dd, enum dd_prio prio)
{
const struct io_stats_per_prio *stats = &dd->per_prio[prio].stats;
- lockdep_assert_held(&dd->lock);
+ elevator_lock_assert_held(dd->e);
return stats->inserted - atomic_read(&stats->completed);
}
@@ -323,7 +323,7 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
enum dd_prio prio;
u8 ioprio_class;
- lockdep_assert_held(&dd->lock);
+ elevator_lock_assert_held(dd->e);
if (!list_empty(&per_prio->dispatch)) {
rq = list_first_entry(&per_prio->dispatch, struct request,
@@ -434,7 +434,7 @@ static struct request *dd_dispatch_prio_aged_requests(struct deadline_data *dd,
enum dd_prio prio;
int prio_cnt;
- lockdep_assert_held(&dd->lock);
+ elevator_lock_assert_held(dd->e);
prio_cnt = !!dd_queued(dd, DD_RT_PRIO) + !!dd_queued(dd, DD_BE_PRIO) +
!!dd_queued(dd, DD_IDLE_PRIO);
@@ -466,10 +466,9 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
struct request *rq;
enum dd_prio prio;
- spin_lock(&dd->lock);
rq = dd_dispatch_prio_aged_requests(dd, now);
if (rq)
- goto unlock;
+ return rq;
/*
* Next, dispatch requests in priority order. Ignore lower priority
@@ -481,9 +480,6 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
break;
}
-unlock:
- spin_unlock(&dd->lock);
-
return rq;
}
@@ -552,9 +548,9 @@ static void dd_exit_sched(struct elevator_queue *e)
WARN_ON_ONCE(!list_empty(&per_prio->fifo_list[DD_READ]));
WARN_ON_ONCE(!list_empty(&per_prio->fifo_list[DD_WRITE]));
- spin_lock(&dd->lock);
+ elevator_lock(e);
queued = dd_queued(dd, prio);
- spin_unlock(&dd->lock);
+ elevator_unlock(e);
WARN_ONCE(queued != 0,
"statistics for priority %d: i %u m %u d %u c %u\n",
@@ -601,7 +597,7 @@ static int dd_init_sched(struct request_queue *q, struct elevator_type *e)
dd->last_dir = DD_WRITE;
dd->fifo_batch = fifo_batch;
dd->prio_aging_expire = prio_aging_expire;
- spin_lock_init(&dd->lock);
+ dd->e = eq;
/* We dispatch from request queue wide instead of hw queue */
blk_queue_flag_set(QUEUE_FLAG_SQ_SCHED, q);
@@ -653,13 +649,12 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
unsigned int nr_segs)
{
- struct deadline_data *dd = q->elevator->elevator_data;
struct request *free = NULL;
bool ret;
- spin_lock(&dd->lock);
+ elevator_lock(q->elevator);
ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
- spin_unlock(&dd->lock);
+ elevator_unlock(q->elevator);
if (free)
blk_mq_free_request(free);
@@ -681,7 +676,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
struct dd_per_prio *per_prio;
enum dd_prio prio;
- lockdep_assert_held(&dd->lock);
+ elevator_lock_assert_held(dd->e);
prio = ioprio_class_to_prio[ioprio_class];
per_prio = &dd->per_prio[prio];
@@ -722,10 +717,9 @@ static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_insert_t flags)
{
struct request_queue *q = hctx->queue;
- struct deadline_data *dd = q->elevator->elevator_data;
LIST_HEAD(free);
- spin_lock(&dd->lock);
+ elevator_lock(q->elevator);
while (!list_empty(list)) {
struct request *rq;
@@ -733,7 +727,7 @@ static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
list_del_init(&rq->queuelist);
dd_insert_request(hctx, rq, flags, &free);
}
- spin_unlock(&dd->lock);
+ elevator_unlock(q->elevator);
blk_mq_free_requests(&free);
}
@@ -849,13 +843,13 @@ static const struct elv_fs_entry deadline_attrs[] = {
#define DEADLINE_DEBUGFS_DDIR_ATTRS(prio, data_dir, name) \
static void *deadline_##name##_fifo_start(struct seq_file *m, \
loff_t *pos) \
- __acquires(&dd->lock) \
+ __acquires(&dd->e->lock) \
{ \
struct request_queue *q = m->private; \
struct deadline_data *dd = q->elevator->elevator_data; \
struct dd_per_prio *per_prio = &dd->per_prio[prio]; \
\
- spin_lock(&dd->lock); \
+ elevator_lock(dd->e); \
return seq_list_start(&per_prio->fifo_list[data_dir], *pos); \
} \
\
@@ -870,12 +864,12 @@ static void *deadline_##name##_fifo_next(struct seq_file *m, void *v, \
} \
\
static void deadline_##name##_fifo_stop(struct seq_file *m, void *v) \
- __releases(&dd->lock) \
+ __releases(&dd->e->lock) \
{ \
struct request_queue *q = m->private; \
struct deadline_data *dd = q->elevator->elevator_data; \
\
- spin_unlock(&dd->lock); \
+ elevator_unlock(dd->e); \
} \
\
static const struct seq_operations deadline_##name##_fifo_seq_ops = { \
@@ -941,11 +935,11 @@ static int dd_queued_show(void *data, struct seq_file *m)
struct deadline_data *dd = q->elevator->elevator_data;
u32 rt, be, idle;
- spin_lock(&dd->lock);
+ elevator_lock(dd->e);
rt = dd_queued(dd, DD_RT_PRIO);
be = dd_queued(dd, DD_BE_PRIO);
idle = dd_queued(dd, DD_IDLE_PRIO);
- spin_unlock(&dd->lock);
+ elevator_unlock(dd->e);
seq_printf(m, "%u %u %u\n", rt, be, idle);
@@ -957,7 +951,7 @@ static u32 dd_owned_by_driver(struct deadline_data *dd, enum dd_prio prio)
{
const struct io_stats_per_prio *stats = &dd->per_prio[prio].stats;
- lockdep_assert_held(&dd->lock);
+ elevator_lock_assert_held(dd->e);
return stats->dispatched + stats->merged -
atomic_read(&stats->completed);
@@ -969,11 +963,11 @@ static int dd_owned_by_driver_show(void *data, struct seq_file *m)
struct deadline_data *dd = q->elevator->elevator_data;
u32 rt, be, idle;
- spin_lock(&dd->lock);
+ elevator_lock(dd->e);
rt = dd_owned_by_driver(dd, DD_RT_PRIO);
be = dd_owned_by_driver(dd, DD_BE_PRIO);
idle = dd_owned_by_driver(dd, DD_IDLE_PRIO);
- spin_unlock(&dd->lock);
+ elevator_unlock(dd->e);
seq_printf(m, "%u %u %u\n", rt, be, idle);
@@ -983,13 +977,13 @@ static int dd_owned_by_driver_show(void *data, struct seq_file *m)
#define DEADLINE_DISPATCH_ATTR(prio) \
static void *deadline_dispatch##prio##_start(struct seq_file *m, \
loff_t *pos) \
- __acquires(&dd->lock) \
+ __acquires(&dd->e->lock) \
{ \
struct request_queue *q = m->private; \
struct deadline_data *dd = q->elevator->elevator_data; \
struct dd_per_prio *per_prio = &dd->per_prio[prio]; \
\
- spin_lock(&dd->lock); \
+ elevator_lock(dd->e); \
return seq_list_start(&per_prio->dispatch, *pos); \
} \
\
@@ -1004,12 +998,12 @@ static void *deadline_dispatch##prio##_next(struct seq_file *m, \
} \
\
static void deadline_dispatch##prio##_stop(struct seq_file *m, void *v) \
- __releases(&dd->lock) \
+ __releases(&dd->e->lock) \
{ \
struct request_queue *q = m->private; \
struct deadline_data *dd = q->elevator->elevator_data; \
\
- spin_unlock(&dd->lock); \
+ elevator_unlock(dd->e); \
} \
\
static const struct seq_operations deadline_dispatch##prio##_seq_ops = { \
--
2.39.2
* [PATCH v3 3/5] block, bfq: switch to use elevator lock
2025-08-06 8:57 [PATCH v3 0/5] blk-mq-sched: support request batch dispatching for sq elevator Yu Kuai
2025-08-06 8:57 ` [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock Yu Kuai
2025-08-06 8:57 ` [PATCH v3 2/5] mq-deadline: switch to use " Yu Kuai
@ 2025-08-06 8:57 ` Yu Kuai
2025-08-06 8:57 ` [PATCH v3 4/5] blk-mq-sched: refactor __blk_mq_do_dispatch_sched() Yu Kuai
2025-08-06 8:57 ` [PATCH v3 5/5] blk-mq-sched: support request batch dispatching for sq elevator Yu Kuai
4 siblings, 0 replies; 12+ messages in thread
From: Yu Kuai @ 2025-08-06 8:57 UTC (permalink / raw)
To: dlemoal, hare, jack, bvanassche, tj, josef, axboe, yukuai3
Cc: cgroups, linux-block, linux-kernel, yukuai1, yi.zhang, yangerkun,
johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
Replace the internal spinlock 'bfqd->lock' with the new spinlock in
elevator_queue; things will keep working the same way as before.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
block/bfq-cgroup.c | 6 ++---
block/bfq-iosched.c | 53 ++++++++++++++++++++------------------------
block/bfq-iosched.h | 2 --
block/blk-mq-sched.c | 4 ++--
4 files changed, 29 insertions(+), 36 deletions(-)
diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 9fb9f3533150..a23622cc2be2 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -746,7 +746,7 @@ static void bfq_sync_bfqq_move(struct bfq_data *bfqd,
* @bic: the bic to move.
* @bfqg: the group to move to.
*
- * Move bic to blkcg, assuming that bfqd->lock is held; which makes
+ * Move bic to blkcg, assuming that elevator lock is held; which makes
* sure that the reference to cgroup is valid across the call (see
* comments in bfq_bic_update_cgroup on this issue)
*/
@@ -878,7 +878,7 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
unsigned long flags;
int i;
- spin_lock_irqsave(&bfqd->lock, flags);
+ elevator_lock_irqsave(bfqd->queue->elevator, flags);
if (!entity) /* root group */
goto put_async_queues;
@@ -923,7 +923,7 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
put_async_queues:
bfq_put_async_queues(bfqd, bfqg);
- spin_unlock_irqrestore(&bfqd->lock, flags);
+ elevator_unlock_irqrestore(bfqd->queue->elevator, flags);
/*
* @blkg is going offline and will be ignored by
* blkg_[rw]stat_recursive_sum(). Transfer stats to the parent so
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index f71ec0887733..f7962072d3aa 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -466,7 +466,7 @@ static struct bfq_io_cq *bfq_bic_lookup(struct request_queue *q)
*/
void bfq_schedule_dispatch(struct bfq_data *bfqd)
{
- lockdep_assert_held(&bfqd->lock);
+ elevator_lock_assert_held(bfqd->queue->elevator);
if (bfqd->queued != 0) {
bfq_log(bfqd, "schedule dispatch");
@@ -591,7 +591,7 @@ static bool bfqq_request_over_limit(struct bfq_data *bfqd,
int level;
retry:
- spin_lock_irq(&bfqd->lock);
+ elevator_lock_irq(bfqd->queue->elevator);
bfqq = bic_to_bfqq(bic, op_is_sync(opf), act_idx);
if (!bfqq)
goto out;
@@ -603,7 +603,7 @@ static bool bfqq_request_over_limit(struct bfq_data *bfqd,
/* +1 for bfqq entity, root cgroup not included */
depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1;
if (depth > alloc_depth) {
- spin_unlock_irq(&bfqd->lock);
+ elevator_unlock_irq(bfqd->queue->elevator);
if (entities != inline_entities)
kfree(entities);
entities = kmalloc_array(depth, sizeof(*entities), GFP_NOIO);
@@ -661,7 +661,7 @@ static bool bfqq_request_over_limit(struct bfq_data *bfqd,
}
}
out:
- spin_unlock_irq(&bfqd->lock);
+ elevator_unlock_irq(bfqd->queue->elevator);
if (entities != inline_entities)
kfree(entities);
return ret;
@@ -2217,7 +2217,7 @@ static void bfq_add_request(struct request *rq)
bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq));
bfqq->queued[rq_is_sync(rq)]++;
/*
- * Updating of 'bfqd->queued' is protected by 'bfqd->lock', however, it
+ * Updating of 'bfqd->queued' is protected by elevator lock, however, it
* may be read without holding the lock in bfq_has_work().
*/
WRITE_ONCE(bfqd->queued, bfqd->queued + 1);
@@ -2397,7 +2397,7 @@ static void bfq_remove_request(struct request_queue *q,
list_del_init(&rq->queuelist);
bfqq->queued[sync]--;
/*
- * Updating of 'bfqd->queued' is protected by 'bfqd->lock', however, it
+ * Updating of 'bfqd->queued' is protected by elevator lock, however, it
* may be read without holding the lock in bfq_has_work().
*/
WRITE_ONCE(bfqd->queued, bfqd->queued - 1);
@@ -2454,7 +2454,7 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
struct request *free = NULL;
bool ret;
- spin_lock_irq(&bfqd->lock);
+ elevator_lock_irq(q->elevator);
if (bic) {
/*
@@ -2472,7 +2472,7 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
- spin_unlock_irq(&bfqd->lock);
+ elevator_unlock_irq(q->elevator);
if (free)
blk_mq_free_request(free);
@@ -2647,7 +2647,7 @@ static void bfq_end_wr(struct bfq_data *bfqd)
struct bfq_queue *bfqq;
int i;
- spin_lock_irq(&bfqd->lock);
+ elevator_lock_irq(bfqd->queue->elevator);
for (i = 0; i < bfqd->num_actuators; i++) {
list_for_each_entry(bfqq, &bfqd->active_list[i], bfqq_list)
@@ -2657,7 +2657,7 @@ static void bfq_end_wr(struct bfq_data *bfqd)
bfq_bfqq_end_wr(bfqq);
bfq_end_wr_async(bfqd);
- spin_unlock_irq(&bfqd->lock);
+ elevator_unlock_irq(bfqd->queue->elevator);
}
static sector_t bfq_io_struct_pos(void *io_struct, bool request)
@@ -5303,8 +5303,6 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
struct bfq_queue *in_serv_queue;
bool waiting_rq, idle_timer_disabled = false;
- spin_lock_irq(&bfqd->lock);
-
in_serv_queue = bfqd->in_service_queue;
waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
@@ -5314,7 +5312,6 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
}
- spin_unlock_irq(&bfqd->lock);
bfq_update_dispatch_stats(hctx->queue, rq,
idle_timer_disabled ? in_serv_queue : NULL,
idle_timer_disabled);
@@ -5492,9 +5489,9 @@ static void bfq_exit_icq(struct io_cq *icq)
* this is the last time these queues are accessed.
*/
if (bfqd) {
- spin_lock_irqsave(&bfqd->lock, flags);
+ elevator_lock_irqsave(bfqd->queue->elevator, flags);
_bfq_exit_icq(bic, bfqd->num_actuators);
- spin_unlock_irqrestore(&bfqd->lock, flags);
+ elevator_unlock_irqrestore(bfqd->queue->elevator, flags);
} else {
_bfq_exit_icq(bic, BFQ_MAX_ACTUATORS);
}
@@ -6250,10 +6247,10 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
if (!cgroup_subsys_on_dfl(io_cgrp_subsys) && rq->bio)
bfqg_stats_update_legacy_io(q, rq);
#endif
- spin_lock_irq(&bfqd->lock);
+ elevator_lock_irq(q->elevator);
bfqq = bfq_init_rq(rq);
if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
- spin_unlock_irq(&bfqd->lock);
+ elevator_unlock_irq(q->elevator);
blk_mq_free_requests(&free);
return;
}
@@ -6286,7 +6283,7 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
* merge).
*/
cmd_flags = rq->cmd_flags;
- spin_unlock_irq(&bfqd->lock);
+ elevator_unlock_irq(q->elevator);
bfq_update_insert_stats(q, bfqq, idle_timer_disabled,
cmd_flags);
@@ -6667,7 +6664,7 @@ static void bfq_finish_requeue_request(struct request *rq)
rq->io_start_time_ns,
rq->cmd_flags);
- spin_lock_irqsave(&bfqd->lock, flags);
+ elevator_lock_irqsave(bfqd->queue->elevator, flags);
if (likely(rq->rq_flags & RQF_STARTED)) {
if (rq == bfqd->waited_rq)
bfq_update_inject_limit(bfqd, bfqq);
@@ -6677,7 +6674,7 @@ static void bfq_finish_requeue_request(struct request *rq)
bfqq_request_freed(bfqq);
bfq_put_queue(bfqq);
RQ_BIC(rq)->requests--;
- spin_unlock_irqrestore(&bfqd->lock, flags);
+ elevator_unlock_irqrestore(bfqd->queue->elevator, flags);
/*
* Reset private fields. In case of a requeue, this allows
@@ -7008,7 +7005,7 @@ bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq)
enum bfqq_expiration reason;
unsigned long flags;
- spin_lock_irqsave(&bfqd->lock, flags);
+ elevator_lock_irqsave(bfqd->queue->elevator, flags);
/*
* Considering that bfqq may be in race, we should firstly check
@@ -7018,7 +7015,7 @@ bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq)
* been cleared in __bfq_bfqd_reset_in_service func.
*/
if (bfqq != bfqd->in_service_queue) {
- spin_unlock_irqrestore(&bfqd->lock, flags);
+ elevator_unlock_irqrestore(bfqd->queue->elevator, flags);
return;
}
@@ -7046,7 +7043,7 @@ bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq)
schedule_dispatch:
bfq_schedule_dispatch(bfqd);
- spin_unlock_irqrestore(&bfqd->lock, flags);
+ elevator_unlock_irqrestore(bfqd->queue->elevator, flags);
}
/*
@@ -7172,10 +7169,10 @@ static void bfq_exit_queue(struct elevator_queue *e)
hrtimer_cancel(&bfqd->idle_slice_timer);
- spin_lock_irq(&bfqd->lock);
+ elevator_lock_irq(e);
list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list)
bfq_deactivate_bfqq(bfqd, bfqq, false, false);
- spin_unlock_irq(&bfqd->lock);
+ elevator_unlock_irq(e);
for (actuator = 0; actuator < bfqd->num_actuators; actuator++)
WARN_ON_ONCE(bfqd->rq_in_driver[actuator]);
@@ -7189,10 +7186,10 @@ static void bfq_exit_queue(struct elevator_queue *e)
#ifdef CONFIG_BFQ_GROUP_IOSCHED
blkcg_deactivate_policy(bfqd->queue->disk, &blkcg_policy_bfq);
#else
- spin_lock_irq(&bfqd->lock);
+ elevator_lock_irq(e);
bfq_put_async_queues(bfqd, bfqd->root_group);
kfree(bfqd->root_group);
- spin_unlock_irq(&bfqd->lock);
+ elevator_unlock_irq(e);
#endif
blk_stat_disable_accounting(bfqd->queue);
@@ -7357,8 +7354,6 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
/* see comments on the definition of next field inside bfq_data */
bfqd->actuator_load_threshold = 4;
- spin_lock_init(&bfqd->lock);
-
/*
* The invocation of the next bfq_create_group_hierarchy
* function is the head of a chain of function calls
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 687a3a7ba784..dab908f05235 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -795,8 +795,6 @@ struct bfq_data {
/* fallback dummy bfqq for extreme OOM conditions */
struct bfq_queue oom_bfqq;
- spinlock_t lock;
-
/*
* bic associated with the task issuing current bio for
* merging. This and the next field are used as a support to
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 1a2da5edbe13..42e3ec06c072 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -114,9 +114,9 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
break;
if (blk_queue_sq_sched(q)) {
- elevator_lock(e);
+ elevator_lock_irq(e);
rq = e->type->ops.dispatch_request(hctx);
- elevator_unlock(e);
+ elevator_unlock_irq(e);
} else {
rq = e->type->ops.dispatch_request(hctx);
}
--
2.39.2
* [PATCH v3 4/5] blk-mq-sched: refactor __blk_mq_do_dispatch_sched()
2025-08-06 8:57 [PATCH v3 0/5] blk-mq-sched: support request batch dispatching for sq elevator Yu Kuai
` (2 preceding siblings ...)
2025-08-06 8:57 ` [PATCH v3 3/5] block, bfq: " Yu Kuai
@ 2025-08-06 8:57 ` Yu Kuai
2025-08-06 8:57 ` [PATCH v3 5/5] blk-mq-sched: support request batch dispatching for sq elevator Yu Kuai
4 siblings, 0 replies; 12+ messages in thread
From: Yu Kuai @ 2025-08-06 8:57 UTC (permalink / raw)
To: dlemoal, hare, jack, bvanassche, tj, josef, axboe, yukuai3
Cc: cgroups, linux-block, linux-kernel, yukuai1, yi.zhang, yangerkun,
johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
Introduce struct sched_dispatch_ctx, and split the helper into
blk_mq_dispatch_one_request() and blk_mq_finish_dispatch(). Also
add comments about the non-error return value.
This makes the code cleaner, and makes it easier to add a new branch to
dispatch a batch of requests at a time in the next patch.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
block/blk-mq-sched.c | 196 ++++++++++++++++++++++++++-----------------
1 file changed, 118 insertions(+), 78 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 42e3ec06c072..7b77cdb9395d 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -16,6 +16,16 @@
#include "blk-mq-sched.h"
#include "blk-wbt.h"
+struct sched_dispatch_ctx {
+ struct blk_mq_hw_ctx *hctx;
+ struct list_head rq_list;
+
+ int count;
+ bool multi_hctxs;
+ bool run_queue;
+ bool busy;
+};
+
/*
* Mark a hardware queue as needing a restart.
*/
@@ -74,92 +84,92 @@ static bool blk_mq_dispatch_hctx_list(struct list_head *rq_list)
#define BLK_MQ_BUDGET_DELAY 3 /* ms units */
-/*
- * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
- * its queue by itself in its completion handler, so we don't need to
- * restart queue if .get_budget() fails to get the budget.
- *
- * Returns -EAGAIN if hctx->dispatch was found non-empty and run_work has to
- * be run again. This is necessary to avoid starving flushes.
- */
-static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
+static bool blk_mq_should_dispatch(struct sched_dispatch_ctx *ctx)
{
- struct request_queue *q = hctx->queue;
- struct elevator_queue *e = q->elevator;
- bool multi_hctxs = false, run_queue = false;
- bool dispatched = false, busy = false;
- unsigned int max_dispatch;
- LIST_HEAD(rq_list);
- int count = 0;
+ struct elevator_queue *e = ctx->hctx->queue->elevator;
- if (hctx->dispatch_busy)
- max_dispatch = 1;
- else
- max_dispatch = hctx->queue->nr_requests;
+ if (e->type->ops.has_work && !e->type->ops.has_work(ctx->hctx))
+ return false;
- do {
- struct request *rq;
- int budget_token;
+ if (!list_empty_careful(&ctx->hctx->dispatch)) {
+ ctx->busy = true;
+ return false;
+ }
- if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
- break;
+ return true;
+}
- if (!list_empty_careful(&hctx->dispatch)) {
- busy = true;
- break;
- }
+static bool blk_mq_dispatch_one_request(struct sched_dispatch_ctx *ctx)
+{
+ struct request_queue *q = ctx->hctx->queue;
+ struct elevator_queue *e = q->elevator;
+ struct request *rq;
+ int budget_token;
- budget_token = blk_mq_get_dispatch_budget(q);
- if (budget_token < 0)
- break;
+ if (!blk_mq_should_dispatch(ctx))
+ return false;
- if (blk_queue_sq_sched(q)) {
- elevator_lock_irq(e);
- rq = e->type->ops.dispatch_request(hctx);
- elevator_unlock_irq(e);
- } else {
- rq = e->type->ops.dispatch_request(hctx);
- }
+ budget_token = blk_mq_get_dispatch_budget(q);
+ if (budget_token < 0)
+ return false;
- if (!rq) {
- blk_mq_put_dispatch_budget(q, budget_token);
- /*
- * We're releasing without dispatching. Holding the
- * budget could have blocked any "hctx"s with the
- * same queue and if we didn't dispatch then there's
- * no guarantee anyone will kick the queue. Kick it
- * ourselves.
- */
- run_queue = true;
- break;
- }
-
- blk_mq_set_rq_budget_token(rq, budget_token);
+ if (blk_queue_sq_sched(q)) {
+ elevator_lock_irq(e);
+ rq = e->type->ops.dispatch_request(ctx->hctx);
+ elevator_unlock_irq(e);
+ } else {
+ rq = e->type->ops.dispatch_request(ctx->hctx);
+ }
+ if (!rq) {
+ blk_mq_put_dispatch_budget(q, budget_token);
/*
- * Now this rq owns the budget which has to be released
- * if this rq won't be queued to driver via .queue_rq()
- * in blk_mq_dispatch_rq_list().
+ * We're releasing without dispatching. Holding the
+ * budget could have blocked any "hctx"s with the
+ * same queue and if we didn't dispatch then there's
+ * no guarantee anyone will kick the queue. Kick it
+ * ourselves.
*/
- list_add_tail(&rq->queuelist, &rq_list);
- count++;
- if (rq->mq_hctx != hctx)
- multi_hctxs = true;
+ ctx->run_queue = true;
+ return false;
+ }
- /*
- * If we cannot get tag for the request, stop dequeueing
- * requests from the IO scheduler. We are unlikely to be able
- * to submit them anyway and it creates false impression for
- * scheduling heuristics that the device can take more IO.
- */
- if (!blk_mq_get_driver_tag(rq))
- break;
- } while (count < max_dispatch);
+ blk_mq_set_rq_budget_token(rq, budget_token);
- if (!count) {
- if (run_queue)
- blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);
- } else if (multi_hctxs) {
+ /*
+ * Now this rq owns the budget which has to be released
+ * if this rq won't be queued to driver via .queue_rq()
+ * in blk_mq_dispatch_rq_list().
+ */
+ list_add_tail(&rq->queuelist, &ctx->rq_list);
+ ctx->count++;
+ if (rq->mq_hctx != ctx->hctx)
+ ctx->multi_hctxs = true;
+
+ /*
+ * If we cannot get tag for the request, stop dequeueing
+ * requests from the IO scheduler. We are unlikely to be able
+ * to submit them anyway and it creates false impression for
+ * scheduling heuristics that the device can take more IO.
+ */
+ return blk_mq_get_driver_tag(rq);
+}
+
+/*
+ * Returns -EAGAIN if hctx->dispatch was found non-empty and run_work has to
+ * be run again. This is necessary to avoid starving flushes.
+ * Return 0 if no request is dispatched.
+ * Return 1 if at least one request is dispatched.
+ */
+static int blk_mq_finish_dispatch(struct sched_dispatch_ctx *ctx)
+{
+ bool dispatched = false;
+
+ if (!ctx->count) {
+ if (ctx->run_queue)
+ blk_mq_delay_run_hw_queues(ctx->hctx->queue,
+ BLK_MQ_BUDGET_DELAY);
+ } else if (ctx->multi_hctxs) {
/*
* Requests from different hctx may be dequeued from some
* schedulers, such as bfq and deadline.
@@ -167,19 +177,49 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
* Sort the requests in the list according to their hctx,
* dispatch batching requests from same hctx at a time.
*/
- list_sort(NULL, &rq_list, sched_rq_cmp);
+ list_sort(NULL, &ctx->rq_list, sched_rq_cmp);
do {
- dispatched |= blk_mq_dispatch_hctx_list(&rq_list);
- } while (!list_empty(&rq_list));
+ dispatched |= blk_mq_dispatch_hctx_list(&ctx->rq_list);
+ } while (!list_empty(&ctx->rq_list));
} else {
- dispatched = blk_mq_dispatch_rq_list(hctx, &rq_list, false);
+ dispatched = blk_mq_dispatch_rq_list(ctx->hctx, &ctx->rq_list,
+ false);
}
- if (busy)
+ if (ctx->busy)
return -EAGAIN;
+
return !!dispatched;
}
+/*
+ * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
+ * its queue by itself in its completion handler, so we don't need to
+ * restart queue if .get_budget() fails to get the budget.
+ *
+ * See blk_mq_finish_dispatch() for return values.
+ */
+static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
+{
+ unsigned int max_dispatch;
+ struct sched_dispatch_ctx ctx = {
+ .hctx = hctx,
+ .rq_list = LIST_HEAD_INIT(ctx.rq_list),
+ };
+
+ if (hctx->dispatch_busy)
+ max_dispatch = 1;
+ else
+ max_dispatch = hctx->queue->nr_requests;
+
+ do {
+ if (!blk_mq_dispatch_one_request(&ctx))
+ break;
+ } while (ctx.count < max_dispatch);
+
+ return blk_mq_finish_dispatch(&ctx);
+}
+
static int blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
{
unsigned long end = jiffies + HZ;
--
2.39.2
* [PATCH v3 5/5] blk-mq-sched: support request batch dispatching for sq elevator
2025-08-06 8:57 [PATCH v3 0/5] blk-mq-sched: support request batch dispatching for sq elevator Yu Kuai
` (3 preceding siblings ...)
2025-08-06 8:57 ` [PATCH v3 4/5] blk-mq-sched: refactor __blk_mq_do_dispatch_sched() Yu Kuai
@ 2025-08-06 8:57 ` Yu Kuai
4 siblings, 0 replies; 12+ messages in thread
From: Yu Kuai @ 2025-08-06 8:57 UTC (permalink / raw)
To: dlemoal, hare, jack, bvanassche, tj, josef, axboe, yukuai3
Cc: cgroups, linux-block, linux-kernel, yukuai1, yi.zhang, yangerkun,
johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
For the dispatch_request method, the current behavior is to dispatch one
request at a time. In the case of multiple dispatching contexts, this
behavior, on the one hand, introduces intense lock contention:
t1:                  t2:                  t3:
lock                 lock                 lock
// grab lock
ops.dispatch_request
unlock
                     // grab lock
                     ops.dispatch_request
                     unlock
                                          // grab lock
                                          ops.dispatch_request
                                          unlock
on the other hand, it messes up the request dispatching order:
t1:
lock
rq1 = ops.dispatch_request
unlock
                            t2:
                            lock
                            rq2 = ops.dispatch_request
                            unlock
lock
rq3 = ops.dispatch_request
unlock
                            lock
                            rq4 = ops.dispatch_request
                            unlock
// rq1, rq3 issue to disk
                            // rq2, rq4 issue to disk
In this case, the elevator dispatch order is rq 1-2-3-4; however, the order
seen by the disk is rq 1-3-2-4, i.e. the order of rq2 and rq3 is inverted.
Fix these problems by introducing blk_mq_dispatch_requests(): this helper
grabs the elevator lock and dispatches a batch of requests while holding
the lock.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
block/blk-mq-sched.c | 65 +++++++++++++++++++++++++++++++++++++++++---
block/blk-mq.h | 21 ++++++++++++++
2 files changed, 82 insertions(+), 4 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 7b77cdb9395d..5fced2ac9ce6 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -192,6 +192,59 @@ static int blk_mq_finish_dispatch(struct sched_dispatch_ctx *ctx)
return !!dispatched;
}
+static void blk_mq_dispatch_requests(struct sched_dispatch_ctx *ctx)
+{
+ struct request_queue *q = ctx->hctx->queue;
+ struct elevator_queue *e = q->elevator;
+ bool has_get_budget = q->mq_ops->get_budget != NULL;
+ int budget_token[BUDGET_TOKEN_BATCH];
+ int count = q->nr_requests;
+ int i;
+
+ while (true) {
+ if (!blk_mq_should_dispatch(ctx))
+ return;
+
+ if (has_get_budget) {
+ count = blk_mq_get_dispatch_budgets(q, budget_token);
+ if (count <= 0)
+ return;
+ }
+
+ elevator_lock_irq(e);
+ for (i = 0; i < count; ++i) {
+ struct request *rq =
+ e->type->ops.dispatch_request(ctx->hctx);
+
+ if (!rq) {
+ ctx->run_queue = true;
+ goto err_free_budgets;
+ }
+
+ if (has_get_budget)
+ blk_mq_set_rq_budget_token(rq, budget_token[i]);
+
+ list_add_tail(&rq->queuelist, &ctx->rq_list);
+ ctx->count++;
+
+ if (rq->mq_hctx != ctx->hctx)
+ ctx->multi_hctxs = true;
+
+ if (!blk_mq_get_driver_tag(rq)) {
+ i++;
+ goto err_free_budgets;
+ }
+ }
+ elevator_unlock_irq(e);
+ }
+
+err_free_budgets:
+ elevator_unlock_irq(e);
+ if (has_get_budget)
+ for (; i < count; ++i)
+ blk_mq_put_dispatch_budget(q, budget_token[i]);
+}
+
/*
* Only SCSI implements .get_budget and .put_budget, and SCSI restarts
* its queue by itself in its completion handler, so we don't need to
@@ -212,10 +265,14 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
else
max_dispatch = hctx->queue->nr_requests;
- do {
- if (!blk_mq_dispatch_one_request(&ctx))
- break;
- } while (ctx.count < max_dispatch);
+ if (!hctx->dispatch_busy && blk_queue_sq_sched(hctx->queue)) {
+ blk_mq_dispatch_requests(&ctx);
+ } else {
+ do {
+ if (!blk_mq_dispatch_one_request(&ctx))
+ break;
+ } while (ctx.count < max_dispatch);
+ }
return blk_mq_finish_dispatch(&ctx);
}
diff --git a/block/blk-mq.h b/block/blk-mq.h
index affb2e14b56e..450c16a07841 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -37,6 +37,7 @@ enum {
};
#define BLK_MQ_CPU_WORK_BATCH (8)
+#define BUDGET_TOKEN_BATCH (8)
typedef unsigned int __bitwise blk_insert_t;
#define BLK_MQ_INSERT_AT_HEAD ((__force blk_insert_t)0x01)
@@ -262,6 +263,26 @@ static inline int blk_mq_get_dispatch_budget(struct request_queue *q)
return 0;
}
+static inline int blk_mq_get_dispatch_budgets(struct request_queue *q,
+ int *budget_token)
+{
+ int count = 0;
+
+ while (count < BUDGET_TOKEN_BATCH) {
+ int token = 0;
+
+ if (q->mq_ops->get_budget)
+ token = q->mq_ops->get_budget(q);
+
+ if (token < 0)
+ return count;
+
+ budget_token[count++] = token;
+ }
+
+ return count;
+}
+
static inline void blk_mq_set_rq_budget_token(struct request *rq, int token)
{
if (token < 0)
--
2.39.2
* Re: [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock
2025-08-06 8:57 ` [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock Yu Kuai
@ 2025-08-11 0:44 ` Damien Le Moal
2025-08-11 1:01 ` Yu Kuai
0 siblings, 1 reply; 12+ messages in thread
From: Damien Le Moal @ 2025-08-11 0:44 UTC (permalink / raw)
To: Yu Kuai, hare, jack, bvanassche, tj, josef, axboe, yukuai3
Cc: cgroups, linux-block, linux-kernel, yi.zhang, yangerkun,
johnny.chenyi
On 8/6/25 17:57, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
>
> Currently, both mq-deadline and bfq have global spin lock that will be
> grabbed inside elevator methods like dispatch_request, insert_requests,
> and bio_merge. And the global lock is the main reason mq-deadline and
> bfq can't scale very well.
>
> While dispatching request, blk_mq_get_disatpch_budget() and
> blk_mq_get_driver_tag() must be called, and they are not ready to be called
> inside elevator methods, hence introduce a new method like
> dispatch_requests is not possible.
>
> Hence introduce a new high level elevator lock, currently it is protecting
> dispatch_request only. Following patches will convert mq-deadline and bfq
> to use this lock and finally support request batch dispatching by calling
> the method multiple time while holding the lock.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
> block/blk-mq-sched.c | 9 ++++++++-
> block/elevator.c | 1 +
> block/elevator.h | 14 ++++++++++++--
> 3 files changed, 21 insertions(+), 3 deletions(-)
>
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 55a0fd105147..1a2da5edbe13 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -113,7 +113,14 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
> if (budget_token < 0)
> break;
>
> - rq = e->type->ops.dispatch_request(hctx);
> + if (blk_queue_sq_sched(q)) {
> + elevator_lock(e);
> + rq = e->type->ops.dispatch_request(hctx);
> + elevator_unlock(e);
I do not think this is safe for bfq since bfq uses the irqsave/irqrestore spin
lock variant. If it is safe, this needs a big comment block explaining why
and/or the rules regarding the scheduler use of this lock.
> + } else {
> + rq = e->type->ops.dispatch_request(hctx);
> + }
> +
> if (!rq) {
> blk_mq_put_dispatch_budget(q, budget_token);
> /*
> diff --git a/block/elevator.c b/block/elevator.c
> index 88f8f36bed98..45303af0ca73 100644
> --- a/block/elevator.c
> +++ b/block/elevator.c
> @@ -144,6 +144,7 @@ struct elevator_queue *elevator_alloc(struct request_queue *q,
> eq->type = e;
> kobject_init(&eq->kobj, &elv_ktype);
> mutex_init(&eq->sysfs_lock);
> + spin_lock_init(&eq->lock);
> hash_init(eq->hash);
>
> return eq;
> diff --git a/block/elevator.h b/block/elevator.h
> index a07ce773a38f..81f7700b0339 100644
> --- a/block/elevator.h
> +++ b/block/elevator.h
> @@ -110,12 +110,12 @@ struct request *elv_rqhash_find(struct request_queue *q, sector_t offset);
> /*
> * each queue has an elevator_queue associated with it
> */
> -struct elevator_queue
> -{
> +struct elevator_queue {
> struct elevator_type *type;
> void *elevator_data;
> struct kobject kobj;
> struct mutex sysfs_lock;
> + spinlock_t lock;
> unsigned long flags;
> DECLARE_HASHTABLE(hash, ELV_HASH_BITS);
> };
> @@ -124,6 +124,16 @@ struct elevator_queue
> #define ELEVATOR_FLAG_DYING 1
> #define ELEVATOR_FLAG_ENABLE_WBT_ON_EXIT 2
>
> +#define elevator_lock(e) spin_lock(&(e)->lock)
> +#define elevator_unlock(e) spin_unlock(&(e)->lock)
> +#define elevator_lock_irq(e) spin_lock_irq(&(e)->lock)
> +#define elevator_unlock_irq(e) spin_unlock_irq(&(e)->lock)
> +#define elevator_lock_irqsave(e, flags) \
> + spin_lock_irqsave(&(e)->lock, flags)
> +#define elevator_unlock_irqrestore(e, flags) \
> + spin_unlock_irqrestore(&(e)->lock, flags)
> +#define elevator_lock_assert_held(e) lockdep_assert_held(&(e)->lock)
> +
> /*
> * block elevator interface
> */
--
Damien Le Moal
Western Digital Research
* Re: [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock
2025-08-11 0:44 ` Damien Le Moal
@ 2025-08-11 1:01 ` Yu Kuai
2025-08-11 3:53 ` Damien Le Moal
0 siblings, 1 reply; 12+ messages in thread
From: Yu Kuai @ 2025-08-11 1:01 UTC (permalink / raw)
To: Damien Le Moal, Yu Kuai, hare, jack, bvanassche, tj, josef, axboe
Cc: cgroups, linux-block, linux-kernel, yi.zhang, yangerkun,
johnny.chenyi, yukuai (C)
Hi,
On 2025/08/11 8:44, Damien Le Moal wrote:
> On 8/6/25 17:57, Yu Kuai wrote:
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> Currently, both mq-deadline and bfq have global spin lock that will be
>> grabbed inside elevator methods like dispatch_request, insert_requests,
>> and bio_merge. And the global lock is the main reason mq-deadline and
>> bfq can't scale very well.
>>
>> While dispatching request, blk_mq_get_disatpch_budget() and
>> blk_mq_get_driver_tag() must be called, and they are not ready to be called
>> inside elevator methods, hence introduce a new method like
>> dispatch_requests is not possible.
>>
>> Hence introduce a new high level elevator lock, currently it is protecting
>> dispatch_request only. Following patches will convert mq-deadline and bfq
>> to use this lock and finally support request batch dispatching by calling
>> the method multiple time while holding the lock.
>>
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>> ---
>> block/blk-mq-sched.c | 9 ++++++++-
>> block/elevator.c | 1 +
>> block/elevator.h | 14 ++++++++++++--
>> 3 files changed, 21 insertions(+), 3 deletions(-)
>>
>> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
>> index 55a0fd105147..1a2da5edbe13 100644
>> --- a/block/blk-mq-sched.c
>> +++ b/block/blk-mq-sched.c
>> @@ -113,7 +113,14 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
>> if (budget_token < 0)
>> break;
>>
>> - rq = e->type->ops.dispatch_request(hctx);
>> + if (blk_queue_sq_sched(q)) {
>> + elevator_lock(e);
>> + rq = e->type->ops.dispatch_request(hctx);
>> + elevator_unlock(e);
>
> I do not think this is safe for bfq since bfq uses the irqsave/irqrestore spin
> lock variant. If it is safe, this needs a big comment block explaining why
> and/or the rules regarding the scheduler use of this lock.
It's correct; however, this patch doesn't change bfq yet, so the nesting is:

elevator_lock
    spin_lock_irq(&bfqd->lock)
    spin_unlock_irq(&bfqd->lock)
elevator_unlock
Patch 3 removes bfqd->lock and converts this to:

elevator_lock_irq
elevator_unlock_irq
Thanks,
Kuai
>
>> + } else {
>> + rq = e->type->ops.dispatch_request(hctx);
>> + }
>> +
>> if (!rq) {
>> blk_mq_put_dispatch_budget(q, budget_token);
>> /*
>> diff --git a/block/elevator.c b/block/elevator.c
>> index 88f8f36bed98..45303af0ca73 100644
>> --- a/block/elevator.c
>> +++ b/block/elevator.c
>> @@ -144,6 +144,7 @@ struct elevator_queue *elevator_alloc(struct request_queue *q,
>> eq->type = e;
>> kobject_init(&eq->kobj, &elv_ktype);
>> mutex_init(&eq->sysfs_lock);
>> + spin_lock_init(&eq->lock);
>> hash_init(eq->hash);
>>
>> return eq;
>> diff --git a/block/elevator.h b/block/elevator.h
>> index a07ce773a38f..81f7700b0339 100644
>> --- a/block/elevator.h
>> +++ b/block/elevator.h
>> @@ -110,12 +110,12 @@ struct request *elv_rqhash_find(struct request_queue *q, sector_t offset);
>> /*
>> * each queue has an elevator_queue associated with it
>> */
>> -struct elevator_queue
>> -{
>> +struct elevator_queue {
>> struct elevator_type *type;
>> void *elevator_data;
>> struct kobject kobj;
>> struct mutex sysfs_lock;
>> + spinlock_t lock;
>> unsigned long flags;
>> DECLARE_HASHTABLE(hash, ELV_HASH_BITS);
>> };
>> @@ -124,6 +124,16 @@ struct elevator_queue
>> #define ELEVATOR_FLAG_DYING 1
>> #define ELEVATOR_FLAG_ENABLE_WBT_ON_EXIT 2
>>
>> +#define elevator_lock(e) spin_lock(&(e)->lock)
>> +#define elevator_unlock(e) spin_unlock(&(e)->lock)
>> +#define elevator_lock_irq(e) spin_lock_irq(&(e)->lock)
>> +#define elevator_unlock_irq(e) spin_unlock_irq(&(e)->lock)
>> +#define elevator_lock_irqsave(e, flags) \
>> + spin_lock_irqsave(&(e)->lock, flags)
>> +#define elevator_unlock_irqrestore(e, flags) \
>> + spin_unlock_irqrestore(&(e)->lock, flags)
>> +#define elevator_lock_assert_held(e) lockdep_assert_held(&(e)->lock)
>> +
>> /*
>> * block elevator interface
>> */
>
>
* Re: [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock
2025-08-11 1:01 ` Yu Kuai
@ 2025-08-11 3:53 ` Damien Le Moal
2025-08-11 4:25 ` Yu Kuai
0 siblings, 1 reply; 12+ messages in thread
From: Damien Le Moal @ 2025-08-11 3:53 UTC (permalink / raw)
To: Yu Kuai, hare, jack, bvanassche, tj, josef, axboe
Cc: cgroups, linux-block, linux-kernel, yi.zhang, yangerkun,
johnny.chenyi, yukuai (C)
On 8/11/25 10:01, Yu Kuai wrote:
>>> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
>>> index 55a0fd105147..1a2da5edbe13 100644
>>> --- a/block/blk-mq-sched.c
>>> +++ b/block/blk-mq-sched.c
>>> @@ -113,7 +113,14 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
>>> if (budget_token < 0)
>>> break;
>>>
>>> - rq = e->type->ops.dispatch_request(hctx);
>>> + if (blk_queue_sq_sched(q)) {
>>> + elevator_lock(e);
>>> + rq = e->type->ops.dispatch_request(hctx);
>>> + elevator_unlock(e);
>>
>> I do not think this is safe for bfq since bfq uses the irqsave/irqrestore spin
>> lock variant. If it is safe, this needs a big comment block explaining why
>> and/or the rules regarding the scheduler use of this lock.
>
> It's correct, however, this patch doesn't change bfq yet, and it's like:
>
> elevator_lock
> spin_lock_irq(&bfqd->lock)
> spin_unlock_irq(&bfqd->lock)
> elevator_unlock
>
> Patch 3 remove bfqd->lock and convert this to:
>
> elevator_lock_irq
> elevator_unlock_irq.
I do not understand. Since q->elevator->lock is already taken here, without IRQ
disabled, how can the bfq_dispatch_request method take this same lock again with
IRQs disabled? That cannot possibly work.
The other side of this is that if you use elevator_lock(e) to call
e->type->ops.dispatch_request(hctx), it means that the scheduler *can NOT* use
that same lock in its completion path, since that can be called from IRQ
context. This may not be needed/a problem right now, but I think this needs a
comment in this patch to mention this.
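For illustration, the deadlock this rule avoids, assuming a hypothetical
completion handler that takes the elevator lock from hard-IRQ context,
would look like:

	/* process context, CPU0 */
	elevator_lock(e);		/* IRQs still enabled */
	...				/* <-- IRQ arrives on CPU0 */
		/* hard-IRQ context, CPU0, hypothetical completion path */
		elevator_lock(e);	/* spins on a lock this CPU already
					 * holds -> deadlock */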
--
Damien Le Moal
Western Digital Research
* Re: [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock
2025-08-11 3:53 ` Damien Le Moal
@ 2025-08-11 4:25 ` Yu Kuai
2025-08-11 4:34 ` Damien Le Moal
0 siblings, 1 reply; 12+ messages in thread
From: Yu Kuai @ 2025-08-11 4:25 UTC (permalink / raw)
To: Damien Le Moal, Yu Kuai, hare, jack, bvanassche, tj, josef, axboe
Cc: cgroups, linux-block, linux-kernel, yi.zhang, yangerkun,
johnny.chenyi, yukuai (C)
Hi,
On 2025/08/11 11:53, Damien Le Moal wrote:
> On 8/11/25 10:01, Yu Kuai wrote:
>>>> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
>>>> index 55a0fd105147..1a2da5edbe13 100644
>>>> --- a/block/blk-mq-sched.c
>>>> +++ b/block/blk-mq-sched.c
>>>> @@ -113,7 +113,14 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
>>>> if (budget_token < 0)
>>>> break;
>>>>
>>>> - rq = e->type->ops.dispatch_request(hctx);
>>>> + if (blk_queue_sq_sched(q)) {
>>>> + elevator_lock(e);
>>>> + rq = e->type->ops.dispatch_request(hctx);
>>>> + elevator_unlock(e);
>>>
>>> I do not think this is safe for bfq since bfq uses the irqsave/irqrestore spin
>>> lock variant. If it is safe, this needs a big comment block explaining why
>>> and/or the rules regarding the scheduler use of this lock.
>>
>> It's correct, however, this patch doesn't change bfq yet, and it's like:
>>
>> elevator_lock
>> spin_lock_irq(&bfqd->lock)
>> spin_unlock_irq(&bfqd->lock)
>> elevator_unlock
>>
>> Patch 3 remove bfqd->lock and convert this to:
>>
>> elevator_lock_irq
>> elevator_unlock_irq.
>
> I do not understand. Since q->elevator->lock is already taken here, without IRQ
> disabled, how can bfq_dispatch_request method again take this same lock with IRQ
> disabled ? That cannot possibly work.
Looks like there is still some misunderstanding somehow :( After patch 3,
bfq_dispatch_request() doesn't grab any lock; the elevator lock is held
before calling into the dispatch method.
Before:

elevator_lock
    bfq_dispatch_request
        spin_lock_irq(&bfqd->lock)
        spin_unlock_irq(&bfqd->lock)
elevator_unlock

After:

elevator_lock_irq
    bfq_dispatch_request
elevator_unlock_irq
>
> The other side of this is that if you use elevator_lock(e) to call
> e->type->ops.dispatch_request(hctx), it means that the scheduler *can NOT* use
> that same lock in its completion path, since that can be called from IRQ
> context. This may not be needed/a problem right now, but I think this needs a
> comment in this patch to mention this.
>
So, the first patch just grabs the elevator lock for the dispatch method;
bfq dispatch still uses spin_lock_irq(&bfqd->lock), and completion still
uses bfqd->lock, so this is safe.

Later, patch 3 removes bfqd->lock and uses the elevator lock instead, and
because the elevator lock is now used in the completion path, elevator_lock
in the dispatch path is converted to elevator_lock_irq.
Thanks,
Kuai
* Re: [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock
2025-08-11 4:25 ` Yu Kuai
@ 2025-08-11 4:34 ` Damien Le Moal
2025-08-11 6:17 ` Yu Kuai
0 siblings, 1 reply; 12+ messages in thread
From: Damien Le Moal @ 2025-08-11 4:34 UTC (permalink / raw)
To: Yu Kuai, hare, jack, bvanassche, tj, josef, axboe
Cc: cgroups, linux-block, linux-kernel, yi.zhang, yangerkun,
johnny.chenyi, yukuai (C)
On 8/11/25 13:25, Yu Kuai wrote:
> Hi,
>
> On 2025/08/11 11:53, Damien Le Moal wrote:
>> On 8/11/25 10:01, Yu Kuai wrote:
>>>>> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
>>>>> index 55a0fd105147..1a2da5edbe13 100644
>>>>> --- a/block/blk-mq-sched.c
>>>>> +++ b/block/blk-mq-sched.c
>>>>> @@ -113,7 +113,14 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
>>>>> if (budget_token < 0)
>>>>> break;
>>>>>
>>>>> - rq = e->type->ops.dispatch_request(hctx);
>>>>> + if (blk_queue_sq_sched(q)) {
>>>>> + elevator_lock(e);
>>>>> + rq = e->type->ops.dispatch_request(hctx);
>>>>> + elevator_unlock(e);
>>>>
>>>> I do not think this is safe for bfq since bfq uses the irqsave/irqrestore spin
>>>> lock variant. If it is safe, this needs a big comment block explaining why
>>>> and/or the rules regarding the scheduler use of this lock.
>>>
>>> It's correct, however, this patch doesn't change bfq yet, and it's like:
>>>
>>> elevator_lock
>>> spin_lock_irq(&bfqd->lock)
>>> spin_unlock_irq(&bfqd->lock)
>>> elevator_unlock
>>>
>>> Patch 3 remove bfqd->lock and convert this to:
>>>
>>> elevator_lock_irq
>>> elevator_unlock_irq.
>>
>> I do not understand. Since q->elevator->lock is already taken here, without IRQ
>> disabled, how can bfq_dispatch_request method again take this same lock with IRQ
>> disabled ? That cannot possibly work.
>
> Looks like there is still misunderstanding somehow :( After patch 3,
> bfq_dispatch_work doesn't grab any lock, elevator lock is held before
> calling into dispatch method.
>
> Before:
>
> elevator_lock
> bfq_dispatch_request
> spin_lock_irq(&bfqd->lock)
> spin_unlock_irq(&bfqd->lock)
> elevator_unlock
>
> After:
> elevator_lock_irq
> bfq_dispatch_request
> elevator_unlock_irq
Ah, yes, I see it now.
But that is a nasty change that affects *all* schedulers, even those that do not
need to disable IRQs because they are not using the lock in their completion
path, e.g. mq-deadline. So I do not think that is acceptable.
--
Damien Le Moal
Western Digital Research
* Re: [PATCH v3 1/5] blk-mq-sched: introduce high level elevator lock
2025-08-11 4:34 ` Damien Le Moal
@ 2025-08-11 6:17 ` Yu Kuai
0 siblings, 0 replies; 12+ messages in thread
From: Yu Kuai @ 2025-08-11 6:17 UTC (permalink / raw)
To: Damien Le Moal, Yu Kuai, hare, jack, bvanassche, tj, josef, axboe
Cc: cgroups, linux-block, linux-kernel, yi.zhang, yangerkun,
johnny.chenyi, yukuai (C)
Hi,
On 2025/08/11 12:34, Damien Le Moal wrote:
> On 8/11/25 13:25, Yu Kuai wrote:
>> Hi,
>>
>> On 2025/08/11 11:53, Damien Le Moal wrote:
>>> On 8/11/25 10:01, Yu Kuai wrote:
>>>>>> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
>>>>>> index 55a0fd105147..1a2da5edbe13 100644
>>>>>> --- a/block/blk-mq-sched.c
>>>>>> +++ b/block/blk-mq-sched.c
>>>>>> @@ -113,7 +113,14 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
>>>>>> if (budget_token < 0)
>>>>>> break;
>>>>>>
>>>>>> - rq = e->type->ops.dispatch_request(hctx);
>>>>>> + if (blk_queue_sq_sched(q)) {
>>>>>> + elevator_lock(e);
>>>>>> + rq = e->type->ops.dispatch_request(hctx);
>>>>>> + elevator_unlock(e);
>>>>>
>>>>> I do not think this is safe for bfq since bfq uses the irqsave/irqrestore spin
>>>>> lock variant. If it is safe, this needs a big comment block explaining why
>>>>> and/or the rules regarding the scheduler use of this lock.
>>>>
>>>> It's correct, however, this patch doesn't change bfq yet, and it's like:
>>>>
>>>> elevator_lock
>>>> spin_lock_irq(&bfqd->lock)
>>>> spin_unlock_irq(&bfqd->lock)
>>>> elevator_unlock
>>>>
>>>> Patch 3 remove bfqd->lock and convert this to:
>>>>
>>>> elevator_lock_irq
>>>> elevator_unlock_irq.
>>>
>>> I do not understand. Since q->elevator->lock is already taken here, without IRQ
>>> disabled, how can bfq_dispatch_request method again take this same lock with IRQ
>>> disabled ? That cannot possibly work.
>>
>> Looks like there is still misunderstanding somehow :( After patch 3,
>> bfq_dispatch_work doesn't grab any lock, elevator lock is held before
>> calling into dispatch method.
>>
>> Before:
>>
>> elevator_lock
>> bfq_dispatch_request
>> spin_lock_irq(&bfqd->lock)
>> spin_unlock_irq(&bfqd->lock)
>> elevator_unlock
>>
>> After:
>> elevator_lock_irq
>> bfq_dispatch_request
>> elevator_unlock_irq
>
> Ah, yes, I see it now.
>
> But that is a nasty change that affects *all* schedulers, even those that do not
> need to disable IRQs because they are not using the lock in their completion
> path, e.g. mq-deadline. So I do not think that is acceptable.
>
OK, I did some benchmarks and didn't find any performance regression, so
I decided to use the irq version for deadline. Perhaps I just didn't run
such a test.

I can additionally add a flag in elevator_queue->flags, like the following,
to fix this:
diff --git a/block/elevator.h b/block/elevator.h
index 81f7700b0339..f04b9b2ae344 100644
--- a/block/elevator.h
+++ b/block/elevator.h
@@ -123,6 +123,7 @@ struct elevator_queue {
#define ELEVATOR_FLAG_REGISTERED 0
#define ELEVATOR_FLAG_DYING 1
#define ELEVATOR_FLAG_ENABLE_WBT_ON_EXIT 2
+#define ELEVATOR_FLAG_LOCK_AT_COMP 3
#define elevator_lock(e) spin_lock(&(e)->lock)
#define elevator_unlock(e) spin_unlock(&(e)->lock)
@@ -134,6 +135,22 @@ struct elevator_queue {
spin_unlock_irqrestore(&(e)->lock, flags)
#define elevator_lock_assert_held(e) lockdep_assert_held(&(e)->lock)
+static inline void elevator_dispatch_lock(struct elevator_queue *e)
+{
+ if (test_bit(ELEVATOR_FLAG_LOCK_AT_COMP, &e->flags))
+ elevator_lock_irq(e);
+ else
+ elevator_lock(e);
+}
+
+static inline void elevator_dispatch_unlock(struct elevator_queue *e)
+{
+ if (test_bit(ELEVATOR_FLAG_LOCK_AT_COMP, &e->flags))
+ elevator_unlock_irq(e);
+ else
+ elevator_unlock(e);
+}
+
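A possible (untested) use in the dispatch path, assuming the new flag is
set by schedulers that also take the lock at completion time (e.g. bfq):

	if (blk_queue_sq_sched(q)) {
		elevator_dispatch_lock(e);
		rq = e->type->ops.dispatch_request(hctx);
		elevator_dispatch_unlock(e);
	} else {
		rq = e->type->ops.dispatch_request(hctx);
	}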
Thanks,
Kuai