From: Nilay Shroff <nilay@linux.ibm.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: linux-block@vger.kernel.org, axboe@kernel.dk, kch@nvidia.com,
shinichiro.kawasaki@wdc.com, hch@lst.de, gjoyce@ibm.com
Subject: Re: [PATCH 0/2] block: blk-rq-qos: replace static key with atomic bitop
Date: Tue, 5 Aug 2025 22:35:38 +0530
Message-ID: <897eaaa4-31c7-4661-b5d4-3e2bef1fca1e@linux.ibm.com>
In-Reply-To: <aJH8qDEzV4tiG2wE@fedora>
On 8/5/25 6:14 PM, Ming Lei wrote:
> On Tue, Aug 05, 2025 at 10:28:14AM +0530, Nilay Shroff wrote:
>>
>>
>> On 8/4/25 7:12 PM, Ming Lei wrote:
>>> On Mon, Aug 04, 2025 at 05:51:09PM +0530, Nilay Shroff wrote:
>>>> This patchset replaces the use of a static key in the I/O path
>>>> (rq_qos_xxx()) with an atomic queue flag (QUEUE_FLAG_QOS_ENABLED). This
>>>> change is made to eliminate a potential deadlock introduced by the use
>>>> of static keys in the blk-rq-qos infrastructure, as reported by lockdep
>>>> during blktests block/005 [1].
>>>>
>>>> The original static key approach was introduced to avoid unnecessary
>>>> dereferencing of q->rq_qos when no blk-rq-qos module (e.g., blk-wbt or
>>>> blk-iolatency) is configured. While efficient, enabling a static key at
>>>> runtime requires taking cpu_hotplug_lock and jump_label_mutex, which
>>>> becomes problematic if the queue is already frozen — causing a reverse
>>>> dependency on ->freeze_lock. This results in a lockdep splat indicating
>>>> a potential deadlock.
>>>>
>>>> To resolve this, we now gate q->rq_qos access with a q->queue_flags
>>>> bitop (QUEUE_FLAG_QOS_ENABLED), avoiding the static key and the associated
>>>> locking altogether.
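>>>>
>>>> To illustrate, the gated fast path reduces to roughly the following
>>>> (a simplified sketch of the idea; see patch 1 for the exact change):
>>>>
>>>> static inline bool rq_qos_enabled(struct request_queue *q)
>>>> {
>>>> 	return test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags);
>>>> }
>>>>
>>>> static inline void rq_qos_issue(struct request_queue *q, struct request *rq)
>>>> {
>>>> 	/* Single bit test in the hot path; no jump label patching needed. */
>>>> 	if (rq_qos_enabled(q) && q->rq_qos)
>>>> 		__rq_qos_issue(q->rq_qos, rq);
>>>> }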
>>>>
>>>> I compared the static key and atomic bitop implementations using the
>>>> ftrace function graph tracer over ~50 invocations of rq_qos_issue(),
>>>> while ensuring blk-wbt/blk-iolatency were disabled (i.e., no QoS
>>>> functionality). For an easy comparison, I made rq_qos_issue() noinline.
>>>> The comparison was made on a PowerPC machine.
>>>>
>>>> Static key (disabled: QoS is not configured):
>>>> 5d0: 00 00 00 60 nop # patched in by static key framework (not taken)
>>>> 5d4: 20 00 80 4e blr # return (branch to link register)
>>>>
>>>> Only a nop and blr (branch to link register) are executed — very lightweight.
>>>>
>>>> Atomic bitop (QoS is not configured):
>>>> 5d0: 20 00 23 e9 ld r9,32(r3) # load q->queue_flags
>>>> 5d4: 00 80 29 71 andi. r9,r9,32768 # check QUEUE_FLAG_QOS_ENABLED (bit 15)
>>>> 5d8: 20 00 82 4d beqlr # return if bit not set
>>>>
>>>> This performs an ld and an andi. before returning. Slightly more work,
>>>> but q->queue_flags is typically hot in cache during I/O submission.
>>>>
>>>> With Static Key (disabled):
>>>> Duration (us): min=0.668 max=0.816 avg≈0.750
>>>>
>>>> With atomic bitop QUEUE_FLAG_QOS_ENABLED (bit not set):
>>>> Duration (us): min=0.684 max=0.834 avg≈0.759
>>>>
>>>> As expected, both versions are nearly identical in cost. The added
>>>> latency from the extra ld and andi. is on the order of ~9ns.
>>>>
>>>> There are two patches in the series. The first patch replaces the
>>>> static key with QUEUE_FLAG_QOS_ENABLED. The second patch ensures that
>>>> QUEUE_FLAG_QOS_ENABLED is cleared when the queue no longer has any
>>>> associated rq_qos policies.
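>>>>
>>>> For reference, the clearing in patch 2 amounts to something like this
>>>> (an illustrative sketch, not the literal diff; rq_qos_maybe_disable()
>>>> is a made-up helper name):
>>>>
>>>> /* Called with q->rq_qos_mutex held, after a policy is unlinked. */
>>>> static void rq_qos_maybe_disable(struct request_queue *q)
>>>> {
>>>> 	if (!q->rq_qos)
>>>> 		blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q);
>>>> }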
>>>>
>>>> As usual, feedback and review comments are welcome!
>>>>
>>>> [1] https://lore.kernel.org/linux-block/4fdm37so3o4xricdgfosgmohn63aa7wj3ua4e5vpihoamwg3ui@fq42f5q5t5ic/
>>>
>>>
>>> Another approach is to call memalloc_noio_save() in cpu hotplug code...
>>>
>> Yes, that would help fix this. However, per the general usage of the
>> GFP_NOIO scope in the kernel, it is used when we're performing memory
>> allocations in a context where I/O must not be initiated, because doing
>> so could cause deadlock or recursion.
>>
>> So we typically use GFP_NOIO in code paths that are already doing I/O,
>> such as:
>> - Block layer context: during request submission.
>> - Filesystem writeback or swap-out.
>> - Memory reclaim or writeback triggered by memory pressure.
>
> If you grep blk_mq_freeze_queue, you will see the above list is far from
> enough, :-)
>
Yes, you're correct :-) I covered only a subset of the cases.
>>
>> The cpu hotplug code may not be running in any of the above contexts. So
>> IMO, adding memalloc_noio_save() to the cpu hotplug code would not be a
>> good idea, would it?
>
> The reasoning (A -> B) looks correct, but the condition A is obviously not.
>
Regarding the use of memalloc_noio_save() in CPU hotplug code:
Notably, this issue isn't limited to the CPU hotplug subsystem itself.
In reality, cpu_hotplug_lock is widely used across various kernel
subsystems, not just in CPU-hotplug-specific paths. There are several
code paths outside of the hotplug core that acquire cpu_hotplug_lock
and subsequently perform memory allocations using GFP_KERNEL. You can
observe this by grepping for cpu_hotplug_lock throughout the kernel.
This means that adding memalloc_noio_save() solely within the CPU
hotplug code wouldn't address the broader problem.
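
The pattern being discussed is the NOIO allocation scope from
<linux/sched/mm.h>, which would look roughly like this (an illustrative
sketch only; where exactly to enter the scope is the open question):

	unsigned int noio_flags;

	/* All allocations in this scope are implicitly GFP_NOIO. */
	noio_flags = memalloc_noio_save();
	cpus_read_lock();
	/* ... work that may allocate memory under cpu_hotplug_lock ... */
	cpus_read_unlock();
	memalloc_noio_restore(noio_flags);
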
I also experimented with placing memalloc_noio_save() in the CPU hotplug
path, and as expected, I still encountered a lockdep splat, indicating
that the root cause lies deeper, in the general locking and allocation
ordering around cpu_hotplug_lock and memory reclaim. Please see below
the new lockdep splat observed (after adding memalloc_noio_save() in the
CPU hotplug code):
======================================================
WARNING: possible circular locking dependency detected
6.16.0+ #14 Not tainted
------------------------------------------------------
check/4628 is trying to acquire lock:
c0000000027b30c8 (cpu_hotplug_lock){++++}-{0:0}, at: static_key_slow_inc+0x24/0x50
but task is already holding lock:
c0000000cb825d28 (&q->q_usage_counter(io)#18){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0x28/0x40
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (&q->q_usage_counter(io)#18){++++}-{0:0}:
__lock_acquire+0x6b4/0x103c
lock_acquire.part.0+0xd0/0x26c
blk_alloc_queue+0x3ac/0x3e8
blk_mq_alloc_queue+0x88/0x11c
__blk_mq_alloc_disk+0x34/0xd8
nvme_alloc_ns+0xdc/0x6ac [nvme_core]
nvme_scan_ns+0x234/0x2d4 [nvme_core]
async_run_entry_fn+0x60/0x1cc
process_one_work+0x2ac/0x7e4
worker_thread+0x238/0x460
kthread+0x158/0x188
start_kernel_thread+0x14/0x18
-> #2 (fs_reclaim){+.+.}-{0:0}:
__lock_acquire+0x6b4/0x103c
lock_acquire.part.0+0xd0/0x26c
fs_reclaim_acquire+0xe0/0x120
__kmalloc_cache_noprof+0x78/0x5d0
jump_label_add_module+0x1b0/0x528
jump_label_module_notify+0xb0/0x114
notifier_call_chain+0xac/0x248
blocking_notifier_call_chain_robust+0x88/0x134
load_module+0x938/0xba0
init_module_from_file+0xb4/0x108
idempotent_init_module+0x26c/0x358
sys_finit_module+0x98/0x140
system_call_exception+0x134/0x360
system_call_vectored_common+0x15c/0x2ec
-> #1 (jump_label_mutex){+.+.}-{4:4}:
__lock_acquire+0x6b4/0x103c
lock_acquire.part.0+0xd0/0x26c
__mutex_lock+0xf0/0xf60
jump_label_init+0x74/0x194
early_init_devtree+0x110/0x534
early_setup+0xc4/0x2a0
start_here_multiplatform+0x84/0xa0
-> #0 (cpu_hotplug_lock){++++}-{0:0}:
check_prev_add+0x170/0x1248
validate_chain+0x7f0/0xba8
__lock_acquire+0x6b4/0x103c
lock_acquire.part.0+0xd0/0x26c
cpus_read_lock+0x6c/0x18c
static_key_slow_inc+0x24/0x50
rq_qos_add+0x108/0x1c0
wbt_init+0x17c/0x234
elevator_change_done+0x228/0x2ac
elv_iosched_store+0x144/0x1f0
queue_attr_store+0x12c/0x164
sysfs_kf_write+0x74/0xc4
kernfs_fop_write_iter+0x1a8/0x2a4
vfs_write+0x45c/0x65c
ksys_write+0x84/0x140
system_call_exception+0x134/0x360
system_call_vectored_common+0x15c/0x2ec
other info that might help us debug this:
Chain exists of:
cpu_hotplug_lock --> fs_reclaim --> &q->q_usage_counter(io)#18
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&q->q_usage_counter(io)#18);
                               lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#18);
  rlock(cpu_hotplug_lock);
*** DEADLOCK ***
7 locks held by check/4628:
#0: c0000000b6a92418 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x84/0x140
#1: c0000000b79f5488 (&of->mutex#2){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x164/0x2a4
#2: c000000009aef2b8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x170/0x2a4
#3: c0000000cc512190 (&set->update_nr_hwq_lock){++++}-{4:4}, at: elv_iosched_store+0x124/0x1f0
#4: c0000000cb825f30 (&q->rq_qos_mutex){+.+.}-{4:4}, at: wbt_init+0x160/0x234
#5: c0000000cb825d28 (&q->q_usage_counter(io)#18){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0x28/0x40
#6: c0000000cb825d60 (&q->q_usage_counter(queue)#15){+.+.}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0x28/0x40
Thanks,
--Nilay