From: Nilay Shroff <nilay@linux.ibm.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: linux-block@vger.kernel.org, axboe@kernel.dk,
yukuai1@huaweicloud.com, hch@lst.de, shinichiro.kawasaki@wdc.com,
kch@nvidia.com, gjoyce@ibm.com
Subject: Re: [PATCHv3 3/3] block: avoid cpu_hotplug_lock dependency on freeze_lock
Date: Thu, 14 Aug 2025 20:01:11 +0530 [thread overview]
Message-ID: <e33e97f7-0c12-4f70-81d0-4fea05557579@linux.ibm.com> (raw)
In-Reply-To: <aJ3myQW2A8HtteBC@fedora>
On 8/14/25 7:08 PM, Ming Lei wrote:
> On Thu, Aug 14, 2025 at 06:27:08PM +0530, Nilay Shroff wrote:
>>
>>
>> On 8/14/25 6:14 PM, Ming Lei wrote:
>>> On Thu, Aug 14, 2025 at 01:54:59PM +0530, Nilay Shroff wrote:
>>>> A recent lockdep splat [1] observed while running blktests block/005
>>>> reveals a potential deadlock caused by the cpu_hotplug_lock dependency
>>>> on ->freeze_lock. This dependency was introduced by commit 033b667a823e
>>>> ("block: blk-rq-qos: guard rq-qos helpers by static key").
>>>>
>>>> That change added a static key to avoid fetching q->rq_qos when
>>>> neither blk-wbt nor blk-iolatency is configured. The static key
>>>> dynamically patches kernel text to a NOP when disabled, eliminating
>>>> overhead of fetching q->rq_qos in the I/O hot path. However, enabling
>>>> a static key at runtime requires acquiring both cpu_hotplug_lock and
>>>> jump_label_mutex. When this happens after the queue has already been
>>>> frozen (i.e., while holding ->freeze_lock), it creates a locking
>>>> dependency from cpu_hotplug_lock to ->freeze_lock, which leads to a
>>>> potential deadlock reported by lockdep [1].
>>>>
>>>> To resolve this, replace the static key mechanism with q->queue_flags:
>>>> QUEUE_FLAG_QOS_ENABLED. This flag is evaluated in the fast path before
>>>> accessing q->rq_qos. If the flag is set, we proceed to fetch q->rq_qos;
>>>> otherwise, the access is skipped.
>>>>
>>>> Since q->queue_flags is commonly accessed in the I/O hot path and resides
>>>> in the first cacheline of struct request_queue, checking it imposes minimal
>>>> overhead while eliminating the deadlock risk.
>>>>
>>>> This change avoids the lockdep splat without introducing performance
>>>> regressions.
>>>>
>>>> [1] https://lore.kernel.org/linux-block/4fdm37so3o4xricdgfosgmohn63aa7wj3ua4e5vpihoamwg3ui@fq42f5q5t5ic/
>>>>
>>>> Reported-by: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
>>>> Closes: https://lore.kernel.org/linux-block/4fdm37so3o4xricdgfosgmohn63aa7wj3ua4e5vpihoamwg3ui@fq42f5q5t5ic/
>>>> Fixes: 033b667a823e ("block: blk-rq-qos: guard rq-qos helpers by static key")
>>>> Tested-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
>>>> Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
>>>> ---
>>>> block/blk-mq-debugfs.c | 1 +
>>>> block/blk-rq-qos.c | 9 ++++---
>>>> block/blk-rq-qos.h | 54 ++++++++++++++++++++++++------------------
>>>> include/linux/blkdev.h | 1 +
>>>> 4 files changed, 37 insertions(+), 28 deletions(-)
>>>>
>>>> diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
>>>> index 7ed3e71f2fc0..32c65efdda46 100644
>>>> --- a/block/blk-mq-debugfs.c
>>>> +++ b/block/blk-mq-debugfs.c
>>>> @@ -95,6 +95,7 @@ static const char *const blk_queue_flag_name[] = {
>>>> QUEUE_FLAG_NAME(SQ_SCHED),
>>>> QUEUE_FLAG_NAME(DISABLE_WBT_DEF),
>>>> QUEUE_FLAG_NAME(NO_ELV_SWITCH),
>>>> + QUEUE_FLAG_NAME(QOS_ENABLED),
>>>> };
>>>> #undef QUEUE_FLAG_NAME
>>>>
>>>> diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
>>>> index b1e24bb85ad2..654478dfbc20 100644
>>>> --- a/block/blk-rq-qos.c
>>>> +++ b/block/blk-rq-qos.c
>>>> @@ -2,8 +2,6 @@
>>>>
>>>> #include "blk-rq-qos.h"
>>>>
>>>> -__read_mostly DEFINE_STATIC_KEY_FALSE(block_rq_qos);
>>>> -
>>>> /*
>>>> * Increment 'v', if 'v' is below 'below'. Returns true if we succeeded,
>>>> * false if 'v' + 1 would be bigger than 'below'.
>>>> @@ -319,8 +317,8 @@ void rq_qos_exit(struct request_queue *q)
>>>> struct rq_qos *rqos = q->rq_qos;
>>>> q->rq_qos = rqos->next;
>>>> rqos->ops->exit(rqos);
>>>> - static_branch_dec(&block_rq_qos);
>>>> }
>>>> + blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q);
>>>> mutex_unlock(&q->rq_qos_mutex);
>>>> }
>>>>
>>>> @@ -346,7 +344,7 @@ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id,
>>>> goto ebusy;
>>>> rqos->next = q->rq_qos;
>>>> q->rq_qos = rqos;
>>>> - static_branch_inc(&block_rq_qos);
>>>> + blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q);
>>>
>>> One stupid question: can we simply move static_branch_inc(&block_rq_qos)
>>> out of queue freeze in rq_qos_add()?
>>>
>>> What matters is just the 1st static_branch_inc() which switches the counter
>>> from 0 to 1, when blk_mq_freeze_queue() guarantees that all in-progress code
>>> paths observe q->rq_qos as NULL. That means static_branch_inc(&block_rq_qos)
>>> needn't queue freeze protection.
>>>
>> I thought about this earlier, but it won't work because some code
>> paths freeze the queue before they even reach rq_qos_add(). For
>> instance, the following paths invoke rq_qos_add() with the queue
>> already frozen:
>>
>> ioc_qos_write()
>> -> blkg_conf_open_bdev_frozen() => freezes queue
>> -> blk_iocost_init()
>> -> rq_qos_add()
>>
>> queue_wb_lat_store() => freezes queue
>> -> wbt_init()
>> -> rq_qos_add()
>
> The above two shouldn't be hard to solve, for example by adding a helper
> rq_qos_prep_add() that increases the static branch counter.
>
Yes, but that means in-flight I/Os would take a hit in the hot path:
they would evaluate the static key branch as true and then fetch
q->rq_qos (which probably does not sit in the first cacheline). So are
we okay with taking that hit in the I/O hot path?
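
For reference, here is roughly the shape of fast-path check this series
goes for; this is a minimal sketch rather than the exact hunk from the
patch (blk_queue_rq_qos() is only an illustrative helper name here, and
rq_qos_done() stands in for any of the rq-qos fast-path helpers):

static inline bool blk_queue_rq_qos(struct request_queue *q)
{
	/*
	 * q->queue_flags sits in the first cacheline of struct
	 * request_queue, so this test stays within data the I/O hot
	 * path already touches.
	 */
	return test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags);
}

static inline void rq_qos_done(struct request_queue *q, struct request *rq)
{
	/* Only dereference q->rq_qos (likely another cacheline) when needed. */
	if (blk_queue_rq_qos(q) && q->rq_qos)
		__rq_qos_done(q->rq_qos, rq);
}

With that shape, the disabled case never has to leave the first
cacheline of struct request_queue, which is what makes it comparable in
cost to the static key it replaces.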
Thanks,
--Nilay