public inbox for linux-kernel@vger.kernel.org
* Re: CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown
       [not found] <2025111256-CVE-2025-40146-b919@gregkh>
@ 2025-11-27 13:22 ` Zheng Qixing
  2025-11-27 13:39   ` Greg KH
  2025-11-29  4:01 ` Zheng Qixing
  1 sibling, 1 reply; 9+ messages in thread
From: Zheng Qixing @ 2025-11-27 13:22 UTC (permalink / raw)
  To: gregkh
  Cc: cve, gregkh, linux-cve-announce, linux-kernel, yukuai, ming.lei,
	Nilay Shroff, zhangyi (F), yangerkun, Hou Tao

Hi,

Commit b86433721f46 ("blk-mq: fix potential deadlock while nr_requests
grown") aims to avoid a deadlock when the queue is frozen and memory
reclaim is triggered.

However, the sysfs nr_requests update path is already under a 
memalloc_noio_save() region while the queue is frozen (via 
blk_mq_freeze_queue()).

Would it be possible to reject this CVE, or clarify why it needs CVE 
assignment in this case?

Any feedback or further explanation would be appreciated.


Best regards,

Qixing


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown
  2025-11-27 13:22 ` CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown Zheng Qixing
@ 2025-11-27 13:39   ` Greg KH
  2025-11-28  1:15     ` Zheng Qixing
  0 siblings, 1 reply; 9+ messages in thread
From: Greg KH @ 2025-11-27 13:39 UTC (permalink / raw)
  To: Zheng Qixing
  Cc: cve, linux-cve-announce, linux-kernel, yukuai, ming.lei,
	Nilay Shroff, zhangyi (F), yangerkun, Hou Tao

On Thu, Nov 27, 2025 at 09:22:42PM +0800, Zheng Qixing wrote:
> Hi,
> 
> Commit b86433721f46 ("blk-mq: fix potential deadlock while nr_requests
> grown")  aims to avoid a deadlock issue when the queue is frozen and memory
> reclaim is triggered.
> 
> However, the sysfs nr_requests update path is already under a
> memalloc_noio_save() region while the queue is frozen (via
> blk_mq_freeze_queue()).

Did the lockdep splat in
https://lore.kernel.org/all/0659ea8d-a463-47c8-9180-43c719e106eb@linux.ibm.com/
not describe the issue here that the commit is attempting to solve?

thanks,

greg k-h


* Re: CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown
  2025-11-27 13:39   ` Greg KH
@ 2025-11-28  1:15     ` Zheng Qixing
  2025-11-28  5:20       ` Nilay Shroff
  0 siblings, 1 reply; 9+ messages in thread
From: Zheng Qixing @ 2025-11-28  1:15 UTC (permalink / raw)
  To: Greg KH
  Cc: cve, linux-kernel, yukuai, ming.lei, Nilay Shroff, zhangyi (F),
	yangerkun, Hou Tao


On 2025/11/27 21:39, Greg KH wrote:
> On Thu, Nov 27, 2025 at 09:22:42PM +0800, Zheng Qixing wrote:
>> Hi,
>>
>> Commit b86433721f46 ("blk-mq: fix potential deadlock while nr_requests
>> grown")  aims to avoid a deadlock issue when the queue is frozen and memory
>> reclaim is triggered.
>>
>> However, the sysfs nr_requests update path is already under a
>> memalloc_noio_save() region while the queue is frozen (via
>> blk_mq_freeze_queue()).
> Did the lockdep splat in
> https://lore.kernel.org/all/0659ea8d-a463-47c8-9180-43c719e106eb@linux.ibm.com/
> not describe the issue here that the commit is attempting to solve?
>
> thanks,
>
> greg k-h


The deadlock described at that link is in the elevator switch path, but 
the patch modifies the sysfs nr_requests update path.

I could not identify any potential deadlock on this path. If I have 
misunderstood something, could someone clarify?


Thanks,

Qixing

* Re: CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown
  2025-11-28  1:15     ` Zheng Qixing
@ 2025-11-28  5:20       ` Nilay Shroff
  2025-11-28  6:33         ` yangerkun
  0 siblings, 1 reply; 9+ messages in thread
From: Nilay Shroff @ 2025-11-28  5:20 UTC (permalink / raw)
  To: Zheng Qixing, Greg KH
  Cc: cve, linux-kernel, yukuai, ming.lei, zhangyi (F), yangerkun,
	Hou Tao



On 11/28/25 6:45 AM, Zheng Qixing wrote:
> 
> On 2025/11/27 21:39, Greg KH wrote:
>> On Thu, Nov 27, 2025 at 09:22:42PM +0800, Zheng Qixing wrote:
>>> Hi,
>>>
>>> Commit b86433721f46 ("blk-mq: fix potential deadlock while nr_requests
>>> grown")  aims to avoid a deadlock issue when the queue is frozen and memory
>>> reclaim is triggered.
>>>
>>> However, the sysfs nr_requests update path is already under a
>>> memalloc_noio_save() region while the queue is frozen (via
>>> blk_mq_freeze_queue()).
>> Did the lockdep splat in
>> https://lore.kernel.org/all/0659ea8d-a463-47c8-9180-43c719e106eb@linux.ibm.com/
>> not describe the issue here that the commit is attempting to solve?
>>
>> thanks,
>>
>> greg k-h
> 
> 
> The deadlock described at that link is in the elevator switch path, but the patch modifies the sysfs nr_requests update path.
> 
> I could not identify any potential deadlock on this path. If I have misunderstood something, could someone clarify?
> 
> 
Let me clarify the confusion here.

The deadlock reported in [1] requires updates across multiple code paths. There are
three distinct paths that need to be fixed to avoid the deadlock. While the report
in [1] only exposed the issue in one of these paths, we already knew that all three
paths needed changes to fully resolve the problem:

1. Elevator change path (via sysfs attribute) that triggers a scheduler tags update
2. Elevator change path triggered by an nr_hw_queues update which also triggers a
   scheduler tags update
3. Scheduler tags update triggered through the nr_requests sysfs attribute (note
   that when nr_requests grows beyond the current queue depth, it triggers a
   scheduler tags update)

The first two code paths were addressed by:
commit f5a6604f7a44 (“block: fix lockdep warning caused by lock dependency in 
elv_iosched_store”), and commit 04225d13aef1 (“block: fix potential deadlock while
running nr_hw_queue update”) respectively.

The third code path was fixed by:
commit b86433721f46 (“blk-mq: fix potential deadlock while nr_requests grown”).

Ideally, all of these commits should be referenced as collectively fixing the lockdep
splat reported in [1]. I hope this clarifies the situation. Please let me know if you
have any further questions.

[1] https://lore.kernel.org/all/0659ea8d-a463-47c8-9180-43c719e106eb@linux.ibm.com/

Thanks,
--Nilay



* Re: CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown
  2025-11-28  5:20       ` Nilay Shroff
@ 2025-11-28  6:33         ` yangerkun
  2025-11-28  7:15           ` Nilay Shroff
  0 siblings, 1 reply; 9+ messages in thread
From: yangerkun @ 2025-11-28  6:33 UTC (permalink / raw)
  To: Nilay Shroff, Zheng Qixing, Greg KH
  Cc: cve, linux-kernel, yukuai, ming.lei, zhangyi (F), Hou Tao



On 2025/11/28 13:20, Nilay Shroff wrote:
> 
> 
> On 11/28/25 6:45 AM, Zheng Qixing wrote:
>>
>> On 2025/11/27 21:39, Greg KH wrote:
>>> On Thu, Nov 27, 2025 at 09:22:42PM +0800, Zheng Qixing wrote:
>>>> Hi,
>>>>
>>>> Commit b86433721f46 ("blk-mq: fix potential deadlock while nr_requests
>>>> grown")  aims to avoid a deadlock issue when the queue is frozen and memory
>>>> reclaim is triggered.
>>>>
>>>> However, the sysfs nr_requests update path is already under a
>>>> memalloc_noio_save() region while the queue is frozen (via
>>>> blk_mq_freeze_queue()).
>>> Did the lockdep splat in
>>> https://lore.kernel.org/all/0659ea8d-a463-47c8-9180-43c719e106eb@linux.ibm.com/
>>> not describe the issue here that the commit is attempting to solve?
>>>
>>> thanks,
>>>
>>> greg k-h
>>
>>
>> The deadlock described at that link is in the elevator switch path, but the patch modifies the sysfs nr_requests update path.
>>
>> I could not identify any potential deadlock on this path. If I have misunderstood something, could someone clarify?
>>
>>
> Let me clarify the confusion here.
> 
> The deadlock reported in [1] requires updates across multiple code paths. There are
> three distinct paths that need to be fixed to avoid the deadlock. While the report
> in [1] only exposed the issue in one of these paths, we already knew that all three
> paths needed changes to fully resolve the problem:
> 
> 1. Elevator change path (via sysfs attribute) that triggers a scheduler tags update
> 2. Elevator change path triggered by an nr_hw_queues update which also triggers a
>     scheduler tags update
> 3. Scheduler tags update triggered through the nr_requests sysfs attribute (note
>     that when nr_requests grows beyond the current queue depth, it triggers a
>     scheduler tags update)

commit b86433721f46d934940528f28d49c1dedb690df1 (HEAD -> master)
Author: Yu Kuai <yukuai3@huawei.com>
Date:   Wed Sep 10 16:04:43 2025 +0800

     blk-mq: fix potential deadlock while nr_requests grown

     Allocate and free sched_tags while queue is freezed can deadlock[1],
     this is a long term problem, hence allocate memory before freezing
     queue and free memory after queue is unfreezed.

     [1] 
https://lore.kernel.org/all/0659ea8d-a463-47c8-9180-43c719e106eb@linux.ibm.com/
     Fixes: e3a2b3f931f5 ("blk-mq: allow changing of queue depth through 
sysfs")

     Signed-off-by: Yu Kuai <yukuai3@huawei.com>
     Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
     Signed-off-by: Jens Axboe <axboe@kernel.dk>

Our understanding of the problem Yu describes is this: when we update
nr_requests, we may need to allocate memory (if nr_requests grows), that
allocation may trigger memory reclaim and thus further I/O, and since
the request_queue is already frozen, a deadlock results.

However, after checking the source code, the path is
queue_requests_store->blk_mq_freeze_queue->memalloc_noio_save, so any
memory allocation in this region cannot trigger further I/O. The
deadlock therefore cannot happen... and if that is true, this patch
does not fix any real problem.

Thanks,
Erkun.

> 
> The first two code paths were addressed by:
> commit f5a6604f7a44 (“block: fix lockdep warning caused by lock dependency in
> elv_iosched_store”), and commit 04225d13aef1 (“block: fix potential deadlock while
> running nr_hw_queue update”) respectively.
> 
> The third code path was fixed by:
> commit b86433721f46 (“blk-mq: fix potential deadlock while nr_requests grown”).
> 
> Ideally, all of these commits should be referenced as collectively fixing the lockdep
> splat reported in [1]. I hope this clarifies the situation. Please let me know if you
> have any further questions.
> 
> [1] https://lore.kernel.org/all/0659ea8d-a463-47c8-9180-43c719e106eb@linux.ibm.com/
> 
> Thanks,
> --Nilay
> 


* Re: CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown
  2025-11-28  6:33         ` yangerkun
@ 2025-11-28  7:15           ` Nilay Shroff
  2025-11-28  9:44             ` Zheng Qixing
  2025-11-29  3:52             ` Zheng Qixing
  0 siblings, 2 replies; 9+ messages in thread
From: Nilay Shroff @ 2025-11-28  7:15 UTC (permalink / raw)
  To: yangerkun, Zheng Qixing, Greg KH
  Cc: cve, linux-kernel, yukuai, ming.lei, zhangyi (F), Hou Tao

> 
> commit b86433721f46d934940528f28d49c1dedb690df1 (HEAD -> master)
> Author: Yu Kuai <yukuai3@huawei.com>
> Date:   Wed Sep 10 16:04:43 2025 +0800
> 
>     blk-mq: fix potential deadlock while nr_requests grown
> 
>     Allocate and free sched_tags while queue is freezed can deadlock[1],
>     this is a long term problem, hence allocate memory before freezing
>     queue and free memory after queue is unfreezed.
> 
>     [1] https://lore.kernel.org/all/0659ea8d-a463-47c8-9180-43c719e106eb@linux.ibm.com/
>     Fixes: e3a2b3f931f5 ("blk-mq: allow changing of queue depth through sysfs")
> 
>     Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>     Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
>     Signed-off-by: Jens Axboe <axboe@kernel.dk>
> 
> Our understanding of the problem Yu describes is this: when we update
> nr_requests, we may need to allocate memory (if nr_requests grows), that
> allocation may trigger memory reclaim and thus further I/O, and since
> the request_queue is already frozen, a deadlock results.
> 
> However, after checking the source code, the path is
> queue_requests_store->blk_mq_freeze_queue->memalloc_noio_save, so any
> memory allocation in this region cannot trigger further I/O. The
> deadlock therefore cannot happen... and if that is true, this patch
> does not fix any real problem.
> 
> 

Yes, memalloc_noio_save() is invoked before we freeze the queue (e.g., in
elv_iosched_store()), but that does not prevent the deadlock scenario described
in the lockdep splat.

If you look closely at the splat, the problematic lock is not fs_reclaim (which
may be the first impression), but rather ->pcpu_alloc_mutex. From the splat, the
chain of dependencies looks like this:

thread #0: blocked on q->elevator_lock  
  thread #1: blocked on ->pcpu_alloc_mutex
    thread #2: blocked on fs-reclaim
 
Here is the key detail:

Thread #0 is running under GFP_NOIO scope (due to memalloc_noio_save()). 
However, it is not blocked on fs_reclaim. Instead, it is blocked 
on ->elevator_lock.

Thread #1 is also running with GFP_NOIO and holds ->elevator_lock
while the queue is frozen. It is blocked on ->pcpu_alloc_mutex,
which is already held by Thread #2 (the thread that is stuck in
fs_reclaim). Thread #2 is running without GFP_NOIO scope.

In other words:
- GFP_NOIO prevents a thread from entering fs_reclaim, but it does
  not prevent triggering per-CPU memory allocations, which require
  taking ->pcpu_alloc_mutex.
- This ->pcpu_alloc_mutex is the actual source of contention in the
  splat, and it sits outside the protections offered by GFP_NOIO. 

That means:
- Even though memalloc_noio_save() avoids fs reclaim recursion, 
  it does not prevent per-CPU allocations from blocking, and thus
  it cannot prevent the deadlock involving ->pcpu_alloc_mutex.

So the reasoning that “memalloc_noio_save() prevents deadlock” is
incomplete — GFP_NOIO only handles reclaim, not per-CPU allocations.

This is why the patch is still needed: GFP_NOIO scope with freezing
the queue and modifying scheduler tags can still lead to a circular
dependency involving ->pcpu_alloc_mutex, which the splat clearly shows. 

If you look at the reasoning described in commit f5a6604f7a44 (“block: 
fix lockdep warning caused by lock dependency in elv_iosched_store”) and
commit 04225d13aef1 (“block: fix potential deadlock while running 
nr_hw_queue update”), both explicitly explain that the goal is to break
the dependency chain between the percpu allocator lock and the elevator lock.

The third commit, b86433721f46 (“blk-mq: fix potential deadlock while
nr_requests grown”), is less explicit in its explanation, but it addresses
the same underlying issue.

Taken together, all these commits resolve the deadlock described in the
lockdep splat.

Thanks,
--Nilay



* Re: CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown
  2025-11-28  7:15           ` Nilay Shroff
@ 2025-11-28  9:44             ` Zheng Qixing
  2025-11-29  3:52             ` Zheng Qixing
  1 sibling, 0 replies; 9+ messages in thread
From: Zheng Qixing @ 2025-11-28  9:44 UTC (permalink / raw)
  To: Nilay Shroff
  Cc: cve, linux-kernel, yukuai, ming.lei, zhangyi (F), Hou Tao,
	yangerkun, Greg KH, zhengqixing


On 2025/11/28 15:15, Nilay Shroff wrote:
>> commit b86433721f46d934940528f28d49c1dedb690df1 (HEAD -> master)
>> Author: Yu Kuai <yukuai3@huawei.com>
>> Date:   Wed Sep 10 16:04:43 2025 +0800
>>
>>      blk-mq: fix potential deadlock while nr_requests grown
>>
>>      Allocate and free sched_tags while queue is freezed can deadlock[1],
>>      this is a long term problem, hence allocate memory before freezing
>>      queue and free memory after queue is unfreezed.
>>
>>      [1] https://lore.kernel.org/all/0659ea8d-a463-47c8-9180-43c719e106eb@linux.ibm.com/
>>      Fixes: e3a2b3f931f5 ("blk-mq: allow changing of queue depth through sysfs")
>>
>>      Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>>      Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
>>      Signed-off-by: Jens Axboe <axboe@kernel.dk>
>>
>> Our understanding of the problem Yu describes is this: when we update
>> nr_requests, we may need to allocate memory (if nr_requests grows), that
>> allocation may trigger memory reclaim and thus further I/O, and since
>> the request_queue is already frozen, a deadlock results.
>>
>> However, after checking the source code, the path is
>> queue_requests_store->blk_mq_freeze_queue->memalloc_noio_save, so any
>> memory allocation in this region cannot trigger further I/O. The
>> deadlock therefore cannot happen... and if that is true, this patch
>> does not fix any real problem.
>>
>>
> Yes, memalloc_noio_save() is invoked before we freeze the queue (e.g., in
> elv_iosched_store()), but that does not prevent the deadlock scenario described
> in the lockdep splat.
>
> If you look closely at the splat, the problematic lock is not fs_reclaim (which
> may be the first impression), but rather ->pcpu_alloc_mutex. From the splat, the
> chain of dependencies looks like this:
>
> thread #0: blocked on q->elevator_lock
>    thread #1: blocked on ->pcpu_alloc_mutex
>      thread #2: blocked on fs-reclaim
>   
> Here is the key detail:
>
> Thread #0 is running under GFP_NOIO scope (due to memalloc_noio_save()).
> However, it is not blocked on fs_reclaim. Instead, it is blocked
> on ->elevator_lock.
>
> Thread #1 is also running with GFP_NOIO and holds ->elevator_lock
> while the queue is frozen. It is blocked on ->pcpu_alloc_mutex,
> which is already held by Thread #2 (the thread that is stuck in
> fs_reclaim). Thread #2 is running without GFP_NOIO scope.
>
> In other words:
> - GFP_NOIO prevents a thread from entering fs_reclaim, but it does
>    not prevent triggering per-CPU memory allocations, which require
>    taking ->pcpu_alloc_mutex.
> - This ->pcpu_alloc_mutex is the actual source of contention in the
>    splat, and it sits outside the protections offered by GFP_NOIO.
>
> That means:
> - Even though memalloc_noio_save() avoids fs reclaim recursion,
>    it does not prevent per-CPU allocations from blocking, and thus
>    it cannot prevent the deadlock involving ->pcpu_alloc_mutex.
>

Thank you for the detailed explanation.

Now I understand that there could indeed be a deadlock issue here :)


Thanks,

Qixing



* Re: CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown
  2025-11-28  7:15           ` Nilay Shroff
  2025-11-28  9:44             ` Zheng Qixing
@ 2025-11-29  3:52             ` Zheng Qixing
  1 sibling, 0 replies; 9+ messages in thread
From: Zheng Qixing @ 2025-11-29  3:52 UTC (permalink / raw)
  To: Nilay Shroff, Greg KH
  Cc: cve, linux-kernel, yukuai, ming.lei, zhangyi (F), Hou Tao,
	yangerkun, zhengqixing

Commit 28307d938fb2 ("percpu: make pcpu_alloc() aware of current gfp 
context") has fixed a reclaim recursion for scoped GFP_NOFS context by 
avoiding taking pcpu_alloc_mutex.

@@ -1569,6 +1569,12 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
     void __percpu *ptr;
     size_t bits, bit_align;

+    gfp = current_gfp_context(gfp);
+    /* whitelisted flags that can be passed to the backing allocators */
+    pcpu_gfp = gfp & (GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
+    is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;
+    do_warn = !(gfp & __GFP_NOWARN);


Commit 9a5b183941b5 ("mm, percpu: do not consider sleepable allocations 
atomic") fixes premature allocation failures in certain scenarios. 
However, this change made it possible to acquire the pcpu_alloc_mutex 
under GFP_NOIO scope.

@@ -1745,7 +1745,7 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
     gfp = current_gfp_context(gfp);
     /* whitelisted flags that can be passed to the backing allocators */
     pcpu_gfp = gfp & (GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
-    is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;
+    is_atomic = !gfpflags_allow_blocking(gfp);
     do_warn = !(gfp & __GFP_NOWARN);


Here's the relevant commit timeline:

e3a2b3f931f5 ("blk-mq: allow changing of queue depth through sysfs")      v3.16-rc1
28307d938fb2 ("percpu: make pcpu_alloc() aware of current gfp context")   v5.7-rc5
9a5b183941b5 ("mm, percpu: do not consider sleepable allocations atomic") v6.15-rc1
b86433721f46 ("blk-mq: fix potential deadlock while nr_requests grown")   v6.18-rc1

This means that in the Linux master branch, this deadlock issue *did not 
exist* during the version window from v5.7-rc5 to v6.15-rc1. After 
analyzing the LTS versions, I found that linux-5.7.y through 
linux-6.13.y should also not have this deadlock issue.

If you have any questions or concerns, please feel free to discuss further.


Best regards,

Qixing



* Re: CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown
       [not found] <2025111256-CVE-2025-40146-b919@gregkh>
  2025-11-27 13:22 ` CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown Zheng Qixing
@ 2025-11-29  4:01 ` Zheng Qixing
  1 sibling, 0 replies; 9+ messages in thread
From: Zheng Qixing @ 2025-11-29  4:01 UTC (permalink / raw)
  To: gregkh, Nilay Shroff
  Cc: cve, linux-kernel, yukuai, ming.lei, zhangyi (F), Hou Tao,
	yangerkun, zhengqixing

Sorry for resending this message.


Commit 28307d938fb2 ("percpu: make pcpu_alloc() aware of current gfp 
context") has fixed a reclaim recursion for scoped GFP_NOFS context by 
avoiding taking pcpu_alloc_mutex.

@@ -1569,6 +1569,12 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
      void __percpu *ptr;
      size_t bits, bit_align;

+    gfp = current_gfp_context(gfp);
+    /* whitelisted flags that can be passed to the backing allocators */
+    pcpu_gfp = gfp & (GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
+    is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;
+    do_warn = !(gfp & __GFP_NOWARN);


Commit 9a5b183941b5 ("mm, percpu: do not consider sleepable allocations 
atomic") fixes premature allocation failures in certain scenarios. 
However, this change made it possible to acquire the pcpu_alloc_mutex 
under GFP_NOIO scope.

@@ -1745,7 +1745,7 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
      gfp = current_gfp_context(gfp);
      /* whitelisted flags that can be passed to the backing allocators */
      pcpu_gfp = gfp & (GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
-    is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;
+    is_atomic = !gfpflags_allow_blocking(gfp);
      do_warn = !(gfp & __GFP_NOWARN);


Here's the relevant commit timeline:

e3a2b3f931f5 ("blk-mq: allow changing of queue depth through sysfs")      v3.16-rc1
28307d938fb2 ("percpu: make pcpu_alloc() aware of current gfp context")   v5.7-rc5
9a5b183941b5 ("mm, percpu: do not consider sleepable allocations atomic") v6.15-rc1
b86433721f46 ("blk-mq: fix potential deadlock while nr_requests grown")   v6.18-rc1


This means that in the Linux master branch, this deadlock issue *did not 
exist* during the version window from v5.7-rc5 to v6.15-rc1. After 
analyzing the LTS versions, I found that linux-5.7.y through 
linux-6.13.y should also not have this deadlock issue.

If you have any questions or concerns, please feel free to discuss further.


Best regards,

Qixing




end of thread, other threads:[~2025-11-29  4:01 UTC | newest]

Thread overview: 9+ messages
     [not found] <2025111256-CVE-2025-40146-b919@gregkh>
2025-11-27 13:22 ` CVE-2025-40146: blk-mq: fix potential deadlock while nr_requests grown Zheng Qixing
2025-11-27 13:39   ` Greg KH
2025-11-28  1:15     ` Zheng Qixing
2025-11-28  5:20       ` Nilay Shroff
2025-11-28  6:33         ` yangerkun
2025-11-28  7:15           ` Nilay Shroff
2025-11-28  9:44             ` Zheng Qixing
2025-11-29  3:52             ` Zheng Qixing
2025-11-29  4:01 ` Zheng Qixing
