* [PATCH v3] md/raid1: Fix stack memory use after return in raid1_reshape
@ 2025-06-12 11:28 Wang Jinchao
  2025-06-13  9:11 ` Yu Kuai
  2025-06-14  6:44 ` Yu Kuai
  0 siblings, 2 replies; 3+ messages in thread
From: Wang Jinchao @ 2025-06-12 11:28 UTC (permalink / raw)
  To: Song Liu, Yu Kuai; +Cc: Wang Jinchao, linux-raid, linux-kernel

In raid1_reshape(), newpool is allocated on the stack and assigned
to conf->r1bio_pool. This leaves conf->r1bio_pool.wait.head pointing
at a stack address, which becomes invalid once raid1_reshape()
returns. Accessing it later can lead to a kernel panic.

Example access path:

raid1_reshape()
{
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}
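
To see why the struct copy itself is the problem, note that an
initialized wait queue head is self-referential: its list pointers
point at the head's own address, so a by-value copy still points at
the original location. A minimal standalone sketch (hypothetical
names, not kernel code):

	struct list_head { struct list_head *next, *prev; };

	static void init_head(struct list_head *h)
	{
		/* self-referential: both pointers refer to h itself */
		h->next = h;
		h->prev = h;
	}

	struct pool { struct list_head wait_head; };

	static void reshape(struct pool *conf_pool)
	{
		struct pool newpool;

		init_head(&newpool.wait_head);
		*conf_pool = newpool;	/* struct copy */
		/*
		 * conf_pool->wait_head.next/prev still point at
		 * &newpool.wait_head in this stack frame; they
		 * dangle as soon as reshape() returns.
		 */
	}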

raid1_read_request() or raid1_write_request()
{
	alloc_r1bio()
	{
		mempool_alloc()
		{
			// if pool->alloc fails
			remove_element()
			{
				--pool->curr_nr;
			}
		}
	}
}

mempool_free()
{
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address;
		// wake_up() dereferences this invalid address,
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix this by reinitializing conf->r1bio_pool.wait after assigning
newpool.
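
The reinit is sufficient because init_waitqueue_head() re-points the
embedded list head at its new location inside conf. A simplified
sketch of what it boils down to (the kernel version is a macro that
also does lockdep bookkeeping):

	/* roughly what init_waitqueue_head(q) does */
	spin_lock_init(&q->lock);
	INIT_LIST_HEAD(&q->head);	/* q->head.next = q->head.prev = &q->head */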

Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 19c5a0ce5a40..fd4ce2a4136f 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3428,6 +3428,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);
 
 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
-- 
2.43.0



* Re: [PATCH v3] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-12 11:28 [PATCH v3] md/raid1: Fix stack memory use after return in raid1_reshape Wang Jinchao
@ 2025-06-13  9:11 ` Yu Kuai
  2025-06-14  6:44 ` Yu Kuai
  1 sibling, 0 replies; 3+ messages in thread
From: Yu Kuai @ 2025-06-13  9:11 UTC (permalink / raw)
  To: Wang Jinchao, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

On 2025/06/12 19:28, Wang Jinchao wrote:
> In raid1_reshape(), newpool is allocated on the stack and assigned
> to conf->r1bio_pool. This leaves conf->r1bio_pool.wait.head pointing
> at a stack address, which becomes invalid once raid1_reshape()
> returns. Accessing it later can lead to a kernel panic.
[...]

Reviewed-by: Yu Kuai <yukuai3@huawei.com>



* Re: [PATCH v3] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-12 11:28 [PATCH v3] md/raid1: Fix stack memory use after return in raid1_reshape Wang Jinchao
  2025-06-13  9:11 ` Yu Kuai
@ 2025-06-14  6:44 ` Yu Kuai
  1 sibling, 0 replies; 3+ messages in thread
From: Yu Kuai @ 2025-06-14  6:44 UTC (permalink / raw)
  To: Wang Jinchao, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

On 2025/06/12 19:28, Wang Jinchao wrote:
> In raid1_reshape(), newpool is allocated on the stack and assigned
> to conf->r1bio_pool. This leaves conf->r1bio_pool.wait.head pointing
> at a stack address, which becomes invalid once raid1_reshape()
> returns. Accessing it later can lead to a kernel panic.
[...]

Applied to md-6.16, with a Fixes tag:
Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")

Thanks,
Kuai


