linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
@ 2025-06-11  8:55 Wang Jinchao
  2025-06-12  7:31 ` Yu Kuai
  0 siblings, 1 reply; 14+ messages in thread
From: Wang Jinchao @ 2025-06-11  8:55 UTC (permalink / raw)
  To: Song Liu, Yu Kuai; +Cc: Wang Jinchao, linux-raid, linux-kernel

In the raid1_reshape function, newpool is
allocated on the stack and assigned to conf->r1bio_pool.
This results in conf->r1bio_pool.wait.head pointing
to a stack address.
Accessing this address later can lead to a kernel panic.

Example access path:

raid1_reshape()
{
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request()
{
	alloc_r1bio()
	{
		mempool_alloc()
		{
			// if pool->alloc fails
			remove_element()
			{
				--pool->curr_nr;
			}
		}
	}
}

mempool_free()
{
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}
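
The root cause is that mempool_init() makes the wait queue head
self-referential, so copying the whole mempool_t by value leaves the
list pointers aimed at the old stack frame (simplified illustration,
not the exact raid1.c code):

	mempool_t newpool;	      /* lives in raid1_reshape()'s stack frame */
	mempool_init(&newpool, ...);  /* newpool.wait.head.next == &newpool.wait.head */
	conf->r1bio_pool = newpool;   /* struct copy: wait.head still points into the stack */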

Fix:
The solution is to avoid using a stack-based newpool.
Instead, directly initialize conf->r1bio_pool.

Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
---
v1 -> v2:
- change subject
- use mempool_init(&conf->r1bio_pool) instead of reinitializing the list on stack
---
 drivers/md/raid1.c | 34 +++++++++++++++++++---------------
 1 file changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 19c5a0ce5a40..f2436262092a 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3366,7 +3366,7 @@ static int raid1_reshape(struct mddev *mddev)
 	 * At the same time, we "pack" the devices so that all the missing
 	 * devices have the higher raid_disk numbers.
 	 */
-	mempool_t newpool, oldpool;
+	mempool_t oldpool;
 	struct pool_info *newpoolinfo;
 	struct raid1_info *newmirrors;
 	struct r1conf *conf = mddev->private;
@@ -3375,9 +3375,6 @@ static int raid1_reshape(struct mddev *mddev)
 	int d, d2;
 	int ret;
 
-	memset(&newpool, 0, sizeof(newpool));
-	memset(&oldpool, 0, sizeof(oldpool));
-
 	/* Cannot change chunk_size, layout, or level */
 	if (mddev->chunk_sectors != mddev->new_chunk_sectors ||
 	    mddev->layout != mddev->new_layout ||
@@ -3408,26 +3405,33 @@ static int raid1_reshape(struct mddev *mddev)
 	newpoolinfo->mddev = mddev;
 	newpoolinfo->raid_disks = raid_disks * 2;
 
-	ret = mempool_init(&newpool, NR_RAID_BIOS, r1bio_pool_alloc,
-			   rbio_pool_free, newpoolinfo);
-	if (ret) {
-		kfree(newpoolinfo);
-		return ret;
-	}
 	newmirrors = kzalloc(array3_size(sizeof(struct raid1_info),
-					 raid_disks, 2),
-			     GFP_KERNEL);
+	raid_disks, 2),
+	GFP_KERNEL);
 	if (!newmirrors) {
 		kfree(newpoolinfo);
-		mempool_exit(&newpool);
 		return -ENOMEM;
 	}
 
+	/* stop everything before switching the pool */
 	freeze_array(conf, 0);
 
-	/* ok, everything is stopped */
+	/* backup old pool in case restore is needed */
 	oldpool = conf->r1bio_pool;
-	conf->r1bio_pool = newpool;
+
+	ret = mempool_init(&conf->r1bio_pool, NR_RAID_BIOS, r1bio_pool_alloc,
+			   rbio_pool_free, newpoolinfo);
+	if (ret) {
+		kfree(newpoolinfo);
+		kfree(newmirrors);
+		mempool_exit(&conf->r1bio_pool);
+		/* restore the old pool */
+		conf->r1bio_pool = oldpool;
+		unfreeze_array(conf);
+		pr_err("md/raid1:%s: cannot allocate r1bio_pool for reshape\n",
+			mdname(mddev));
+		return ret;
+	}
 
 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-11  8:55 [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape Wang Jinchao
@ 2025-06-12  7:31 ` Yu Kuai
  2025-06-12  7:45   ` Wang Jinchao
  0 siblings, 1 reply; 14+ messages in thread
From: Yu Kuai @ 2025-06-12  7:31 UTC (permalink / raw)
  To: Wang Jinchao, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

Hi,

On 2025/06/11 16:55, Wang Jinchao wrote:
> In the raid1_reshape function, newpool is
> allocated on the stack and assigned to conf->r1bio_pool.
> This results in conf->r1bio_pool.wait.head pointing
> to a stack address.
> Accessing this address later can lead to a kernel panic.
> 
> Example access path:
> 
> raid1_reshape()
> {
> 	// newpool is on the stack
> 	mempool_t newpool, oldpool;
> 	// initialize newpool.wait.head to stack address
> 	mempool_init(&newpool, ...);
> 	conf->r1bio_pool = newpool;
> }
> 
> raid1_read_request() or raid1_write_request()
> {
> 	alloc_r1bio()
> 	{
> 		mempool_alloc()
> 		{
> 			// if pool->alloc fails
> 			remove_element()
> 			{
> 				--pool->curr_nr;
> 			}
> 		}
> 	}
> }
> 
> mempool_free()
> {
> 	if (pool->curr_nr < pool->min_nr) {
> 		// pool->wait.head is a stack address
> 		// wake_up() will try to access this invalid address
> 		// which leads to a kernel panic
> 		return;
> 		wake_up(&pool->wait);
> 	}
> }
> 
> Fix:
> The solution is to avoid using a stack-based newpool.
> Instead, directly initialize conf->r1bio_pool.
> 
> Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
> ---
> v1 -> v2:
> - change subject
> - use mempool_init(&conf->r1bio_pool) instead of reinitializing the list on stack
> ---
>   drivers/md/raid1.c | 34 +++++++++++++++++++---------------
>   1 file changed, 19 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index 19c5a0ce5a40..f2436262092a 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -3366,7 +3366,7 @@ static int raid1_reshape(struct mddev *mddev)
>   	 * At the same time, we "pack" the devices so that all the missing
>   	 * devices have the higher raid_disk numbers.
>   	 */
> -	mempool_t newpool, oldpool;
> +	mempool_t oldpool;
>   	struct pool_info *newpoolinfo;
>   	struct raid1_info *newmirrors;
>   	struct r1conf *conf = mddev->private;
> @@ -3375,9 +3375,6 @@ static int raid1_reshape(struct mddev *mddev)
>   	int d, d2;
>   	int ret;
>   
> -	memset(&newpool, 0, sizeof(newpool));
> -	memset(&oldpool, 0, sizeof(oldpool));
> -
>   	/* Cannot change chunk_size, layout, or level */
>   	if (mddev->chunk_sectors != mddev->new_chunk_sectors ||
>   	    mddev->layout != mddev->new_layout ||
> @@ -3408,26 +3405,33 @@ static int raid1_reshape(struct mddev *mddev)
>   	newpoolinfo->mddev = mddev;
>   	newpoolinfo->raid_disks = raid_disks * 2;
>   
> -	ret = mempool_init(&newpool, NR_RAID_BIOS, r1bio_pool_alloc,
> -			   rbio_pool_free, newpoolinfo);
> -	if (ret) {
> -		kfree(newpoolinfo);
> -		return ret;
> -	}
>   	newmirrors = kzalloc(array3_size(sizeof(struct raid1_info),
> -					 raid_disks, 2),
> -			     GFP_KERNEL);
> +	raid_disks, 2),
> +	GFP_KERNEL);
>   	if (!newmirrors) {
>   		kfree(newpoolinfo);
> -		mempool_exit(&newpool);
>   		return -ENOMEM;
>   	}
>   
> +	/* stop everything before switching the pool */
>   	freeze_array(conf, 0);
>   
> -	/* ok, everything is stopped */
> +	/* backup old pool in case restore is needed */
>   	oldpool = conf->r1bio_pool;
> -	conf->r1bio_pool = newpool;
> +
> +	ret = mempool_init(&conf->r1bio_pool, NR_RAID_BIOS, r1bio_pool_alloc,
> +			   rbio_pool_free, newpoolinfo);
> +	if (ret) {
> +		kfree(newpoolinfo);
> +		kfree(newmirrors);
> +		mempool_exit(&conf->r1bio_pool);
> +		/* restore the old pool */
> +		conf->r1bio_pool = oldpool;
> +		unfreeze_array(conf);
> +		pr_err("md/raid1:%s: cannot allocate r1bio_pool for reshape\n",
> +			mdname(mddev));
> +		return ret;
> +	}
>   
>   	for (d = d2 = 0; d < conf->raid_disks; d++) {
>   		struct md_rdev *rdev = conf->mirrors[d].rdev;
> 

Any specific reason not to use mempool_resize() and krealloc() here,
in the case where the new raid_disks is greater than the old one?

Thanks,
Kuai


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-12  7:31 ` Yu Kuai
@ 2025-06-12  7:45   ` Wang Jinchao
  2025-06-12  9:17     ` Yu Kuai
  0 siblings, 1 reply; 14+ messages in thread
From: Wang Jinchao @ 2025-06-12  7:45 UTC (permalink / raw)
  To: Yu Kuai, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

On 2025/6/12 15:31, Yu Kuai wrote:
> Hi,
> 
> On 2025/06/11 16:55, Wang Jinchao wrote:
>> In the raid1_reshape function, newpool is
>> allocated on the stack and assigned to conf->r1bio_pool.
>> This results in conf->r1bio_pool.wait.head pointing
>> to a stack address.
>> Accessing this address later can lead to a kernel panic.
>>
>> Example access path:
>>
>> raid1_reshape()
>> {
>>     // newpool is on the stack
>>     mempool_t newpool, oldpool;
>>     // initialize newpool.wait.head to stack address
>>     mempool_init(&newpool, ...);
>>     conf->r1bio_pool = newpool;
>> }
>>
>> raid1_read_request() or raid1_write_request()
>> {
>>     alloc_r1bio()
>>     {
>>         mempool_alloc()
>>         {
>>             // if pool->alloc fails
>>             remove_element()
>>             {
>>                 --pool->curr_nr;
>>             }
>>         }
>>     }
>> }
>>
>> mempool_free()
>> {
>>     if (pool->curr_nr < pool->min_nr) {
>>         // pool->wait.head is a stack address
>>         // wake_up() will try to access this invalid address
>>         // which leads to a kernel panic
>>         return;
>>         wake_up(&pool->wait);
>>     }
>> }
>>
>> Fix:
>> The solution is to avoid using a stack-based newpool.
>> Instead, directly initialize conf->r1bio_pool.
>>
>> Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
>> ---
>> v1 -> v2:
>> - change subject
>> - use mempool_init(&conf->r1bio_pool) instead of reinitializing the 
>> list on stack
>> ---
>>   drivers/md/raid1.c | 34 +++++++++++++++++++---------------
>>   1 file changed, 19 insertions(+), 15 deletions(-)
>>
>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
>> index 19c5a0ce5a40..f2436262092a 100644
>> --- a/drivers/md/raid1.c
>> +++ b/drivers/md/raid1.c
>> @@ -3366,7 +3366,7 @@ static int raid1_reshape(struct mddev *mddev)
>>        * At the same time, we "pack" the devices so that all the missing
>>        * devices have the higher raid_disk numbers.
>>        */
>> -    mempool_t newpool, oldpool;
>> +    mempool_t oldpool;
>>       struct pool_info *newpoolinfo;
>>       struct raid1_info *newmirrors;
>>       struct r1conf *conf = mddev->private;
>> @@ -3375,9 +3375,6 @@ static int raid1_reshape(struct mddev *mddev)
>>       int d, d2;
>>       int ret;
>> -    memset(&newpool, 0, sizeof(newpool));
>> -    memset(&oldpool, 0, sizeof(oldpool));
>> -
>>       /* Cannot change chunk_size, layout, or level */
>>       if (mddev->chunk_sectors != mddev->new_chunk_sectors ||
>>           mddev->layout != mddev->new_layout ||
>> @@ -3408,26 +3405,33 @@ static int raid1_reshape(struct mddev *mddev)
>>       newpoolinfo->mddev = mddev;
>>       newpoolinfo->raid_disks = raid_disks * 2;
>> -    ret = mempool_init(&newpool, NR_RAID_BIOS, r1bio_pool_alloc,
>> -               rbio_pool_free, newpoolinfo);
>> -    if (ret) {
>> -        kfree(newpoolinfo);
>> -        return ret;
>> -    }
>>       newmirrors = kzalloc(array3_size(sizeof(struct raid1_info),
>> -                     raid_disks, 2),
>> -                 GFP_KERNEL);
>> +    raid_disks, 2),
>> +    GFP_KERNEL);
>>       if (!newmirrors) {
>>           kfree(newpoolinfo);
>> -        mempool_exit(&newpool);
>>           return -ENOMEM;
>>       }
>> +    /* stop everything before switching the pool */
>>       freeze_array(conf, 0);
>> -    /* ok, everything is stopped */
>> +    /* backup old pool in case restore is needed */
>>       oldpool = conf->r1bio_pool;
>> -    conf->r1bio_pool = newpool;
>> +
>> +    ret = mempool_init(&conf->r1bio_pool, NR_RAID_BIOS, 
>> r1bio_pool_alloc,
>> +               rbio_pool_free, newpoolinfo);
>> +    if (ret) {
>> +        kfree(newpoolinfo);
>> +        kfree(newmirrors);
>> +        mempool_exit(&conf->r1bio_pool);
>> +        /* restore the old pool */
>> +        conf->r1bio_pool = oldpool;
>> +        unfreeze_array(conf);
>> +        pr_err("md/raid1:%s: cannot allocate r1bio_pool for reshape\n",
>> +            mdname(mddev));
>> +        return ret;
>> +    }
>>       for (d = d2 = 0; d < conf->raid_disks; d++) {
>>           struct md_rdev *rdev = conf->mirrors[d].rdev;
>>
> 
> Any specific reason not to use mempool_resize() and krealloc() here?
> In the case if new raid_disks is greater than the old one.
The element size is different between the old pool and the new pool.
mempool_resize() only changes the number of elements in pool->elements;
it does not handle a change in element size, which is what happens in
raid1_reshape().
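
For context, the element size comes from the raid_disks value carried
in pool_info; a trimmed version of the allocator (close to the one in
drivers/md/raid1.c) looks like:

	static void *r1bio_pool_alloc(gfp_t gfp_flags, void *data)
	{
		struct pool_info *pi = data;
		int size = offsetof(struct r1bio, bios[pi->raid_disks]);

		/* one r1bio carries one bio slot per raid disk */
		return kzalloc(size, gfp_flags);
	}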

Another reason may be to avoid modifying the old pool directly — in case 
initializing the new pool fails, the old one remains usable.

If we modify the old pool directly and the operation fails, not only 
will the reshaped RAID be unusable, but the original RAID may also be 
corrupted.
> 
> Thanks,
> Kuai
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-12  7:45   ` Wang Jinchao
@ 2025-06-12  9:17     ` Yu Kuai
  2025-06-12  9:55       ` Wang Jinchao
  0 siblings, 1 reply; 14+ messages in thread
From: Yu Kuai @ 2025-06-12  9:17 UTC (permalink / raw)
  To: Wang Jinchao, Yu Kuai, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

Hi,

On 2025/06/12 15:45, Wang Jinchao wrote:
> On 2025/6/12 15:31, Yu Kuai wrote:
>> Hi,
>>
>> On 2025/06/11 16:55, Wang Jinchao wrote:
>>> In the raid1_reshape function, newpool is
>>> allocated on the stack and assigned to conf->r1bio_pool.
>>> This results in conf->r1bio_pool.wait.head pointing
>>> to a stack address.
>>> Accessing this address later can lead to a kernel panic.
>>>
>>> Example access path:
>>>
>>> raid1_reshape()
>>> {
>>>     // newpool is on the stack
>>>     mempool_t newpool, oldpool;
>>>     // initialize newpool.wait.head to stack address
>>>     mempool_init(&newpool, ...);
>>>     conf->r1bio_pool = newpool;
>>> }
>>>
>>> raid1_read_request() or raid1_write_request()
>>> {
>>>     alloc_r1bio()
>>>     {
>>>         mempool_alloc()
>>>         {
>>>             // if pool->alloc fails
>>>             remove_element()
>>>             {
>>>                 --pool->curr_nr;
>>>             }
>>>         }
>>>     }
>>> }
>>>
>>> mempool_free()
>>> {
>>>     if (pool->curr_nr < pool->min_nr) {
>>>         // pool->wait.head is a stack address
>>>         // wake_up() will try to access this invalid address
>>>         // which leads to a kernel panic
>>>         return;
>>>         wake_up(&pool->wait);
>>>     }
>>> }
>>>
>>> Fix:
>>> The solution is to avoid using a stack-based newpool.
>>> Instead, directly initialize conf->r1bio_pool.
>>>
>>> Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
>>> ---
>>> v1 -> v2:
>>> - change subject
>>> - use mempool_init(&conf->r1bio_pool) instead of reinitializing the 
>>> list on stack
>>> ---
>>>   drivers/md/raid1.c | 34 +++++++++++++++++++---------------
>>>   1 file changed, 19 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
>>> index 19c5a0ce5a40..f2436262092a 100644
>>> --- a/drivers/md/raid1.c
>>> +++ b/drivers/md/raid1.c
>>> @@ -3366,7 +3366,7 @@ static int raid1_reshape(struct mddev *mddev)
>>>        * At the same time, we "pack" the devices so that all the missing
>>>        * devices have the higher raid_disk numbers.
>>>        */
>>> -    mempool_t newpool, oldpool;
>>> +    mempool_t oldpool;
>>>       struct pool_info *newpoolinfo;
>>>       struct raid1_info *newmirrors;
>>>       struct r1conf *conf = mddev->private;
>>> @@ -3375,9 +3375,6 @@ static int raid1_reshape(struct mddev *mddev)
>>>       int d, d2;
>>>       int ret;
>>> -    memset(&newpool, 0, sizeof(newpool));
>>> -    memset(&oldpool, 0, sizeof(oldpool));
>>> -
>>>       /* Cannot change chunk_size, layout, or level */
>>>       if (mddev->chunk_sectors != mddev->new_chunk_sectors ||
>>>           mddev->layout != mddev->new_layout ||
>>> @@ -3408,26 +3405,33 @@ static int raid1_reshape(struct mddev *mddev)
>>>       newpoolinfo->mddev = mddev;
>>>       newpoolinfo->raid_disks = raid_disks * 2;
>>> -    ret = mempool_init(&newpool, NR_RAID_BIOS, r1bio_pool_alloc,
>>> -               rbio_pool_free, newpoolinfo);
>>> -    if (ret) {
>>> -        kfree(newpoolinfo);
>>> -        return ret;
>>> -    }
>>>       newmirrors = kzalloc(array3_size(sizeof(struct raid1_info),
>>> -                     raid_disks, 2),
>>> -                 GFP_KERNEL);
>>> +    raid_disks, 2),
>>> +    GFP_KERNEL);
>>>       if (!newmirrors) {
>>>           kfree(newpoolinfo);
>>> -        mempool_exit(&newpool);
>>>           return -ENOMEM;
>>>       }
>>> +    /* stop everything before switching the pool */
>>>       freeze_array(conf, 0);
>>> -    /* ok, everything is stopped */
>>> +    /* backup old pool in case restore is needed */
>>>       oldpool = conf->r1bio_pool;
>>> -    conf->r1bio_pool = newpool;
>>> +
>>> +    ret = mempool_init(&conf->r1bio_pool, NR_RAID_BIOS, 
>>> r1bio_pool_alloc,
>>> +               rbio_pool_free, newpoolinfo);
>>> +    if (ret) {
>>> +        kfree(newpoolinfo);
>>> +        kfree(newmirrors);
>>> +        mempool_exit(&conf->r1bio_pool);
>>> +        /* restore the old pool */
>>> +        conf->r1bio_pool = oldpool;
>>> +        unfreeze_array(conf);
>>> +        pr_err("md/raid1:%s: cannot allocate r1bio_pool for reshape\n",
>>> +            mdname(mddev));
>>> +        return ret;
>>> +    }
>>>       for (d = d2 = 0; d < conf->raid_disks; d++) {
>>>           struct md_rdev *rdev = conf->mirrors[d].rdev;
>>>
>>
>> Any specific reason not to use mempool_resize() and krealloc() here?
>> In the case if new raid_disks is greater than the old one.
> The element size is different between the old pool and the new pool.
> mempool_resize only resizes the pool size (i.e., the number of elements 
> in pool->elements), but does not handle changes in element size, which 
> occurs in raid1_reshape.
> 
> Another reason may be to avoid modifying the old pool directly — in case 
> initializing the new pool fails, the old one remains usable.
> 
> If we modify the old pool directly and the operation fails, not only 
> will the reshaped RAID be unusable, but the original RAID may also be 
> corrupted.

Yes, you're right, thanks for the explanation.

I feel like raid1_reshape() needs better coding; anyway, your fix looks
good.

Reviewed-by: Yu Kuai <yukuai3@huawei.com>

>>
>> Thanks,
>> Kuai
>>
> 
> .
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-12  9:17     ` Yu Kuai
@ 2025-06-12  9:55       ` Wang Jinchao
  2025-06-12 11:23         ` Yu Kuai
  0 siblings, 1 reply; 14+ messages in thread
From: Wang Jinchao @ 2025-06-12  9:55 UTC (permalink / raw)
  To: Yu Kuai, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

On 2025/6/12 17:17, Yu Kuai wrote:
> Hi,
> 
> On 2025/06/12 15:45, Wang Jinchao wrote:
>> On 2025/6/12 15:31, Yu Kuai wrote:
>>> Hi,
>>>
>>> On 2025/06/11 16:55, Wang Jinchao wrote:
>>>> In the raid1_reshape function, newpool is
>>>> allocated on the stack and assigned to conf->r1bio_pool.
>>>> This results in conf->r1bio_pool.wait.head pointing
>>>> to a stack address.
>>>> Accessing this address later can lead to a kernel panic.
>>>>
>>>> Example access path:
>>>>
>>>> raid1_reshape()
>>>> {
>>>>     // newpool is on the stack
>>>>     mempool_t newpool, oldpool;
>>>>     // initialize newpool.wait.head to stack address
>>>>     mempool_init(&newpool, ...);
>>>>     conf->r1bio_pool = newpool;
>>>> }
>>>>
>>>> raid1_read_request() or raid1_write_request()
>>>> {
>>>>     alloc_r1bio()
>>>>     {
>>>>         mempool_alloc()
>>>>         {
>>>>             // if pool->alloc fails
>>>>             remove_element()
>>>>             {
>>>>                 --pool->curr_nr;
>>>>             }
>>>>         }
>>>>     }
>>>> }
>>>>
>>>> mempool_free()
>>>> {
>>>>     if (pool->curr_nr < pool->min_nr) {
>>>>         // pool->wait.head is a stack address
>>>>         // wake_up() will try to access this invalid address
>>>>         // which leads to a kernel panic
>>>>         return;
>>>>         wake_up(&pool->wait);
>>>>     }
>>>> }
>>>>
>>>> Fix:
>>>> The solution is to avoid using a stack-based newpool.
>>>> Instead, directly initialize conf->r1bio_pool.
>>>>
>>>> Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
>>>> ---
>>>> v1 -> v2:
>>>> - change subject
>>>> - use mempool_init(&conf->r1bio_pool) instead of reinitializing the 
>>>> list on stack
>>>> ---
>>>>   drivers/md/raid1.c | 34 +++++++++++++++++++---------------
>>>>   1 file changed, 19 insertions(+), 15 deletions(-)
>>>>
>>>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
>>>> index 19c5a0ce5a40..f2436262092a 100644
>>>> --- a/drivers/md/raid1.c
>>>> +++ b/drivers/md/raid1.c
>>>> @@ -3366,7 +3366,7 @@ static int raid1_reshape(struct mddev *mddev)
>>>>        * At the same time, we "pack" the devices so that all the 
>>>> missing
>>>>        * devices have the higher raid_disk numbers.
>>>>        */
>>>> -    mempool_t newpool, oldpool;
>>>> +    mempool_t oldpool;
>>>>       struct pool_info *newpoolinfo;
>>>>       struct raid1_info *newmirrors;
>>>>       struct r1conf *conf = mddev->private;
>>>> @@ -3375,9 +3375,6 @@ static int raid1_reshape(struct mddev *mddev)
>>>>       int d, d2;
>>>>       int ret;
>>>> -    memset(&newpool, 0, sizeof(newpool));
>>>> -    memset(&oldpool, 0, sizeof(oldpool));
>>>> -
>>>>       /* Cannot change chunk_size, layout, or level */
>>>>       if (mddev->chunk_sectors != mddev->new_chunk_sectors ||
>>>>           mddev->layout != mddev->new_layout ||
>>>> @@ -3408,26 +3405,33 @@ static int raid1_reshape(struct mddev *mddev)
>>>>       newpoolinfo->mddev = mddev;
>>>>       newpoolinfo->raid_disks = raid_disks * 2;
>>>> -    ret = mempool_init(&newpool, NR_RAID_BIOS, r1bio_pool_alloc,
>>>> -               rbio_pool_free, newpoolinfo);
>>>> -    if (ret) {
>>>> -        kfree(newpoolinfo);
>>>> -        return ret;
>>>> -    }
>>>>       newmirrors = kzalloc(array3_size(sizeof(struct raid1_info),
>>>> -                     raid_disks, 2),
>>>> -                 GFP_KERNEL);
>>>> +    raid_disks, 2),
>>>> +    GFP_KERNEL);
>>>>       if (!newmirrors) {
>>>>           kfree(newpoolinfo);
>>>> -        mempool_exit(&newpool);
>>>>           return -ENOMEM;
>>>>       }
>>>> +    /* stop everything before switching the pool */
>>>>       freeze_array(conf, 0);
>>>> -    /* ok, everything is stopped */
>>>> +    /* backup old pool in case restore is needed */
>>>>       oldpool = conf->r1bio_pool;
>>>> -    conf->r1bio_pool = newpool;
>>>> +
>>>> +    ret = mempool_init(&conf->r1bio_pool, NR_RAID_BIOS, 
>>>> r1bio_pool_alloc,
>>>> +               rbio_pool_free, newpoolinfo);
>>>> +    if (ret) {
>>>> +        kfree(newpoolinfo);
>>>> +        kfree(newmirrors);
>>>> +        mempool_exit(&conf->r1bio_pool);
>>>> +        /* restore the old pool */
>>>> +        conf->r1bio_pool = oldpool;
>>>> +        unfreeze_array(conf);
>>>> +        pr_err("md/raid1:%s: cannot allocate r1bio_pool for 
>>>> reshape\n",
>>>> +            mdname(mddev));
>>>> +        return ret;
>>>> +    }
>>>>       for (d = d2 = 0; d < conf->raid_disks; d++) {
>>>>           struct md_rdev *rdev = conf->mirrors[d].rdev;
>>>>
>>>
>>> Any specific reason not to use mempool_resize() and krealloc() here?
>>> In the case if new raid_disks is greater than the old one.
>> The element size is different between the old pool and the new pool.
>> mempool_resize only resizes the pool size (i.e., the number of 
>> elements in pool->elements), but does not handle changes in element 
>> size, which occurs in raid1_reshape.
>>
>> Another reason may be to avoid modifying the old pool directly — in 
>> case initializing the new pool fails, the old one remains usable.
>>
>> If we modify the old pool directly and the operation fails, not only 
>> will the reshaped RAID be unusable, but the original RAID may also be 
>> corrupted.
> 
> Yes, you're right, thanks for the explanation.
> 
> I feel like raid1_reshape() need better coding, anyway, your fix looks
> good.
> 
> Reviewed-by: Yu Kuai <yukuai3@huawei.com>

Now that we have the same information, I prefer the v1 patch until
raid1_reshape is refactored, because it is really simple (only one
line) and shows the backup and restore logic more clearly.
Another reason is that v2 keeps the RAID frozen longer than v1.
Would you like me to provide a v3 patch combining the v2 explanation
with the v1 diff?
Thanks for your review.
> 
>>>
>>> Thanks,
>>> Kuai
>>>
>>
>> .
>>
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-12  9:55       ` Wang Jinchao
@ 2025-06-12 11:23         ` Yu Kuai
  2025-06-12 11:46           ` Wang Jinchao
  0 siblings, 1 reply; 14+ messages in thread
From: Yu Kuai @ 2025-06-12 11:23 UTC (permalink / raw)
  To: Wang Jinchao, Yu Kuai, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

Hi,

On 2025/06/12 17:55, Wang Jinchao wrote:
> Now that we have the same information, I prefer patch-v1 before 
> refactoring raid1_reshape,
> because it’s really simple (only one line) and clearer to show the 
> backup and restore logic.
> Another reason is that v2 freezes the RAID longer than v1.
> Would you like me to provide a v3 patch combining the v2 explanation 
> with the v1 diff?
> Thanks for your reviewing.

I don't have preference here, feel free to do this.

BTW, I feel raid1_reshape() could be coded better with the following:

- convert r1bio_pool to use mempool_create_kmalloc_pool() (use create
instead of init to get rid of the weird assignment);
- no need to reallocate pool_info;
- convert raid1_info to use krealloc();

You are welcome to take this on if you are willing; otherwise I'll find
time for it myself.

Thanks,
Kuai


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-12 11:23         ` Yu Kuai
@ 2025-06-12 11:46           ` Wang Jinchao
  2025-06-12 11:58             ` Yu Kuai
  0 siblings, 1 reply; 14+ messages in thread
From: Wang Jinchao @ 2025-06-12 11:46 UTC (permalink / raw)
  To: Yu Kuai, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

On 2025/6/12 19:23, Yu Kuai wrote:
> Hi,
> 
> On 2025/06/12 17:55, Wang Jinchao wrote:
>> Now that we have the same information, I prefer patch-v1 before 
>> refactoring raid1_reshape,
>> because it’s really simple (only one line) and clearer to show the 
>> backup and restore logic.
>> Another reason is that v2 freezes the RAID longer than v1.
>> Would you like me to provide a v3 patch combining the v2 explanation 
>> with the v1 diff?
>> Thanks for your reviewing.
> 
> I don't have preference here, feel free to do this.
> 
> BTW, I feel raid1_reshape can be better coding with following:
> 
> - covert r1bio_pool to use mempool_create_kmalloc_pool(use create
> instead of init to get rid of the werid assigment);
mempool_create_kmalloc_pool also calls init_waitqueue_head(&pool->wait) 
internally, just like mempool_init.
So the issue only exists if newpool is allocated on the stack.
> - no need to reallocate pool_info;
> - convert raid1_info to use krealloc;
I think reallocating pool_info is only for backup and restore, similar 
to newpool.
> 
> Welcome if you are willing to, otherwise I'll find myself sometime.
I'm a newcomer to RAID and can't quite catch up with it right now.
Maybe I can refactor it later, and I look forward to your guidance.
> 
> Thanks,
> Kuai
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-12 11:46           ` Wang Jinchao
@ 2025-06-12 11:58             ` Yu Kuai
  2025-06-12 12:21               ` Wang Jinchao
  0 siblings, 1 reply; 14+ messages in thread
From: Yu Kuai @ 2025-06-12 11:58 UTC (permalink / raw)
  To: Wang Jinchao, Yu Kuai, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

Hi,

On 2025/06/12 19:46, Wang Jinchao wrote:
> On 2025/6/12 19:23, Yu Kuai wrote:
>> Hi,
>>
>> On 2025/06/12 17:55, Wang Jinchao wrote:
>>> Now that we have the same information, I prefer patch-v1 before 
>>> refactoring raid1_reshape,
>>> because it’s really simple (only one line) and clearer to show the 
>>> backup and restore logic.
>>> Another reason is that v2 freezes the RAID longer than v1.
>>> Would you like me to provide a v3 patch combining the v2 explanation 
>>> with the v1 diff?
>>> Thanks for your reviewing.
>>
>> I don't have preference here, feel free to do this.
>>
>> BTW, I feel raid1_reshape can be better coding with following:
>>
>> - covert r1bio_pool to use mempool_create_kmalloc_pool(use create
>> instead of init to get rid of the werid assigment);
> mempool_create_kmalloc_pool also calls init_waitqueue_head(&pool->wait) 
> internally, just like mempool_init.

Please note that the create variant allocates the mempool structure
itself, so the wait list is no longer a stack value; the r1bio_pool
field inside conf should also be converted to a pointer.
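
A minimal sketch of that direction (illustrative only, not the posted
patch; it assumes conf->r1bio_pool becomes a mempool_t pointer):

	mempool_t *newpool, *oldpool;

	newpool = mempool_create(NR_RAID_BIOS, r1bio_pool_alloc,
				 rbio_pool_free, newpoolinfo);
	if (!newpool) {
		kfree(newpoolinfo);
		return -ENOMEM;
	}

	freeze_array(conf, 0);
	oldpool = conf->r1bio_pool;	/* just a pointer swap now */
	conf->r1bio_pool = newpool;
	...
	mempool_destroy(oldpool);	/* once the switch is complete */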

> So the issue only exists if newpool is allocated on the stack.
>> - no need to reallocate pool_info;
>> - convert raid1_info to use krealloc;
> I think reallocating pool_info is only for backup and restore, similar 
> to newpool.

You can just change the old value directly once everything is ready;
with the first mempool change, pool_info is no longer needed for the
bio pool.
>>
>> Welcome if you are willing to, otherwise I'll find myself sometime.
> I'm a newcomer to RAID and can't quite catch up with it right now.
> Maybe I can refactor it later, and I look forward to your guidance.
>>

No hurry, take your time :)

Thanks,
Kuai


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-12 11:58             ` Yu Kuai
@ 2025-06-12 12:21               ` Wang Jinchao
  2025-06-13  9:15                 ` Yu Kuai
  0 siblings, 1 reply; 14+ messages in thread
From: Wang Jinchao @ 2025-06-12 12:21 UTC (permalink / raw)
  To: Yu Kuai, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

On 2025/6/12 19:58, Yu Kuai wrote:
> Hi,
> 
> On 2025/06/12 19:46, Wang Jinchao wrote:
>> On 2025/6/12 19:23, Yu Kuai wrote:
>>> Hi,
>>>
>>> On 2025/06/12 17:55, Wang Jinchao wrote:
>>>> Now that we have the same information, I prefer patch-v1 before 
>>>> refactoring raid1_reshape,
>>>> because it’s really simple (only one line) and clearer to show the 
>>>> backup and restore logic.
>>>> Another reason is that v2 freezes the RAID longer than v1.
>>>> Would you like me to provide a v3 patch combining the v2 explanation 
>>>> with the v1 diff?
>>>> Thanks for your reviewing.
>>>
>>> I don't have preference here, feel free to do this.
>>>
>>> BTW, I feel raid1_reshape can be better coding with following:
>>>
>>> - covert r1bio_pool to use mempool_create_kmalloc_pool(use create
>>> instead of init to get rid of the werid assigment);
>> mempool_create_kmalloc_pool also calls init_waitqueue_head(&pool- 
>> >wait) internally, just like mempool_init.
> 
> Please notice that creat will allocate memory for mempool, the list is
> no longer a stack value, the field bio_pool inside conf should also
> covert to a pointer.
> 
>> So the issue only exists if newpool is allocated on the stack.
>>> - no need to reallocate pool_info;
>>> - convert raid1_info to use krealloc;
>> I think reallocating pool_info is only for backup and restore, similar 
>> to newpool.
> 
> You can just change the old value directly, after everything is ready,
> with the first mempool change, pool_info is not needed for bio_pool.
>>>
>>> Welcome if you are willing to, otherwise I'll find myself sometime.
>> I'm a newcomer to RAID and can't quite catch up with it right now.
>> Maybe I can refactor it later, and I look forward to your guidance.
>>>
> 
> No hurry, take you time :)
👍
You're right; now I understand the whole picture.
I'm willing to try refactoring raid1_reshape later.
> 
> Thanks,
> Kuai
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-12 12:21               ` Wang Jinchao
@ 2025-06-13  9:15                 ` Yu Kuai
  2025-06-13 11:53                   ` Wang Jinchao
  0 siblings, 1 reply; 14+ messages in thread
From: Yu Kuai @ 2025-06-13  9:15 UTC (permalink / raw)
  To: Wang Jinchao, Yu Kuai, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

Hi,

On 2025/06/12 20:21, Wang Jinchao wrote:
> BTW, I feel raid1_reshape can be better coding with following:

And another hint:

The poolinfo can be removed directly; the only other place that uses it
is r1buf_pool, which can be converted to take conf or
&conf->raid_disks instead.

Thanks,
Kuai


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-13  9:15                 ` Yu Kuai
@ 2025-06-13 11:53                   ` Wang Jinchao
  2025-06-16 10:11                     ` Wang Jinchao
  0 siblings, 1 reply; 14+ messages in thread
From: Wang Jinchao @ 2025-06-13 11:53 UTC (permalink / raw)
  To: Yu Kuai, Song Liu; +Cc: linux-raid, linux-kernel, yukuai (C)

On 2025/6/13 17:15, Yu Kuai wrote:
> Hi,
> 
> On 2025/06/12 20:21, Wang Jinchao wrote:
>> BTW, I feel raid1_reshape can be better coding with following:
> 
> And another hint:
> 
> The poolinfo can be removed directly, the only other place to use it
> is r1buf_pool, and can covert to pass in conf or &conf->raid_disks.
Thanks, I'll review these relationships carefully before refactoring.
> 
> Thanks,
> Kuai
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-13 11:53                   ` Wang Jinchao
@ 2025-06-16 10:11                     ` Wang Jinchao
  2025-06-16 11:32                       ` Yu Kuai
  0 siblings, 1 reply; 14+ messages in thread
From: Wang Jinchao @ 2025-06-16 10:11 UTC (permalink / raw)
  To: Yu Kuai; +Cc: linux-raid, linux-kernel, Song Liu, yukuai (C)

On 6/13/25 19:53, Wang Jinchao wrote:
> On 2025/6/13 17:15, Yu Kuai wrote:
>> Hi,
>>
>> On 2025/06/12 20:21, Wang Jinchao wrote:
>>> BTW, I feel raid1_reshape can be better coding with following:
>>
>> And another hint:
>>
>> The poolinfo can be removed directly, the only other place to use it
>> is r1buf_pool, and can covert to pass in conf or &conf->raid_disks.
> Thanks, I'll review these relationships carefully before refactoring.
>>
>> Thanks,
>> Kuai
>>
> 
Hi Kuai,

After reading the related code, I’d like to discuss the possibility of 
rewriting raid1_reshape.

I am considering converting r1bio_pool to use
mempool_create_kmalloc_pool(). However, mempool_create_kmalloc_pool()
requires a fixed element size as a parameter, while the size is
calculated dynamically in r1bio_pool_alloc(). Because of this, I feel
that mempool_create() may be more suitable here.
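
To illustrate the difference (the prototypes are from
include/linux/mempool.h; the call sites here are only sketches):

	/* fixed element size: does not fit, since the r1bio size
	 * depends on raid_disks */
	pool = mempool_create_kmalloc_pool(NR_RAID_BIOS, size);

	/* custom callbacks can size each r1bio at allocation time */
	pool = mempool_create(NR_RAID_BIOS, r1bio_pool_alloc,
			      rbio_pool_free, newpoolinfo);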

I noticed that mempool_create() was used historically and was later 
replaced by mempool_init() in commit afeee514ce7f. Using 
mempool_create() would essentially be a partial revert of that commit, 
so I’m not sure whether this is appropriate.

Regarding raid1_info and pool_info, I feel the original design might be 
more suitable for the reshape process.

The goals of raid1_reshape() are:

- Keep the array usable for as long as possible.
- Be able to restore the previous state if reshape fails.

So I think we need to follow some constraints:

- conf should not be modified before freeze_array().
- We should try to prepare everything possible before freeze_array().
- If an error occurs after freeze_array(), it must be possible to roll back.

Now, regarding the idea of rewriting raid1_info or pool_info:

Convert raid1_info using krealloc:

According to rule 1, krealloc() must be called after freeze_array(). 
According to rule 2, it should be called before freeze_array(). → So 
this approach seems to violate one of the rules.

Use conf instead of pool_info:

According to rule 1, conf->raid_disks must be modified after 
freeze_array(). According to rule 2, conf->raid_disks needs to be 
updated before calling mempool_create(), i.e., before freeze_array().
These also seem to conflict.

For now, I’m not considering rule 3, as that would make the logic even 
more complex.

I’d really appreciate your thoughts on whether this direction makes 
sense or if there’s a better approach.

Thank you for your time and guidance.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-16 10:11                     ` Wang Jinchao
@ 2025-06-16 11:32                       ` Yu Kuai
  2025-06-18 11:47                         ` Wang Jinchao
  0 siblings, 1 reply; 14+ messages in thread
From: Yu Kuai @ 2025-06-16 11:32 UTC (permalink / raw)
  To: Wang Jinchao, Yu Kuai; +Cc: linux-raid, linux-kernel, Song Liu, yukuai (C)

Hi,

On 2025/06/16 18:11, Wang Jinchao wrote:
> On 6/13/25 19:53, Wang Jinchao wrote:
>> On 2025/6/13 17:15, Yu Kuai wrote:
>>> Hi,
>>>
>>> On 2025/06/12 20:21, Wang Jinchao wrote:
>>>> BTW, I feel raid1_reshape can be better coding with following:
>>>
>>> And another hint:
>>>
>>> The poolinfo can be removed directly, the only other place to use it
>>> is r1buf_pool, and can covert to pass in conf or &conf->raid_disks.
>> Thanks, I'll review these relationships carefully before refactoring.
>>>
>>> Thanks,
>>> Kuai
>>>
>>
> Hi Kuai,
> 
> After reading the related code, I’d like to discuss the possibility of 
> rewriting raid1_reshape.
> 
> I am considering converting r1bio_pool to use 
> mempool_create_kmalloc_pool. However, mempool_create_kmalloc_pool 
> requires the element size as a parameter, but the size is calculated 
> dynamically in r1bio_pool_alloc(). Because of this, I feel that 
> mempool_create() may be more suitable here.

You only need raid_disks to calculate the size, and the value will not
change.
> 
> I noticed that mempool_create() was used historically and was later 
> replaced by mempool_init() in commit afeee514ce7f. Using 
> mempool_create() would essentially be a partial revert of that commit, 
> so I’m not sure whether this is appropriate.

This is fine, the commit introduced the problem.
> 
> Regarding raid1_info and pool_info, I feel the original design might be 
> more suitable for the reshape process.
> 
> The goals of raid1_reshape() are:
> 
> - Keep the array usable for as long as possible.
This is not needed; reshape is a slow path, just don't introduce
problems.

> - Be able to restore the previous state if reshape fails.
Yes.

> So I think we need to follow some constraints:
> 
> - conf should not be modified before freeze_array().
> - We should try to prepare everything possible before freeze_array().
> - If an error occurs after freeze_array(), it must be possible to roll 
> back.
> 
> Now, regarding the idea of rewriting raid1_info or pool_info:
> 
> Convert raid1_info using krealloc:

1) If raid_disks decreases, you don't need to realloc it;
2) If raid_disks increases, call krealloc() after freezing the array;
you can't call it earlier, to prevent concurrent access.
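
A rough sketch of point 2) (hypothetical; error handling and the exact
GFP context are glossed over, and conf->mirrors holds raid_disks * 2
slots as it does today):

	if (raid_disks > conf->raid_disks) {
		struct raid1_info *newmirrors;

		newmirrors = krealloc(conf->mirrors,
				      array3_size(sizeof(struct raid1_info),
						  raid_disks, 2),
				      GFP_KERNEL);
		if (!newmirrors)
			goto restore_old_pool;	/* hypothetical label */
		/* zero only the newly added slots */
		memset(newmirrors + conf->raid_disks * 2, 0,
		       (raid_disks - conf->raid_disks) * 2 *
		       sizeof(struct raid1_info));
		conf->mirrors = newmirrors;
	}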
> 
> According to rule 1, krealloc() must be called after freeze_array(). 
> According to rule 2, it should be called before freeze_array(). → So 
> this approach seems to violate one of the rules.
> 
> Use conf instead of pool_info:

I'm suggesting removing pool_info; you should change conf->raid_disks
as it is done now, while the array is frozen.

Thanks,
Kuai

> 
> According to rule 1, conf->raid_disks must be modified after 
> freeze_array(). According to rule 2, conf->raid_disks needs to be 
> updated before calling mempool_create(), i.e., before freeze_array().
> These also seem to conflict.
> 
> For now, I’m not considering rule 3, as that would make the logic even 
> more complex.
> 
> I’d really appreciate your thoughts on whether this direction makes 
> sense or if there’s a better approach.
> 
> Thank you for your time and guidance.
> .
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape
  2025-06-16 11:32                       ` Yu Kuai
@ 2025-06-18 11:47                         ` Wang Jinchao
  0 siblings, 0 replies; 14+ messages in thread
From: Wang Jinchao @ 2025-06-18 11:47 UTC (permalink / raw)
  To: Yu Kuai; +Cc: linux-raid, linux-kernel, Song Liu, yukuai (C)

On 6/16/25 19:32, Yu Kuai wrote:
> Hi,
> 
> On 2025/06/16 18:11, Wang Jinchao wrote:
>> On 6/13/25 19:53, Wang Jinchao wrote:
>>> On 2025/6/13 17:15, Yu Kuai wrote:
>>>> Hi,
>>>>
>>>> On 2025/06/12 20:21, Wang Jinchao wrote:
>>>>> BTW, I feel raid1_reshape can be better coding with following:
>>>>
>>>> And another hint:
>>>>
>>>> The poolinfo can be removed directly, the only other place to use it
>>>> is r1buf_pool, and can covert to pass in conf or &conf->raid_disks.
>>> Thanks, I'll review these relationships carefully before refactoring.
>>>>
>>>> Thanks,
>>>> Kuai
>>>>
Hi,
I’ve changed r1bio_pool to a pointer type and sent the patch.
As for the other two optimization suggestions, I found it challenging
to implement proper error rollback after the changes.
I tried a few times but couldn't come up with code I’m happy with,
so I’ve decided to put it on hold for now.

Thanks for your guidance.
>> .
>>
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, newest: 2025-06-18 11:47 UTC

Thread overview: 14+ messages
2025-06-11  8:55 [PATCH v2] md/raid1: Fix stack memory use after return in raid1_reshape Wang Jinchao
2025-06-12  7:31 ` Yu Kuai
2025-06-12  7:45   ` Wang Jinchao
2025-06-12  9:17     ` Yu Kuai
2025-06-12  9:55       ` Wang Jinchao
2025-06-12 11:23         ` Yu Kuai
2025-06-12 11:46           ` Wang Jinchao
2025-06-12 11:58             ` Yu Kuai
2025-06-12 12:21               ` Wang Jinchao
2025-06-13  9:15                 ` Yu Kuai
2025-06-13 11:53                   ` Wang Jinchao
2025-06-16 10:11                     ` Wang Jinchao
2025-06-16 11:32                       ` Yu Kuai
2025-06-18 11:47                         ` Wang Jinchao
