* [PATCH v3] md: fix mddev uaf while iterating all_mddevs list
From: Yu Kuai @ 2025-02-20 12:43 UTC
To: song, yukuai3, hare, axboe, logang
Cc: linux-raid, linux-kernel, yukuai1, yi.zhang, yangerkun, hch,
guillaume
From: Yu Kuai <yukuai3@huawei.com>
While iterating the all_mddevs list from md_notify_reboot() and
md_exit(), list_for_each_entry_safe() is used, and this can race with
deleting the next mddev, causing a UAF:
t1:
 spin_lock
 //list_for_each_entry_safe(mddev, n, ...)
 mddev_get(mddev1)
 // assume mddev2 is the next entry
 spin_unlock
                        t2:
                        //remove mddev2
                        ...
                        mddev_free
                        spin_lock
                        list_del
                        spin_unlock
                        kfree(mddev2)
 mddev_put(mddev1)
 spin_lock
 //continue dereference mddev2->all_mddevs
The old helper for_each_mddev() actually grabbed a reference to mddev2
while holding the lock, to prevent it from being freed. This problem
could be fixed the same way; however, the code would become complex.

Hence switch to list_for_each_entry(). In this case mddev_put() can
free mddev1, which is not safe either; following md_seq_show(), factor
out a helper mddev_put_locked() that drops the reference while
all_mddevs_lock is already held, and use it to fix this problem.
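
For illustration, a minimal sketch of the iteration pattern after this
change (simplified from the md_notify_reboot()/md_exit() hunks below;
the per-array work is elided):

	spin_lock(&all_mddevs_lock);
	list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
		/* skip arrays that are already being deleted */
		if (!mddev_get(mddev))
			continue;
		spin_unlock(&all_mddevs_lock);

		/* ... work on this array without the list lock ... */

		spin_lock(&all_mddevs_lock);
		/*
		 * Drop the reference while all_mddevs_lock is held: a
		 * final put only defers the actual freeing, so the
		 * entry's ->all_mddevs linkage stays valid for the
		 * next loop iteration.
		 */
		mddev_put_locked(mddev);
	}
	spin_unlock(&all_mddevs_lock);
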
Cc: Christoph Hellwig <hch@lst.de>
Fixes: f26514342255 ("md: stop using for_each_mddev in md_notify_reboot")
Fixes: 16648bac862f ("md: stop using for_each_mddev in md_exit")
Reported-by: Guillaume Morin <guillaume@morinfr.org>
Closes: https://lore.kernel.org/all/Z7Y0SURoA8xwg7vn@bender.morinfr.org/
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
drivers/md/md.c | 22 +++++++++++++---------
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 827646b3eb59..f501bc5f68f1 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -629,6 +629,12 @@ static void __mddev_put(struct mddev *mddev)
 	queue_work(md_misc_wq, &mddev->del_work);
 }
 
+static void mddev_put_locked(struct mddev *mddev)
+{
+	if (atomic_dec_and_test(&mddev->active))
+		__mddev_put(mddev);
+}
+
 void mddev_put(struct mddev *mddev)
 {
 	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
@@ -8461,9 +8467,7 @@ static int md_seq_show(struct seq_file *seq, void *v)
 	if (mddev == list_last_entry(&all_mddevs, struct mddev, all_mddevs))
 		status_unused(seq);
 
-	if (atomic_dec_and_test(&mddev->active))
-		__mddev_put(mddev);
-
+	mddev_put_locked(mddev);
 	return 0;
 }
 
@@ -9895,11 +9899,11 @@ EXPORT_SYMBOL_GPL(rdev_clear_badblocks);
 static int md_notify_reboot(struct notifier_block *this,
 			    unsigned long code, void *x)
 {
-	struct mddev *mddev, *n;
+	struct mddev *mddev;
 	int need_delay = 0;
 
 	spin_lock(&all_mddevs_lock);
-	list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
+	list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
 		if (!mddev_get(mddev))
 			continue;
 		spin_unlock(&all_mddevs_lock);
@@ -9911,8 +9915,8 @@ static int md_notify_reboot(struct notifier_block *this,
 			mddev_unlock(mddev);
 		}
 		need_delay = 1;
-		mddev_put(mddev);
 		spin_lock(&all_mddevs_lock);
+		mddev_put_locked(mddev);
 	}
 	spin_unlock(&all_mddevs_lock);
 
@@ -10245,7 +10249,7 @@ void md_autostart_arrays(int part)
 
 static __exit void md_exit(void)
 {
-	struct mddev *mddev, *n;
+	struct mddev *mddev;
 	int delay = 1;
 
 	unregister_blkdev(MD_MAJOR,"md");
@@ -10266,7 +10270,7 @@ static __exit void md_exit(void)
 	remove_proc_entry("mdstat", NULL);
 
 	spin_lock(&all_mddevs_lock);
-	list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
+	list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
 		if (!mddev_get(mddev))
 			continue;
 		spin_unlock(&all_mddevs_lock);
@@ -10278,8 +10282,8 @@ static __exit void md_exit(void)
 		 * the mddev for destruction by a workqueue, and the
 		 * destroy_workqueue() below will wait for that to complete.
 		 */
-		mddev_put(mddev);
 		spin_lock(&all_mddevs_lock);
+		mddev_put_locked(mddev);
 	}
 	spin_unlock(&all_mddevs_lock);
 
--
2.39.2
* Re: [PATCH v3] md: fix mddev uaf while iterating all_mddevs list
From: Guillaume Morin @ 2025-02-20 18:41 UTC
To: Yu Kuai
Cc: song, yukuai3, hare, axboe, logang, linux-raid, linux-kernel,
yi.zhang, yangerkun, hch, guillaume
On 20 Feb 20:43, Yu Kuai wrote:
I confirm that this patch fixes our reproducer for the GPF
Thanks
Tested-by: Guillaume Morin <guillaume@morinfr.org>
--
Guillaume Morin <guillaume@morinfr.org>
* Re: [PATCH v3] md: fix mddev uaf while iterating all_mddevs list
From: Christoph Hellwig @ 2025-02-24 14:37 UTC
To: Yu Kuai
Cc: song, yukuai3, hare, axboe, logang, linux-raid, linux-kernel,
yi.zhang, yangerkun, hch, guillaume
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v3] md: fix mddev uaf while iterating all_mddevs list
From: Yu Kuai @ 2025-03-05 1:32 UTC
To: Yu Kuai, song, hare, axboe, logang
Cc: linux-raid, linux-kernel, yi.zhang, yangerkun, hch, guillaume,
yukuai (C)
On 2025/02/20 20:43, Yu Kuai wrote:
Applied to md-6.15
Thanks