* [PATCH 5.10.y] binder: fix use-after-free in shrinker's callback
@ 2024-01-18 21:51 Carlos Llamas
From: Carlos Llamas @ 2024-01-18 21:51 UTC (permalink / raw)
To: stable
Cc: Carlos Llamas, Liam Howlett, Minchan Kim, Alice Ryhl,
Greg Kroah-Hartman
commit 3f489c2067c5824528212b0fc18b28d51332d906 upstream.

The mmap read lock is used during the shrinker's callback, which means
that using alloc->vma pointer isn't safe as it can race with munmap().
As of commit dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in
munmap") the mmap lock is downgraded after the vma has been isolated.

I was able to reproduce this issue by manually adding some delays and
triggering page reclaiming through the shrinker's debug sysfs. The
following KASAN report confirms the UAF:

==================================================================
BUG: KASAN: slab-use-after-free in zap_page_range_single+0x470/0x4b8
Read of size 8 at addr ffff356ed50e50f0 by task bash/478

CPU: 1 PID: 478 Comm: bash Not tainted 6.6.0-rc5-00055-g1c8b86a3799f-dirty #70
Hardware name: linux,dummy-virt (DT)
Call trace:
 zap_page_range_single+0x470/0x4b8
 binder_alloc_free_page+0x608/0xadc
 __list_lru_walk_one+0x130/0x3b0
 list_lru_walk_node+0xc4/0x22c
 binder_shrink_scan+0x108/0x1dc
 shrinker_debugfs_scan_write+0x2b4/0x500
 full_proxy_write+0xd4/0x140
 vfs_write+0x1ac/0x758
 ksys_write+0xf0/0x1dc
 __arm64_sys_write+0x6c/0x9c

Allocated by task 492:
 kmem_cache_alloc+0x130/0x368
 vm_area_alloc+0x2c/0x190
 mmap_region+0x258/0x18bc
 do_mmap+0x694/0xa60
 vm_mmap_pgoff+0x170/0x29c
 ksys_mmap_pgoff+0x290/0x3a0
 __arm64_sys_mmap+0xcc/0x144

Freed by task 491:
 kmem_cache_free+0x17c/0x3c8
 vm_area_free_rcu_cb+0x74/0x98
 rcu_core+0xa38/0x26d4
 rcu_core_si+0x10/0x1c
 __do_softirq+0x2fc/0xd24

Last potentially related work creation:
 __call_rcu_common.constprop.0+0x6c/0xba0
 call_rcu+0x10/0x1c
 vm_area_free+0x18/0x24
 remove_vma+0xe4/0x118
 do_vmi_align_munmap.isra.0+0x718/0xb5c
 do_vmi_munmap+0xdc/0x1fc
 __vm_munmap+0x10c/0x278
 __arm64_sys_munmap+0x58/0x7c

Fix this issue by performing instead a vma_lookup() which will fail to
find the vma that was isolated before the mmap lock downgrade. Note that
this option has better performance than upgrading to a mmap write lock
which would increase contention. Plus, mmap_write_trylock() has been
recently removed anyway.

Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
Cc: stable@vger.kernel.org
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Minchan Kim <minchan@kernel.org>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-3-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[cmllamas: use find_vma() instead of vma_lookup() as commit ce6d42f2e4a2
is missing in v5.10. This only works because we check the vma against
our cached alloc->vma pointer.]
Signed-off-by: Carlos Llamas <cmllamas@google.com>
(cherry picked from commit aaa8cde67eba56b3ebe8c2c155e896f9ccaa8bb7)
---
drivers/android/binder_alloc.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index b6bf9caaf1d1..61406bfa454b 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1002,7 +1002,9 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		goto err_mmget;
 	if (!mmap_read_trylock(mm))
 		goto err_mmap_read_lock_failed;
-	vma = binder_alloc_get_vma(alloc);
+	vma = find_vma(mm, page_addr);
+	if (vma && vma != binder_alloc_get_vma(alloc))
+		goto err_invalid_vma;
 	list_lru_isolate(lru, item);
 	spin_unlock(lock);
 
@@ -1028,6 +1030,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	mutex_unlock(&alloc->mutex);
 	return LRU_REMOVED_RETRY;
 
+err_invalid_vma:
+	mmap_read_unlock(mm);
 err_mmap_read_lock_failed:
 	mmput_async(mm);
 err_mmget:
--
2.43.0.429.g432eaa2c6b-goog
* [PATCH 5.10.y v2] binder: fix use-after-free in shrinker's callback
From: Carlos Llamas @ 2024-01-22 16:56 UTC (permalink / raw)
To: stable
Cc: Carlos Llamas, Liam Howlett, Minchan Kim, Alice Ryhl,
Greg Kroah-Hartman
commit 3f489c2067c5824528212b0fc18b28d51332d906 upstream.

The mmap read lock is used during the shrinker's callback, which means
that using alloc->vma pointer isn't safe as it can race with munmap().
As of commit dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in
munmap") the mmap lock is downgraded after the vma has been isolated.

I was able to reproduce this issue by manually adding some delays and
triggering page reclaiming through the shrinker's debug sysfs. The
following KASAN report confirms the UAF:

==================================================================
BUG: KASAN: slab-use-after-free in zap_page_range_single+0x470/0x4b8
Read of size 8 at addr ffff356ed50e50f0 by task bash/478

CPU: 1 PID: 478 Comm: bash Not tainted 6.6.0-rc5-00055-g1c8b86a3799f-dirty #70
Hardware name: linux,dummy-virt (DT)
Call trace:
 zap_page_range_single+0x470/0x4b8
 binder_alloc_free_page+0x608/0xadc
 __list_lru_walk_one+0x130/0x3b0
 list_lru_walk_node+0xc4/0x22c
 binder_shrink_scan+0x108/0x1dc
 shrinker_debugfs_scan_write+0x2b4/0x500
 full_proxy_write+0xd4/0x140
 vfs_write+0x1ac/0x758
 ksys_write+0xf0/0x1dc
 __arm64_sys_write+0x6c/0x9c

Allocated by task 492:
 kmem_cache_alloc+0x130/0x368
 vm_area_alloc+0x2c/0x190
 mmap_region+0x258/0x18bc
 do_mmap+0x694/0xa60
 vm_mmap_pgoff+0x170/0x29c
 ksys_mmap_pgoff+0x290/0x3a0
 __arm64_sys_mmap+0xcc/0x144

Freed by task 491:
 kmem_cache_free+0x17c/0x3c8
 vm_area_free_rcu_cb+0x74/0x98
 rcu_core+0xa38/0x26d4
 rcu_core_si+0x10/0x1c
 __do_softirq+0x2fc/0xd24

Last potentially related work creation:
 __call_rcu_common.constprop.0+0x6c/0xba0
 call_rcu+0x10/0x1c
 vm_area_free+0x18/0x24
 remove_vma+0xe4/0x118
 do_vmi_align_munmap.isra.0+0x718/0xb5c
 do_vmi_munmap+0xdc/0x1fc
 __vm_munmap+0x10c/0x278
 __arm64_sys_munmap+0x58/0x7c

Fix this issue by performing instead a vma_lookup() which will fail to
find the vma that was isolated before the mmap lock downgrade. Note that
this option has better performance than upgrading to a mmap write lock
which would increase contention. Plus, mmap_write_trylock() has been
recently removed anyway.

Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
Cc: stable@vger.kernel.org
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Minchan Kim <minchan@kernel.org>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-3-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[cmllamas: use find_vma() instead of vma_lookup() as commit ce6d42f2e4a2
is missing in v5.10. This only works because we check the vma against
our cached alloc->vma pointer.]
Signed-off-by: Carlos Llamas <cmllamas@google.com>
---

Notes:
    v2: remove leftover cherry-pick line from commit log

drivers/android/binder_alloc.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index b6bf9caaf1d1..61406bfa454b 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1002,7 +1002,9 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		goto err_mmget;
 	if (!mmap_read_trylock(mm))
 		goto err_mmap_read_lock_failed;
-	vma = binder_alloc_get_vma(alloc);
+	vma = find_vma(mm, page_addr);
+	if (vma && vma != binder_alloc_get_vma(alloc))
+		goto err_invalid_vma;
 	list_lru_isolate(lru, item);
 	spin_unlock(lock);
 
@@ -1028,6 +1030,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	mutex_unlock(&alloc->mutex);
 	return LRU_REMOVED_RETRY;
 
+err_invalid_vma:
+	mmap_read_unlock(mm);
 err_mmap_read_lock_failed:
 	mmput_async(mm);
 err_mmget:
--
2.43.0.429.g432eaa2c6b-goog
* Re: [PATCH 5.10.y v2] binder: fix use-after-free in shrinker's callback
From: Greg Kroah-Hartman @ 2024-01-22 17:05 UTC (permalink / raw)
To: Carlos Llamas; +Cc: stable, Liam Howlett, Minchan Kim, Alice Ryhl
On Mon, Jan 22, 2024 at 04:56:02PM +0000, Carlos Llamas wrote:
> commit 3f489c2067c5824528212b0fc18b28d51332d906 upstream.
>
> The mmap read lock is used during the shrinker's callback, which means
> that using alloc->vma pointer isn't safe as it can race with munmap().
> As of commit dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in
> munmap") the mmap lock is downgraded after the vma has been isolated.
>
> I was able to reproduce this issue by manually adding some delays and
> triggering page reclaiming through the shrinker's debug sysfs. The
> following KASAN report confirms the UAF:
>
> ==================================================================
> BUG: KASAN: slab-use-after-free in zap_page_range_single+0x470/0x4b8
> Read of size 8 at addr ffff356ed50e50f0 by task bash/478
>
> CPU: 1 PID: 478 Comm: bash Not tainted 6.6.0-rc5-00055-g1c8b86a3799f-dirty #70
> Hardware name: linux,dummy-virt (DT)
> Call trace:
> zap_page_range_single+0x470/0x4b8
> binder_alloc_free_page+0x608/0xadc
> __list_lru_walk_one+0x130/0x3b0
> list_lru_walk_node+0xc4/0x22c
> binder_shrink_scan+0x108/0x1dc
> shrinker_debugfs_scan_write+0x2b4/0x500
> full_proxy_write+0xd4/0x140
> vfs_write+0x1ac/0x758
> ksys_write+0xf0/0x1dc
> __arm64_sys_write+0x6c/0x9c
>
> Allocated by task 492:
> kmem_cache_alloc+0x130/0x368
> vm_area_alloc+0x2c/0x190
> mmap_region+0x258/0x18bc
> do_mmap+0x694/0xa60
> vm_mmap_pgoff+0x170/0x29c
> ksys_mmap_pgoff+0x290/0x3a0
> __arm64_sys_mmap+0xcc/0x144
>
> Freed by task 491:
> kmem_cache_free+0x17c/0x3c8
> vm_area_free_rcu_cb+0x74/0x98
> rcu_core+0xa38/0x26d4
> rcu_core_si+0x10/0x1c
> __do_softirq+0x2fc/0xd24
>
> Last potentially related work creation:
> __call_rcu_common.constprop.0+0x6c/0xba0
> call_rcu+0x10/0x1c
> vm_area_free+0x18/0x24
> remove_vma+0xe4/0x118
> do_vmi_align_munmap.isra.0+0x718/0xb5c
> do_vmi_munmap+0xdc/0x1fc
> __vm_munmap+0x10c/0x278
> __arm64_sys_munmap+0x58/0x7c
>
> Fix this issue by performing instead a vma_lookup() which will fail to
> find the vma that was isolated before the mmap lock downgrade. Note that
> this option has better performance than upgrading to a mmap write lock
> which would increase contention. Plus, mmap_write_trylock() has been
> recently removed anyway.
>
> Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
> Cc: stable@vger.kernel.org
> Cc: Liam Howlett <liam.howlett@oracle.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> Signed-off-by: Carlos Llamas <cmllamas@google.com>
> Link: https://lore.kernel.org/r/20231201172212.1813387-3-cmllamas@google.com
> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> [cmllamas: use find_vma() instead of vma_lookup() as commit ce6d42f2e4a2
> is missing in v5.10. This only works because we check the vma against
> our cached alloc->vma pointer.]
> Signed-off-by: Carlos Llamas <cmllamas@google.com>
> ---
>
> Notes:
> v2: remove leftover cherry-pick line from commit log
Not an issue, I normally strip those out :)
Both now queued up, thanks.
greg k-h