public inbox for linux-rdma@vger.kernel.org
From: Jason Gunthorpe <jgg@nvidia.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: Doug Ledford <dledford@redhat.com>,
	Avihai Horon <avihaih@nvidia.com>,
	Amit Matityahu <mitm@nvidia.com>, <linux-rdma@vger.kernel.org>
Subject: Re: [PATCH rdma-next v2] RDMA/ucma: Fix use-after-free bug in ucma_create_uevent
Date: Fri, 12 Feb 2021 11:40:07 -0400	[thread overview]
Message-ID: <20210212154007.GA1716976@nvidia.com> (raw)
In-Reply-To: <20210211090517.1278415-1-leon@kernel.org>

On Thu, Feb 11, 2021 at 11:05:17AM +0200, Leon Romanovsky wrote:
> From: Avihai Horon <avihaih@nvidia.com>
> 
> ucma_process_join() allocates a struct ucma_multicast mc and frees it if an
> error occurs during its run. Specifically, if copy_to_user() fails, a
> use-after-free can occur in the following scenario:
> 
> 1. mc struct is allocated.
> 2. rdma_join_multicast() is called and succeeds. During its run,
>    cma_iboe_join_multicast() enqueues a work that will later use the
>    aforementioned mc struct.
> 3. copy_to_user() is called and fails.
> 4. mc struct is deallocated.
> 5. The work that was enqueued by cma_iboe_join_multicast() is run and
>    calls ucma_create_uevent() which tries to access mc struct (which is
>    freed by now).
> 
> Fix this bug by cancelling the work enqueued by cma_iboe_join_multicast().
> Since cma_work_handler() frees the struct cma_work it is given, stop using
> it in cma_iboe_join_multicast() so that the work can be safely cancelled
> later.
> 
> The following syzkaller report revealed it:
> 
> BUG: KASAN: use-after-free in ucma_create_uevent+0x2dd/0x3f0 drivers/infiniband/core/ucma.c:272
> Read of size 8 at addr ffff88810b3ad110 by task kworker/u8:1/108
>  
> CPU: 1 PID: 108 Comm: kworker/u8:1 Not tainted 5.10.0-rc6+ #257
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
> rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
> Workqueue: rdma_cm cma_work_handler
> Call Trace:
> __dump_stack lib/dump_stack.c:77 [inline]
> dump_stack+0xbe/0xf9 lib/dump_stack.c:118
> print_address_description.constprop.0+0x3e/0x60 mm/kasan/report.c:385
> __kasan_report mm/kasan/report.c:545 [inline]
> kasan_report.cold+0x1f/0x37 mm/kasan/report.c:562
> ucma_create_uevent+0x2dd/0x3f0 drivers/infiniband/core/ucma.c:272
> ucma_event_handler+0xb7/0x3c0 drivers/infiniband/core/ucma.c:349
> cma_cm_event_handler+0x5d/0x1c0 drivers/infiniband/core/cma.c:1977
> cma_work_handler+0xfa/0x190 drivers/infiniband/core/cma.c:2718
> process_one_work+0x54c/0x930 kernel/workqueue.c:2272
> worker_thread+0x82/0x830 kernel/workqueue.c:2418
> kthread+0x1ca/0x220 kernel/kthread.c:292
> ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
> 
> Allocated by task 359:
> kasan_save_stack+0x1b/0x40 mm/kasan/common.c:48
> kasan_set_track mm/kasan/common.c:56 [inline]
> __kasan_kmalloc mm/kasan/common.c:461 [inline]
> __kasan_kmalloc.constprop.0+0xc2/0xd0 mm/kasan/common.c:434
> kmalloc include/linux/slab.h:552 [inline]
> kzalloc include/linux/slab.h:664 [inline]
> ucma_process_join+0x16e/0x3f0 drivers/infiniband/core/ucma.c:1453
> ucma_join_multicast+0xda/0x140 drivers/infiniband/core/ucma.c:1538
> ucma_write+0x1f7/0x280 drivers/infiniband/core/ucma.c:1724
> vfs_write fs/read_write.c:603 [inline]
> vfs_write+0x191/0x4c0 fs/read_write.c:585
> ksys_write+0x1a1/0x1e0 fs/read_write.c:658
> do_syscall_64+0x2d/0x40 arch/x86/entry/common.c:46
> entry_SYSCALL_64_after_hwframe+0x44/0xa9
> 
> Freed by task 359:
> kasan_save_stack+0x1b/0x40 mm/kasan/common.c:48
> kasan_set_track+0x1c/0x30 mm/kasan/common.c:56
> kasan_set_free_info+0x1b/0x30 mm/kasan/generic.c:355
> __kasan_slab_free+0x112/0x160 mm/kasan/common.c:422
> slab_free_hook mm/slub.c:1544 [inline]
> slab_free_freelist_hook mm/slub.c:1577 [inline]
> slab_free mm/slub.c:3142 [inline]
> kfree+0xb3/0x3e0 mm/slub.c:4124
> ucma_process_join+0x22d/0x3f0 drivers/infiniband/core/ucma.c:1497
> ucma_join_multicast+0xda/0x140 drivers/infiniband/core/ucma.c:1538
> ucma_write+0x1f7/0x280 drivers/infiniband/core/ucma.c:1724
> vfs_write fs/read_write.c:603 [inline]
> vfs_write+0x191/0x4c0 fs/read_write.c:585
> ksys_write+0x1a1/0x1e0 fs/read_write.c:658
> do_syscall_64+0x2d/0x40 arch/x86/entry/common.c:46
> entry_SYSCALL_64_after_hwframe+0x44/0xa9
> The buggy address belongs to the object at ffff88810b3ad100
> which belongs to the cache kmalloc-192 of size 192
> The buggy address is located 16 bytes inside of
> 192-byte region [ffff88810b3ad100, ffff88810b3ad1c0)
> 
> The buggy address belongs to the page:
> page:00000000796da98e refcount:1 mapcount:0 mapping:0000000000000000
> index:0x0 pfn:0x10b3ad
> flags: 0x8000000000000200(slab)
> raw: 8000000000000200 dead000000000100 dead000000000122 ffff888100043540
> raw: 0000000000000000 0000000080100010 00000001ffffffff 0000000000000000
> page dumped because: kasan: bad access detected
> Memory state around the buggy address:
> ffff88810b3ad000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> ffff88810b3ad080: 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc fc
> >ffff88810b3ad100: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ^
> ffff88810b3ad180: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
> ffff88810b3ad200: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> 
> Fixes: b5de0c60cc30 ("RDMA/cma: Fix use after free race in roce multicast join")
> Reported-by: Amit Matityahu <mitm@nvidia.com>
> Signed-off-by: Avihai Horon <avihaih@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
> Changelog:
> v2:
>  * Delete cma_id_get() in cma_iboe_join_multicast.
>  * Added WARN_ON(ret) checks.
> v1: https://lore.kernel.org/linux-rdma/20210125121556.838290-1-leon@kernel.org
> ---
>  drivers/infiniband/core/cma.c | 70 ++++++++++++++++++++---------------
>  1 file changed, 41 insertions(+), 29 deletions(-)

Applied to for-next, thanks

Jason


Thread overview: 2+ messages
2021-02-11  9:05 [PATCH rdma-next v2] RDMA/ucma: Fix use-after-free bug in ucma_create_uevent Leon Romanovsky
2021-02-12 15:40 ` Jason Gunthorpe [this message]
