* Re: KASAN: use-after-free Read in vhost_chr_write_iter
[not found] <20180517134544.GA20646@dragonet.kaist.ac.kr>
@ 2018-05-18 9:24 ` Jason Wang
[not found] ` <58419d62-3074-2e5a-8504-da1cdeb08280@redhat.com>
1 sibling, 0 replies; 6+ messages in thread
From: Jason Wang @ 2018-05-18 9:24 UTC (permalink / raw)
To: DaeRyong Jeong, mst
Cc: bammanag, kt0755, kvm, netdev, linux-kernel, virtualization,
byoungyoung
On 2018-05-17 21:45, DaeRyong Jeong wrote:
> We report the crash: KASAN: use-after-free Read in vhost_chr_write_iter
>
> This crash has been found in v4.17-rc1 using RaceFuzzer (a modified
> version of Syzkaller), which we describe more at the end of this
> report. Our analysis shows that the race occurs when invoking two
> syscalls concurrently, write$vnet and ioctl$VHOST_RESET_OWNER.
>
>
> Analysis:
> We think the concurrent execution of vhost_process_iotlb_msg() and
> vhost_dev_cleanup() causes the crash.
> Both functions can run concurrently (please see the call sequence below),
> and there is a possible race on dev->iotlb.
> If a context switch occurs right after vhost_dev_cleanup() frees
> dev->iotlb, vhost_process_iotlb_msg() still sees the non-NULL value and
> keeps executing without returning -EFAULT. Consequently, a use-after-free
> occurs.
>
>
> Thread interleaving:
> CPU0 (vhost_process_iotlb_msg)          CPU1 (vhost_dev_cleanup)
> (In the case of both VHOST_IOTLB_UPDATE
>  and VHOST_IOTLB_INVALIDATE)
> =====                                   =====
>                                         vhost_umem_clean(dev->iotlb);
> if (!dev->iotlb) {
>         ret = -EFAULT;
>         break;
> }
>                                         dev->iotlb = NULL;
>
>
> Call Sequence:
> CPU0
> =====
> vhost_net_chr_write_iter
> vhost_chr_write_iter
> vhost_process_iotlb_msg
>
> CPU1
> =====
> vhost_net_ioctl
> vhost_net_reset_owner
> vhost_dev_reset_owner
> vhost_dev_cleanup
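In user-space terms, the interleaving above amounts to the following sketch (illustrative stand-in types and function names modeled on the report; this is not drivers/vhost/vhost.c):

```c
#include <stdlib.h>

/* Simplified stand-ins for the vhost structures. */
struct vhost_umem { int entries; };
struct vhost_dev  { struct vhost_umem *iotlb; };

/* CPU0 side: the unsynchronized check from vhost_process_iotlb_msg().
 * Returns -14 (-EFAULT) only if it happens to observe iotlb == NULL. */
int process_iotlb_msg(struct vhost_dev *dev)
{
	if (!dev->iotlb)
		return -14;	/* -EFAULT */
	/* Without a lock, cleanup may already have freed dev->iotlb by the
	 * time we get here; any dereference would be a use-after-free. */
	return 0;
}

/* CPU1 side: vhost_dev_cleanup()'s two steps, which the interleaving
 * separates.  The race window is between the free and the NULL store. */
void dev_cleanup(struct vhost_dev *dev)
{
	free(dev->iotlb);	/* vhost_umem_clean(dev->iotlb); */
	dev->iotlb = NULL;	/* the window closes only here   */
}
```

With no ordering between the check and the free, the handler can pass the NULL check against already-freed memory.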
Thanks a lot for the analysis.
This could be addressed by simply protecting it with the dev mutex.
Will post a patch.
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
* Re: KASAN: use-after-free Read in vhost_chr_write_iter
[not found] ` <58419d62-3074-2e5a-8504-da1cdeb08280@redhat.com>
@ 2018-05-21 2:38 ` Jason Wang
2018-05-21 14:42 ` Michael S. Tsirkin
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Jason Wang @ 2018-05-21 2:38 UTC (permalink / raw)
To: DaeRyong Jeong, mst
Cc: bammanag, kt0755, kvm, netdev, linux-kernel, virtualization,
byoungyoung
On 2018-05-18 17:24, Jason Wang wrote:
>
>
> On 2018-05-17 21:45, DaeRyong Jeong wrote:
>> [...]
>
> Thanks a lot for the analysis.
>
> This could be addressed by simply protecting it with the dev mutex.
>
> Will post a patch.
>
Could you please help to test the attached patch? I've done some smoke
testing.
Thanks
[-- Attachment #2: 0001-vhost-synchronize-IOTLB-message-with-dev-cleanup.patch --]
[-- Type: text/x-patch, Size: 1507 bytes --]
From 88328386f3f652e684ee33dc4cf63dcaed871aea Mon Sep 17 00:00:00 2001
From: Jason Wang <jasowang@redhat.com>
Date: Fri, 18 May 2018 17:33:27 +0800
Subject: [PATCH] vhost: synchronize IOTLB message with dev cleanup
DaeRyong Jeong reports a race between vhost_dev_cleanup() and
vhost_process_iotlb_msg():
Thread interleaving:
CPU0 (vhost_process_iotlb_msg)          CPU1 (vhost_dev_cleanup)
(In the case of both VHOST_IOTLB_UPDATE
 and VHOST_IOTLB_INVALIDATE)
=====                                   =====
                                        vhost_umem_clean(dev->iotlb);
if (!dev->iotlb) {
        ret = -EFAULT;
        break;
}
                                        dev->iotlb = NULL;
The reason is that we don't synchronize between them; fix this by
protecting vhost_process_iotlb_msg() with the dev mutex.

Reported-by: DaeRyong Jeong <threeearcat@gmail.com>
Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API")
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
drivers/vhost/vhost.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index f3bd8e9..f0be5f3 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -981,6 +981,7 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev,
 {
 	int ret = 0;
 
+	mutex_lock(&dev->mutex);
 	vhost_dev_lock_vqs(dev);
 	switch (msg->type) {
 	case VHOST_IOTLB_UPDATE:
@@ -1016,6 +1017,8 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev,
 	}
 
 	vhost_dev_unlock_vqs(dev);
+	mutex_unlock(&dev->mutex);
+
 	return ret;
 }
 
 ssize_t vhost_chr_write_iter(struct vhost_dev *dev,
--
2.7.4
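The serialization the patch introduces can be sketched in user space with a pthread mutex standing in for dev->mutex (illustrative types, not kernel code). The sketch assumes, as the call sequence suggests, that the ioctl path already holds dev->mutex around vhost_dev_cleanup(), which is what makes taking it in the handler sufficient:

```c
#include <pthread.h>
#include <stdlib.h>

struct vhost_umem { int entries; };
struct vhost_dev {
	pthread_mutex_t mutex;		/* stands in for dev->mutex */
	struct vhost_umem *iotlb;
};

/* Message handler with the patch applied: the NULL check and any later
 * use of dev->iotlb happen with dev->mutex held. */
int process_iotlb_msg_locked(struct vhost_dev *dev)
{
	int ret = 0;

	pthread_mutex_lock(&dev->mutex);
	if (!dev->iotlb)
		ret = -14;		/* -EFAULT */
	/* else: dev->iotlb is safe to use until we unlock */
	pthread_mutex_unlock(&dev->mutex);
	return ret;
}

/* Cleanup takes the same mutex, so the free and the NULL store become
 * one atomic step as far as the message handler can observe. */
void dev_cleanup_locked(struct vhost_dev *dev)
{
	pthread_mutex_lock(&dev->mutex);
	free(dev->iotlb);
	dev->iotlb = NULL;
	pthread_mutex_unlock(&dev->mutex);
}
```

Under the lock, the handler can only ever see a valid iotlb pointer or NULL, never a freed-but-not-yet-NULLed one.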
* Re: KASAN: use-after-free Read in vhost_chr_write_iter
2018-05-21 2:38 ` Jason Wang
@ 2018-05-21 14:42 ` Michael S. Tsirkin
[not found] ` <20180521173645-mutt-send-email-mst@kernel.org>
[not found] ` <20180522083842.GA10604@dragonet.kaist.ac.kr>
2 siblings, 0 replies; 6+ messages in thread
From: Michael S. Tsirkin @ 2018-05-21 14:42 UTC (permalink / raw)
To: Jason Wang
Cc: bammanag, kt0755, kvm, netdev, linux-kernel, virtualization,
byoungyoung, DaeRyong Jeong
On Mon, May 21, 2018 at 10:38:10AM +0800, Jason Wang wrote:
> [...]
>
> Could you please help to test the attached patch? I've done some smoke
> testing.
>
> Thanks
> From 88328386f3f652e684ee33dc4cf63dcaed871aea Mon Sep 17 00:00:00 2001
> From: Jason Wang <jasowang@redhat.com>
> Date: Fri, 18 May 2018 17:33:27 +0800
> Subject: [PATCH] vhost: synchronize IOTLB message with dev cleanup
>
> DaeRyong Jeong reports a race between vhost_dev_cleanup() and
> vhost_process_iotlb_msg():
>
> Thread interleaving:
> CPU0 (vhost_process_iotlb_msg)          CPU1 (vhost_dev_cleanup)
> (In the case of both VHOST_IOTLB_UPDATE
>  and VHOST_IOTLB_INVALIDATE)
> =====                                   =====
>                                         vhost_umem_clean(dev->iotlb);
> if (!dev->iotlb) {
>         ret = -EFAULT;
>         break;
> }
>                                         dev->iotlb = NULL;
>
> The reason is that we don't synchronize between them; fix this by
> protecting vhost_process_iotlb_msg() with the dev mutex.
>
> Reported-by: DaeRyong Jeong <threeearcat@gmail.com>
> Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API")
Long term we might want to move the iotlb into the vqs
so that messages can be processed in parallel.
Not sure how to do it yet.
> [...]
* Re: KASAN: use-after-free Read in vhost_chr_write_iter
[not found] ` <20180521173645-mutt-send-email-mst@kernel.org>
@ 2018-05-22 3:50 ` Jason Wang
[not found] ` <51cf8274-162f-384b-0ff7-47fbf15412f1@redhat.com>
1 sibling, 0 replies; 6+ messages in thread
From: Jason Wang @ 2018-05-22 3:50 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: bammanag, kt0755, kvm, netdev, linux-kernel, virtualization,
byoungyoung, DaeRyong Jeong
On 2018-05-21 22:42, Michael S. Tsirkin wrote:
> [...]
> Long term we might want to move the iotlb into the vqs
> so that messages can be processed in parallel.
> Not sure how to do it yet.
>
Then we probably need to extend the IOTLB msg to carry a queue index. But I
think it would probably only help if we split tx/rx into separate processes.
Thanks
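For illustration, a queue-index extension of the IOTLB message, as floated here, might look roughly like the following. This is an entirely hypothetical sketch: the real struct vhost_iotlb_msg has no queue index, and all field and function names below are invented.

```c
#include <stdint.h>

/* Hypothetical IOTLB message carrying a queue index so that a
 * per-virtqueue IOTLB could be targeted. */
struct iotlb_msg_v2 {
	uint64_t iova;
	uint64_t size;
	uint64_t uaddr;
	uint8_t  perm;
	uint8_t  type;
	uint16_t qidx;		/* new: which virtqueue's IOTLB */
};

struct vq  { int iotlb_entries; };
struct dev { struct vq vqs[2]; };	/* e.g. 0 = rx, 1 = tx */

/* Route a message to one virtqueue's private IOTLB; returns -22
 * (-EINVAL) for an out-of-range queue index. */
int route_iotlb_msg(struct dev *d, const struct iotlb_msg_v2 *msg)
{
	if (msg->qidx >= 2)
		return -22;
	d->vqs[msg->qidx].iotlb_entries++;
	return 0;
}
```

As the thread notes, such routing would only pay off if tx and rx were actually serviced by separate processes that could handle their messages in parallel.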
* Re: KASAN: use-after-free Read in vhost_chr_write_iter
[not found] ` <51cf8274-162f-384b-0ff7-47fbf15412f1@redhat.com>
@ 2018-05-22 3:53 ` Michael S. Tsirkin
0 siblings, 0 replies; 6+ messages in thread
From: Michael S. Tsirkin @ 2018-05-22 3:53 UTC (permalink / raw)
To: Jason Wang
Cc: bammanag, kt0755, kvm, netdev, linux-kernel, virtualization,
byoungyoung, DaeRyong Jeong
On Tue, May 22, 2018 at 11:50:29AM +0800, Jason Wang wrote:
> [...]
>
> Then we probably need to extend the IOTLB msg to carry a queue index. But I
> think it would probably only help if we split tx/rx into separate processes.
>
> Thanks
Three mutex locks on each access isn't pretty even when done by
a single process, but yes - it might be more important for scsi.
--
MST
* Re: KASAN: use-after-free Read in vhost_chr_write_iter
[not found] ` <20180522083842.GA10604@dragonet.kaist.ac.kr>
@ 2018-05-22 8:42 ` Jason Wang
0 siblings, 0 replies; 6+ messages in thread
From: Jason Wang @ 2018-05-22 8:42 UTC (permalink / raw)
To: DaeRyong Jeong
Cc: bammanag, kt0755, kvm, mst, netdev, linux-kernel, virtualization,
byoungyoung
On 2018-05-22 16:38, DaeRyong Jeong wrote:
> On Mon, May 21, 2018 at 10:38:10AM +0800, Jason Wang wrote:
>> [...]
>> Could you please help to test the attached patch? I've done some smoke
>> testing.
>>
>> Thanks
> Sorry to say this, but we don't have a reproducer for this bug yet, since
> our reproducer is still being implemented.
>
> This crash has occurred a few times in our fuzzer, so I inspected the code
> manually.
>
> The patch looks good to me, but we can't test it for now.
> Sorry.
>
No problem.
I'm trying to craft a reproducer; it doesn't look hard.
Thanks
end of thread, other threads:[~2018-05-22 8:42 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
[not found] <20180517134544.GA20646@dragonet.kaist.ac.kr>
2018-05-18 9:24 ` KASAN: use-after-free Read in vhost_chr_write_iter Jason Wang
[not found] ` <58419d62-3074-2e5a-8504-da1cdeb08280@redhat.com>
2018-05-21 2:38 ` Jason Wang
2018-05-21 14:42 ` Michael S. Tsirkin
[not found] ` <20180521173645-mutt-send-email-mst@kernel.org>
2018-05-22 3:50 ` Jason Wang
[not found] ` <51cf8274-162f-384b-0ff7-47fbf15412f1@redhat.com>
2018-05-22 3:53 ` Michael S. Tsirkin
[not found] ` <20180522083842.GA10604@dragonet.kaist.ac.kr>
2018-05-22 8:42 ` Jason Wang