* [Qemu-devel] [PATCH v2] aio: Fix use-after-free in cancellation path
From: Fam Zheng @ 2014-05-20 2:00 UTC
To: qemu-devel; +Cc: Paolo Bonzini, uobergfe, Stefan Hajnoczi
The current flow of canceling a request in THREAD_ACTIVE state is:
1) The caller wants to cancel a request, so it calls thread_pool_cancel.
2) thread_pool_cancel waits on the condition variable
   elem->check_cancel.
3) The worker thread changes the state to THREAD_DONE once the task is
   done, notifies elem->check_cancel so that thread_pool_cancel can
   continue, and signals the notifier (pool->notifier) so that the
   callback function can be called later. But because of the global
   mutex, the notifier won't be processed until steps 4) and 5) are
   done.
4) thread_pool_cancel continues and simply returns to the caller,
   leaving the notifier signaled.
5) The caller assumes the request has been canceled successfully, so it
   releases any related data, such as freeing elem->common.opaque.
6) In the next main loop iteration, the notifier handler,
   event_notifier_ready, is called. It finds the canceled request in
   THREAD_DONE state, so it calls elem->common.cb with a (likely)
   dangling opaque pointer. This is a use-after-free.
Fix it by calling event_notifier_ready before leaving
thread_pool_cancel.
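For illustration, with the fix applied the cancellation path looks
roughly like the sketch below. The THREAD_QUEUED "steal" branch is
omitted and the surrounding code is paraphrased from the description
above, so the names and structure are illustrative rather than a
verbatim copy of thread-pool.c:

static void thread_pool_cancel(BlockDriverAIOCB *acb)
{
    ThreadPoolElement *elem = (ThreadPoolElement *)acb;
    ThreadPool *pool = elem->pool;

    qemu_mutex_lock(&pool->lock);
    if (elem->state == THREAD_ACTIVE) {
        /* A worker already picked the request up; wait under pool->lock
         * until it reaches THREAD_CANCELED or THREAD_DONE. */
        pool->pending_cancellations++;
        while (elem->state != THREAD_CANCELED && elem->state != THREAD_DONE) {
            qemu_cond_wait(&elem->check_cancel, &pool->lock);
        }
        pool->pending_cancellations--;
    }
    /* The fix: process completions now, so a THREAD_DONE element has
     * elem->common.cb invoked before cancel returns and the caller gets
     * a chance to free elem->common.opaque. */
    event_notifier_ready(&pool->notifier);
    qemu_mutex_unlock(&pool->lock);
}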
Test case update: this change lets cancellation complete earlier than
test-thread-pool.c expects, so update the test to handle that case: if
a request has already completed, done_cb has set its .aiocb to NULL, so
skip calling bdrv_aio_cancel on it.
Reported-by: Ulrich Obergfell <uobergfe@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
---
 tests/test-thread-pool.c | 2 +-
 thread-pool.c            | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
index c1f8e13..aa156bc 100644
--- a/tests/test-thread-pool.c
+++ b/tests/test-thread-pool.c
@@ -180,7 +180,7 @@ static void test_cancel(void)
     /* Canceling the others will be a blocking operation. */
     for (i = 0; i < 100; i++) {
-        if (data[i].n != 3) {
+        if (data[i].aiocb && data[i].n != 3) {
             bdrv_aio_cancel(data[i].aiocb);
         }
     }
diff --git a/thread-pool.c b/thread-pool.c
index fbdd3ff..d4984ba 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -223,6 +223,7 @@ static void thread_pool_cancel(BlockDriverAIOCB *acb)
         }
         pool->pending_cancellations--;
     }
+    event_notifier_ready(&pool->notifier);
     qemu_mutex_unlock(&pool->lock);
 }
--
1.9.2
* Re: [Qemu-devel] [PATCH v2] aio: Fix use-after-free in cancellation path
From: Stefan Hajnoczi @ 2014-05-20 13:16 UTC
To: Fam Zheng; +Cc: Paolo Bonzini, qemu-devel, uobergfe
On Tue, May 20, 2014 at 10:00:47AM +0800, Fam Zheng wrote:
> diff --git a/thread-pool.c b/thread-pool.c
> index fbdd3ff..d4984ba 100644
> --- a/thread-pool.c
> +++ b/thread-pool.c
> @@ -223,6 +223,7 @@ static void thread_pool_cancel(BlockDriverAIOCB *acb)
> }
> pool->pending_cancellations--;
> }
> + event_notifier_ready(&pool->notifier);
> qemu_mutex_unlock(&pool->lock);
> }
event_notifier_ready() doesn't need pool->lock. Can you call it outside
the lock or am I missing something?
Stefan
* Re: [Qemu-devel] [PATCH v2] aio: Fix use-after-free in cancellation path
From: Paolo Bonzini @ 2014-05-20 14:01 UTC
To: Stefan Hajnoczi, Fam Zheng; +Cc: qemu-devel, uobergfe
On 20/05/2014 15:16, Stefan Hajnoczi wrote:
> On Tue, May 20, 2014 at 10:00:47AM +0800, Fam Zheng wrote:
>> diff --git a/thread-pool.c b/thread-pool.c
>> index fbdd3ff..d4984ba 100644
>> --- a/thread-pool.c
>> +++ b/thread-pool.c
>> @@ -223,6 +223,7 @@ static void thread_pool_cancel(BlockDriverAIOCB *acb)
>> }
>> pool->pending_cancellations--;
>> }
>> + event_notifier_ready(&pool->notifier);
>> qemu_mutex_unlock(&pool->lock);
>> }
>
> event_notifier_ready() doesn't need pool->lock. Can you call it outside
> the lock or am I missing something?
Yes, in fact I'm a bit wary of calling it inside the lock.
Paolo
* Re: [Qemu-devel] [PATCH v2] aio: Fix use-after-free in cancellation path
From: Fam Zheng @ 2014-05-21 2:40 UTC
To: Paolo Bonzini; +Cc: qemu-devel, Stefan Hajnoczi, uobergfe
On Tue, 05/20 16:01, Paolo Bonzini wrote:
> On 20/05/2014 15:16, Stefan Hajnoczi wrote:
> >On Tue, May 20, 2014 at 10:00:47AM +0800, Fam Zheng wrote:
> >>diff --git a/thread-pool.c b/thread-pool.c
> >>index fbdd3ff..d4984ba 100644
> >>--- a/thread-pool.c
> >>+++ b/thread-pool.c
> >>@@ -223,6 +223,7 @@ static void thread_pool_cancel(BlockDriverAIOCB *acb)
> >> }
> >> pool->pending_cancellations--;
> >> }
> >>+ event_notifier_ready(&pool->notifier);
> >> qemu_mutex_unlock(&pool->lock);
> >> }
> >
> >event_notifier_ready() doesn't need pool->lock. Can you call it outside
> >the lock or am I missing something?
>
> Yes, in fact I'm a bit wary of calling it inside the lock.
OK, thanks.
Fam
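For reference, moving the call outside the critical section would make
the tail of thread_pool_cancel look roughly like this (a sketch of the
suggestion, not necessarily the exact code of the follow-up revision):

        pool->pending_cancellations--;
    }
    qemu_mutex_unlock(&pool->lock);
    /* Run completions after dropping pool->lock, so user callbacks are
     * not invoked with the pool lock held. */
    event_notifier_ready(&pool->notifier);
}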