* [Qemu-devel] 答复: Re: [PATCHv4 01/03] qemu-iothread: IOThread supports the GMainContext event loop
@ 2017-08-23 7:58 wang.yong155
2017-08-23 8:43 ` Fam Zheng
0 siblings, 1 reply; 2+ messages in thread
From: wang.yong155 @ 2017-08-23 7:58 UTC (permalink / raw)
To: famz
Cc: pbonzini, stefanha, jasowang, zhangchen.fnst, zhang.zhanghailiang,
wang.guang55, lizhijian, qemu-devel
>> diff --git a/iothread.c b/iothread.c
>> index beeb870..fb1c55b 100644
>> --- a/iothread.c
>> +++ b/iothread.c
>> @@ -57,6 +57,20 @@ static void *iothread_run(void *opaque)
>>
>>      while (!atomic_read(&iothread->stopping)) {
>>          aio_poll(iothread->ctx, true);
>> +
>> +        if (atomic_read(&iothread->worker_context)) {
>> +            g_main_context_push_thread_default(iothread->worker_context);
>> +            iothread->main_loop =
>> +                g_main_loop_new(iothread->worker_context, TRUE);
>> +            g_main_loop_run(iothread->main_loop);
>> +
>> +            g_main_loop_unref(iothread->main_loop);
>> +            iothread->main_loop = NULL;
> You should clear iothread->main_loop first before calling g_main_loop_unref(),
> to avoid TOCTOU race with iothread_stop():
>   iothread_run (in IOThread)            iothread_stop (in main thread)
>   ========================================================================
>                                         if (atomic_read(&iothread->main_loop)) {
>   /* frees iothread->main_loop */
>   g_main_loop_unref(...)
>                                             /* Accesses freed memory */
>                                             g_main_loop_quit(iothread->main_loop)
>                                         }
>   iothread->main_loop = NULL
When g_main_loop_quit() is called, g_main_loop_run() will exit, right?
iothread_run (in IOThread)            iothread_stop (in main thread)
========================================================================
                                      /* step 1: set loop->is_running = FALSE */
                                      g_main_loop_quit(iothread->main_loop)
/* step 2: main loop exits */
g_main_loop_run()
/* step 3: frees iothread->main_loop memory */
g_main_loop_unref(...)
iothread->main_loop = NULL
I think this ordering is OK, but I am not sure whether my understanding is correct.
Thanks
>> +
>> +            g_main_context_pop_thread_default(iothread->worker_context);
>> +            g_main_context_unref(iothread->worker_context);
>> +            iothread->worker_context = NULL;
>> +        }
>>      }
>>
>>      rcu_unregister_thread();
>> @@ -73,6 +87,9 @@ static int iothread_stop(Object *object, void *opaque)
>>      }
>>      iothread->stopping = true;
>>      aio_notify(iothread->ctx);
>> +    if (atomic_read(&iothread->main_loop)) {
>> +        g_main_loop_quit(iothread->main_loop);
>> +    }
>>      qemu_thread_join(&iothread->thread);
>>      return 0;
>> }
Original mail
From: <famz@redhat.com>
To: Wang Yong 10170530
Cc: <pbonzini@redhat.com> <stefanha@redhat.com> <jasowang@redhat.com> <zhangchen.fnst@cn.fujitsu.com> <zhang.zhanghailiang@huawei.com> Wang Guang 10165992 <lizhijian@cn.fujitsu.com> <qemu-devel@nongnu.org>
Date: 2017-08-22 17:52
Subject: Re: [PATCHv4 01/03] qemu-iothread: IOThread supports the GMainContext event loop
On Tue, 08/22 16:46, Wang yong wrote:
> From: Wang Yong <wang.yong155@zte.com.cn>
>
> IOThread uses AioContext event loop and does not run a GMainContext.
> Therefore, a chardev cannot work in an IOThread, such as the chardev
> used for colo-compare packet reception.
>
> This patch makes the IOThread run the GMainContext event loop so that
> chardev and IOThread can work together.
>
> Signed-off-by: Wang Yong <wang.yong155@zte.com.cn>
> Signed-off-by: Wang Guang <wang.guang55@zte.com.cn>
> ---
> include/sysemu/iothread.h | 4 ++++
> iothread.c | 43 +++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 47 insertions(+)
>
> diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
> index e6da1a4..d2985b3 100644
> --- a/include/sysemu/iothread.h
> +++ b/include/sysemu/iothread.h
> @@ -24,6 +24,9 @@ typedef struct {
>
>      QemuThread thread;
>      AioContext *ctx;
> +    GMainContext *worker_context;
> +    GMainLoop *main_loop;
> +    GOnce once;
>      QemuMutex init_done_lock;
>      QemuCond init_done_cond;    /* is thread initialization done? */
>      bool stopping;
> @@ -41,5 +44,6 @@ typedef struct {
>  char *iothread_get_id(IOThread *iothread);
>  AioContext *iothread_get_aio_context(IOThread *iothread);
>  void iothread_stop_all(void);
> +GMainContext *iothread_get_g_main_context(IOThread *iothread);
>
> #endif /* IOTHREAD_H */
> diff --git a/iothread.c b/iothread.c
> index beeb870..fb1c55b 100644
> --- a/iothread.c
> +++ b/iothread.c
> @@ -57,6 +57,20 @@ static void *iothread_run(void *opaque)
>
>      while (!atomic_read(&iothread->stopping)) {
>          aio_poll(iothread->ctx, true);
> +
> +        if (atomic_read(&iothread->worker_context)) {
> +            g_main_context_push_thread_default(iothread->worker_context);
> +            iothread->main_loop =
> +                g_main_loop_new(iothread->worker_context, TRUE);
> +            g_main_loop_run(iothread->main_loop);
> +
> +            g_main_loop_unref(iothread->main_loop);
> +            iothread->main_loop = NULL;
You should clear iothread->main_loop first before calling g_main_loop_unref(),
to avoid TOCTOU race with iothread_stop():
  iothread_run (in IOThread)            iothread_stop (in main thread)
  ========================================================================
                                        if (atomic_read(&iothread->main_loop)) {
  /* frees iothread->main_loop */
  g_main_loop_unref(...)
                                            /* Accesses freed memory */
                                            g_main_loop_quit(iothread->main_loop)
                                        }
  iothread->main_loop = NULL
> +
> +            g_main_context_pop_thread_default(iothread->worker_context);
> +            g_main_context_unref(iothread->worker_context);
> +            iothread->worker_context = NULL;
> +        }
>      }
>
>      rcu_unregister_thread();
> @@ -73,6 +87,9 @@ static int iothread_stop(Object *object, void *opaque)
>      }
>      iothread->stopping = true;
>      aio_notify(iothread->ctx);
> +    if (atomic_read(&iothread->main_loop)) {
> +        g_main_loop_quit(iothread->main_loop);
> +    }
>      qemu_thread_join(&iothread->thread);
>      return 0;
> }
> @@ -125,6 +142,7 @@ static void iothread_complete(UserCreatable *obj, Error **errp)
>
>      qemu_mutex_init(&iothread->init_done_lock);
>      qemu_cond_init(&iothread->init_done_cond);
> +    iothread->once = (GOnce) G_ONCE_INIT;
Please remove the unnecessary type cast.
>
> /* This assumes we are called from a thread with useful CPU affinity for us
> * to inherit.
> @@ -309,3 +327,28 @@ void iothread_stop_all(void)
>
>      object_child_foreach(container, iothread_stop, NULL);
> }
> +
> +static gpointer iothread_g_main_context_init(gpointer opaque)
> +{
> +    AioContext *ctx;
> +    IOThread *iothread = opaque;
> +    GSource *source;
> +
> +    iothread->worker_context = g_main_context_new();
> +
> +    ctx = iothread_get_aio_context(iothread);
> +    source = aio_get_g_source(ctx);
> +    g_source_attach(source, iothread->worker_context);
> +    g_source_unref(source);
> +
> +    aio_notify(iothread->ctx);
> +
> +    return NULL;
> +}
> +
> +GMainContext *iothread_get_g_main_context(IOThread *iothread)
> +{
> +    g_once(&iothread->once, iothread_g_main_context_init, iothread);
> +
> +    return iothread->worker_context;
> +}
> --
> 1.8.3.1
>
>
Fam
* Re: [Qemu-devel] 答复: Re: [PATCHv4 01/03] qemu-iothread: IOThread supports the GMainContext event loop
2017-08-23 7:58 [Qemu-devel] 答复: Re: [PATCHv4 01/03] qemu-iothread: IOThread supports the GMainContext event loop wang.yong155
@ 2017-08-23 8:43 ` Fam Zheng
0 siblings, 0 replies; 2+ messages in thread
From: Fam Zheng @ 2017-08-23 8:43 UTC (permalink / raw)
To: wang.yong155
Cc: pbonzini, stefanha, jasowang, zhangchen.fnst, zhang.zhanghailiang,
wang.guang55, lizhijian, qemu-devel
Hi Wang Yong,
To make the discussion easier, please try to fix your email client to:
1) set In-Reply-To: header when replying
2) use plain text instead of html
3) use monospace fonts to view and compose a reply
4) avoid attaching the original email in the end, just reply inline
5) maybe, use "Re:" in the subject for reply, avoid "答复:"
6) include not only email addresses in From:/To:/Cc: headers, but also
   the names of recipients, in the form of
   Some Body <some.body@example.com>, Another One <another.one@example.com>,
   ...
Or maybe just switch to a functional email client.
On Wed, 08/23 15:58, wang.yong155@zte.com.cn wrote:
> [...]
> When g_main_loop_quit() is called, g_main_loop_run() will exit, right?
>
>   iothread_run (in IOThread)            iothread_stop (in main thread)
>   ========================================================================
>                                         /* step 1: set loop->is_running = FALSE */
>                                         g_main_loop_quit(iothread->main_loop)
>   /* step 2: main loop exits */
>   g_main_loop_run()
>   /* step 3: frees iothread->main_loop memory */
>   g_main_loop_unref(...)
>   iothread->main_loop = NULL
>
> I think this ordering is OK, but I am not sure whether my understanding is correct.
Your sequence is OK. But remember this is multi-threaded, and the execution order
between the two threads is non-deterministic. The sequence I pointed out is also
possible, and it would cause a use-after-free due to a TOCTOU race condition [1].
[1]: https://en.wikipedia.org/wiki/Time_of_check_to_time_of_use
Fam