* [Qemu-devel] [PATCH v2] vnc: fix segfault in closed connection handling
@ 2018-01-31 13:15 Klim Kireev
2018-01-31 13:22 ` Daniel P. Berrangé
0 siblings, 1 reply; 6+ messages in thread
From: Klim Kireev @ 2018-01-31 13:15 UTC (permalink / raw)
To: qemu-devel; +Cc: kraxel
On one of our clients' nodes, a segmentation fault occurred due to an
attempt to read from a closed ioc. Corresponding backtrace:
0 object_get_class (obj=obj@entry=0x0)
1 qio_channel_readv_full (ioc=0x0, iov=0x7ffe55277180 ...
2 qio_channel_read (ioc=<optimized out> ...
3 vnc_client_read_buf (vs=vs@entry=0x55625f3c6000, ...
4 vnc_client_read_plain (vs=0x55625f3c6000)
5 vnc_client_read (vs=0x55625f3c6000)
6 vnc_client_io (ioc=<optimized out>, condition=G_IO_IN, ...
7 g_main_dispatch (context=0x556251568a50)
8 g_main_context_dispatch (context=context@entry=0x556251568a50)
9 glib_pollfds_poll ()
10 os_host_main_loop_wait (timeout=<optimized out>)
11 main_loop_wait (nonblocking=nonblocking@entry=0)
12 main_loop () at vl.c:1909
13 main (argc=<optimized out>, argv=<optimized out>, ...
Having analyzed the coredump, I understood that the reason is that
ioc_tag is reset in vnc_disconnect_start while ioc is freed later,
in vnc_disconnect_finish. Between these two events, for some reason,
ioc_tag was set again, so after vnc_disconnect_finish the handler
was running with a freed ioc, which led to the segmentation fault.
I suggest checking ioc_tag in vnc_disconnect_finish to prevent such
an occurrence.
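The lifecycle described above can be sketched as a small model. All names below (VncStateModel, model_disconnect_start, model_disconnect_finish) are illustrative stand-ins for the real QEMU structures and GLib calls, not the actual API:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of the tag/ioc lifecycle described in the commit
 * message; illustrative only, not the real QEMU VncState. */
typedef struct {
    void *ioc;          /* channel object; released in disconnect_finish */
    unsigned ioc_tag;   /* nonzero while an I/O watch is registered */
    int disconnecting;
} VncStateModel;

/* Models vnc_disconnect_start: drop the watch tag and mark the state. */
void model_disconnect_start(VncStateModel *vs)
{
    vs->ioc_tag = 0;
    vs->disconnecting = 1;
}

/* Models vnc_disconnect_finish with the proposed fix: remove any watch
 * that was re-registered between start and finish, then release ioc. */
void model_disconnect_finish(VncStateModel *vs)
{
    if (vs->ioc_tag) {
        /* g_source_remove(vs->ioc_tag) in the real code */
        vs->ioc_tag = 0;
    }
    vs->ioc = NULL;     /* object_unref(OBJECT(vs->ioc)) in the real code */
}
```

The point of the fix is visible in the model: even if some path re-sets the tag after disconnect has started, clearing it again in finish means no watch can outlive the channel it points at.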
Signed-off-by: Klim Kireev <klim.kireev@virtuozzo.com>
---
Changelog:
v2: Attach the backtrace
ui/vnc.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/ui/vnc.c b/ui/vnc.c
index 33b087221f..b8bf0180cb 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -1270,6 +1270,10 @@ void vnc_disconnect_finish(VncState *vs)
}
g_free(vs->lossy_rect);
+ if (vs->ioc_tag) {
+ g_source_remove(vs->ioc_tag);
+ vs->ioc_tag = 0;
+ }
object_unref(OBJECT(vs->ioc));
vs->ioc = NULL;
object_unref(OBJECT(vs->sioc));
--
2.14.3
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [Qemu-devel] [PATCH v2] vnc: fix segfault in closed connection handling
2018-01-31 13:15 Klim Kireev
@ 2018-01-31 13:22 ` Daniel P. Berrangé
0 siblings, 0 replies; 6+ messages in thread
From: Daniel P. Berrangé @ 2018-01-31 13:22 UTC (permalink / raw)
To: Klim Kireev; +Cc: qemu-devel, kraxel
On Wed, Jan 31, 2018 at 04:15:37PM +0300, Klim Kireev wrote:
> On one of our client's node, due to trying to read from closed ioc,
> a segmentation fault occured. Corresponding backtrace:
>
> 0 object_get_class (obj=obj@entry=0x0)
> 1 qio_channel_readv_full (ioc=0x0, iov=0x7ffe55277180 ...
> 2 qio_channel_read (ioc=<optimized out> ...
> 3 vnc_client_read_buf (vs=vs@entry=0x55625f3c6000, ...
> 4 vnc_client_read_plain (vs=0x55625f3c6000)
> 5 vnc_client_read (vs=0x55625f3c6000)
> 6 vnc_client_io (ioc=<optimized out>, condition=G_IO_IN, ...
> 7 g_main_dispatch (context=0x556251568a50)
> 8 g_main_context_dispatch (context=context@entry=0x556251568a50)
> 9 glib_pollfds_poll ()
> 10 os_host_main_loop_wait (timeout=<optimized out>)
> 11 main_loop_wait (nonblocking=nonblocking@entry=0)
> 12 main_loop () at vl.c:1909
> 13 main (argc=<optimized out>, argv=<optimized out>, ...
>
> Having analyzed the coredump, I understood that the reason is that
> ioc_tag is reset on vnc_disconnect_start and ioc is cleaned
> in vnc_disconnect_finish. Between these two events due to some
> reasons the ioc_tag was set again and after vnc_disconnect_finish
> the handler is running with freed ioc,
> which led to the segmentation fault.
>
> I suggest to check ioc_tag in vnc_disconnect_finish to prevent such
> an occurrence.
I think the root cause here is some codepath failing to check the
'vs->disconnecting' flag, e.g. vnc_write() checks it before calling
qio_channel_add_watch. I suspect somewhere else in the code needs
such a check
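The guard Daniel describes, refusing to (re-)register a watch once disconnection has begun, can be modeled roughly like this. WatchModel, fake_add_watch and model_reschedule_watch are hypothetical stand-ins for the VncState / g_source_remove / qio_channel_add_watch machinery:

```c
#include <assert.h>

/* Illustrative model of the add-watch guard: only register a new I/O
 * watch while the connection is still live; otherwise leave ioc_tag
 * cleared so a freed channel can never be dispatched again. */
typedef struct {
    int disconnecting;
    unsigned ioc_tag;
} WatchModel;

/* Stand-in for qio_channel_add_watch(); returns an arbitrary tag. */
unsigned fake_add_watch(void)
{
    return 7;
}

void model_reschedule_watch(WatchModel *vs)
{
    if (vs->ioc_tag) {
        /* g_source_remove(vs->ioc_tag) in the real code */
        vs->ioc_tag = 0;
    }
    if (!vs->disconnecting) {
        vs->ioc_tag = fake_add_watch();
    }
}
```

This is the shape the v3 patch below takes in vnc_client_write(): the existing tag is always removed, but a new one is only added when the connection is not being torn down.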
>
> Signed-off-by: Klim Kireev <klim.kireev@virtuozzo.com>
> ---
> Changelog:
> v2: Apply the backtrace
>
> ui/vnc.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/ui/vnc.c b/ui/vnc.c
> index 33b087221f..b8bf0180cb 100644
> --- a/ui/vnc.c
> +++ b/ui/vnc.c
> @@ -1270,6 +1270,10 @@ void vnc_disconnect_finish(VncState *vs)
> }
> g_free(vs->lossy_rect);
>
> + if (vs->ioc_tag) {
> + g_source_remove(vs->ioc_tag);
> + vs->ioc_tag = 0;
> + }
> object_unref(OBJECT(vs->ioc));
> vs->ioc = NULL;
> object_unref(OBJECT(vs->sioc));
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* [Qemu-devel] [PATCH v2] vnc: fix segfault in closed connection handling
@ 2018-01-31 16:25 Klim Kireev
2018-01-31 16:44 ` Daniel P. Berrangé
0 siblings, 1 reply; 6+ messages in thread
From: Klim Kireev @ 2018-01-31 16:25 UTC (permalink / raw)
To: qemu-devel; +Cc: kraxel
On one of our clients' nodes, a segmentation fault occurred due to an
attempt to read from a closed ioc. Corresponding backtrace:
0 object_get_class (obj=obj@entry=0x0)
1 qio_channel_readv_full (ioc=0x0, iov=0x7ffe55277180 ...
2 qio_channel_read (ioc=<optimized out> ...
3 vnc_client_read_buf (vs=vs@entry=0x55625f3c6000, ...
4 vnc_client_read_plain (vs=0x55625f3c6000)
5 vnc_client_read (vs=0x55625f3c6000)
6 vnc_client_io (ioc=<optimized out>, condition=G_IO_IN, ...
7 g_main_dispatch (context=0x556251568a50)
8 g_main_context_dispatch (context=context@entry=0x556251568a50)
9 glib_pollfds_poll ()
10 os_host_main_loop_wait (timeout=<optimized out>)
11 main_loop_wait (nonblocking=nonblocking@entry=0)
12 main_loop () at vl.c:1909
13 main (argc=<optimized out>, argv=<optimized out>, ...
Having analyzed the coredump, I understood that the reason is that
ioc_tag is reset in vnc_disconnect_start while ioc is freed later,
in vnc_disconnect_finish. Between these two events, for some reason,
ioc_tag was set again, so after vnc_disconnect_finish the handler
was running with a freed ioc, which led to the segmentation fault.
The patch checks vs->disconnecting in places where we call
qio_channel_add_watch to prevent such an occurrence.
Signed-off-by: Klim Kireev <klim.kireev@virtuozzo.com>
---
Changelog:
v2: Attach the backtrace
v3: Change checks
ui/vnc.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/ui/vnc.c b/ui/vnc.c
index 33b087221f..708204fa7e 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -1407,13 +1407,19 @@ static void vnc_client_write_locked(VncState *vs)
} else
#endif /* CONFIG_VNC_SASL */
{
- vnc_client_write_plain(vs);
+ if (vs->disconnecting == FALSE) {
+ vnc_client_write_plain(vs);
+ } else {
+ if (vs->ioc_tag != 0) {
+ g_source_remove(vs->ioc_tag);
+ vs->ioc_tag = 0;
+ }
+ }
}
}
static void vnc_client_write(VncState *vs)
{
-
vnc_lock_output(vs);
if (vs->output.offset) {
vnc_client_write_locked(vs);
@@ -1421,8 +1427,12 @@ static void vnc_client_write(VncState *vs)
if (vs->ioc_tag) {
g_source_remove(vs->ioc_tag);
}
- vs->ioc_tag = qio_channel_add_watch(
- vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);
+ if (vs->disconnecting == FALSE) {
+ vs->ioc_tag = qio_channel_add_watch(
+ vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);
+ } else {
+ vs->ioc_tag = 0;
+ }
}
vnc_unlock_output(vs);
}
--
2.14.3
* Re: [Qemu-devel] [PATCH v2] vnc: fix segfault in closed connection handling
2018-01-31 16:25 [Qemu-devel] [PATCH v2] vnc: fix segfault in closed connection handling Klim Kireev
@ 2018-01-31 16:44 ` Daniel P. Berrangé
2018-02-07 8:42 ` klim
0 siblings, 1 reply; 6+ messages in thread
From: Daniel P. Berrangé @ 2018-01-31 16:44 UTC (permalink / raw)
To: Klim Kireev; +Cc: qemu-devel, kraxel
On Wed, Jan 31, 2018 at 07:25:21PM +0300, Klim Kireev wrote:
> On one of our client's node, due to trying to read from closed ioc,
> a segmentation fault occured. Corresponding backtrace:
>
> 0 object_get_class (obj=obj@entry=0x0)
> 1 qio_channel_readv_full (ioc=0x0, iov=0x7ffe55277180 ...
> 2 qio_channel_read (ioc=<optimized out> ...
> 3 vnc_client_read_buf (vs=vs@entry=0x55625f3c6000, ...
> 4 vnc_client_read_plain (vs=0x55625f3c6000)
> 5 vnc_client_read (vs=0x55625f3c6000)
> 6 vnc_client_io (ioc=<optimized out>, condition=G_IO_IN, ...
> 7 g_main_dispatch (context=0x556251568a50)
> 8 g_main_context_dispatch (context=context@entry=0x556251568a50)
> 9 glib_pollfds_poll ()
> 10 os_host_main_loop_wait (timeout=<optimized out>)
> 11 main_loop_wait (nonblocking=nonblocking@entry=0)
> 12 main_loop () at vl.c:1909
> 13 main (argc=<optimized out>, argv=<optimized out>, ...
>
> Having analyzed the coredump, I understood that the reason is that
> ioc_tag is reset on vnc_disconnect_start and ioc is cleaned
> in vnc_disconnect_finish. Between these two events due to some
> reasons the ioc_tag was set again and after vnc_disconnect_finish
> the handler is running with freed ioc,
> which led to the segmentation fault.
>
> The patch checks vs->disconnecting in places where we call
> qio_channel_add_watch to prevent such an occurrence.
>
> Signed-off-by: Klim Kireev <klim.kireev@virtuozzo.com>
> ---
> Changelog:
> v2: Attach the backtrace
>
> v3: Change checks
>
> ui/vnc.c | 18 ++++++++++++++----
> 1 file changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/ui/vnc.c b/ui/vnc.c
> index 33b087221f..708204fa7e 100644
> --- a/ui/vnc.c
> +++ b/ui/vnc.c
> @@ -1407,13 +1407,19 @@ static void vnc_client_write_locked(VncState *vs)
> } else
> #endif /* CONFIG_VNC_SASL */
> {
> - vnc_client_write_plain(vs);
> + if (vs->disconnecting == FALSE) {
> + vnc_client_write_plain(vs);
> + } else {
> + if (vs->ioc_tag != 0) {
> + g_source_remove(vs->ioc_tag);
> + vs->ioc_tag = 0;
> + }
> + }
> }
> }
I'm not sure it is safe to only do the check in the else{} branch
of this code. If this code is reachable, then I think the SASL
branch will cause the same crash too. I think we probably need to
push the checks up a level or two in the caller stack...
>
> static void vnc_client_write(VncState *vs)
> {
> -
> vnc_lock_output(vs);
> if (vs->output.offset) {
> vnc_client_write_locked(vs);
> @@ -1421,8 +1427,12 @@ static void vnc_client_write(VncState *vs)
> if (vs->ioc_tag) {
> g_source_remove(vs->ioc_tag);
> }
> - vs->ioc_tag = qio_channel_add_watch(
> - vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);
> + if (vs->disconnecting == FALSE) {
> + vs->ioc_tag = qio_channel_add_watch(
> + vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);
> + } else {
> + vs->ioc_tag = 0;
> + }
> }
> vnc_unlock_output(vs);
> }
...I think perhaps we should do the check in the vnc_client_io()
method, and also in the vnc_flush() method.
I think we also need to put a check in the vnc_jobs_consume_buffer()
method, which can be triggered from a bottom-half.
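A handler-side check of the kind suggested here might look roughly like this sketch. IoModel and model_client_io are illustrative stand-ins; the real vnc_client_io takes a QIOChannel and a GIOCondition, and the return values model G_SOURCE_REMOVE / G_SOURCE_CONTINUE:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of a guard inside the I/O handler itself: bail out early when
 * the connection is already being torn down, instead of touching a
 * channel that may have been freed. Illustrative only. */
typedef struct {
    void *ioc;
    int disconnecting;
    int reads;          /* counts how often the read path actually ran */
} IoModel;

int model_client_io(IoModel *vs)
{
    if (vs->disconnecting || vs->ioc == NULL) {
        return 0;       /* models G_SOURCE_REMOVE: drop the watch */
    }
    vs->reads++;        /* models vnc_client_read(vs) */
    return 1;           /* models G_SOURCE_CONTINUE */
}
```

Placing the check in the handler covers any path that manages to leave a stale watch registered, regardless of which caller re-added it.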
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: [Qemu-devel] [PATCH v2] vnc: fix segfault in closed connection handling
2018-01-31 16:44 ` Daniel P. Berrangé
@ 2018-02-07 8:42 ` klim
2018-02-07 9:00 ` klim
0 siblings, 1 reply; 6+ messages in thread
From: klim @ 2018-02-07 8:42 UTC (permalink / raw)
To: Daniel P. Berrangé; +Cc: qemu-devel, kraxel
On 01/31/2018 07:44 PM, Daniel P. Berrangé wrote:
> On Wed, Jan 31, 2018 at 07:25:21PM +0300, Klim Kireev wrote:
>> On one of our client's node, due to trying to read from closed ioc,
>> a segmentation fault occured. Corresponding backtrace:
>>
>> 0 object_get_class (obj=obj@entry=0x0)
>> 1 qio_channel_readv_full (ioc=0x0, iov=0x7ffe55277180 ...
>> 2 qio_channel_read (ioc=<optimized out> ...
>> 3 vnc_client_read_buf (vs=vs@entry=0x55625f3c6000, ...
>> 4 vnc_client_read_plain (vs=0x55625f3c6000)
>> 5 vnc_client_read (vs=0x55625f3c6000)
>> 6 vnc_client_io (ioc=<optimized out>, condition=G_IO_IN, ...
>> 7 g_main_dispatch (context=0x556251568a50)
>> 8 g_main_context_dispatch (context=context@entry=0x556251568a50)
>> 9 glib_pollfds_poll ()
>> 10 os_host_main_loop_wait (timeout=<optimized out>)
>> 11 main_loop_wait (nonblocking=nonblocking@entry=0)
>> 12 main_loop () at vl.c:1909
>> 13 main (argc=<optimized out>, argv=<optimized out>, ...
>>
>> Having analyzed the coredump, I understood that the reason is that
>> ioc_tag is reset on vnc_disconnect_start and ioc is cleaned
>> in vnc_disconnect_finish. Between these two events due to some
>> reasons the ioc_tag was set again and after vnc_disconnect_finish
>> the handler is running with freed ioc,
>> which led to the segmentation fault.
>>
>> The patch checks vs->disconnecting in places where we call
>> qio_channel_add_watch to prevent such an occurrence.
>>
>> Signed-off-by: Klim Kireev <klim.kireev@virtuozzo.com>
>> ---
>> Changelog:
>> v2: Attach the backtrace
>>
>> v3: Change checks
>>
>> ui/vnc.c | 18 ++++++++++++++----
>> 1 file changed, 14 insertions(+), 4 deletions(-)
>>
>> diff --git a/ui/vnc.c b/ui/vnc.c
>> index 33b087221f..708204fa7e 100644
>> --- a/ui/vnc.c
>> +++ b/ui/vnc.c
>> @@ -1407,13 +1407,19 @@ static void vnc_client_write_locked(VncState *vs)
>> } else
>> #endif /* CONFIG_VNC_SASL */
>> {
>> - vnc_client_write_plain(vs);
>> + if (vs->disconnecting == FALSE) {
>> + vnc_client_write_plain(vs);
>> + } else {
>> + if (vs->ioc_tag != 0) {
>> + g_source_remove(vs->ioc_tag);
>> + vs->ioc_tag = 0;
>> + }
>> + }
>> }
>> }
> I'm not sure it is safe to only do the check in the else{} branch
> of this code. If this code is reachable, then I think the SASL
> branch will cause the same crash problems too. I think probably
> need to push the checks up a level or two in the caller stack...
>
>>
>> static void vnc_client_write(VncState *vs)
>> {
>> -
>> vnc_lock_output(vs);
>> if (vs->output.offset) {
>> vnc_client_write_locked(vs);
>> @@ -1421,8 +1427,12 @@ static void vnc_client_write(VncState *vs)
>> if (vs->ioc_tag) {
>> g_source_remove(vs->ioc_tag);
>> }
>> - vs->ioc_tag = qio_channel_add_watch(
>> - vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);
>> + if (vs->disconnecting == FALSE) {
>> + vs->ioc_tag = qio_channel_add_watch(
>> + vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);
>> + } else {
>> + vs->ioc_tag = 0;
>> + }
>> }
>> vnc_unlock_output(vs);
>> }
> ...I think perhaps we should do the check in the vnc_client_io()
> method, and also in the vnc_flush() method.
>
> I think we also need to put a check in the vnc_jobs_consume_buffer()
> method, which can be triggered from a bottom-half.
Thank you for your advice; in both places there is already a check
for vs->ioc != NULL, so we don't need to check it there
>
> Regards,
> Daniel
* Re: [Qemu-devel] [PATCH v2] vnc: fix segfault in closed connection handling
2018-02-07 8:42 ` klim
@ 2018-02-07 9:00 ` klim
0 siblings, 0 replies; 6+ messages in thread
From: klim @ 2018-02-07 9:00 UTC (permalink / raw)
To: Daniel P. Berrangé; +Cc: qemu-devel, kraxel
On 02/07/2018 11:42 AM, klim wrote:
> On 01/31/2018 07:44 PM, Daniel P. Berrangé wrote:
>> On Wed, Jan 31, 2018 at 07:25:21PM +0300, Klim Kireev wrote:
>>> On one of our client's node, due to trying to read from closed ioc,
>>> a segmentation fault occured. Corresponding backtrace:
>>>
>>> 0 object_get_class (obj=obj@entry=0x0)
>>> 1 qio_channel_readv_full (ioc=0x0, iov=0x7ffe55277180 ...
>>> 2 qio_channel_read (ioc=<optimized out> ...
>>> 3 vnc_client_read_buf (vs=vs@entry=0x55625f3c6000, ...
>>> 4 vnc_client_read_plain (vs=0x55625f3c6000)
>>> 5 vnc_client_read (vs=0x55625f3c6000)
>>> 6 vnc_client_io (ioc=<optimized out>, condition=G_IO_IN, ...
>>> 7 g_main_dispatch (context=0x556251568a50)
>>> 8 g_main_context_dispatch (context=context@entry=0x556251568a50)
>>> 9 glib_pollfds_poll ()
>>> 10 os_host_main_loop_wait (timeout=<optimized out>)
>>> 11 main_loop_wait (nonblocking=nonblocking@entry=0)
>>> 12 main_loop () at vl.c:1909
>>> 13 main (argc=<optimized out>, argv=<optimized out>, ...
>>>
>>> Having analyzed the coredump, I understood that the reason is that
>>> ioc_tag is reset on vnc_disconnect_start and ioc is cleaned
>>> in vnc_disconnect_finish. Between these two events due to some
>>> reasons the ioc_tag was set again and after vnc_disconnect_finish
>>> the handler is running with freed ioc,
>>> which led to the segmentation fault.
>>>
>>> The patch checks vs->disconnecting in places where we call
>>> qio_channel_add_watch to prevent such an occurrence.
>>>
>>> Signed-off-by: Klim Kireev <klim.kireev@virtuozzo.com>
>>> ---
>>> Changelog:
>>> v2: Attach the backtrace
>>>
>>> v3: Change checks
>>>
>>> ui/vnc.c | 18 ++++++++++++++----
>>> 1 file changed, 14 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/ui/vnc.c b/ui/vnc.c
>>> index 33b087221f..708204fa7e 100644
>>> --- a/ui/vnc.c
>>> +++ b/ui/vnc.c
>>> @@ -1407,13 +1407,19 @@ static void vnc_client_write_locked(VncState
>>> *vs)
>>> } else
>>> #endif /* CONFIG_VNC_SASL */
>>> {
>>> - vnc_client_write_plain(vs);
>>> + if (vs->disconnecting == FALSE) {
>>> + vnc_client_write_plain(vs);
>>> + } else {
>>> + if (vs->ioc_tag != 0) {
>>> + g_source_remove(vs->ioc_tag);
>>> + vs->ioc_tag = 0;
>>> + }
>>> + }
>>> }
>>> }
>> I'm not sure it is safe to only do the check in the else{} branch
>> of this code. If this code is reachable, then I think the SASL
>> branch will cause the same crash problems too. I think probably
>> need to push the checks up a level or two in the caller stack...
>>
>>> static void vnc_client_write(VncState *vs)
>>> {
>>> -
>>> vnc_lock_output(vs);
>>> if (vs->output.offset) {
>>> vnc_client_write_locked(vs);
>>> @@ -1421,8 +1427,12 @@ static void vnc_client_write(VncState *vs)
>>> if (vs->ioc_tag) {
>>> g_source_remove(vs->ioc_tag);
>>> }
>>> - vs->ioc_tag = qio_channel_add_watch(
>>> - vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);
>>> + if (vs->disconnecting == FALSE) {
>>> + vs->ioc_tag = qio_channel_add_watch(
>>> + vs->ioc, G_IO_IN, vnc_client_io, vs, NULL);
>>> + } else {
>>> + vs->ioc_tag = 0;
>>> + }
>>> }
>>> vnc_unlock_output(vs);
>>> }
>> ...I think perhaps we should do the check in the vnc_client_io()
>> method, and also in the vnc_flush() method.
>>
>> I think we also need to put a check in the vnc_jobs_consume_buffer()
>> method, which can be triggered from a bottom-half.
>
> Thank you for your advice,
> in both places there is a check vs->ioc != NULL, so we don't need
> check it there
>
But ioc_tag can be set in those places, so you are probably right and
we need to check it there
>>
>> Regards,
>> Daniel
>
>
>