* [Qemu-devel] [PATCH 1/1] io/channel-websock: handle continuous reads without any data
@ 2017-12-28  9:36 Denis V. Lunev
  2018-01-10  9:55 ` Edgar Kaziakhmedov
  2018-01-10 12:33 ` Daniel P. Berrange
  0 siblings, 2 replies; 5+ messages in thread

From: Denis V. Lunev @ 2017-12-28  9:36 UTC
To: qemu-devel; +Cc: Edgar Kaziakhmedov, Denis V. Lunev, Daniel P. Berrange

From: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>

According to the current implementation of the websocket protocol in QEMU,
qio_channel_websock_handshake_io tries to read the handshake from the
channel to start communication over the TCP socket. But this subroutine
does not cover the scenario where the socket is closed during the
handshake. Therefore, if G_IO_IN is caught and
qio_channel_websock_handshake_read returns zero, an error has to be set
and the connection has to be closed.

Such behaviour causes 100% CPU load in the main QEMU loop, because the
main loop poll continues to receive and handle G_IO_IN events from the
websocket.

Steps to reproduce the 100% CPU load:
1) start qemu with the simplest configuration
   $ qemu -vnc [::1]:1,websocket=7500
2) open any vnc listener (which doesn't follow the websocket protocol)
   $ vncviewer :7500
3) kill the listener
4) the qemu main thread eats 100% CPU

Signed-off-by: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Daniel P. Berrange <berrange@redhat.com>
---
 io/channel-websock.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/io/channel-websock.c b/io/channel-websock.c
index 87ebdeb..71f046a 100644
--- a/io/channel-websock.c
+++ b/io/channel-websock.c
@@ -581,9 +581,11 @@ static gboolean qio_channel_websock_handshake_io(QIOChannel *ioc,
         return FALSE;
     }
     if (ret == 0) {
-        trace_qio_channel_websock_handshake_pending(ioc, G_IO_IN);
-        /* need more data still */
-        return TRUE;
+        /* If G_IO_IN happened and there is no data to read, close connection */
+        error_setg(&err, "connection was closed");
+        qio_task_set_error(task, err);
+        qio_task_complete(task);
+        return FALSE;
     }

     if (err) {
-- 
2.7.4
* Re: [Qemu-devel] [PATCH 1/1] io/channel-websock: handle continuous reads without any data
  2017-12-28  9:36 [Qemu-devel] [PATCH 1/1] io/channel-websock: handle continuous reads without any data Denis V. Lunev
@ 2018-01-10  9:55 ` Edgar Kaziakhmedov
  2018-01-10 12:33 ` Daniel P. Berrange
  1 sibling, 0 replies; 5+ messages in thread

From: Edgar Kaziakhmedov @ 2018-01-10  9:55 UTC
To: Denis V. Lunev, qemu-devel

ping

On 12/28/2017 12:36 PM, Denis V. Lunev wrote:
> From: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
>
> According to the current implementation of the websocket protocol in QEMU,
> qio_channel_websock_handshake_io tries to read the handshake from the
> channel to start communication over the TCP socket. But this subroutine
> does not cover the scenario where the socket is closed during the
> handshake. Therefore, if G_IO_IN is caught and
> qio_channel_websock_handshake_read returns zero, an error has to be set
> and the connection has to be closed.
>
> Such behaviour causes 100% CPU load in the main QEMU loop, because the
> main loop poll continues to receive and handle G_IO_IN events from the
> websocket.
>
> Steps to reproduce the 100% CPU load:
> 1) start qemu with the simplest configuration
>    $ qemu -vnc [::1]:1,websocket=7500
> 2) open any vnc listener (which doesn't follow the websocket protocol)
>    $ vncviewer :7500
> 3) kill the listener
> 4) the qemu main thread eats 100% CPU
>
> Signed-off-by: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Daniel P. Berrange <berrange@redhat.com>
> ---
>  io/channel-websock.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/io/channel-websock.c b/io/channel-websock.c
> index 87ebdeb..71f046a 100644
> --- a/io/channel-websock.c
> +++ b/io/channel-websock.c
> @@ -581,9 +581,11 @@ static gboolean qio_channel_websock_handshake_io(QIOChannel *ioc,
>          return FALSE;
>      }
>      if (ret == 0) {
> -        trace_qio_channel_websock_handshake_pending(ioc, G_IO_IN);
> -        /* need more data still */
> -        return TRUE;
> +        /* If G_IO_IN happened and there is no data to read, close connection */
> +        error_setg(&err, "connection was closed");
> +        qio_task_set_error(task, err);
> +        qio_task_complete(task);
> +        return FALSE;
>      }
>
>      if (err) {
* Re: [Qemu-devel] [PATCH 1/1] io/channel-websock: handle continuous reads without any data
  2017-12-28  9:36 [Qemu-devel] [PATCH 1/1] io/channel-websock: handle continuous reads without any data Denis V. Lunev
  2018-01-10  9:55 ` Edgar Kaziakhmedov
@ 2018-01-10 12:33 ` Daniel P. Berrange
  2018-01-10 13:31 ` Edgar Kaziakhmedov
  1 sibling, 1 reply; 5+ messages in thread

From: Daniel P. Berrange @ 2018-01-10 12:33 UTC
To: Denis V. Lunev; +Cc: qemu-devel, Edgar Kaziakhmedov

On Thu, Dec 28, 2017 at 12:36:18PM +0300, Denis V. Lunev wrote:
> From: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
>
> According to the current implementation of the websocket protocol in QEMU,
> qio_channel_websock_handshake_io tries to read the handshake from the
> channel to start communication over the TCP socket. But this subroutine
> does not cover the scenario where the socket is closed during the
> handshake. Therefore, if G_IO_IN is caught and
> qio_channel_websock_handshake_read returns zero, an error has to be set
> and the connection has to be closed.
>
> Such behaviour causes 100% CPU load in the main QEMU loop, because the
> main loop poll continues to receive and handle G_IO_IN events from the
> websocket.
>
> Steps to reproduce the 100% CPU load:
> 1) start qemu with the simplest configuration
>    $ qemu -vnc [::1]:1,websocket=7500
> 2) open any vnc listener (which doesn't follow the websocket protocol)
>    $ vncviewer :7500
> 3) kill the listener
> 4) the qemu main thread eats 100% CPU
>
> Signed-off-by: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Daniel P. Berrange <berrange@redhat.com>
> ---
>  io/channel-websock.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/io/channel-websock.c b/io/channel-websock.c
> index 87ebdeb..71f046a 100644
> --- a/io/channel-websock.c
> +++ b/io/channel-websock.c
> @@ -581,9 +581,11 @@ static gboolean qio_channel_websock_handshake_io(QIOChannel *ioc,
>          return FALSE;
>      }
>      if (ret == 0) {
> -        trace_qio_channel_websock_handshake_pending(ioc, G_IO_IN);
> -        /* need more data still */
> -        return TRUE;
> +        /* If G_IO_IN happened and there is no data to read, close connection */
> +        error_setg(&err, "connection was closed");
> +        qio_task_set_error(task, err);
> +        qio_task_complete(task);
> +        return FALSE;

This is bogus - 'ret == 0' from qio_channel_websock_handshake_read()
does *not* imply end of file.

It says we have read some data, but not yet seen the end of the HTTP
headers.

If you want to detect EOF, you need to check the qio_channel_read()
function return value directly in that method.

Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: [Qemu-devel] [PATCH 1/1] io/channel-websock: handle continuous reads without any data
  2018-01-10 12:33 ` Daniel P. Berrange
@ 2018-01-10 13:31 ` Edgar Kaziakhmedov
  2018-01-10 13:34 ` Daniel P. Berrange
  0 siblings, 1 reply; 5+ messages in thread

From: Edgar Kaziakhmedov @ 2018-01-10 13:31 UTC
To: Daniel P. Berrange, Denis V. Lunev; +Cc: qemu-devel

On 01/10/2018 03:33 PM, Daniel P. Berrange wrote:
> On Thu, Dec 28, 2017 at 12:36:18PM +0300, Denis V. Lunev wrote:
>> From: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
>>
>> According to the current implementation of the websocket protocol in QEMU,
>> qio_channel_websock_handshake_io tries to read the handshake from the
>> channel to start communication over the TCP socket. But this subroutine
>> does not cover the scenario where the socket is closed during the
>> handshake. Therefore, if G_IO_IN is caught and
>> qio_channel_websock_handshake_read returns zero, an error has to be set
>> and the connection has to be closed.
>>
>> Such behaviour causes 100% CPU load in the main QEMU loop, because the
>> main loop poll continues to receive and handle G_IO_IN events from the
>> websocket.
>>
>> Steps to reproduce the 100% CPU load:
>> 1) start qemu with the simplest configuration
>>    $ qemu -vnc [::1]:1,websocket=7500
>> 2) open any vnc listener (which doesn't follow the websocket protocol)
>>    $ vncviewer :7500
>> 3) kill the listener
>> 4) the qemu main thread eats 100% CPU
>>
>> Signed-off-by: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>> CC: Daniel P. Berrange <berrange@redhat.com>
>> ---
>>  io/channel-websock.c | 8 +++++---
>>  1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/io/channel-websock.c b/io/channel-websock.c
>> index 87ebdeb..71f046a 100644
>> --- a/io/channel-websock.c
>> +++ b/io/channel-websock.c
>> @@ -581,9 +581,11 @@ static gboolean qio_channel_websock_handshake_io(QIOChannel *ioc,
>>          return FALSE;
>>      }
>>      if (ret == 0) {
>> -        trace_qio_channel_websock_handshake_pending(ioc, G_IO_IN);
>> -        /* need more data still */
>> -        return TRUE;
>> +        /* If G_IO_IN happened and there is no data to read, close connection */
>> +        error_setg(&err, "connection was closed");
>> +        qio_task_set_error(task, err);
>> +        qio_task_complete(task);
>> +        return FALSE;
> This is bogus - 'ret == 0' from qio_channel_websock_handshake_read()
> does *not* imply end of file.
>
> It says we have read some data, but not yet seen the end of the HTTP
> headers.
Yes, good catch.
>
> If you want to detect EOF, you need to check the qio_channel_read()
> function return value directly in that method.
You mean directly in the qio_channel_websock_handshake_read routine,
where qio_channel_read() is called, I suppose.
>
> Regards,
> Daniel
* Re: [Qemu-devel] [PATCH 1/1] io/channel-websock: handle continuous reads without any data
  2018-01-10 13:31 ` Edgar Kaziakhmedov
@ 2018-01-10 13:34 ` Daniel P. Berrange
  0 siblings, 0 replies; 5+ messages in thread

From: Daniel P. Berrange @ 2018-01-10 13:34 UTC
To: Edgar Kaziakhmedov; +Cc: Denis V. Lunev, qemu-devel

On Wed, Jan 10, 2018 at 04:31:21PM +0300, Edgar Kaziakhmedov wrote:
> On 01/10/2018 03:33 PM, Daniel P. Berrange wrote:
> > On Thu, Dec 28, 2017 at 12:36:18PM +0300, Denis V. Lunev wrote:
> > > From: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
> > >
> > > According to the current implementation of the websocket protocol in QEMU,
> > > qio_channel_websock_handshake_io tries to read the handshake from the
> > > channel to start communication over the TCP socket. But this subroutine
> > > does not cover the scenario where the socket is closed during the
> > > handshake. Therefore, if G_IO_IN is caught and
> > > qio_channel_websock_handshake_read returns zero, an error has to be set
> > > and the connection has to be closed.
> > >
> > > Such behaviour causes 100% CPU load in the main QEMU loop, because the
> > > main loop poll continues to receive and handle G_IO_IN events from the
> > > websocket.
> > >
> > > Steps to reproduce the 100% CPU load:
> > > 1) start qemu with the simplest configuration
> > >    $ qemu -vnc [::1]:1,websocket=7500
> > > 2) open any vnc listener (which doesn't follow the websocket protocol)
> > >    $ vncviewer :7500
> > > 3) kill the listener
> > > 4) the qemu main thread eats 100% CPU
> > >
> > > Signed-off-by: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
> > > Signed-off-by: Denis V. Lunev <den@openvz.org>
> > > CC: Daniel P. Berrange <berrange@redhat.com>
> > > ---
> > >  io/channel-websock.c | 8 +++++---
> > >  1 file changed, 5 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/io/channel-websock.c b/io/channel-websock.c
> > > index 87ebdeb..71f046a 100644
> > > --- a/io/channel-websock.c
> > > +++ b/io/channel-websock.c
> > > @@ -581,9 +581,11 @@ static gboolean qio_channel_websock_handshake_io(QIOChannel *ioc,
> > >          return FALSE;
> > >      }
> > >      if (ret == 0) {
> > > -        trace_qio_channel_websock_handshake_pending(ioc, G_IO_IN);
> > > -        /* need more data still */
> > > -        return TRUE;
> > > +        /* If G_IO_IN happened and there is no data to read, close connection */
> > > +        error_setg(&err, "connection was closed");
> > > +        qio_task_set_error(task, err);
> > > +        qio_task_complete(task);
> > > +        return FALSE;
> > This is bogus - 'ret == 0' from qio_channel_websock_handshake_read()
> > does *not* imply end of file.
> >
> > It says we have read some data, but not yet seen the end of the HTTP
> > headers.
> Yes, good catch.
> >
> > If you want to detect EOF, you need to check the qio_channel_read()
> > function return value directly in that method.
> You mean directly in the qio_channel_websock_handshake_read routine,
> where qio_channel_read() is called, I suppose.

In this code:

    if (!handshake_end) {
        if (ioc->encinput.offset >= 4096) {
            qio_channel_websock_handshake_send_res_err(
                ioc, QIO_CHANNEL_WEBSOCK_HANDSHAKE_RES_TOO_LARGE);
            error_setg(errp,
                       "End of headers not found in first 4096 bytes");
            return 1;
        } else {
            return 0;
        }
    }

there needs to be an extra "else if" branch added:

    } else if (ret == 0) {
        qio_channel_websock_handshake_send_res_err(
            ioc, QIO_CHANNEL_WEBSOCK_HANDSHAKE_RES_EOF);
        error_setg(errp,
                   "End of headers not found before connection closed");
        return -1;
    }

Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|