* [Qemu-devel] [PULL 0/1] Qio next patches
@ 2019-01-24 12:26 Daniel P. Berrangé
From: Daniel P. Berrangé @ 2019-01-24 12:26 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Maydell, Daniel P. Berrangé
The following changes since commit 6d809e7da943bb4b95b408fbf3d80d097c0f7d38:
Merge remote-tracking branch 'remotes/xtensa/tags/20190122-xtensa' into staging (2019-01-23 21:50:49 +0000)
are available in the Git repository at:
https://github.com/berrange/qemu tags/qio-next-pull-request
for you to fetch changes up to 73564c407caedf992a1c688b5fea776a8b56ba2a:
io: ensure UNIX client doesn't unlink server socket (2019-01-24 12:23:35 +0000)
----------------------------------------------------------------
Merge qio 2019/01/24
Fixes accidental deletion of VNC server UNIX listener socket
----------------------------------------------------------------
Daniel P. Berrangé (1):
io: ensure UNIX client doesn't unlink server socket
io/channel-socket.c | 19 ++------
tests/test-io-channel-socket.c | 86 ++++++++++++++++++++++++++++++----
2 files changed, 80 insertions(+), 25 deletions(-)
--
2.20.1
* [Qemu-devel] [PULL 1/1] io: ensure UNIX client doesn't unlink server socket
From: Daniel P. Berrangé @ 2019-01-24 12:26 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Maydell, Daniel P. Berrangé, Eric Blake
The qio_channel_socket_close method was mistakenly unlinking the
UNIX server socket, even if the channel was a client connection. This
was not noticed with chardevs, since they never call close, but with the
VNC server, this caused the VNC server socket to be deleted after the
first client quit.
The qio_channel_socket_close method also needlessly reimplemented the
logic that already exists in socket_listen_cleanup(). Just call that
method directly, for listen sockets only.
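To see the failure mode outside QEMU, here is a minimal standalone
POSIX sketch (illustrative only, not QEMU code; the socket path is
arbitrary): the listener's fd stays open, but once the client-side
close path unlinks the shared path, the filesystem entry is gone and
any later connect() fails with ENOENT.

  /* Illustrative only: a client that unlinks the shared UNIX socket
   * path on close deletes the server's listener entry. */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <sys/un.h>

  #define SOCK_PATH "/tmp/demo.sock"

  int main(void)
  {
      struct sockaddr_un addr = { .sun_family = AF_UNIX };
      strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);

      int srv = socket(AF_UNIX, SOCK_STREAM, 0);
      unlink(SOCK_PATH);          /* ok: pre-bind cleanup of stale path */
      bind(srv, (struct sockaddr *)&addr, sizeof(addr));
      listen(srv, 1);

      int clnt = socket(AF_UNIX, SOCK_STREAM, 0);
      connect(clnt, (struct sockaddr *)&addr, sizeof(addr));

      close(clnt);
      unlink(SOCK_PATH);          /* BUG: only the listener may do this */

      /* srv is still listening, but its path is gone, so a second
       * client now fails with "No such file or directory". */
      int clnt2 = socket(AF_UNIX, SOCK_STREAM, 0);
      if (connect(clnt2, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
          perror("second connect");
      }
      close(clnt2);
      close(srv);
      return 0;
  }

The patch below applies the same rule at the QIO level: the unlink
(via socket_listen_cleanup()) happens only when the channel has
QIO_CHANNEL_FEATURE_LISTEN set.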
This fixes a regression introduced in QEMU 3.0.0 with
commit d66f78e1eaa832f73c771d9df1b606fe75d52a50
Author: Pavel Balaev <mail@void.so>
Date: Mon May 21 19:17:35 2018 +0300
Delete AF_UNIX socket after close
Fixes launchpad #1795100
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
io/channel-socket.c | 19 ++------
tests/test-io-channel-socket.c | 86 ++++++++++++++++++++++++++++++----
2 files changed, 80 insertions(+), 25 deletions(-)
diff --git a/io/channel-socket.c b/io/channel-socket.c
index b50e63a053..bc5f80e780 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -688,10 +688,13 @@ qio_channel_socket_close(QIOChannel *ioc,
int rc = 0;
if (sioc->fd != -1) {
- SocketAddress *addr = socket_local_address(sioc->fd, errp);
#ifdef WIN32
WSAEventSelect(sioc->fd, NULL, 0);
#endif
+ if (qio_channel_has_feature(ioc, QIO_CHANNEL_FEATURE_LISTEN)) {
+ socket_listen_cleanup(sioc->fd, errp);
+ }
+
if (closesocket(sioc->fd) < 0) {
sioc->fd = -1;
error_setg_errno(errp, errno,
@@ -699,20 +702,6 @@ qio_channel_socket_close(QIOChannel *ioc,
return -1;
}
sioc->fd = -1;
-
- if (addr && addr->type == SOCKET_ADDRESS_TYPE_UNIX
- && addr->u.q_unix.path) {
- if (unlink(addr->u.q_unix.path) < 0 && errno != ENOENT) {
- error_setg_errno(errp, errno,
- "Failed to unlink socket %s",
- addr->u.q_unix.path);
- rc = -1;
- }
- }
-
- if (addr) {
- qapi_free_SocketAddress(addr);
- }
}
return rc;
}
diff --git a/tests/test-io-channel-socket.c b/tests/test-io-channel-socket.c
index 0597213f93..c253ae30f5 100644
--- a/tests/test-io-channel-socket.c
+++ b/tests/test-io-channel-socket.c
@@ -49,6 +49,7 @@ static void test_io_channel_set_socket_bufs(QIOChannel *src,
static void test_io_channel_setup_sync(SocketAddress *listen_addr,
SocketAddress *connect_addr,
+ QIOChannel **srv,
QIOChannel **src,
QIOChannel **dst)
{
@@ -78,7 +79,7 @@ static void test_io_channel_setup_sync(SocketAddress *listen_addr,
test_io_channel_set_socket_bufs(*src, *dst);
- object_unref(OBJECT(lioc));
+ *srv = QIO_CHANNEL(lioc);
}
@@ -99,6 +100,7 @@ static void test_io_channel_complete(QIOTask *task,
static void test_io_channel_setup_async(SocketAddress *listen_addr,
SocketAddress *connect_addr,
+ QIOChannel **srv,
QIOChannel **src,
QIOChannel **dst)
{
@@ -146,21 +148,34 @@ static void test_io_channel_setup_async(SocketAddress *listen_addr,
qio_channel_set_delay(*src, false);
test_io_channel_set_socket_bufs(*src, *dst);
- object_unref(OBJECT(lioc));
+ *srv = QIO_CHANNEL(lioc);
g_main_loop_unref(data.loop);
}
+static void test_io_channel_socket_path_exists(SocketAddress *addr,
+ bool expectExists)
+{
+ if (addr->type != SOCKET_ADDRESS_TYPE_UNIX) {
+ return;
+ }
+
+ g_assert(g_file_test(addr->u.q_unix.path,
+ G_FILE_TEST_EXISTS) == expectExists);
+}
+
+
static void test_io_channel(bool async,
SocketAddress *listen_addr,
SocketAddress *connect_addr,
bool passFD)
{
- QIOChannel *src, *dst;
+ QIOChannel *src, *dst, *srv;
QIOChannelTest *test;
if (async) {
- test_io_channel_setup_async(listen_addr, connect_addr, &src, &dst);
+ test_io_channel_setup_async(listen_addr, connect_addr,
+ &srv, &src, &dst);
g_assert(!passFD ||
qio_channel_has_feature(src, QIO_CHANNEL_FEATURE_FD_PASS));
@@ -169,14 +184,25 @@ static void test_io_channel(bool async,
g_assert(qio_channel_has_feature(src, QIO_CHANNEL_FEATURE_SHUTDOWN));
g_assert(qio_channel_has_feature(dst, QIO_CHANNEL_FEATURE_SHUTDOWN));
+ test_io_channel_socket_path_exists(listen_addr, true);
+
test = qio_channel_test_new();
qio_channel_test_run_threads(test, true, src, dst);
qio_channel_test_validate(test);
+ test_io_channel_socket_path_exists(listen_addr, true);
+
+ /* unref without close, to ensure finalize() cleans up */
+
object_unref(OBJECT(src));
object_unref(OBJECT(dst));
+ test_io_channel_socket_path_exists(listen_addr, true);
- test_io_channel_setup_async(listen_addr, connect_addr, &src, &dst);
+ object_unref(OBJECT(srv));
+ test_io_channel_socket_path_exists(listen_addr, false);
+
+ test_io_channel_setup_async(listen_addr, connect_addr,
+ &srv, &src, &dst);
g_assert(!passFD ||
qio_channel_has_feature(src, QIO_CHANNEL_FEATURE_FD_PASS));
@@ -189,10 +215,24 @@ static void test_io_channel(bool async,
qio_channel_test_run_threads(test, false, src, dst);
qio_channel_test_validate(test);
+ /* close before unref, to ensure finalize copes with already closed */
+
+ qio_channel_close(src, &error_abort);
+ qio_channel_close(dst, &error_abort);
+ test_io_channel_socket_path_exists(listen_addr, true);
+
object_unref(OBJECT(src));
object_unref(OBJECT(dst));
+ test_io_channel_socket_path_exists(listen_addr, true);
+
+ qio_channel_close(srv, &error_abort);
+ test_io_channel_socket_path_exists(listen_addr, false);
+
+ object_unref(OBJECT(srv));
+ test_io_channel_socket_path_exists(listen_addr, false);
} else {
- test_io_channel_setup_sync(listen_addr, connect_addr, &src, &dst);
+ test_io_channel_setup_sync(listen_addr, connect_addr,
+ &srv, &src, &dst);
g_assert(!passFD ||
qio_channel_has_feature(src, QIO_CHANNEL_FEATURE_FD_PASS));
@@ -201,14 +241,25 @@ static void test_io_channel(bool async,
g_assert(qio_channel_has_feature(src, QIO_CHANNEL_FEATURE_SHUTDOWN));
g_assert(qio_channel_has_feature(dst, QIO_CHANNEL_FEATURE_SHUTDOWN));
+ test_io_channel_socket_path_exists(listen_addr, true);
+
test = qio_channel_test_new();
qio_channel_test_run_threads(test, true, src, dst);
qio_channel_test_validate(test);
+ test_io_channel_socket_path_exists(listen_addr, true);
+
+ /* unref without close, to ensure finalize() cleans up */
+
object_unref(OBJECT(src));
object_unref(OBJECT(dst));
+ test_io_channel_socket_path_exists(listen_addr, true);
+
+ object_unref(OBJECT(srv));
+ test_io_channel_socket_path_exists(listen_addr, false);
- test_io_channel_setup_sync(listen_addr, connect_addr, &src, &dst);
+ test_io_channel_setup_sync(listen_addr, connect_addr,
+ &srv, &src, &dst);
g_assert(!passFD ||
qio_channel_has_feature(src, QIO_CHANNEL_FEATURE_FD_PASS));
@@ -221,8 +272,23 @@ static void test_io_channel(bool async,
qio_channel_test_run_threads(test, false, src, dst);
qio_channel_test_validate(test);
+ test_io_channel_socket_path_exists(listen_addr, true);
+
+ /* close before unref, to ensure finalize copes with already closed */
+
+ qio_channel_close(src, &error_abort);
+ qio_channel_close(dst, &error_abort);
+ test_io_channel_socket_path_exists(listen_addr, true);
+
object_unref(OBJECT(src));
object_unref(OBJECT(dst));
+ test_io_channel_socket_path_exists(listen_addr, true);
+
+ qio_channel_close(srv, &error_abort);
+ test_io_channel_socket_path_exists(listen_addr, false);
+
+ object_unref(OBJECT(srv));
+ test_io_channel_socket_path_exists(listen_addr, false);
}
}
@@ -316,7 +382,6 @@ static void test_io_channel_unix(bool async)
qapi_free_SocketAddress(listen_addr);
qapi_free_SocketAddress(connect_addr);
- g_assert(g_file_test(TEST_SOCKET, G_FILE_TEST_EXISTS) == FALSE);
}
@@ -335,7 +400,7 @@ static void test_io_channel_unix_fd_pass(void)
{
SocketAddress *listen_addr = g_new0(SocketAddress, 1);
SocketAddress *connect_addr = g_new0(SocketAddress, 1);
- QIOChannel *src, *dst;
+ QIOChannel *src, *dst, *srv;
int testfd;
int fdsend[3];
int *fdrecv = NULL;
@@ -359,7 +424,7 @@ static void test_io_channel_unix_fd_pass(void)
connect_addr->type = SOCKET_ADDRESS_TYPE_UNIX;
connect_addr->u.q_unix.path = g_strdup(TEST_SOCKET);
- test_io_channel_setup_sync(listen_addr, connect_addr, &src, &dst);
+ test_io_channel_setup_sync(listen_addr, connect_addr, &srv, &src, &dst);
memcpy(bufsend, "Hello World", G_N_ELEMENTS(bufsend));
@@ -412,6 +477,7 @@ static void test_io_channel_unix_fd_pass(void)
object_unref(OBJECT(src));
object_unref(OBJECT(dst));
+ object_unref(OBJECT(srv));
qapi_free_SocketAddress(listen_addr);
qapi_free_SocketAddress(connect_addr);
unlink(TEST_SOCKET);
--
2.20.1
* Re: [Qemu-devel] [PULL 0/1] Qio next patches
From: Peter Maydell @ 2019-01-25 9:54 UTC (permalink / raw)
To: Daniel P. Berrangé; +Cc: QEMU Developers
On Thu, 24 Jan 2019 at 12:26, Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> The following changes since commit 6d809e7da943bb4b95b408fbf3d80d097c0f7d38:
>
> Merge remote-tracking branch 'remotes/xtensa/tags/20190122-xtensa' into staging (2019-01-23 21:50:49 +0000)
>
> are available in the Git repository at:
>
> https://github.com/berrange/qemu tags/qio-next-pull-request
>
> for you to fetch changes up to 73564c407caedf992a1c688b5fea776a8b56ba2a:
>
> io: ensure UNIX client doesn't unlink server socket (2019-01-24 12:23:35 +0000)
>
> ----------------------------------------------------------------
> Merge qio 2019/01/24
>
> Fixes accidental deletion of VNC server UNIX listener socket
>
> ----------------------------------------------------------------
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/4.0
for any user-visible changes.
-- PMM