* [PATCH v2 01/12] iotests: Drop execute permissions on vvfat.out
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-10 15:57 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 02/12] qio: Add trace points to net_listener Eric Blake
` (10 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-block, berrange, kwolf
Output files are not executables. Noticed while preparing another
iotest addition.
Fixes: c8f60bfb43 ("iotests: Add `vvfat` tests", v9.1.0)
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: split this change to separate patch
---
tests/qemu-iotests/tests/vvfat.out | 0
1 file changed, 0 insertions(+), 0 deletions(-)
mode change 100755 => 100644 tests/qemu-iotests/tests/vvfat.out
diff --git a/tests/qemu-iotests/tests/vvfat.out b/tests/qemu-iotests/tests/vvfat.out
old mode 100755
new mode 100644
--
2.51.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v2 01/12] iotests: Drop execute permissions on vvfat.out
2025-11-08 22:59 ` [PATCH v2 01/12] iotests: Drop execute permissions on vvfat.out Eric Blake
@ 2025-11-10 15:57 ` Daniel P. Berrangé
0 siblings, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-10 15:57 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-devel, qemu-block, kwolf
On Sat, Nov 08, 2025 at 04:59:22PM -0600, Eric Blake wrote:
> Output files are not executables. Noticed while preparing another
> iotest addition.
>
> Fixes: c8f60bfb43 ("iotests: Add `vvfat` tests", v9.1.0)
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: split this change to separate patch
> ---
> tests/qemu-iotests/tests/vvfat.out | 0
> 1 file changed, 0 insertions(+), 0 deletions(-)
> mode change 100755 => 100644 tests/qemu-iotests/tests/vvfat.out
>
> diff --git a/tests/qemu-iotests/tests/vvfat.out b/tests/qemu-iotests/tests/vvfat.out
> old mode 100755
> new mode 100644
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v2 02/12] qio: Add trace points to net_listener
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
2025-11-08 22:59 ` [PATCH v2 01/12] iotests: Drop execute permissions on vvfat.out Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-10 15:58 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 03/12] qio: Unwatch before notify in QIONetListener Eric Blake
` (9 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-block, berrange, kwolf
Upcoming patches will adjust how net_listener watches for new client
connections; adding trace points now makes it easier to debug that the
changes work as intended. For example, adding
--trace='qio_net_listener*' to the qemu-storage-daemon command line
before --nbd-server will track when the server first starts listening
for clients.
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: Make tracepoints unconditional, rename to match later refactoring
patch, document io_func in unwatch
---
io/net-listener.c | 10 ++++++++++
io/trace-events | 5 +++++
2 files changed, 15 insertions(+)
diff --git a/io/net-listener.c b/io/net-listener.c
index 47405965a66..007acbd5b18 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -23,6 +23,7 @@
#include "io/dns-resolver.h"
#include "qapi/error.h"
#include "qemu/module.h"
+#include "trace.h"
QIONetListener *qio_net_listener_new(void)
{
@@ -50,6 +51,7 @@ static gboolean qio_net_listener_channel_func(QIOChannel *ioc,
return TRUE;
}
+ trace_qio_net_listener_callback(listener, listener->io_func);
if (listener->io_func) {
listener->io_func(listener, sioc, listener->io_data);
}
@@ -123,6 +125,7 @@ void qio_net_listener_add(QIONetListener *listener,
object_ref(OBJECT(sioc));
listener->connected = true;
+ trace_qio_net_listener_watch(listener, listener->io_func, "add");
if (listener->io_func != NULL) {
object_ref(OBJECT(listener));
listener->io_source[listener->nsioc] = qio_channel_add_watch_source(
@@ -143,6 +146,8 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
{
size_t i;
+ trace_qio_net_listener_unwatch(listener, listener->io_func,
+ "set_client_func");
if (listener->io_notify) {
listener->io_notify(listener->io_data);
}
@@ -158,6 +163,8 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
}
}
+ trace_qio_net_listener_watch(listener, listener->io_func,
+ "set_client_func");
if (listener->io_func != NULL) {
for (i = 0; i < listener->nsioc; i++) {
object_ref(OBJECT(listener));
@@ -218,6 +225,7 @@ QIOChannelSocket *qio_net_listener_wait_client(QIONetListener *listener)
};
size_t i;
+ trace_qio_net_listener_unwatch(listener, listener->io_func, "wait_client");
for (i = 0; i < listener->nsioc; i++) {
if (listener->io_source[i]) {
g_source_destroy(listener->io_source[i]);
@@ -247,6 +255,7 @@ QIOChannelSocket *qio_net_listener_wait_client(QIONetListener *listener)
g_main_loop_unref(loop);
g_main_context_unref(ctxt);
+ trace_qio_net_listener_watch(listener, listener->io_func, "wait_client");
if (listener->io_func != NULL) {
for (i = 0; i < listener->nsioc; i++) {
object_ref(OBJECT(listener));
@@ -268,6 +277,7 @@ void qio_net_listener_disconnect(QIONetListener *listener)
return;
}
+ trace_qio_net_listener_unwatch(listener, listener->io_func, "disconnect");
for (i = 0; i < listener->nsioc; i++) {
if (listener->io_source[i]) {
g_source_destroy(listener->io_source[i]);
diff --git a/io/trace-events b/io/trace-events
index dc3a63ba1fb..10976eca5fe 100644
--- a/io/trace-events
+++ b/io/trace-events
@@ -72,3 +72,8 @@ qio_channel_command_new_pid(void *ioc, int writefd, int readfd, int pid) "Comman
qio_channel_command_new_spawn(void *ioc, const char *binary, int flags) "Command new spawn ioc=%p binary=%s flags=%d"
qio_channel_command_abort(void *ioc, int pid) "Command abort ioc=%p pid=%d"
qio_channel_command_wait(void *ioc, int pid, int ret, int status) "Command abort ioc=%p pid=%d ret=%d status=%d"
+
+# net-listener.c
+qio_net_listener_watch(void *listener, void *func, const char *extra) "Net listener=%p watch enabled func=%p by %s"
+qio_net_listener_unwatch(void *listener, void *func, const char *extra) "Net listener=%p watch disabled func=%p by %s"
+qio_net_listener_callback(void *listener, void *func) "Net listener=%p callback forwarding to func=%p"
--
2.51.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v2 02/12] qio: Add trace points to net_listener
2025-11-08 22:59 ` [PATCH v2 02/12] qio: Add trace points to net_listener Eric Blake
@ 2025-11-10 15:58 ` Daniel P. Berrangé
0 siblings, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-10 15:58 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-devel, qemu-block, kwolf
On Sat, Nov 08, 2025 at 04:59:23PM -0600, Eric Blake wrote:
> Upcoming patches will adjust how net_listener watches for new client
> connections; adding trace points now makes it easier to debug that the
> changes work as intended. For example, adding
> --trace='qio_net_listener*' to the qemu-storage-daemon command line
> before --nbd-server will track when the server first starts listening
> for clients.
>
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: Make tracepoints unconditional, rename to match later refactoring
> patch, document io_func in unwatch
> ---
> io/net-listener.c | 10 ++++++++++
> io/trace-events | 5 +++++
> 2 files changed, 15 insertions(+)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v2 03/12] qio: Unwatch before notify in QIONetListener
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
2025-11-08 22:59 ` [PATCH v2 01/12] iotests: Drop execute permissions on vvfat.out Eric Blake
2025-11-08 22:59 ` [PATCH v2 02/12] qio: Add trace points to net_listener Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-10 16:00 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 04/12] qio: Remember context of qio_net_listener_set_client_func_full Eric Blake
` (8 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-block, berrange, kwolf, qemu-stable
When changing the callback registered with QIONetListener, the code
was calling notify on the old opaque data prior to actually removing
the old GSource objects still pointing to that data. Similarly,
during finalize, it called notify before tearing down the various
GSource objects tied to the data.
In practice, a grep of the QEMU code base found that every existing
client of QIONetListener passes in a NULL notifier (the opaque data,
if non-NULL, outlives the NetListener and so does not need cleanup
when the NetListener is torn down), so this patch has no impact. And
even if a caller had passed in a reference-counted object with a
notifier of object_unref but kept its own reference on the data, then
the early notify would merely reduce a refcount from (say) 2 to 1, but
not free the object. However, it is a latent bug waiting to bite any
future caller that passes in data where the notifier actually frees
the object, because the GSource could then trigger a use-after-free if
it loses the race on a last-minute client connection resulting in the
data being passed to one final use of the async callback.
Better is to delay the notify call until after all GSource that have
been given a copy of the opaque data are torn down.
CC: qemu-stable@nongnu.org
Fixes: 530473924d ("io: introduce a network socket listener API", v2.12.0)
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: new patch, split out from 4/8 to leave that one as just pure
refactoring, and call attention to this being a latent bug fix
---
io/net-listener.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/io/net-listener.c b/io/net-listener.c
index 007acbd5b18..d71b65270e0 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -148,13 +148,6 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
trace_qio_net_listener_unwatch(listener, listener->io_func,
"set_client_func");
- if (listener->io_notify) {
- listener->io_notify(listener->io_data);
- }
- listener->io_func = func;
- listener->io_data = data;
- listener->io_notify = notify;
-
for (i = 0; i < listener->nsioc; i++) {
if (listener->io_source[i]) {
g_source_destroy(listener->io_source[i]);
@@ -163,6 +156,13 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
}
}
+ if (listener->io_notify) {
+ listener->io_notify(listener->io_data);
+ }
+ listener->io_func = func;
+ listener->io_data = data;
+ listener->io_notify = notify;
+
trace_qio_net_listener_watch(listener, listener->io_func,
"set_client_func");
if (listener->io_func != NULL) {
@@ -300,10 +300,10 @@ static void qio_net_listener_finalize(Object *obj)
QIONetListener *listener = QIO_NET_LISTENER(obj);
size_t i;
+ qio_net_listener_disconnect(listener);
if (listener->io_notify) {
listener->io_notify(listener->io_data);
}
- qio_net_listener_disconnect(listener);
for (i = 0; i < listener->nsioc; i++) {
object_unref(OBJECT(listener->sioc[i]));
--
2.51.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v2 03/12] qio: Unwatch before notify in QIONetListener
2025-11-08 22:59 ` [PATCH v2 03/12] qio: Unwatch before notify in QIONetListener Eric Blake
@ 2025-11-10 16:00 ` Daniel P. Berrangé
0 siblings, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-10 16:00 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-devel, qemu-block, kwolf, qemu-stable
On Sat, Nov 08, 2025 at 04:59:24PM -0600, Eric Blake wrote:
> When changing the callback registered with QIONetListener, the code
> was calling notify on the old opaque data prior to actually removing
> the old GSource objects still pointing to that data. Similarly,
> during finalize, it called notify before tearing down the various
> GSource objects tied to the data.
>
> In practice, a grep of the QEMU code base found that every existing
> client of QIONetListener passes in a NULL notifier (the opaque data,
> if non-NULL, outlives the NetListener and so does not need cleanup
> when the NetListener is torn down), so this patch has no impact. And
> even if a caller had passed in a reference-counted object with a
> notifier of object_unref but kept its own reference on the data, then
> the early notify would merely reduce a refcount from (say) 2 to 1, but
> not free the object. However, it is a latent bug waiting to bite any
> future caller that passes in data where the notifier actually frees
> the object, because the GSource could then trigger a use-after-free if
> it loses the race on a last-minute client connection resulting in the
> data being passed to one final use of the async callback.
>
> Better is to delay the notify call until after all GSource that have
> been given a copy of the opaque data are torn down.
>
> CC: qemu-stable@nongnu.org
> Fixes: 530473924d ("io: introduce a network socket listener API", v2.12.0)
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: new patch, split out from 4/8 to leave that one as just pure
> refactoring, and call attention to this being a latent bug fix
> ---
> io/net-listener.c | 16 ++++++++--------
> 1 file changed, 8 insertions(+), 8 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v2 04/12] qio: Remember context of qio_net_listener_set_client_func_full
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
` (2 preceding siblings ...)
2025-11-08 22:59 ` [PATCH v2 03/12] qio: Unwatch before notify in QIONetListener Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-10 16:08 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 05/12] qio: Minor optimization when callback function is unchanged Eric Blake
` (7 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-block, berrange, kwolf, qemu-stable
io/net-listener.c has two modes of use: asynchronous (the user calls
qio_net_listener_set_client_func to wake up the callback via the
global GMainContext, or qio_net_listener_set_client_func_full to wake
up the callback via the caller's own alternative GMainContext), and
synchronous (the user calls qio_net_listener_wait_client which creates
its own GMainContext and waits for the first client connection before
returning, with no need for a user's callback). But commit 938c8b79
has a latent logic flaw: when qio_net_listener_wait_client finishes on
its temporary context, it reverts all of the siocs back to the global
GMainContext rather than the potentially non-NULL context they might
have been originally registered with. Similarly, if the user creates
a net-listener, adds initial addresses, registers an async callback
with a non-default context (which ties to all siocs for the initial
addresses), then adds more addresses with qio_net_listener_add, the
siocs for later addresses are blindly placed in the global context,
rather than sharing the context of the earlier ones.
In practice, I don't think this has caused issues. As pointed out by
the original commit, all async callers prior to that commit were
already okay with the NULL default context; and the typical usage
pattern is to first add ALL the addresses the listener will pay
attention to before ever setting the async callback. Likewise, if a
file uses only qio_net_listener_set_client_func instead of
qio_net_listener_set_client_func_full, then it is never using a custom
context, so later assignments of async callbacks will still be to the
same global context as earlier ones. Meanwhile, any callers that want
to do the sync operation to grab the first client are unlikely to
register an async callback; altogether bypassing the question of
whether later assignments of a GSource are being tied to a different
context over time.
I do note that chardev/char-socket.c is the only file that calls both
qio_net_listener_wait_client (sync for a single client in
tcp_chr_accept_server_sync), and qio_net_listener_set_client_func_full
(several places, all with chr->gcontext, but sometimes with a NULL
callback function during teardown). But as far as I can tell, the two
uses are mutually exclusive, based on the is_waitconnect parameter to
qmp_chardev_open_socket_server.
That said, it is more robust to remember when an async callback
function is tied to a non-default context, and have both the sync wait
and any late address additions honor that same context. That way, the
code will be robust even if a later user performs a sync wait for a
specific client in the middle of servicing a longer-lived
QIONetListener that has an async callback for all other clients.
CC: qemu-stable@nongnu.org
Fixes: 938c8b79 ("qio: store gsources for net listeners", v2.12.0)
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: move earlier in series, trace context
---
include/io/net-listener.h | 1 +
io/net-listener.c | 25 ++++++++++++++++---------
io/trace-events | 6 +++---
3 files changed, 20 insertions(+), 12 deletions(-)
diff --git a/include/io/net-listener.h b/include/io/net-listener.h
index ab9f291ed62..42fbfab5467 100644
--- a/include/io/net-listener.h
+++ b/include/io/net-listener.h
@@ -50,6 +50,7 @@ struct QIONetListener {
QIOChannelSocket **sioc;
GSource **io_source;
size_t nsioc;
+ GMainContext *context;
bool connected;
diff --git a/io/net-listener.c b/io/net-listener.c
index d71b65270e0..0f16b78fbbd 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -51,7 +51,8 @@ static gboolean qio_net_listener_channel_func(QIOChannel *ioc,
return TRUE;
}
- trace_qio_net_listener_callback(listener, listener->io_func);
+ trace_qio_net_listener_callback(listener, listener->io_func,
+ listener->context);
if (listener->io_func) {
listener->io_func(listener, sioc, listener->io_data);
}
@@ -125,13 +126,14 @@ void qio_net_listener_add(QIONetListener *listener,
object_ref(OBJECT(sioc));
listener->connected = true;
- trace_qio_net_listener_watch(listener, listener->io_func, "add");
+ trace_qio_net_listener_watch(listener, listener->io_func,
+ listener->context, "add");
if (listener->io_func != NULL) {
object_ref(OBJECT(listener));
listener->io_source[listener->nsioc] = qio_channel_add_watch_source(
QIO_CHANNEL(listener->sioc[listener->nsioc]), G_IO_IN,
qio_net_listener_channel_func,
- listener, (GDestroyNotify)object_unref, NULL);
+ listener, (GDestroyNotify)object_unref, listener->context);
}
listener->nsioc++;
@@ -147,7 +149,8 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
size_t i;
trace_qio_net_listener_unwatch(listener, listener->io_func,
- "set_client_func");
+ listener->context, "set_client_func");
+
for (i = 0; i < listener->nsioc; i++) {
if (listener->io_source[i]) {
g_source_destroy(listener->io_source[i]);
@@ -162,9 +165,10 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
listener->io_func = func;
listener->io_data = data;
listener->io_notify = notify;
+ listener->context = context;
trace_qio_net_listener_watch(listener, listener->io_func,
- "set_client_func");
+ listener->context, "set_client_func");
if (listener->io_func != NULL) {
for (i = 0; i < listener->nsioc; i++) {
object_ref(OBJECT(listener));
@@ -225,7 +229,8 @@ QIOChannelSocket *qio_net_listener_wait_client(QIONetListener *listener)
};
size_t i;
- trace_qio_net_listener_unwatch(listener, listener->io_func, "wait_client");
+ trace_qio_net_listener_unwatch(listener, listener->io_func,
+ listener->context, "wait_client");
for (i = 0; i < listener->nsioc; i++) {
if (listener->io_source[i]) {
g_source_destroy(listener->io_source[i]);
@@ -255,14 +260,15 @@ QIOChannelSocket *qio_net_listener_wait_client(QIONetListener *listener)
g_main_loop_unref(loop);
g_main_context_unref(ctxt);
- trace_qio_net_listener_watch(listener, listener->io_func, "wait_client");
+ trace_qio_net_listener_watch(listener, listener->io_func,
+ listener->context, "wait_client");
if (listener->io_func != NULL) {
for (i = 0; i < listener->nsioc; i++) {
object_ref(OBJECT(listener));
listener->io_source[i] = qio_channel_add_watch_source(
QIO_CHANNEL(listener->sioc[i]), G_IO_IN,
qio_net_listener_channel_func,
- listener, (GDestroyNotify)object_unref, NULL);
+ listener, (GDestroyNotify)object_unref, listener->context);
}
}
@@ -277,7 +283,8 @@ void qio_net_listener_disconnect(QIONetListener *listener)
return;
}
- trace_qio_net_listener_unwatch(listener, listener->io_func, "disconnect");
+ trace_qio_net_listener_unwatch(listener, listener->io_func,
+ listener->context, "disconnect");
for (i = 0; i < listener->nsioc; i++) {
if (listener->io_source[i]) {
g_source_destroy(listener->io_source[i]);
diff --git a/io/trace-events b/io/trace-events
index 10976eca5fe..0cb77d579b6 100644
--- a/io/trace-events
+++ b/io/trace-events
@@ -74,6 +74,6 @@ qio_channel_command_abort(void *ioc, int pid) "Command abort ioc=%p pid=%d"
qio_channel_command_wait(void *ioc, int pid, int ret, int status) "Command abort ioc=%p pid=%d ret=%d status=%d"
# net-listener.c
-qio_net_listener_watch(void *listener, void *func, const char *extra) "Net listener=%p watch enabled func=%p by %s"
-qio_net_listener_unwatch(void *listener, void *func, const char *extra) "Net listener=%p watch disabled func=%p by %s"
-qio_net_listener_callback(void *listener, void *func) "Net listener=%p callback forwarding to func=%p"
+qio_net_listener_watch(void *listener, void *func, void *ctx, const char *extra) "Net listener=%p watch enabled func=%p ctx=%p by %s"
+qio_net_listener_unwatch(void *listener, void *func, void *ctx, const char *extra) "Net listener=%p watch disabled func=%p ctx=%p by %s"
+qio_net_listener_callback(void *listener, void *func, void *ctx) "Net listener=%p callback forwarding to func=%p ctx=%p"
--
2.51.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v2 04/12] qio: Remember context of qio_net_listener_set_client_func_full
2025-11-08 22:59 ` [PATCH v2 04/12] qio: Remember context of qio_net_listener_set_client_func_full Eric Blake
@ 2025-11-10 16:08 ` Daniel P. Berrangé
0 siblings, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-10 16:08 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-devel, qemu-block, kwolf, qemu-stable
On Sat, Nov 08, 2025 at 04:59:25PM -0600, Eric Blake wrote:
> io/net-listener.c has two modes of use: asynchronous (the user calls
> qio_net_listener_set_client_func to wake up the callback via the
> global GMainContext, or qio_net_listener_set_client_func_full to wake
> up the callback via the caller's own alternative GMainContext), and
> synchronous (the user calls qio_net_listener_wait_client which creates
> its own GMainContext and waits for the first client connection before
> returning, with no need for a user's callback). But commit 938c8b79
> has a latent logic flaw: when qio_net_listener_wait_client finishes on
> its temporary context, it reverts all of the siocs back to the global
> GMainContext rather than the potentially non-NULL context they might
> have been originally registered with. Similarly, if the user creates
> a net-listener, adds initial addresses, registers an async callback
> with a non-default context (which ties to all siocs for the initial
> addresses), then adds more addresses with qio_net_listener_add, the
> siocs for later addresses are blindly placed in the global context,
> rather than sharing the context of the earlier ones.
>
> In practice, I don't think this has caused issues. As pointed out by
> the original commit, all async callers prior to that commit were
> already okay with the NULL default context; and the typical usage
> pattern is to first add ALL the addresses the listener will pay
> attention to before ever setting the async callback. Likewise, if a
> file uses only qio_net_listener_set_client_func instead of
> qio_net_listener_set_client_func_full, then it is never using a custom
> context, so later assignments of async callbacks will still be to the
> same global context as earlier ones. Meanwhile, any callers that want
> to do the sync operation to grab the first client are unlikely to
> register an async callback; altogether bypassing the question of
> whether later assignments of a GSource are being tied to a different
> context over time.
>
> I do note that chardev/char-socket.c is the only file that calls both
> qio_net_listener_wait_client (sync for a single client in
> tcp_chr_accept_server_sync), and qio_net_listener_set_client_func_full
> (several places, all with chr->gcontext, but sometimes with a NULL
> callback function during teardown). But as far as I can tell, the two
> uses are mutually exclusive, based on the is_waitconnect parameter to
> qmp_chardev_open_socket_server.
>
> That said, it is more robust to remember when an async callback
> function is tied to a non-default context, and have both the sync wait
> and any late address additions honor that same context. That way, the
> code will be robust even if a later user performs a sync wait for a
> specific client in the middle of servicing a longer-lived
> QIONetListener that has an async callback for all other clients.
>
> CC: qemu-stable@nongnu.org
> Fixes: 938c8b79 ("qio: store gsources for net listeners", v2.12.0)
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: move earlier in series, trace context
> ---
> include/io/net-listener.h | 1 +
> io/net-listener.c | 25 ++++++++++++++++---------
> io/trace-events | 6 +++---
> 3 files changed, 20 insertions(+), 12 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v2 05/12] qio: Minor optimization when callback function is unchanged
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
` (3 preceding siblings ...)
2025-11-08 22:59 ` [PATCH v2 04/12] qio: Remember context of qio_net_listener_set_client_func_full Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-10 16:09 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 06/12] qio: Factor out helpers qio_net_listener_[un]watch Eric Blake
` (6 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-block, berrange, kwolf
In qemu-nbd and other NBD server setups where parallel clients are
supported, it is common that the caller will re-register the same
callback function as long as it has not reached its limit on
simultaneous clients. In that case, there is no need to tear down and
reinstall GSource watches in the GMainContext.
In practice, all callers currently pass NULL for notify, and no caller
ever changes context across calls (for async uses, either the caller
consistently uses qio_net_listener_set_client_func_full with the same
context, or the caller consistently uses only
qio_net_listener_set_client_func which always uses the global
context); but the time spent checking these two fields in addition to
the more important func and data is still less than the savings of not
churning through extra GSource manipulations when the result will be
unchanged.
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: move later in series, also ensure notify and context are the same
---
io/net-listener.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/io/net-listener.c b/io/net-listener.c
index 0f16b78fbbd..a9da8e4e069 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -148,6 +148,11 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
{
size_t i;
+ if (listener->io_func == func && listener->io_data == data &&
+ listener->io_notify == notify && listener->context == context) {
+ return;
+ }
+
trace_qio_net_listener_unwatch(listener, listener->io_func,
listener->context, "set_client_func");
--
2.51.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v2 05/12] qio: Minor optimization when callback function is unchanged
2025-11-08 22:59 ` [PATCH v2 05/12] qio: Minor optimization when callback function is unchanged Eric Blake
@ 2025-11-10 16:09 ` Daniel P. Berrangé
0 siblings, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-10 16:09 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-devel, qemu-block, kwolf
On Sat, Nov 08, 2025 at 04:59:26PM -0600, Eric Blake wrote:
> In qemu-nbd and other NBD server setups where parallel clients are
> supported, it is common that the caller will re-register the same
> callback function as long as it has not reached its limit on
> simultaneous clients. In that case, there is no need to tear down and
> reinstall GSource watches in the GMainContext.
>
> In practice, all callers currently pass NULL for notify, and no caller
> ever changes context across calls (for async uses, either the caller
> consistently uses qio_net_listener_set_client_func_full with the same
> context, or the caller consistently uses only
> qio_net_listener_set_client_func which always uses the global
> context); but the time spent checking these two fields in addition to
> the more important func and data is still less than the savings of not
> churning through extra GSource manipulations when the result will be
> unchanged.
>
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: move later in series, also ensure notify and context are the same
> ---
> io/net-listener.c | 5 +++++
> 1 file changed, 5 insertions(+)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v2 06/12] qio: Factor out helpers qio_net_listener_[un]watch
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
` (4 preceding siblings ...)
2025-11-08 22:59 ` [PATCH v2 05/12] qio: Minor optimization when callback function is unchanged Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-10 16:14 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 07/12] qio: Hoist ref of listener outside loop Eric Blake
` (5 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-block, berrange, kwolf
The code had three similar repetitions of an iteration over one or all
of the nsioc sockets to set up a GSource, and likewise for teardown.
Since an upcoming patch wants to tweak whether a GSource or AioContext
is used, it's better to consolidate that into helper functions so
there are fewer places to edit later.
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: rebase to changes on the tracepoints earlier in series
---
io/net-listener.c | 108 ++++++++++++++++++++--------------------------
1 file changed, 47 insertions(+), 61 deletions(-)
diff --git a/io/net-listener.c b/io/net-listener.c
index a9da8e4e069..ad097175eba 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -107,6 +107,47 @@ int qio_net_listener_open_sync(QIONetListener *listener,
}
}
+/*
+ * i == 0 to set watch on entire array, non-zero to only set watch on
+ * recent additions when earlier entries are already watched.
+ */
+static void
+qio_net_listener_watch(QIONetListener *listener, size_t i, const char *caller)
+{
+ if (!listener->io_func) {
+ return;
+ }
+
+ trace_qio_net_listener_watch(listener, listener->io_func,
+ listener->context, caller);
+ for ( ; i < listener->nsioc; i++) {
+ object_ref(OBJECT(listener));
+ listener->io_source[i] = qio_channel_add_watch_source(
+ QIO_CHANNEL(listener->sioc[i]), G_IO_IN,
+ qio_net_listener_channel_func,
+ listener, (GDestroyNotify)object_unref, listener->context);
+ }
+}
+
+static void
+qio_net_listener_unwatch(QIONetListener *listener, const char *caller)
+{
+ size_t i;
+
+ if (!listener->io_func) {
+ return;
+ }
+
+ trace_qio_net_listener_unwatch(listener, listener->io_func,
+ listener->context, caller);
+ for (i = 0; i < listener->nsioc; i++) {
+ if (listener->io_source[i]) {
+ g_source_destroy(listener->io_source[i]);
+ g_source_unref(listener->io_source[i]);
+ listener->io_source[i] = NULL;
+ }
+ }
+}
void qio_net_listener_add(QIONetListener *listener,
QIOChannelSocket *sioc)
@@ -126,17 +167,8 @@ void qio_net_listener_add(QIONetListener *listener,
object_ref(OBJECT(sioc));
listener->connected = true;
- trace_qio_net_listener_watch(listener, listener->io_func,
- listener->context, "add");
- if (listener->io_func != NULL) {
- object_ref(OBJECT(listener));
- listener->io_source[listener->nsioc] = qio_channel_add_watch_source(
- QIO_CHANNEL(listener->sioc[listener->nsioc]), G_IO_IN,
- qio_net_listener_channel_func,
- listener, (GDestroyNotify)object_unref, listener->context);
- }
-
listener->nsioc++;
+ qio_net_listener_watch(listener, listener->nsioc - 1, "add");
}
@@ -146,24 +178,12 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
GDestroyNotify notify,
GMainContext *context)
{
- size_t i;
-
if (listener->io_func == func && listener->io_data == data &&
listener->io_notify == notify && listener->context == context) {
return;
}
- trace_qio_net_listener_unwatch(listener, listener->io_func,
- listener->context, "set_client_func");
-
- for (i = 0; i < listener->nsioc; i++) {
- if (listener->io_source[i]) {
- g_source_destroy(listener->io_source[i]);
- g_source_unref(listener->io_source[i]);
- listener->io_source[i] = NULL;
- }
- }
-
+ qio_net_listener_unwatch(listener, "set_client_func");
if (listener->io_notify) {
listener->io_notify(listener->io_data);
}
@@ -172,17 +192,7 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
listener->io_notify = notify;
listener->context = context;
- trace_qio_net_listener_watch(listener, listener->io_func,
- listener->context, "set_client_func");
- if (listener->io_func != NULL) {
- for (i = 0; i < listener->nsioc; i++) {
- object_ref(OBJECT(listener));
- listener->io_source[i] = qio_channel_add_watch_source(
- QIO_CHANNEL(listener->sioc[i]), G_IO_IN,
- qio_net_listener_channel_func,
- listener, (GDestroyNotify)object_unref, context);
- }
- }
+ qio_net_listener_watch(listener, 0, "set_client_func");
}
void qio_net_listener_set_client_func(QIONetListener *listener,
@@ -234,15 +244,7 @@ QIOChannelSocket *qio_net_listener_wait_client(QIONetListener *listener)
};
size_t i;
- trace_qio_net_listener_unwatch(listener, listener->io_func,
- listener->context, "wait_client");
- for (i = 0; i < listener->nsioc; i++) {
- if (listener->io_source[i]) {
- g_source_destroy(listener->io_source[i]);
- g_source_unref(listener->io_source[i]);
- listener->io_source[i] = NULL;
- }
- }
+ qio_net_listener_unwatch(listener, "wait_client");
sources = g_new0(GSource *, listener->nsioc);
for (i = 0; i < listener->nsioc; i++) {
@@ -265,17 +267,7 @@ QIOChannelSocket *qio_net_listener_wait_client(QIONetListener *listener)
g_main_loop_unref(loop);
g_main_context_unref(ctxt);
- trace_qio_net_listener_watch(listener, listener->io_func,
- listener->context, "wait_client");
- if (listener->io_func != NULL) {
- for (i = 0; i < listener->nsioc; i++) {
- object_ref(OBJECT(listener));
- listener->io_source[i] = qio_channel_add_watch_source(
- QIO_CHANNEL(listener->sioc[i]), G_IO_IN,
- qio_net_listener_channel_func,
- listener, (GDestroyNotify)object_unref, listener->context);
- }
- }
+ qio_net_listener_watch(listener, 0, "wait_client");
return data.sioc;
}
@@ -288,14 +280,8 @@ void qio_net_listener_disconnect(QIONetListener *listener)
return;
}
- trace_qio_net_listener_unwatch(listener, listener->io_func,
- listener->context, "disconnect");
+ qio_net_listener_unwatch(listener, "disconnect");
for (i = 0; i < listener->nsioc; i++) {
- if (listener->io_source[i]) {
- g_source_destroy(listener->io_source[i]);
- g_source_unref(listener->io_source[i]);
- listener->io_source[i] = NULL;
- }
qio_channel_close(QIO_CHANNEL(listener->sioc[i]), NULL);
}
listener->connected = false;
--
2.51.1
* Re: [PATCH v2 06/12] qio: Factor out helpers qio_net_listener_[un]watch
2025-11-08 22:59 ` [PATCH v2 06/12] qio: Factor out helpers qio_net_listener_[un]watch Eric Blake
@ 2025-11-10 16:14 ` Daniel P. Berrangé
0 siblings, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-10 16:14 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-devel, qemu-block, kwolf
On Sat, Nov 08, 2025 at 04:59:27PM -0600, Eric Blake wrote:
> The code had three similar repetitions of an iteration over one or all
> of nsiocs to set up a GSource, and likewise for teardown. Since an
> upcoming patch wants to tweak whether GSource or AioContext is used,
> it's better to consolidate that into one helper function for fewer
> places to edit later.
>
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: rebase to changes on the tracepoints earlier in series
> ---
> io/net-listener.c | 108 ++++++++++++++++++++--------------------------
> 1 file changed, 47 insertions(+), 61 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* [PATCH v2 07/12] qio: Hoist ref of listener outside loop
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
` (5 preceding siblings ...)
2025-11-08 22:59 ` [PATCH v2 06/12] qio: Factor out helpers qio_net_listener_[un]watch Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-11 14:43 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 08/12] qio: Provide accessor around QIONetListener->sioc Eric Blake
` (4 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-block, berrange, kwolf
The point of QIONetListener is to allow a server to listen to more
than one socket address at a time, and respond to clients in a
first-come, first-served order across any of those addresses. While
some servers (like NBD) really do want to serve multiple simultaneous
clients, others only care about the first client to connect, and will
immediately deregister the callback, possibly by dropping their
reference to the QIONetListener. The existing code ensures that the
reference count on the listener will not drop to zero while there are
any pending GSource (since each GSource will not call the notify of
object_unref() until it is no longer servicing a callback); however,
it is also possible to guarantee the same setup by merely holding an
overall reference to the listener as long as there is at least one
GSource, as well as holding a local reference around any callback
function that has not yet run to completion.
Hoisting the reference like this will make it easier for an upcoming
patch to still ensure the listener cannot be prematurely finalized
during the user's callback, even when using AioContext (where we have
not plumbed in support for a notify function) rather than GSource.
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: also grab reference around callback execution
---
io/net-listener.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/io/net-listener.c b/io/net-listener.c
index ad097175eba..dd3522c9b3c 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -54,7 +54,9 @@ static gboolean qio_net_listener_channel_func(QIOChannel *ioc,
trace_qio_net_listener_callback(listener, listener->io_func,
listener->context);
if (listener->io_func) {
+ object_ref(OBJECT(listener));
listener->io_func(listener, sioc, listener->io_data);
+ object_unref(OBJECT(listener));
}
object_unref(OBJECT(sioc));
@@ -120,12 +122,14 @@ qio_net_listener_watch(QIONetListener *listener, size_t i, const char *caller)
trace_qio_net_listener_watch(listener, listener->io_func,
listener->context, caller);
- for ( ; i < listener->nsioc; i++) {
+ if (i == 0) {
object_ref(OBJECT(listener));
+ }
+ for ( ; i < listener->nsioc; i++) {
listener->io_source[i] = qio_channel_add_watch_source(
QIO_CHANNEL(listener->sioc[i]), G_IO_IN,
qio_net_listener_channel_func,
- listener, (GDestroyNotify)object_unref, listener->context);
+ listener, NULL, listener->context);
}
}
@@ -147,6 +151,7 @@ qio_net_listener_unwatch(QIONetListener *listener, const char *caller)
listener->io_source[i] = NULL;
}
}
+ object_unref(OBJECT(listener));
}
void qio_net_listener_add(QIONetListener *listener,
--
2.51.1
* Re: [PATCH v2 07/12] qio: Hoist ref of listener outside loop
2025-11-08 22:59 ` [PATCH v2 07/12] qio: Hoist ref of listener outside loop Eric Blake
@ 2025-11-11 14:43 ` Daniel P. Berrangé
0 siblings, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-11 14:43 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-devel, qemu-block, kwolf
On Sat, Nov 08, 2025 at 04:59:28PM -0600, Eric Blake wrote:
> The point of QIONetListener is to allow a server to listen to more
> than one socket address at a time, and respond to clients in a
> first-come, first-served order across any of those addresses. While
> some servers (like NBD) really do want to serve multiple simultaneous
> clients, others only care about the first client to connect, and will
> immediately deregister the callback, possibly by dropping their
> reference to the QIONetListener. The existing code ensures that the
> reference count on the listener will not drop to zero while there are
> any pending GSource (since each GSource will not call the notify of
> object_unref() until it is no longer servicing a callback); however,
> it is also possible to guarantee the same setup by merely holding an
> overall reference to the listener as long as there is at least one
> GSource, as well as holding a local reference around any callback
> function that has not yet run to completion.
>
> Hoisting the reference like this will make it easier for an upcoming
> patch to still ensure the listener cannot be prematurely finalized
> during the user's callback, even when using AioContext (where we have
> not plumbed in support for a notify function) rather than GSource.
>
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: also grab reference around callback execution
Still not comfortable with this. I've sent a reply to your v1
patch though, so we keep the analysis in one thread.
> ---
> io/net-listener.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/io/net-listener.c b/io/net-listener.c
> index ad097175eba..dd3522c9b3c 100644
> --- a/io/net-listener.c
> +++ b/io/net-listener.c
> @@ -54,7 +54,9 @@ static gboolean qio_net_listener_channel_func(QIOChannel *ioc,
> trace_qio_net_listener_callback(listener, listener->io_func,
> listener->context);
> if (listener->io_func) {
> + object_ref(OBJECT(listener));
> listener->io_func(listener, sioc, listener->io_data);
> + object_unref(OBJECT(listener));
> }
>
> object_unref(OBJECT(sioc));
> @@ -120,12 +122,14 @@ qio_net_listener_watch(QIONetListener *listener, size_t i, const char *caller)
>
> trace_qio_net_listener_watch(listener, listener->io_func,
> listener->context, caller);
> - for ( ; i < listener->nsioc; i++) {
> + if (i == 0) {
> object_ref(OBJECT(listener));
> + }
> + for ( ; i < listener->nsioc; i++) {
> listener->io_source[i] = qio_channel_add_watch_source(
> QIO_CHANNEL(listener->sioc[i]), G_IO_IN,
> qio_net_listener_channel_func,
> - listener, (GDestroyNotify)object_unref, listener->context);
> + listener, NULL, listener->context);
> }
> }
>
> @@ -147,6 +151,7 @@ qio_net_listener_unwatch(QIONetListener *listener, const char *caller)
> listener->io_source[i] = NULL;
> }
> }
> + object_unref(OBJECT(listener));
> }
>
> void qio_net_listener_add(QIONetListener *listener,
> --
> 2.51.1
>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* [PATCH v2 08/12] qio: Provide accessor around QIONetListener->sioc
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
` (6 preceding siblings ...)
2025-11-08 22:59 ` [PATCH v2 07/12] qio: Hoist ref of listener outside loop Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-10 18:31 ` Eric Blake
2025-11-11 14:15 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 09/12] qio: Prepare NetListener to use AioContext Eric Blake
` (3 subsequent siblings)
11 siblings, 2 replies; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-block, berrange, kwolf, Marc-André Lureau,
Paolo Bonzini, Peter Xu, Fabiano Rosas
An upcoming patch needs to pass more than just sioc as the opaque
pointer to an AioContext; but since our AioContext code in general
(and its QIO Channel wrapper code) lacks a notify callback present
with GSource, we do not have the trivial option of just g_malloc'ing a
small struct to hold all that data coupled with a notify of g_free.
Instead, the data pointer must outlive the registered handler; in
fact, giving the data pointer the same lifetime as the QIONetListener
is adequate.
But the cleanest way to stick such a helper struct in QIONetListener
will be to rearrange internal struct members. And that in turn means
that all existing code that currently directly accesses
listener->nsioc and listener->sioc[] should instead go through
accessor functions, to be immune to the upcoming struct layout
changes. So this patch adds accessor methods qio_net_listener_nsioc()
and qio_net_listener_sioc(), and puts them to use.
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: new patch
---
include/io/net-listener.h | 24 ++++++++++++++++++++++++
chardev/char-socket.c | 3 ++-
io/net-listener.c | 12 ++++++++++++
migration/socket.c | 5 +++--
ui/vnc.c | 36 +++++++++++++++++++++++-------------
5 files changed, 64 insertions(+), 16 deletions(-)
diff --git a/include/io/net-listener.h b/include/io/net-listener.h
index 42fbfab5467..2605d6aae1e 100644
--- a/include/io/net-listener.h
+++ b/include/io/net-listener.h
@@ -184,4 +184,28 @@ void qio_net_listener_disconnect(QIONetListener *listener);
*/
bool qio_net_listener_is_connected(QIONetListener *listener);
+
+/**
+ * qio_net_listener_nsioc:
+ * @listener: the network listener object
+ *
+ * Determine the number of listener channels currently owned by the
+ * given listener.
+ *
+ * Returns: number of channels, or 0 if not listening
+ */
+size_t qio_net_listener_nsioc(QIONetListener *listener);
+
+
+/**
+ * qio_net_listener_sioc:
+ * @listener: the network listener object
+ * @n: index of the sioc to grab
+ *
+ * Accessor for the nth sioc owned by the listener.
+ *
+ * Returns: the requested socket channel, or #NULL if out of bounds
+ */
+QIOChannelSocket *qio_net_listener_sioc(QIONetListener *listener, size_t n);
+
#endif /* QIO_NET_LISTENER_H */
diff --git a/chardev/char-socket.c b/chardev/char-socket.c
index 62852e3caf5..022ae47d726 100644
--- a/chardev/char-socket.c
+++ b/chardev/char-socket.c
@@ -1255,7 +1255,8 @@ static int qmp_chardev_open_socket_server(Chardev *chr,
}
qapi_free_SocketAddress(s->addr);
- s->addr = socket_local_address(s->listener->sioc[0]->fd, errp);
+ s->addr = socket_local_address(qio_net_listener_sioc(s->listener, 0)->fd,
+ errp);
skip_listen:
update_disconnected_filename(s);
diff --git a/io/net-listener.c b/io/net-listener.c
index dd3522c9b3c..83c977ecca2 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -298,6 +298,18 @@ bool qio_net_listener_is_connected(QIONetListener *listener)
return listener->connected;
}
+size_t qio_net_listener_nsioc(QIONetListener *listener)
+{
+ return listener->nsioc;
+}
+
+QIOChannelSocket *qio_net_listener_sioc(QIONetListener *listener, size_t n)
+{
+ if (n > listener->nsioc) {
+ return NULL;
+ }
+ return listener->sioc[n];
+}
static void qio_net_listener_finalize(Object *obj)
{
QIONetListener *listener = QIO_NET_LISTENER(obj);
diff --git a/migration/socket.c b/migration/socket.c
index 5ec65b8c039..297de3ee156 100644
--- a/migration/socket.c
+++ b/migration/socket.c
@@ -170,9 +170,10 @@ void socket_start_incoming_migration(SocketAddress *saddr,
NULL, NULL,
g_main_context_get_thread_default());
- for (i = 0; i < listener->nsioc; i++) {
+ for (i = 0; i < qio_net_listener_nsioc(listener); i++) {
SocketAddress *address =
- qio_channel_socket_get_local_address(listener->sioc[i], errp);
+ qio_channel_socket_get_local_address(
+ qio_net_listener_sioc(listener, i), errp);
if (!address) {
return;
}
diff --git a/ui/vnc.c b/ui/vnc.c
index 50016ff7ab4..b5b6f5cd06f 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -235,12 +235,12 @@ static VncServerInfo *vnc_server_info_get(VncDisplay *vd)
VncServerInfo *info;
Error *err = NULL;
- if (!vd->listener || !vd->listener->nsioc) {
+ if (!vd->listener || !qio_net_listener_nsioc(vd->listener)) {
return NULL;
}
info = g_malloc0(sizeof(*info));
- vnc_init_basic_info_from_server_addr(vd->listener->sioc[0],
+ vnc_init_basic_info_from_server_addr(qio_net_listener_sioc(vd->listener, 0),
qapi_VncServerInfo_base(info), &err);
info->auth = g_strdup(vnc_auth_name(vd));
if (err) {
@@ -377,7 +377,7 @@ VncInfo *qmp_query_vnc(Error **errp)
VncDisplay *vd = vnc_display_find(NULL);
SocketAddress *addr = NULL;
- if (vd == NULL || !vd->listener || !vd->listener->nsioc) {
+ if (vd == NULL || !vd->listener || !qio_net_listener_nsioc(vd->listener)) {
info->enabled = false;
} else {
info->enabled = true;
@@ -386,8 +386,8 @@ VncInfo *qmp_query_vnc(Error **errp)
info->has_clients = true;
info->clients = qmp_query_client_list(vd);
- addr = qio_channel_socket_get_local_address(vd->listener->sioc[0],
- errp);
+ addr = qio_channel_socket_get_local_address(
+ qio_net_listener_sioc(vd->listener, 0), errp);
if (!addr) {
goto out_error;
}
@@ -549,6 +549,8 @@ VncInfo2List *qmp_query_vnc_servers(Error **errp)
size_t i;
QTAILQ_FOREACH(vd, &vnc_displays, next) {
+ size_t nsioc = 0;
+
info = g_new0(VncInfo2, 1);
info->id = g_strdup(vd->id);
info->clients = qmp_query_client_list(vd);
@@ -559,14 +561,21 @@ VncInfo2List *qmp_query_vnc_servers(Error **errp)
"device", &error_abort));
info->display = g_strdup(dev->id);
}
- for (i = 0; vd->listener != NULL && i < vd->listener->nsioc; i++) {
- info->server = qmp_query_server_entry(
- vd->listener->sioc[i], false, vd->auth, vd->subauth,
- info->server);
+ if (vd->listener != NULL) {
+ nsioc = qio_net_listener_nsioc(vd->listener);
}
- for (i = 0; vd->wslistener != NULL && i < vd->wslistener->nsioc; i++) {
+ for (i = 0; i < nsioc; i++) {
info->server = qmp_query_server_entry(
- vd->wslistener->sioc[i], true, vd->ws_auth,
+ qio_net_listener_sioc(vd->listener, i), false, vd->auth,
+ vd->subauth, info->server);
+ }
+ nsioc = 0;
+ if (vd->wslistener) {
+ nsioc = qio_net_listener_nsioc(vd->wslistener);
+ }
+ for (i = 0; i < nsioc; i++) {
+ info->server = qmp_query_server_entry(
+ qio_net_listener_sioc(vd->wslistener, i), true, vd->ws_auth,
vd->ws_subauth, info->server);
}
@@ -3550,11 +3559,12 @@ static void vnc_display_print_local_addr(VncDisplay *vd)
{
SocketAddress *addr;
- if (!vd->listener || !vd->listener->nsioc) {
+ if (!vd->listener || !qio_net_listener_nsioc(vd->listener)) {
return;
}
- addr = qio_channel_socket_get_local_address(vd->listener->sioc[0], NULL);
+ addr = qio_channel_socket_get_local_address(
+ qio_net_listener_sioc(vd->listener, 0), NULL);
if (!addr) {
return;
}
--
2.51.1
* Re: [PATCH v2 08/12] qio: Provide accessor around QIONetListener->sioc
2025-11-08 22:59 ` [PATCH v2 08/12] qio: Provide accessor around QIONetListener->sioc Eric Blake
@ 2025-11-10 18:31 ` Eric Blake
2025-11-11 14:15 ` Daniel P. Berrangé
1 sibling, 0 replies; 27+ messages in thread
From: Eric Blake @ 2025-11-10 18:31 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-block, berrange, kwolf, Marc-André Lureau,
Paolo Bonzini, Peter Xu, Fabiano Rosas
On Sat, Nov 08, 2025 at 04:59:29PM -0600, Eric Blake wrote:
> An upcoming patch needs to pass more than just sioc as the opaque
> pointer to an AioContext; but since our AioContext code in general
> (and its QIO Channel wrapper code) lacks a notify callback present
> with GSource, we do not have the trivial option of just g_malloc'ing a
> small struct to hold all that data coupled with a notify of g_free.
> Instead, the data pointer must outlive the registered handler; in
> fact, giving the data pointer the same lifetime as the QIONetListener
> is adequate.
>
> But the cleanest way to stick such a helper struct in QIONetListener
> will be to rearrange internal struct members. And that in turn means
> that all existing code that currently directly accesses
> listener->nsioc and listener->sioc[] should instead go through
> accessor functions, to be immune to the upcoming struct layout
> changes. So this patch adds accessor methods qio_net_listener_nsioc()
> and qio_net_listener_sioc(), and puts them to use.
>
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> +++ b/io/net-listener.c
> @@ -298,6 +298,18 @@ bool qio_net_listener_is_connected(QIONetListener *listener)
> return listener->connected;
> }
>
> +size_t qio_net_listener_nsioc(QIONetListener *listener)
> +{
> + return listener->nsioc;
> +}
> +
> +QIOChannelSocket *qio_net_listener_sioc(QIONetListener *listener, size_t n)
> +{
> + if (n > listener->nsioc) {
> + return NULL;
Off-by-1; this should be 'n >= listener->nsioc'.
> + }
> + return listener->sioc[n];
> +}
--
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization: qemu.org | libguestfs.org
* Re: [PATCH v2 08/12] qio: Provide accessor around QIONetListener->sioc
2025-11-08 22:59 ` [PATCH v2 08/12] qio: Provide accessor around QIONetListener->sioc Eric Blake
2025-11-10 18:31 ` Eric Blake
@ 2025-11-11 14:15 ` Daniel P. Berrangé
1 sibling, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-11 14:15 UTC (permalink / raw)
To: Eric Blake
Cc: qemu-devel, qemu-block, kwolf, Marc-André Lureau,
Paolo Bonzini, Peter Xu, Fabiano Rosas
On Sat, Nov 08, 2025 at 04:59:29PM -0600, Eric Blake wrote:
> An upcoming patch needs to pass more than just sioc as the opaque
> pointer to an AioContext; but since our AioContext code in general
> (and its QIO Channel wrapper code) lacks a notify callback present
> with GSource, we do not have the trivial option of just g_malloc'ing a
> small struct to hold all that data coupled with a notify of g_free.
> Instead, the data pointer must outlive the registered handler; in
> fact, giving the data pointer the same lifetime as the QIONetListener
> is adequate.
>
> But the cleanest way to stick such a helper struct in QIONetListener
> will be to rearrange internal struct members. And that in turn means
> that all existing code that currently directly accesses
> listener->nsioc and listener->sioc[] should instead go through
> accessor functions, to be immune to the upcoming struct layout
> changes. So this patch adds accessor methods qio_net_listener_nsioc()
> and qio_net_listener_sioc(), and puts them to use.
>
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: new patch
> ---
> include/io/net-listener.h | 24 ++++++++++++++++++++++++
> chardev/char-socket.c | 3 ++-
> io/net-listener.c | 12 ++++++++++++
> migration/socket.c | 5 +++--
> ui/vnc.c | 36 +++++++++++++++++++++++-------------
> 5 files changed, 64 insertions(+), 16 deletions(-)
>
> diff --git a/include/io/net-listener.h b/include/io/net-listener.h
> index 42fbfab5467..2605d6aae1e 100644
> --- a/include/io/net-listener.h
> +++ b/include/io/net-listener.h
> @@ -184,4 +184,28 @@ void qio_net_listener_disconnect(QIONetListener *listener);
> */
> bool qio_net_listener_is_connected(QIONetListener *listener);
>
> +
> +/**
> + * qio_net_listener_nsioc:
> + * @listener: the network listener object
> + *
> + * Determine the number of listener channels currently owned by the
> + * given listener.
> + *
> + * Returns: number of channels, or 0 if not listening
> + */
> +size_t qio_net_listener_nsioc(QIONetListener *listener);
> +
> +
> +/**
> + * qio_net_listener_sioc:
> + * @listener: the network listener object
> + * @n: index of the sioc to grab
> + *
> + * Accessor for the nth sioc owned by the listener.
> + *
> + * Returns: the requested socket channel, or #NULL if out of bounds
> + */
> +QIOChannelSocket *qio_net_listener_sioc(QIONetListener *listener, size_t n);
> +
> #endif /* QIO_NET_LISTENER_H */
> diff --git a/chardev/char-socket.c b/chardev/char-socket.c
> index 62852e3caf5..022ae47d726 100644
> --- a/chardev/char-socket.c
> +++ b/chardev/char-socket.c
> @@ -1255,7 +1255,8 @@ static int qmp_chardev_open_socket_server(Chardev *chr,
> }
>
> qapi_free_SocketAddress(s->addr);
> - s->addr = socket_local_address(s->listener->sioc[0]->fd, errp);
> + s->addr = socket_local_address(qio_net_listener_sioc(s->listener, 0)->fd,
> + errp);
Oh, a pre-existing bug / undesirable code pattern. This should be calling
qio_channel_socket_get_local_address, which avoids the need to re-call
getsockname() as QIOChannelSocket has cached the local address.
Once we do that, we would have 4 cases in this patch doing
qio_channel_socket_get_local_address(
qio_net_listener_sioc(listener, i), errp);
which would suggest adding a helper we could call as
qio_net_listener_get_local_address(listener, i, errp)
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* [PATCH v2 09/12] qio: Prepare NetListener to use AioContext
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
` (7 preceding siblings ...)
2025-11-08 22:59 ` [PATCH v2 08/12] qio: Provide accessor around QIONetListener->sioc Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-11 14:17 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 10/12] qio: Add QIONetListener API for using AioContext Eric Blake
` (2 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-block, berrange, kwolf
For ease of review, this patch adds an AioContext pointer to the
QIONetListener struct, the code to trace it, and refactors
listener->io_source to instead be an array of utility structs; but the
aio_context pointer is always NULL until the next patch adds an API to
set it. There should be no semantic change in this patch.
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: new patch, replacing the earlier "qio: Let listening sockets remember
their owning QIONetListener"
---
include/io/net-listener.h | 6 ++-
io/net-listener.c | 102 ++++++++++++++++++++++++++------------
io/trace-events | 6 +--
3 files changed, 76 insertions(+), 38 deletions(-)
diff --git a/include/io/net-listener.h b/include/io/net-listener.h
index 2605d6aae1e..7188721cb34 100644
--- a/include/io/net-listener.h
+++ b/include/io/net-listener.h
@@ -28,6 +28,7 @@
OBJECT_DECLARE_SIMPLE_TYPE(QIONetListener,
QIO_NET_LISTENER)
+typedef struct QIONetListenerSource QIONetListenerSource;
typedef void (*QIONetListenerClientFunc)(QIONetListener *listener,
QIOChannelSocket *sioc,
@@ -47,10 +48,11 @@ struct QIONetListener {
Object parent;
char *name;
- QIOChannelSocket **sioc;
- GSource **io_source;
+ QIONetListenerSource **source;
size_t nsioc;
+ /* At most one of context or aio_context will be set */
GMainContext *context;
+ AioContext *aio_context;
bool connected;
diff --git a/io/net-listener.c b/io/net-listener.c
index 83c977ecca2..ebc61f81ed6 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -23,8 +23,15 @@
#include "io/dns-resolver.h"
#include "qapi/error.h"
#include "qemu/module.h"
+#include "qemu/main-loop.h"
#include "trace.h"
+struct QIONetListenerSource {
+ QIOChannelSocket *sioc;
+ GSource *io_source;
+ QIONetListener *listener;
+};
+
QIONetListener *qio_net_listener_new(void)
{
return QIO_NET_LISTENER(object_new(TYPE_QIO_NET_LISTENER));
@@ -52,7 +59,7 @@ static gboolean qio_net_listener_channel_func(QIOChannel *ioc,
}
trace_qio_net_listener_callback(listener, listener->io_func,
- listener->context);
+ listener->context, listener->aio_context);
if (listener->io_func) {
object_ref(OBJECT(listener));
listener->io_func(listener, sioc, listener->io_data);
@@ -121,15 +128,25 @@ qio_net_listener_watch(QIONetListener *listener, size_t i, const char *caller)
}
trace_qio_net_listener_watch(listener, listener->io_func,
- listener->context, caller);
+ listener->context, listener->aio_context,
+ caller);
if (i == 0) {
object_ref(OBJECT(listener));
}
for ( ; i < listener->nsioc; i++) {
- listener->io_source[i] = qio_channel_add_watch_source(
- QIO_CHANNEL(listener->sioc[i]), G_IO_IN,
- qio_net_listener_channel_func,
- listener, NULL, listener->context);
+ if (!listener->aio_context) {
+ /*
+ * The user passed a GMainContext with the async callback;
+ * they plan on running the default or their own g_main_loop.
+ */
+ listener->source[i]->io_source = qio_channel_add_watch_source(
+ QIO_CHANNEL(listener->source[i]->sioc), G_IO_IN,
+ qio_net_listener_channel_func,
+ listener, NULL, listener->context);
+ } else {
+ /* The user passed an AioContext. Not supported yet. */
+ g_assert_not_reached();
+ }
}
}
@@ -143,12 +160,17 @@ qio_net_listener_unwatch(QIONetListener *listener, const char *caller)
}
trace_qio_net_listener_unwatch(listener, listener->io_func,
- listener->context, caller);
+ listener->context, listener->aio_context,
+ caller);
for (i = 0; i < listener->nsioc; i++) {
- if (listener->io_source[i]) {
- g_source_destroy(listener->io_source[i]);
- g_source_unref(listener->io_source[i]);
- listener->io_source[i] = NULL;
+ if (!listener->aio_context) {
+ if (listener->source[i]->io_source) {
+ g_source_destroy(listener->source[i]->io_source);
+ g_source_unref(listener->source[i]->io_source);
+ listener->source[i]->io_source = NULL;
+ }
+ } else {
+ g_assert_not_reached();
}
}
object_unref(OBJECT(listener));
@@ -161,13 +183,12 @@ void qio_net_listener_add(QIONetListener *listener,
qio_channel_set_name(QIO_CHANNEL(sioc), listener->name);
}
- listener->sioc = g_renew(QIOChannelSocket *, listener->sioc,
- listener->nsioc + 1);
- listener->io_source = g_renew(typeof(listener->io_source[0]),
- listener->io_source,
- listener->nsioc + 1);
- listener->sioc[listener->nsioc] = sioc;
- listener->io_source[listener->nsioc] = NULL;
+ listener->source = g_renew(typeof(listener->source[0]),
+ listener->source,
+ listener->nsioc + 1);
+ listener->source[listener->nsioc] = g_new0(QIONetListenerSource, 1);
+ listener->source[listener->nsioc]->sioc = sioc;
+ listener->source[listener->nsioc]->listener = listener;
object_ref(OBJECT(sioc));
listener->connected = true;
@@ -177,14 +198,17 @@ void qio_net_listener_add(QIONetListener *listener,
}
-void qio_net_listener_set_client_func_full(QIONetListener *listener,
- QIONetListenerClientFunc func,
- gpointer data,
- GDestroyNotify notify,
- GMainContext *context)
+static void
+qio_net_listener_set_client_func_internal(QIONetListener *listener,
+ QIONetListenerClientFunc func,
+ gpointer data,
+ GDestroyNotify notify,
+ GMainContext *context,
+ AioContext *aio_context)
{
if (listener->io_func == func && listener->io_data == data &&
- listener->io_notify == notify && listener->context == context) {
+ listener->io_notify == notify && listener->context == context &&
+ listener->aio_context == aio_context) {
return;
}
@@ -196,17 +220,28 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
listener->io_data = data;
listener->io_notify = notify;
listener->context = context;
+ listener->aio_context = aio_context;
qio_net_listener_watch(listener, 0, "set_client_func");
}
+void qio_net_listener_set_client_func_full(QIONetListener *listener,
+ QIONetListenerClientFunc func,
+ gpointer data,
+ GDestroyNotify notify,
+ GMainContext *context)
+{
+ qio_net_listener_set_client_func_internal(listener, func, data,
+ notify, context, NULL);
+}
+
void qio_net_listener_set_client_func(QIONetListener *listener,
QIONetListenerClientFunc func,
gpointer data,
GDestroyNotify notify)
{
- qio_net_listener_set_client_func_full(listener, func, data,
- notify, NULL);
+ qio_net_listener_set_client_func_internal(listener, func, data,
+ notify, NULL, NULL);
}
struct QIONetListenerClientWaitData {
@@ -253,8 +288,8 @@ QIOChannelSocket *qio_net_listener_wait_client(QIONetListener *listener)
sources = g_new0(GSource *, listener->nsioc);
for (i = 0; i < listener->nsioc; i++) {
- sources[i] = qio_channel_create_watch(QIO_CHANNEL(listener->sioc[i]),
- G_IO_IN);
+ sources[i] = qio_channel_create_watch(
+ QIO_CHANNEL(listener->source[i]->sioc), G_IO_IN);
g_source_set_callback(sources[i],
(GSourceFunc)qio_net_listener_wait_client_func,
@@ -287,7 +322,7 @@ void qio_net_listener_disconnect(QIONetListener *listener)
qio_net_listener_unwatch(listener, "disconnect");
for (i = 0; i < listener->nsioc; i++) {
- qio_channel_close(QIO_CHANNEL(listener->sioc[i]), NULL);
+ qio_channel_close(QIO_CHANNEL(listener->source[i]->sioc), NULL);
}
listener->connected = false;
}
@@ -308,8 +343,9 @@ QIOChannelSocket *qio_net_listener_sioc(QIONetListener *listener, size_t n)
if (n > listener->nsioc) {
return NULL;
}
- return listener->sioc[n];
+ return listener->source[n]->sioc;
}
+
static void qio_net_listener_finalize(Object *obj)
{
QIONetListener *listener = QIO_NET_LISTENER(obj);
@@ -321,10 +357,10 @@ static void qio_net_listener_finalize(Object *obj)
}
for (i = 0; i < listener->nsioc; i++) {
- object_unref(OBJECT(listener->sioc[i]));
+ object_unref(OBJECT(listener->source[i]->sioc));
+ g_free(listener->source[i]);
}
- g_free(listener->io_source);
- g_free(listener->sioc);
+ g_free(listener->source);
g_free(listener->name);
}
diff --git a/io/trace-events b/io/trace-events
index 0cb77d579b6..ec91453335a 100644
--- a/io/trace-events
+++ b/io/trace-events
@@ -74,6 +74,6 @@ qio_channel_command_abort(void *ioc, int pid) "Command abort ioc=%p pid=%d"
qio_channel_command_wait(void *ioc, int pid, int ret, int status) "Command abort ioc=%p pid=%d ret=%d status=%d"
# net-listener.c
-qio_net_listener_watch(void *listener, void *func, void *ctx, const char *extra) "Net listener=%p watch enabled func=%p ctx=%p by %s"
-qio_net_listener_unwatch(void *listener, void *func, void *ctx, const char *extra) "Net listener=%p watch disabled func=%p ctx=%p by %s"
-qio_net_listener_callback(void *listener, void *func, void *ctx) "Net listener=%p callback forwarding to func=%p ctx=%p"
+qio_net_listener_watch(void *listener, void *func, void *gctx, void *actx, const char *extra) "Net listener=%p watch enabled func=%p ctx=%p/%p by %s"
+qio_net_listener_unwatch(void *listener, void *func, void *gctx, void *actx, const char *extra) "Net listener=%p watch disabled func=%p ctx=%p/%p by %s"
+qio_net_listener_callback(void *listener, void *func, void *gctx, void *actx) "Net listener=%p callback forwarding to func=%p ctx=%p/%p"
--
2.51.1
* Re: [PATCH v2 09/12] qio: Prepare NetListener to use AioContext
2025-11-08 22:59 ` [PATCH v2 09/12] qio: Prepare NetListener to use AioContext Eric Blake
@ 2025-11-11 14:17 ` Daniel P. Berrangé
0 siblings, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-11 14:17 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-devel, qemu-block, kwolf
On Sat, Nov 08, 2025 at 04:59:30PM -0600, Eric Blake wrote:
> For ease of review, this patch adds an AioContext pointer to the
> QIONetListener struct, the code to trace it, and refactors
> listener->io_source to instead be an array of utility structs; but the
> aio_context pointer is always NULL until the next patch adds an API to
> set it. There should be no semantic change in this patch.
>
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: new patch, replacing the earlier "qio: Let listening sockets remember
> their owning QIONetListener"
> ---
> include/io/net-listener.h | 6 ++-
> io/net-listener.c | 102 ++++++++++++++++++++++++++------------
> io/trace-events | 6 +--
> 3 files changed, 76 insertions(+), 38 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* [PATCH v2 10/12] qio: Add QIONetListener API for using AioContext
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
` (8 preceding siblings ...)
2025-11-08 22:59 ` [PATCH v2 09/12] qio: Prepare NetListener to use AioContext Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-11 14:18 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 11/12] nbd: Avoid deadlock in client connecting to same-process server Eric Blake
2025-11-08 22:59 ` [PATCH v2 12/12] iotests: Add coverage of recent NBD qio deadlock fix Eric Blake
11 siblings, 1 reply; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-block, berrange, kwolf
The user "John Doe" reported a deadlock when attempting to use
qemu-storage-daemon to serve both a base file over NBD, and a qcow2
file with that NBD export as its backing file, from the same process,
even though it worked just fine when there were two q-s-d processes.
The bulk of the NBD server code properly uses coroutines to make
progress in an event-driven manner, but the code for spawning a new
coroutine at the point when listen(2) detects a new client was
hard-coded to use the global GMainContext; in other words, the
callback that triggers nbd_client_new to let the server start the
negotiation sequence with the client requires the main loop to be
making progress. However, the code for bdrv_open of a qcow2 image
with an NBD backing file uses an AIO_WAIT_WHILE nested event loop to
ensure that the entire qcow2 backing chain is either fully loaded or
rejected, without any side effects from the main loop causing unwanted
changes to the disk being loaded (in short, an AioContext represents
the set of actions that are known to be safe while handling block
layer I/O, while excluding any other pending actions in the global
main loop with potentially larger risk of unwanted side effects).
This creates a classic case of deadlock: the server can't progress to
the point of accept(2)ing the client to write to the NBD socket
because the main loop is being starved until the AIO_WAIT_WHILE
completes the bdrv_open, but the AIO_WAIT_WHILE can't progress because
it is blocked on the client coroutine stuck in a read() of the
expected magic number from the server side of the socket.
This patch adds a new API to allow clients to opt in to listening via
an AioContext rather than a GMainContext. This will allow NBD to fix
the deadlock by performing all actions during bdrv_open in the main
loop AioContext.
An upcoming patch will then add a unit test (kept separate to make it
easier to rearrange the series to demonstrate the deadlock without
this patch).
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/3169
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: Retitle and add new API rather than changing semantics of
existing qio_net_listener_set_client_func; use qio accessor rather
than direct access to the sioc fd and a lower-level aio call
---
include/io/net-listener.h | 16 ++++++++++++++++
io/net-listener.c | 36 +++++++++++++++++++++++++++++++++---
2 files changed, 49 insertions(+), 3 deletions(-)
diff --git a/include/io/net-listener.h b/include/io/net-listener.h
index 7188721cb34..e93efd5d96a 100644
--- a/include/io/net-listener.h
+++ b/include/io/net-listener.h
@@ -151,6 +151,22 @@ void qio_net_listener_set_client_func(QIONetListener *listener,
gpointer data,
GDestroyNotify notify);
+/**
+ * qio_net_listener_set_client_aio_func:
+ * @listener: the network listener object
+ * @func: the callback function
+ * @data: opaque data to pass to @func
+ * @context: AioContext that @func will be bound to; if %NULL, this
+ * will use qemu_get_aio_context().
+ *
+ * Similar to qio_net_listener_set_client_func_full(), except that the polling
+ * will be done by an AioContext rather than a GMainContext.
+ */
+void qio_net_listener_set_client_aio_func(QIONetListener *listener,
+ QIONetListenerClientFunc func,
+ void *data,
+ AioContext *context);
+
/**
* qio_net_listener_wait_client:
* @listener: the network listener object
diff --git a/io/net-listener.c b/io/net-listener.c
index ebc61f81ed6..53f2e7091d7 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -72,6 +72,17 @@ static gboolean qio_net_listener_channel_func(QIOChannel *ioc,
}
+static void qio_net_listener_aio_func(void *opaque)
+{
+ QIONetListenerSource *data = opaque;
+
+ assert(data->io_source == NULL);
+ assert(data->listener->aio_context != NULL);
+ qio_net_listener_channel_func(QIO_CHANNEL(data->sioc), G_IO_IN,
+ data->listener);
+}
+
+
int qio_net_listener_open_sync(QIONetListener *listener,
SocketAddress *addr,
int num,
@@ -144,8 +155,12 @@ qio_net_listener_watch(QIONetListener *listener, size_t i, const char *caller)
qio_net_listener_channel_func,
listener, NULL, listener->context);
} else {
- /* The user passed an AioContext. Not supported yet. */
- g_assert_not_reached();
+ /* The user passed an AioContext. */
+ assert(listener->context == NULL);
+ qio_channel_set_aio_fd_handler(
+ QIO_CHANNEL(listener->source[i]->sioc),
+ listener->aio_context, qio_net_listener_aio_func,
+ NULL, NULL, listener->source[i]);
}
}
}
@@ -170,7 +185,10 @@ qio_net_listener_unwatch(QIONetListener *listener, const char *caller)
listener->source[i]->io_source = NULL;
}
} else {
- g_assert_not_reached();
+ assert(listener->context == NULL);
+ qio_channel_set_aio_fd_handler(
+ QIO_CHANNEL(listener->source[i]->sioc),
+ NULL, NULL, NULL, NULL, NULL);
}
}
object_unref(OBJECT(listener));
@@ -244,6 +262,18 @@ void qio_net_listener_set_client_func(QIONetListener *listener,
notify, NULL, NULL);
}
+void qio_net_listener_set_client_aio_func(QIONetListener *listener,
+ QIONetListenerClientFunc func,
+ void *data,
+ AioContext *context)
+{
+ if (!context) {
+ context = qemu_get_aio_context();
+ }
+ qio_net_listener_set_client_func_internal(listener, func, data,
+ NULL, NULL, context);
+}
+
struct QIONetListenerClientWaitData {
QIOChannelSocket *sioc;
GMainLoop *loop;
--
2.51.1
* Re: [PATCH v2 10/12] qio: Add QIONetListener API for using AioContext
2025-11-08 22:59 ` [PATCH v2 10/12] qio: Add QIONetListener API for using AioContext Eric Blake
@ 2025-11-11 14:18 ` Daniel P. Berrangé
0 siblings, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-11 14:18 UTC (permalink / raw)
To: Eric Blake; +Cc: qemu-devel, qemu-block, kwolf
On Sat, Nov 08, 2025 at 04:59:31PM -0600, Eric Blake wrote:
> The user "John Doe" reported a deadlock when attempting to use
> qemu-storage-daemon to serve both a base file over NBD, and a qcow2
> file with that NBD export as its backing file, from the same process,
> even though it worked just fine when there were two q-s-d processes.
> The bulk of the NBD server code properly uses coroutines to make
> progress in an event-driven manner, but the code for spawning a new
> coroutine at the point when listen(2) detects a new client was
> hard-coded to use the global GMainContext; in other words, the
> callback that triggers nbd_client_new to let the server start the
> negotiation sequence with the client requires the main loop to be
> making progress. However, the code for bdrv_open of a qcow2 image
> with an NBD backing file uses an AIO_WAIT_WHILE nested event loop to
> ensure that the entire qcow2 backing chain is either fully loaded or
> rejected, without any side effects from the main loop causing unwanted
> changes to the disk being loaded (in short, an AioContext represents
> the set of actions that are known to be safe while handling block
> layer I/O, while excluding any other pending actions in the global
> main loop with potentially larger risk of unwanted side effects).
>
> This creates a classic case of deadlock: the server can't progress to
> the point of accept(2)ing the client to write to the NBD socket
> because the main loop is being starved until the AIO_WAIT_WHILE
> completes the bdrv_open, but the AIO_WAIT_WHILE can't progress because
> it is blocked on the client coroutine stuck in a read() of the
> expected magic number from the server side of the socket.
>
> This patch adds a new API to allow clients to opt in to listening via
> an AioContext rather than a GMainContext. This will allow NBD to fix
> the deadlock by performing all actions during bdrv_open in the main
> loop AioContext.
>
> An upcoming patch will then add a unit test (kept separate to make it
> easier to rearrange the series to demonstrate the deadlock without
> this patch).
>
> Fixes: https://gitlab.com/qemu-project/qemu/-/issues/3169
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: Retitle and add new API rather than changing semantics of
> existing qio_net_listener_set_client_func; use qio accessor rather
> than direct access to the sioc fd and a lower-level aio call
> ---
> include/io/net-listener.h | 16 ++++++++++++++++
> io/net-listener.c | 36 +++++++++++++++++++++++++++++++++---
> 2 files changed, 49 insertions(+), 3 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* [PATCH v2 11/12] nbd: Avoid deadlock in client connecting to same-process server
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
` (9 preceding siblings ...)
2025-11-08 22:59 ` [PATCH v2 10/12] qio: Add QIONetListener API for using AioContext Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-11 14:20 ` Daniel P. Berrangé
2025-11-08 22:59 ` [PATCH v2 12/12] iotests: Add coverage of recent NBD qio deadlock fix Eric Blake
11 siblings, 1 reply; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-block, berrange, kwolf, Vladimir Sementsov-Ogievskiy,
Hanna Reitz
See the previous patch for a longer description of the deadlock. Now
that QIONetListener supports waiting for clients in the main loop
AioContext, NBD can use that to ensure that the server can make
progress even when a client is intentionally starving the GMainContext
from any activity not tied to an AioContext.
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/3169
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: new patch
---
blockdev-nbd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/blockdev-nbd.c b/blockdev-nbd.c
index 1e3e634b87d..696474aea93 100644
--- a/blockdev-nbd.c
+++ b/blockdev-nbd.c
@@ -94,10 +94,10 @@ static void nbd_update_server_watch(NBDServerData *s)
{
if (s->listener) {
if (!s->max_connections || s->connections < s->max_connections) {
- qio_net_listener_set_client_func(s->listener, nbd_accept, NULL,
+ qio_net_listener_set_client_aio_func(s->listener, nbd_accept, NULL,
NULL);
} else {
- qio_net_listener_set_client_func(s->listener, NULL, NULL, NULL);
+ qio_net_listener_set_client_aio_func(s->listener, NULL, NULL, NULL);
}
}
}
--
2.51.1
* Re: [PATCH v2 11/12] nbd: Avoid deadlock in client connecting to same-process server
2025-11-08 22:59 ` [PATCH v2 11/12] nbd: Avoid deadlock in client connecting to same-process server Eric Blake
@ 2025-11-11 14:20 ` Daniel P. Berrangé
0 siblings, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-11 14:20 UTC (permalink / raw)
To: Eric Blake
Cc: qemu-devel, qemu-block, kwolf, Vladimir Sementsov-Ogievskiy,
Hanna Reitz
On Sat, Nov 08, 2025 at 04:59:32PM -0600, Eric Blake wrote:
> See the previous patch for a longer description of the deadlock. Now
> that QIONetListener supports waiting for clients in the main loop
> AioContext, NBD can use that to ensure that the server can make
> progress even when a client is intentionally starving the GMainContext
> from any activity not tied to an AioContext.
>
> Fixes: https://gitlab.com/qemu-project/qemu/-/issues/3169
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: new patch
> ---
> blockdev-nbd.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* [PATCH v2 12/12] iotests: Add coverage of recent NBD qio deadlock fix
2025-11-08 22:59 [PATCH v2 00/12] Fix deadlock with bdrv_open of self-served NBD Eric Blake
` (10 preceding siblings ...)
2025-11-08 22:59 ` [PATCH v2 11/12] nbd: Avoid deadlock in client connecting to same-process server Eric Blake
@ 2025-11-08 22:59 ` Eric Blake
2025-11-10 16:19 ` Daniel P. Berrangé
2025-11-12 6:35 ` Vladimir Sementsov-Ogievskiy
11 siblings, 2 replies; 27+ messages in thread
From: Eric Blake @ 2025-11-08 22:59 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-block, berrange, kwolf, Vladimir Sementsov-Ogievskiy,
Hanna Reitz
Test that all images in a qcow2 chain using an NBD backing file can be
served by the same process. Prior to the recent QIONetListener fixes,
this test would demonstrate deadlock.
The test borrows heavily from the original formula by "John Doe" in
the gitlab bug, but uses a Unix socket rather than TCP to avoid port
contention, and uses a full-blown QEMU rather than qemu-storage-daemon
since both programs were impacted.
The test starts out with the even simpler task of directly adding an
NBD client without qcow2 chain ('client'), which also provokes the
deadlock; but commenting out the 'Adding explicit NBD client' section
will still show deadlock when reaching the 'Adding wrapper image...'.
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/3169
Signed-off-by: Eric Blake <eblake@redhat.com>
---
v2: split out vvfat changes, add an explicit client in addition to the
implicit one through qcow2
---
tests/qemu-iotests/tests/nbd-in-qcow2-chain | 94 +++++++++++++++++++
.../qemu-iotests/tests/nbd-in-qcow2-chain.out | 75 +++++++++++++++
2 files changed, 169 insertions(+)
create mode 100755 tests/qemu-iotests/tests/nbd-in-qcow2-chain
create mode 100644 tests/qemu-iotests/tests/nbd-in-qcow2-chain.out
diff --git a/tests/qemu-iotests/tests/nbd-in-qcow2-chain b/tests/qemu-iotests/tests/nbd-in-qcow2-chain
new file mode 100755
index 00000000000..455ddfa86fe
--- /dev/null
+++ b/tests/qemu-iotests/tests/nbd-in-qcow2-chain
@@ -0,0 +1,94 @@
+#!/usr/bin/env bash
+# group: rw quick
+#
+# Test of opening both server and client NBD in a qcow2 backing chain
+#
+# Copyright (C) Red Hat, Inc.
+#
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+# creator
+owner=eblake@redhat.com
+
+seq=`basename $0`
+echo "QA output created by $seq"
+
+status=1 # failure is the default!
+
+_cleanup()
+{
+ _cleanup_qemu
+ _cleanup_test_img
+ rm -f "$SOCK_DIR/nbd"
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+cd ..
+. ./common.rc
+. ./common.filter
+. ./common.qemu
+. ./common.nbd
+
+_supported_fmt qcow2 # Hardcoded to qcow2 command line and QMP below
+_supported_proto file
+
+size=100M
+
+echo
+echo "=== Preparing base image ==="
+
+TEST_IMG="$TEST_IMG.base" _make_test_img $size
+
+echo
+echo "=== Starting QEMU and exposing base image ==="
+
+_launch_qemu -machine q35
+h1=$QEMU_HANDLE
+_send_qemu_cmd $QEMU_HANDLE '{"execute": "qmp_capabilities"}' 'return'
+_send_qemu_cmd $QEMU_HANDLE '{"execute": "blockdev-add",
+ "arguments": {"node-name":"base", "driver":"qcow2",
+ "file":{"driver":"file", "filename":"'"$TEST_IMG.base"'"}}}' 'return'
+_send_qemu_cmd $QEMU_HANDLE '{"execute":"nbd-server-start",
+ "arguments": {"addr":{"type":"unix",
+ "data":{"path":"'"$SOCK_DIR/nbd"'"}}}}' 'return'
+_send_qemu_cmd $QEMU_HANDLE '{"execute":"nbd-server-add",
+ "arguments": {"device":"base","name":"base"}}' 'return'
+
+echo
+echo "=== Adding explicit NBD client ==="
+
+_send_qemu_cmd $QEMU_HANDLE '{"execute": "blockdev-add",
+ "arguments": {"node-name":"client", "driver":"nbd",
+ "read-only":true, "export":"base",
+ "server":{"type":"unix", "path":"'"$SOCK_DIR/nbd"'"}}}' 'return'
+_send_qemu_cmd $QEMU_HANDLE '{"execute":"nbd-server-add",
+ "arguments": {"device":"client","name":"client"}}' 'return'
+
+echo
+echo "=== Creating wrapper image ==="
+
+_make_test_img -F raw -b "nbd+unix:///base?socket=$SOCK_DIR/nbd" $size
+
+echo
+echo "=== Adding wrapper image with implicit client from qcow2 file ==="
+
+_send_qemu_cmd $QEMU_HANDLE '{"execute": "blockdev-add",
+ "arguments": {"node-name":"wrap", "driver":"qcow2",
+ "file":{"driver":"file", "filename":"'"$TEST_IMG"'"}}}' 'return'
+_send_qemu_cmd $QEMU_HANDLE '{"execute":"nbd-server-add",
+ "arguments": {"device":"wrap","name":"wrap"}}' 'return'
+
+echo
+echo "=== Checking NBD server ==="
+
+$QEMU_NBD --list -k $SOCK_DIR/nbd
+
+echo
+echo "=== Cleaning up ==="
+
+_send_qemu_cmd $QEMU_HANDLE '{"execute":"quit"}' ''
+
+echo "*** done"
+rm -f $seq.full
+status=0
diff --git a/tests/qemu-iotests/tests/nbd-in-qcow2-chain.out b/tests/qemu-iotests/tests/nbd-in-qcow2-chain.out
new file mode 100644
index 00000000000..b65cdaa4f25
--- /dev/null
+++ b/tests/qemu-iotests/tests/nbd-in-qcow2-chain.out
@@ -0,0 +1,75 @@
+QA output created by nbd-in-qcow2-chain
+
+=== Preparing base image ===
+Formatting 'TEST_DIR/t.IMGFMT.base', fmt=IMGFMT size=104857600
+
+=== Starting QEMU and exposing base image ===
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "blockdev-add",
+ "arguments": {"node-name":"base", "driver":"IMGFMT",
+ "file":{"driver":"file", "filename":"TEST_DIR/t.IMGFMT.base"}}}
+{"return": {}}
+{"execute":"nbd-server-start",
+ "arguments": {"addr":{"type":"unix",
+ "data":{"path":"SOCK_DIR/nbd"}}}}
+{"return": {}}
+{"execute":"nbd-server-add",
+ "arguments": {"device":"base","name":"base"}}
+{"return": {}}
+
+=== Adding explicit NBD client ===
+{"execute": "blockdev-add",
+ "arguments": {"node-name":"client", "driver":"nbd",
+ "read-only":true, "export":"base",
+ "server":{"type":"unix", "path":"SOCK_DIR/nbd"}}}
+{"return": {}}
+{"execute":"nbd-server-add",
+ "arguments": {"device":"client","name":"client"}}
+{"return": {}}
+
+=== Creating wrapper image ===
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=104857600 backing_file=nbd+unix:///base?socket=SOCK_DIR/nbd backing_fmt=raw
+
+=== Adding wrapper image with implicit client from qcow2 file ===
+{"execute": "blockdev-add",
+ "arguments": {"node-name":"wrap", "driver":"IMGFMT",
+ "file":{"driver":"file", "filename":"TEST_DIR/t.IMGFMT"}}}
+{"return": {}}
+{"execute":"nbd-server-add",
+ "arguments": {"device":"wrap","name":"wrap"}}
+{"return": {}}
+
+=== Checking NBD server ===
+exports available: 3
+ export: 'base'
+ size: 104857600
+ flags: 0x158f ( readonly flush fua df multi cache block-status-payload )
+ min block: 1
+ opt block: 4096
+ max block: 33554432
+ transaction size: 64-bit
+ available meta contexts: 1
+ base:allocation
+ export: 'client'
+ size: 104857600
+ flags: 0x158f ( readonly flush fua df multi cache block-status-payload )
+ min block: 1
+ opt block: 4096
+ max block: 33554432
+ transaction size: 64-bit
+ available meta contexts: 1
+ base:allocation
+ export: 'wrap'
+ size: 104857600
+ flags: 0x158f ( readonly flush fua df multi cache block-status-payload )
+ min block: 1
+ opt block: 4096
+ max block: 33554432
+ transaction size: 64-bit
+ available meta contexts: 1
+ base:allocation
+
+=== Cleaning up ===
+{"execute":"quit"}
+*** done
--
2.51.1
* Re: [PATCH v2 12/12] iotests: Add coverage of recent NBD qio deadlock fix
2025-11-08 22:59 ` [PATCH v2 12/12] iotests: Add coverage of recent NBD qio deadlock fix Eric Blake
@ 2025-11-10 16:19 ` Daniel P. Berrangé
2025-11-12 6:35 ` Vladimir Sementsov-Ogievskiy
1 sibling, 0 replies; 27+ messages in thread
From: Daniel P. Berrangé @ 2025-11-10 16:19 UTC (permalink / raw)
To: Eric Blake
Cc: qemu-devel, qemu-block, kwolf, Vladimir Sementsov-Ogievskiy,
Hanna Reitz
On Sat, Nov 08, 2025 at 04:59:33PM -0600, Eric Blake wrote:
> Test that all images in a qcow2 chain using an NBD backing file can be
> served by the same process. Prior to the recent QIONetListener fixes,
> this test would demonstrate deadlock.
>
> The test borrows heavily from the original formula by "John Doe" in
> the gitlab bug, but uses a Unix socket rather than TCP to avoid port
> contention, and uses a full-blown QEMU rather than qemu-storage-daemon
> since both programs were impacted.
>
> The test starts out with the even simpler task of directly adding an
> NBD client without qcow2 chain ('client'), which also provokes the
> deadlock; but commenting out the 'Adding explicit NBD client' section
> will still show deadlock when reaching the 'Adding wrapper image...'.
>
> Fixes: https://gitlab.com/qemu-project/qemu/-/issues/3169
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: split out vvfat changes, add an explicit client in addition to the
> implicit one through qcow2
> ---
> tests/qemu-iotests/tests/nbd-in-qcow2-chain | 94 +++++++++++++++++++
> .../qemu-iotests/tests/nbd-in-qcow2-chain.out | 75 +++++++++++++++
> 2 files changed, 169 insertions(+)
> create mode 100755 tests/qemu-iotests/tests/nbd-in-qcow2-chain
> create mode 100644 tests/qemu-iotests/tests/nbd-in-qcow2-chain.out
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: [PATCH v2 12/12] iotests: Add coverage of recent NBD qio deadlock fix
2025-11-08 22:59 ` [PATCH v2 12/12] iotests: Add coverage of recent NBD qio deadlock fix Eric Blake
2025-11-10 16:19 ` Daniel P. Berrangé
@ 2025-11-12 6:35 ` Vladimir Sementsov-Ogievskiy
1 sibling, 0 replies; 27+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2025-11-12 6:35 UTC (permalink / raw)
To: Eric Blake, qemu-devel; +Cc: qemu-block, berrange, kwolf, Hanna Reitz
On 09.11.25 01:59, Eric Blake wrote:
> Test that all images in a qcow2 chain using an NBD backing file can be
> served by the same process. Prior to the recent QIONetListener fixes,
> this test would demonstrate deadlock.
>
> The test borrows heavily from the original formula by "John Doe" in
> the gitlab bug, but uses a Unix socket rather than TCP to avoid port
> contention, and uses a full-blown QEMU rather than qemu-storage-daemon
> since both programs were impacted.
>
> The test starts out with the even simpler task of directly adding an
> NBD client without qcow2 chain ('client'), which also provokes the
> deadlock; but commenting out the 'Adding explicit NBD client' section
> will still show deadlock when reaching the 'Adding wrapper image...'.
>
> Fixes:https://gitlab.com/qemu-project/qemu/-/issues/3169
> Signed-off-by: Eric Blake<eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
--
Best regards,
Vladimir