* Re: [PATCH v3 05/13] qio: Protect NetListener callback with mutex
  From: Daniel P. Berrangé @ 2025-11-13  8:56 UTC
  To: Eric Blake; Cc: qemu-devel, qemu-block, kwolf, qemu-stable

On Wed, Nov 12, 2025 at 04:31:05PM -0600, Eric Blake wrote:
> Without a mutex, NetListener can run into this data race between a
> thread changing the async callback function to use when a client
> connects, and the thread servicing polling of the listening sockets:
>
> Thread 1:
>   qio_net_listener_set_client_func(lstnr, f1, ...);
>   => foreach sock: socket
>     => object_ref(lstnr)
>     => sock_src = qio_channel_socket_add_watch_source(sock, ....,
>                                                       lstnr, object_unref);
>
> Thread 2:
>   poll()
>   => event POLLIN on socket
>   => ref(GSourceCallback)
>   => if (lstnr->io_func) // while lstnr->io_func is f1
>   ...interrupt..
>
> Thread 1:
>   qio_net_listener_set_client_func(lstnr, f2, ...);
>   => foreach sock: socket
>     => g_source_unref(sock_src)
>   => foreach sock: socket
>     => object_ref(lstnr)
>     => sock_src = qio_channel_socket_add_watch_source(sock, ....,
>                                                       lstnr, object_unref);
>
> Thread 2:
>   => call lstnr->io_func(lstnr->io_data) // now sees f2
>   => return dispatch(sock)
>   => unref(GSourceCallback)
>     => destroy-notify
>       => object_unref
>
> Found by inspection. This is a SEGFAULT waiting to happen if f2 can
> become NULL because thread 1 deregisters the user's callback while
> thread 2 is trying to service the callback. Other messes are also
> theoretically possible, such as running callback f1 with an opaque
> pointer that should only be passed to f2 (if the client code were to
> use more than just a binary choice between a single async function or
> NULL).
>
> Mitigating factor: if the code that modifies the QIONetListener can
> only be reached by the same thread that is executing the polling and
> async callbacks, then we are not in the two-thread race documented
> above (even though poll can see two clients trying to connect in the
> same window of time, any changes made to the listener by the first
> async callback will be completed before the thread moves on to the
> second client). However, QEMU is complex enough that I was unable to
> state with certainty whether a QMP command (such as nbd-server-stop,
> which does modify the net listener) can ever be serviced in a thread
> distinct from the one trying to do the async callbacks. Similarly, I
> did not spend the time trying to add sleeps or execute under gdb to
> try and actually trigger the race in practice.
>
> At any rate, it is worth having the API be robust. To ensure that
> modifying a NetListener can be safely done from any thread, add a
> mutex that guarantees atomicity to all members of a listener object
> related to callbacks. This problem has been present since
> QIONetListener was introduced.
>
> Note that this does NOT prevent the case of a second round of the
> user's old async callback being invoked with the old opaque data, even
> when the user has already tried to change the async callback during
> the first async callback; it is only about ensuring that there is no
> tearing (the eventual io_func(io_data) call that does get made will
> correspond to a particular combination that the user had requested at
> some point in time, and not be torn into a combination that never
> existed in practice).
> In other words, this patch maintains the status quo that a user's
> async callback function already needs to be robust to parallel clients
> landing in the same window of poll servicing, even when only one
> client is desired.
>
> CC: qemu-stable@nongnu.org
> Fixes: 53047392 ("io: introduce a network socket listener API", v2.12.0)
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v3: new patch
> ---
>  include/io/net-listener.h |  1 +
>  io/net-listener.c         | 58 +++++++++++++++++++++++++++++----------
>  2 files changed, 44 insertions(+), 15 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
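Distilled out of the QIO specifics, the fix being reviewed above is a
snapshot-under-lock pattern: the dispatch path copies the (function, data)
pair while holding the mutex, then invokes the callback outside the lock. A
minimal self-contained sketch of that pattern, written against plain
pthreads rather than QEMU's QemuMutex and WITH_QEMU_LOCK_GUARD wrappers
(all names here are illustrative, not taken from the patch):

    #include <pthread.h>
    #include <stddef.h>

    typedef void (*client_func)(void *opaque);

    struct listener {
        pthread_mutex_t lock;  /* protects io_func and io_data as a pair */
        client_func io_func;
        void *io_data;
    };

    /* Writer side: swap the function and its opaque data atomically. */
    static void listener_set_func(struct listener *l, client_func f, void *d)
    {
        pthread_mutex_lock(&l->lock);
        l->io_func = f;
        l->io_data = d;
        pthread_mutex_unlock(&l->lock);
    }

    /* Reader side: snapshot the pair under the lock, then invoke the
     * callback outside it so a slow callback never blocks writers. */
    static void listener_dispatch(struct listener *l)
    {
        client_func f;
        void *d;

        pthread_mutex_lock(&l->lock);
        f = l->io_func;
        d = l->io_data;
        pthread_mutex_unlock(&l->lock);

        if (f) {
            f(d);  /* always a (func, data) pair that existed together */
        }
    }

As the commit message stresses, this still permits one stale invocation of
an old pair after a change, but it rules out a torn combination such as f1
being called with f2's data.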
* Re: [PATCH v3 07/13] qio: Factor out helpers qio_net_listener_[un]watch
  From: Daniel P. Berrangé @ 2025-11-13  9:01 UTC
  To: Eric Blake; Cc: qemu-devel, qemu-block, kwolf

On Wed, Nov 12, 2025 at 04:31:07PM -0600, Eric Blake wrote:
> The code had three similar repetitions of an iteration over one or all
> of nsiocs to set up a GSource, and likewise for teardown. Since an
> upcoming patch wants to tweak whether GSource or AioContext is used,
> it's better to consolidate that into one helper function for fewer
> places to edit later.
>
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: rebase to changes on the tracepoints earlier in series
> v3: rebase to mutex protections, R-b dropped
> ---
>  io/net-listener.c | 122 ++++++++++++++++++++--------------------------
>  1 file changed, 52 insertions(+), 70 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
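The consolidated helper bodies are not quoted in this excerpt; judging from
the repeated loops visible in patch 05 below, they plausibly take roughly
the following shape (a reconstruction under that assumption, not the actual
patch text):

    /* Plausible reconstruction; the real bodies are in the unquoted diff
     * and may differ in detail. */
    static void qio_net_listener_watch(QIONetListener *listener)
    {
        size_t i;

        for (i = 0; i < listener->nsioc; i++) {
            /* each GSource holds a listener ref, dropped by destroy-notify */
            object_ref(OBJECT(listener));
            listener->io_source[i] = qio_channel_add_watch_source(
                QIO_CHANNEL(listener->sioc[i]), G_IO_IN,
                qio_net_listener_channel_func,
                listener, (GDestroyNotify)object_unref, listener->context);
        }
    }

    static void qio_net_listener_unwatch(QIONetListener *listener)
    {
        size_t i;

        for (i = 0; i < listener->nsioc; i++) {
            if (listener->io_source[i]) {
                g_source_destroy(listener->io_source[i]);
                g_source_unref(listener->io_source[i]);
                listener->io_source[i] = NULL;
            }
        }
    }

Consolidating the loops into one place is what later lets patch 11 choose
between a GSource watch and an AioContext watch at a single site.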
* Re: [PATCH v3 08/13] chardev: Reuse channel's cached local address
  From: Daniel P. Berrangé @ 2025-11-13  9:01 UTC
  To: Eric Blake; Cc: qemu-devel, qemu-block, kwolf, Marc-André Lureau,
      Paolo Bonzini

On Wed, Nov 12, 2025 at 04:31:08PM -0600, Eric Blake wrote:
> Directly accessing the fd member of a QIOChannelSocket is an
> undesirable leaky abstraction. What's more, grabbing that fd merely
> to force an eventual call to getsockname() can be wasteful, since the
> channel is often able to return its cached local name.
>
> Reported-by: Daniel P. Berrangé <berrange@redhat.com>
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v3: new patch
> ---
>  chardev/char-socket.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
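The changed line itself is not quoted; given the description, the
conversion presumably looks something like this (an assumption for
illustration, not the actual hunk):

    -    SocketAddress *addr = socket_local_address(sioc->fd, errp);
    +    SocketAddress *addr = qio_channel_socket_get_local_address(sioc, errp);

Both helpers yield a SocketAddress, but the channel method can answer from
the name cached at connect/listen time instead of reaching through the fd
for a fresh getsockname().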
* Re: [PATCH v3 09/13] qio: Provide accessor around QIONetListener->sioc
  From: Daniel P. Berrangé @ 2025-11-13  9:03 UTC
  To: Eric Blake; Cc: qemu-devel, qemu-block, kwolf, Marc-André Lureau,
      Paolo Bonzini, Peter Xu, Fabiano Rosas

On Wed, Nov 12, 2025 at 04:31:09PM -0600, Eric Blake wrote:
> An upcoming patch needs to pass more than just sioc as the opaque
> pointer to an AioContext; but since our AioContext code in general
> (and its QIO Channel wrapper code) lacks a notify callback present
> with GSource, we do not have the trivial option of just g_malloc'ing a
> small struct to hold all that data coupled with a notify of g_free.
> Instead, the data pointer must outlive the registered handler; in
> fact, having the data pointer share the lifetime of the QIONetListener
> is adequate.
>
> But the cleanest way to stick such a helper struct in QIONetListener
> will be to rearrange internal struct members. And that in turn means
> that all existing code that currently directly accesses
> listener->nsioc and listener->sioc[] should instead go through
> accessor functions, to be immune to the upcoming struct layout
> changes. So this patch adds accessor methods qio_net_listener_nsioc()
> and qio_net_listener_sioc(), and puts them to use.
>
> While at it, notice that the pattern of grabbing an sioc from the
> listener only to turn around and call
> qio_channel_socket_get_local_address() is common enough to also
> warrant the helper qio_net_listener_get_local_address(), and fix a
> copy-paste error in the corresponding documentation.
>
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: new patch
> v3: fix off-by-one, also add qio_net_listener_get_local_address
> ---
>  include/io/channel-socket.h |  2 +-
>  include/io/net-listener.h   | 42 +++++++++++++++++++++++++++++++++++++
>  chardev/char-socket.c       |  2 +-
>  io/net-listener.c           | 27 ++++++++++++++++++++++++
>  migration/socket.c          |  4 ++--
>  ui/vnc.c                    | 34 ++++++++++++++++------------
>  6 files changed, 94 insertions(+), 17 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
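The header diff is not quoted here, but the names in the commit message
imply accessors along these lines, with callers converting as in the
function below (the exact signatures are an assumption):

    /* Assumed declarations matching the names in the commit message. */
    size_t qio_net_listener_nsioc(QIONetListener *listener);
    QIOChannelSocket *qio_net_listener_sioc(QIONetListener *listener,
                                            size_t i);
    SocketAddress *qio_net_listener_get_local_address(QIONetListener *listener,
                                                      size_t i, Error **errp);

    /* A typical caller conversion: iterate via the accessors instead of
     * touching listener->nsioc and listener->sioc[] directly. */
    static void log_all_listen_addresses(QIONetListener *listener)
    {
        size_t i;

        for (i = 0; i < qio_net_listener_nsioc(listener); i++) {
            SocketAddress *addr =
                qio_net_listener_get_local_address(listener, i, NULL);

            if (addr) {
                /* ... report addr ... */
                qapi_free_SocketAddress(addr);
            }
        }
    }

Hiding the array behind accessors is what allows the struct rearrangement
in patch 10 without touching the migration, VNC, or chardev callers again.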
* Re: [PATCH v3 11/13] qio: Add QIONetListener API for using AioContext
  From: Daniel P. Berrangé @ 2025-11-13  9:05 UTC
  To: Eric Blake; Cc: qemu-devel, qemu-block, kwolf

On Wed, Nov 12, 2025 at 04:31:11PM -0600, Eric Blake wrote:
> The user calling himself "John Doe" reported a deadlock when
> attempting to use qemu-storage-daemon to serve both a base file over
> NBD, and a qcow2 file with that NBD export as its backing file, from
> the same process, even though it worked just fine when there were two
> q-s-d processes. The bulk of the NBD server code properly uses
> coroutines to make progress in an event-driven manner, but the code
> for spawning a new coroutine at the point when listen(2) detects a new
> client was hard-coded to use the global GMainContext; in other words,
> the callback that triggers nbd_client_new to let the server start the
> negotiation sequence with the client requires the main loop to be
> making progress. However, the code for bdrv_open of a qcow2 image
> with an NBD backing file uses an AIO_WAIT_WHILE nested event loop to
> ensure that the entire qcow2 backing chain is either fully loaded or
> rejected, without any side effects from the main loop causing unwanted
> changes to the disk being loaded (in short, an AioContext represents
> the set of actions that are known to be safe while handling block
> layer I/O, while excluding any other pending actions in the global
> main loop with potentially larger risk of unwanted side effects).
>
> This creates a classic case of deadlock: the server can't progress to
> the point of accept(2)ing the client to write to the NBD socket,
> because the main loop is being starved until the AIO_WAIT_WHILE
> completes the bdrv_open, but the AIO_WAIT_WHILE can't progress because
> it is blocked on the client coroutine stuck in a read() of the
> expected magic number from the server side of the socket.
>
> This patch adds a new API to allow clients to opt in to listening via
> an AioContext rather than a GMainContext. This will allow NBD to fix
> the deadlock by performing all actions during bdrv_open in the main
> loop AioContext.
>
> Technical debt warning: I would have loved to utilize a notify
> function with AioContext to guarantee that we don't finalize the
> listener due to an object_unref if there is any callback still running
> (the way GSource does), but wiring up notify functions into AioContext
> is a bigger task that will be deferred to a later QEMU release. But
> for solving the NBD deadlock, it is sufficient to note that the QMP
> commands for enabling and disabling the NBD server are really the only
> points where we want to change the listener's callback. Furthermore,
> those commands are serviced in the main loop, which is the same
> AioContext that is also listening for connections. Since a thread
> cannot interrupt itself, we are ensured that at the point where we are
> changing the watch, there are no callbacks active. This is NOT as
> powerful as the GSource cross-thread safety, but sufficient for the
> needs of today.
>
> An upcoming patch will then add a unit test (kept separate to make it
> easier to rearrange the series to demonstrate the deadlock without
> this patch).
> Fixes: https://gitlab.com/qemu-project/qemu/-/issues/3169
> Signed-off-by: Eric Blake <eblake@redhat.com>
>
> ---
> v2: Retitle and add new API rather than changing semantics of
>     existing qio_net_listener_set_client_func; use qio accessor rather
>     than direct access to the sioc fd and a lower-level aio call
> v3: limit reference counting change to just AioContext, R-b dropped
> ---
>  include/io/net-listener.h | 21 +++++++++++++
>  io/net-listener.c         | 66 +++++++++++++++++++++++++++++++++++++--
>  2 files changed, 84 insertions(+), 3 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
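The new entry point lives only in the unquoted diff; as a rough
illustration of the opt-in described above, an NBD-style caller might look
like the following, where the function name
qio_net_listener_set_client_aio_func and its signature are guesses for
illustration, not the actual API:

    static void nbd_accept(QIONetListener *listener, QIOChannelSocket *cioc,
                           void *opaque)
    {
        /* hand the new client to a negotiation coroutine, as the NBD
         * server's existing callback does */
    }

    static void nbd_server_listen(QIONetListener *listener)
    {
        /*
         * Register against the main loop's AioContext instead of the
         * global GMainContext, so that a nested AIO_WAIT_WHILE loop
         * (e.g. inside bdrv_open of a qcow2 file backed by our own NBD
         * export) can still dispatch the accept callback.
         */
        qio_net_listener_set_client_aio_func(listener,
                                             qemu_get_aio_context(),
                                             nbd_accept, NULL, NULL);
    }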
* [PATCH v3 for-10.2 00/13] Fix deadlock with bdrv_open of self-served NBD
  From: Eric Blake @ 2025-11-13  1:11 UTC
  To: qemu-devel; Cc: qemu-block, kwolf, berrange

v2 was here:
https://lists.nongnu.org/archive/html/qemu-devel/2025-11/msg01243.html

Since then:
- drop patch 7/12; refcounting for GSource case is now unchanged
- add a couple of patches: fix a chardev leaky abstraction, and add a
  mutex lock for cross-thread safety
- improve commit messages to document why NBD is safe now, even without
  adding notify callbacks to AioContext

Now that the new behavior is opt-in rather than a change of defaults, and
only NBD opts in, it should still be safe to include in 10.2. But the
technical debt here means that we really should consider improving the
AioContext API for 11.0 to allow for a notify function similar to what
GSource provides.

Key:
[----] : patches are identical
[####] : number of functional differences between upstream/downstream patch
[down] : patch is downstream-only
The flags [FC] indicate (F)unctional and (C)ontextual differences,
respectively

001/13:[----] [--] 'iotests: Drop execute permissions on vvfat.out'
002/13:[----] [--] 'qio: Add trace points to net_listener'
003/13:[----] [--] 'qio: Unwatch before notify in QIONetListener'
004/13:[----] [--] 'qio: Remember context of qio_net_listener_set_client_func_full'
005/13:[down] 'qio: Protect NetListener callback with mutex'
006/13:[----] [-C] 'qio: Minor optimization when callback function is unchanged'
007/13:[0042] [FC] 'qio: Factor out helpers qio_net_listener_[un]watch'
008/13:[down] 'chardev: Reuse channel's cached local address'
009/13:[0053] [FC] 'qio: Provide accessor around QIONetListener->sioc'
010/13:[0013] [FC] 'qio: Prepare NetListener to use AioContext'
011/13:[0041] [FC] 'qio: Add QIONetListener API for using AioContext'
012/13:[----] [--] 'nbd: Avoid deadlock in client connecting to same-process server'
013/13:[----] [--] 'iotests: Add coverage of recent NBD qio deadlock fix'

Eric Blake (13):
  iotests: Drop execute permissions on vvfat.out
  qio: Add trace points to net_listener
  qio: Unwatch before notify in QIONetListener
  qio: Remember context of qio_net_listener_set_client_func_full
  qio: Protect NetListener callback with mutex
  qio: Minor optimization when callback function is unchanged
  qio: Factor out helpers qio_net_listener_[un]watch
  chardev: Reuse channel's cached local address
  qio: Provide accessor around QIONetListener->sioc
  qio: Prepare NetListener to use AioContext
  qio: Add QIONetListener API for using AioContext
  nbd: Avoid deadlock in client connecting to same-process server
  iotests: Add coverage of recent NBD qio deadlock fix

 include/io/channel-socket.h                   |   2 +-
 include/io/net-listener.h                     |  71 ++++-
 blockdev-nbd.c                                |   4 +-
 chardev/char-socket.c                         |   2 +-
 io/net-listener.c                             | 300 +++++++++++++-----
 migration/socket.c                            |   4 +-
 ui/vnc.c                                      |  34 +-
 io/trace-events                               |   5 +
 tests/qemu-iotests/tests/nbd-in-qcow2-chain   |  94 ++++++
 .../qemu-iotests/tests/nbd-in-qcow2-chain.out |  75 +++++
 tests/qemu-iotests/tests/vvfat.out            |   0
 11 files changed, 498 insertions(+), 93 deletions(-)
 create mode 100755 tests/qemu-iotests/tests/nbd-in-qcow2-chain
 create mode 100644 tests/qemu-iotests/tests/nbd-in-qcow2-chain.out
 mode change 100755 => 100644 tests/qemu-iotests/tests/vvfat.out

-- 
2.51.1
* [PATCH v3 05/13] qio: Protect NetListener callback with mutex
  From: Eric Blake @ 2025-11-13  1:11 UTC
  To: qemu-devel; Cc: qemu-block, kwolf, berrange, qemu-stable

Without a mutex, NetListener can run into this data race between a
thread changing the async callback function to use when a client
connects, and the thread servicing polling of the listening sockets:

Thread 1:
  qio_net_listener_set_client_func(lstnr, f1, ...);
  => foreach sock: socket
    => object_ref(lstnr)
    => sock_src = qio_channel_socket_add_watch_source(sock, ....,
                                                      lstnr, object_unref);

Thread 2:
  poll()
  => event POLLIN on socket
  => ref(GSourceCallback)
  => if (lstnr->io_func) // while lstnr->io_func is f1
  ...interrupt..

Thread 1:
  qio_net_listener_set_client_func(lstnr, f2, ...);
  => foreach sock: socket
    => g_source_unref(sock_src)
  => foreach sock: socket
    => object_ref(lstnr)
    => sock_src = qio_channel_socket_add_watch_source(sock, ....,
                                                      lstnr, object_unref);

Thread 2:
  => call lstnr->io_func(lstnr->io_data) // now sees f2
  => return dispatch(sock)
  => unref(GSourceCallback)
    => destroy-notify
      => object_unref

Found by inspection. This is a SEGFAULT waiting to happen if f2 can
become NULL because thread 1 deregisters the user's callback while
thread 2 is trying to service the callback. Other messes are also
theoretically possible, such as running callback f1 with an opaque
pointer that should only be passed to f2 (if the client code were to
use more than just a binary choice between a single async function or
NULL).

Mitigating factor: if the code that modifies the QIONetListener can
only be reached by the same thread that is executing the polling and
async callbacks, then we are not in the two-thread race documented
above (even though poll can see two clients trying to connect in the
same window of time, any changes made to the listener by the first
async callback will be completed before the thread moves on to the
second client). However, QEMU is complex enough that I was unable to
state with certainty whether a QMP command (such as nbd-server-stop,
which does modify the net listener) can ever be serviced in a thread
distinct from the one trying to do the async callbacks. Similarly, I
did not spend the time trying to add sleeps or execute under gdb to
try and actually trigger the race in practice.

At any rate, it is worth having the API be robust. To ensure that
modifying a NetListener can be safely done from any thread, add a
mutex that guarantees atomicity to all members of a listener object
related to callbacks. This problem has been present since
QIONetListener was introduced.

Note that this does NOT prevent the case of a second round of the
user's old async callback being invoked with the old opaque data, even
when the user has already tried to change the async callback during
the first async callback; it is only about ensuring that there is no
tearing (the eventual io_func(io_data) call that does get made will
correspond to a particular combination that the user had requested at
some point in time, and not be torn into a combination that never
existed in practice). In other words, this patch maintains the status
quo that a user's async callback function already needs to be robust
to parallel clients landing in the same window of poll servicing, even
when only one client is desired.
CC: qemu-stable@nongnu.org
Fixes: 53047392 ("io: introduce a network socket listener API", v2.12.0)
Signed-off-by: Eric Blake <eblake@redhat.com>

---
v3: new patch
---
 include/io/net-listener.h |  1 +
 io/net-listener.c         | 58 +++++++++++++++++++++++++++++----------
 2 files changed, 44 insertions(+), 15 deletions(-)

diff --git a/include/io/net-listener.h b/include/io/net-listener.h
index 42fbfab5467..c2165dc1669 100644
--- a/include/io/net-listener.h
+++ b/include/io/net-listener.h
@@ -54,6 +54,7 @@ struct QIONetListener {
 
     bool connected;
 
+    QemuMutex lock; /* Protects remaining fields */
     QIONetListenerClientFunc io_func;
     gpointer io_data;
     GDestroyNotify io_notify;
diff --git a/io/net-listener.c b/io/net-listener.c
index 0f16b78fbbd..f70acdfc5ce 100644
--- a/io/net-listener.c
+++ b/io/net-listener.c
@@ -23,11 +23,16 @@
 #include "io/dns-resolver.h"
 #include "qapi/error.h"
 #include "qemu/module.h"
+#include "qemu/lockable.h"
 #include "trace.h"
 
 QIONetListener *qio_net_listener_new(void)
 {
-    return QIO_NET_LISTENER(object_new(TYPE_QIO_NET_LISTENER));
+    QIONetListener *listener;
+
+    listener = QIO_NET_LISTENER(object_new(TYPE_QIO_NET_LISTENER));
+    qemu_mutex_init(&listener->lock);
+    return listener;
 }
 
 void qio_net_listener_set_name(QIONetListener *listener,
@@ -44,6 +49,9 @@ static gboolean qio_net_listener_channel_func(QIOChannel *ioc,
 {
     QIONetListener *listener = QIO_NET_LISTENER(opaque);
     QIOChannelSocket *sioc;
+    QIONetListenerClientFunc io_func;
+    gpointer io_data;
+    GMainContext *context;
 
     sioc = qio_channel_socket_accept(QIO_CHANNEL_SOCKET(ioc),
                                      NULL);
@@ -51,10 +59,15 @@
     if (!sioc) {
        return TRUE;
     }
 
-    trace_qio_net_listener_callback(listener, listener->io_func,
-                                    listener->context);
-    if (listener->io_func) {
-        listener->io_func(listener, sioc, listener->io_data);
+    WITH_QEMU_LOCK_GUARD(&listener->lock) {
+        io_func = listener->io_func;
+        io_data = listener->io_data;
+        context = listener->context;
+    }
+
+    trace_qio_net_listener_callback(listener, io_func, context);
+    if (io_func) {
+        io_func(listener, sioc, io_data);
     }
     object_unref(OBJECT(sioc));
@@ -111,6 +124,9 @@ int qio_net_listener_open_sync(QIONetListener *listener,
 void qio_net_listener_add(QIONetListener *listener,
                           QIOChannelSocket *sioc)
 {
+    QIONetListenerClientFunc io_func;
+    GMainContext *context;
+
     if (listener->name) {
         qio_channel_set_name(QIO_CHANNEL(sioc), listener->name);
     }
@@ -126,14 +142,18 @@ void qio_net_listener_add(QIONetListener *listener,
     object_ref(OBJECT(sioc));
     listener->connected = true;
 
-    trace_qio_net_listener_watch(listener, listener->io_func,
-                                 listener->context, "add");
-    if (listener->io_func != NULL) {
+    WITH_QEMU_LOCK_GUARD(&listener->lock) {
+        io_func = listener->io_func;
+        context = listener->context;
+    }
+
+    trace_qio_net_listener_watch(listener, io_func, context, "add");
+    if (io_func) {
         object_ref(OBJECT(listener));
         listener->io_source[listener->nsioc] = qio_channel_add_watch_source(
             QIO_CHANNEL(listener->sioc[listener->nsioc]), G_IO_IN,
             qio_net_listener_channel_func,
-            listener, (GDestroyNotify)object_unref, listener->context);
+            listener, (GDestroyNotify)object_unref, context);
     }
 
     listener->nsioc++;
@@ -148,6 +168,7 @@ void qio_net_listener_set_client_func_full(QIONetListener *listener,
 {
     size_t i;
 
+    QEMU_LOCK_GUARD(&listener->lock);
     trace_qio_net_listener_unwatch(listener, listener->io_func,
                                    listener->context, "set_client_func");
 
@@ -228,9 +249,15 @@
         .loop = loop
     };
     size_t i;
+    QIONetListenerClientFunc io_func;
+    GMainContext *context;
 
-    trace_qio_net_listener_unwatch(listener, listener->io_func,
-                                   listener->context, "wait_client");
+    WITH_QEMU_LOCK_GUARD(&listener->lock) {
+        io_func = listener->io_func;
+        context = listener->context;
+    }
+
+    trace_qio_net_listener_unwatch(listener, io_func, context, "wait_client");
     for (i = 0; i < listener->nsioc; i++) {
         if (listener->io_source[i]) {
             g_source_destroy(listener->io_source[i]);
@@ -260,15 +287,14 @@
     g_main_loop_unref(loop);
     g_main_context_unref(ctxt);
 
-    trace_qio_net_listener_watch(listener, listener->io_func,
-                                 listener->context, "wait_client");
-    if (listener->io_func != NULL) {
+    trace_qio_net_listener_watch(listener, io_func, context, "wait_client");
+    if (io_func != NULL) {
         for (i = 0; i < listener->nsioc; i++) {
             object_ref(OBJECT(listener));
             listener->io_source[i] = qio_channel_add_watch_source(
                 QIO_CHANNEL(listener->sioc[i]), G_IO_IN,
                 qio_net_listener_channel_func,
-                listener, (GDestroyNotify)object_unref, listener->context);
+                listener, (GDestroyNotify)object_unref, context);
         }
     }
 
@@ -283,6 +309,7 @@ void qio_net_listener_disconnect(QIONetListener *listener)
         return;
     }
 
+    QEMU_LOCK_GUARD(&listener->lock);
     trace_qio_net_listener_unwatch(listener, listener->io_func,
                                    listener->context, "disconnect");
     for (i = 0; i < listener->nsioc; i++) {
@@ -318,6 +345,7 @@
     g_free(listener->io_source);
     g_free(listener->sioc);
     g_free(listener->name);
+    qemu_mutex_destroy(&listener->lock);
 }
 
 static const TypeInfo qio_net_listener_info = {
-- 
2.51.1