* [PATCH v6 0/2] Add zerocopy partial flush support for live migrations
From: Tejus GK @ 2025-10-13 9:21 UTC
To: qemu-devel; +Cc: Tejus GK

Hi all,

This series introduces support for partially flushing the socket error
queue during a zerocopy-enabled live migration. This helps reduce live
migration failures due to ENOBUFS in scenarios where a lot of
out-of-order completion processing may happen.

V6:
1. Dropped QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE, since it's
   redundant.

V5:
1. Introduced a new write flag, QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE,
   which lets callers decide whether to do a partial flush on ENOBUFS.
2. Added a "blocking" field to QIOChannelSocket, which indicates whether
   the socket is in blocking mode.

V4:
1. Minor nit to rename s/zero_copy_flush_pending/zerocopy_flushed_once.

V3:
1. Added the dirty_sync_missed_zero_copy migration stat back.

V2:
1. Removed the dirty_sync_missed_zero_copy migration stat.
2. Made the call to qio_channel_socket_flush_internal() from
   qio_channel_socket_writev() non-blocking.

regards,
tejus

Manish Mishra (1):
  QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure
    due to ENOBUFS

Tejus GK (1):
  QIOChannelSocket: add a "blocking" field to QIOChannelSocket

 include/io/channel-socket.h |  6 +++
 io/channel-socket.c         | 77 ++++++++++++++++++++++++++++++-------
 2 files changed, 69 insertions(+), 14 deletions(-)

-- 
2.43.7
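For readers new to the mechanism the series builds on: a MSG_ZEROCOPY
sendmsg() makes the kernel queue a completion notification on the
socket's error queue, and userspace must drain those notifications with
recvmsg(MSG_ERRQUEUE). Below is a minimal standalone sketch of that
drain step (illustrative C, not QEMU code; the helper name is invented
for this example, and the socket is assumed to have SO_ZEROCOPY
enabled):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/errqueue.h>

/*
 * Read one zerocopy completion notification from the socket error queue.
 * Returns 0 if a notification was consumed, -1 otherwise (e.g. EAGAIN
 * when the queue is empty; MSG_ERRQUEUE reads never block).
 */
static int drain_zerocopy_notification(int fd)
{
    struct msghdr msg = {0};
    char control[CMSG_SPACE(sizeof(struct sock_extended_err))];
    struct cmsghdr *cm;

    msg.msg_control = control;
    msg.msg_controllen = sizeof(control);

    if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
        return -1;
    }
    for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
        struct sock_extended_err *serr = (void *)CMSG_DATA(cm);

        if (serr->ee_errno == 0 && serr->ee_origin == SO_EE_ORIGIN_ZEROCOPY) {
            /* Sends with ids in [ee_info, ee_data] have completed. */
            printf("completed sends %u..%u\n", serr->ee_info, serr->ee_data);
            return 0;
        }
    }
    return -1;
}

Each consumed notification releases the metadata it holds, which is
exactly the OPTMEM relief the series exploits when sendmsg() fails with
ENOBUFS.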
* [PATCH v6 1/2] QIOChannelSocket: add a "blocking" field to QIOChannelSocket
From: Tejus GK @ 2025-10-13 9:21 UTC
To: qemu-devel, Daniel P. Berrangé; +Cc: Tejus GK

Add a 'blocking' boolean field to QIOChannelSocket to track whether the
underlying socket is in blocking or non-blocking mode.

Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
---
 include/io/channel-socket.h | 1 +
 io/channel-socket.c         | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
index a88cf8b3a9..26319fa98b 100644
--- a/include/io/channel-socket.h
+++ b/include/io/channel-socket.h
@@ -49,6 +49,7 @@ struct QIOChannelSocket {
     socklen_t remoteAddrLen;
     ssize_t zero_copy_queued;
     ssize_t zero_copy_sent;
+    bool blocking;
 };
 
 
diff --git a/io/channel-socket.c b/io/channel-socket.c
index 712b793eaf..8b30d5b7f7 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -65,6 +65,7 @@ qio_channel_socket_new(void)
     sioc->fd = -1;
     sioc->zero_copy_queued = 0;
     sioc->zero_copy_sent = 0;
+    sioc->blocking = false;
 
     ioc = QIO_CHANNEL(sioc);
     qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
@@ -859,6 +860,7 @@ qio_channel_socket_set_blocking(QIOChannel *ioc,
                                 Error **errp)
 {
     QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
+    sioc->blocking = enabled;
 
     if (!qemu_set_blocking(sioc->fd, enabled, errp)) {
         return -1;
-- 
2.43.7
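A note on why caching the mode is worthwhile: without the field, the
only way to recover the current mode from the fd is to ask the kernel
each time. A hypothetical query helper (illustrative only, not part of
the patch) would look like the sketch below; patch 2 avoids this
per-call fcntl() round trip by branching on sioc->blocking instead:

#include <fcntl.h>
#include <stdbool.h>

/* Query whether an fd is in blocking mode via its file status flags. */
static bool fd_is_blocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);

    return flags >= 0 && !(flags & O_NONBLOCK);
}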
* Re: [PATCH v6 1/2] QIOChannelSocket: add a "blocking" field to QIOChannelSocket
From: Daniel P. Berrangé @ 2025-10-13 9:23 UTC
To: Tejus GK; +Cc: qemu-devel

On Mon, Oct 13, 2025 at 09:21:21AM +0000, Tejus GK wrote:
> Add a 'blocking' boolean field to QIOChannelSocket to track whether the
> underlying socket is in blocking or non-blocking mode.
> 
> Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
> ---
>  include/io/channel-socket.h | 1 +
>  io/channel-socket.c         | 2 ++
>  2 files changed, 3 insertions(+)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
* [PATCH v6 2/2] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUFS
From: Tejus GK @ 2025-10-13 9:21 UTC
To: qemu-devel, Daniel P. Berrangé; +Cc: Manish Mishra, Tejus GK

From: Manish Mishra <manish.mishra@nutanix.com>

The kernel allocates extra metadata SKBs in case of a zerocopy send,
eventually used for zerocopy's notification mechanism. This metadata
memory is accounted for in the OPTMEM limit. The kernel queues
completion notifications on the socket error queue, and these entries
are freed when userspace reads them.

Usually, in the case of in-order processing, the kernel batches the
notifications, merging the metadata into a single SKB and freeing the
rest. As a result, it never exceeds the OPTMEM limit. However, if there
is any out-of-order processing or intermittent zerocopy failures, this
error chain can grow significantly, exhausting the OPTMEM limit. As a
result, all new sendmsg requests fail to allocate any new SKB, leading
to an ENOBUFS error. Depending on the amount of data queued before the
flush (e.g., large live migration iterations), even large OPTMEM limits
are prone to failure.

To work around this, if we encounter an ENOBUFS error with a zerocopy
sendmsg, flush the error queue and retry once more.

Co-authored-by: Manish Mishra <manish.mishra@nutanix.com>
Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
---
 include/io/channel-socket.h |  5 +++
 io/channel-socket.c         | 75 ++++++++++++++++++++++++++++++-------
 2 files changed, 66 insertions(+), 14 deletions(-)

diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
index 26319fa98b..fcfd489c6c 100644
--- a/include/io/channel-socket.h
+++ b/include/io/channel-socket.h
@@ -50,6 +50,11 @@ struct QIOChannelSocket {
     ssize_t zero_copy_queued;
     ssize_t zero_copy_sent;
     bool blocking;
+    /**
+     * This flag indicates whether any new data was successfully sent with
+     * zerocopy since the last qio_channel_socket_flush() call.
+     */
+    bool new_zero_copy_sent_success;
 };
 
 
diff --git a/io/channel-socket.c b/io/channel-socket.c
index 8b30d5b7f7..7cd9f3666d 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -37,6 +37,12 @@
 
 #define SOCKET_MAX_FDS 16
 
+#ifdef QEMU_MSG_ZEROCOPY
+static int qio_channel_socket_flush_internal(QIOChannel *ioc,
+                                             bool block,
+                                             Error **errp);
+#endif
+
 SocketAddress *
 qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
                                      Error **errp)
@@ -66,6 +72,7 @@ qio_channel_socket_new(void)
     sioc->zero_copy_queued = 0;
     sioc->zero_copy_sent = 0;
     sioc->blocking = false;
+    sioc->new_zero_copy_sent_success = FALSE;
 
     ioc = QIO_CHANNEL(sioc);
     qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
@@ -618,6 +625,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
     size_t fdsize = sizeof(int) * nfds;
     struct cmsghdr *cmsg;
     int sflags = 0;
+    bool blocking = sioc->blocking;
+    bool zerocopy_flushed_once = false;
 
     memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
 
@@ -664,9 +673,24 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
             goto retry;
         case ENOBUFS:
             if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
-                error_setg_errno(errp, errno,
-                                 "Process can't lock enough memory for using MSG_ZEROCOPY");
-                return -1;
+                /**
+                 * Socket error queueing may exhaust the OPTMEM limit. Try
+                 * flushing the error queue once.
+                 */
+                if (!zerocopy_flushed_once) {
+                    ret = qio_channel_socket_flush_internal(ioc, blocking,
+                                                            errp);
+                    if (ret < 0) {
+                        return -1;
+                    }
+                    zerocopy_flushed_once = TRUE;
+                    goto retry;
+                } else {
+                    error_setg_errno(errp, errno,
+                                     "Process can't lock enough memory for "
+                                     "using MSG_ZEROCOPY");
+                    return -1;
+                }
             }
             break;
         }
@@ -777,8 +801,9 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
 
 
 #ifdef QEMU_MSG_ZEROCOPY
-static int qio_channel_socket_flush(QIOChannel *ioc,
-                                    Error **errp)
+static int qio_channel_socket_flush_internal(QIOChannel *ioc,
+                                             bool block,
+                                             Error **errp)
 {
     QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
     struct msghdr msg = {};
@@ -786,7 +811,6 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
     struct cmsghdr *cm;
     char control[CMSG_SPACE(sizeof(*serr))];
     int received;
-    int ret;
 
     if (sioc->zero_copy_queued == sioc->zero_copy_sent) {
         return 0;
@@ -796,16 +820,20 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
     msg.msg_controllen = sizeof(control);
     memset(control, 0, sizeof(control));
 
-    ret = 1;
-
     while (sioc->zero_copy_sent < sioc->zero_copy_queued) {
         received = recvmsg(sioc->fd, &msg, MSG_ERRQUEUE);
         if (received < 0) {
             switch (errno) {
             case EAGAIN:
-                /* Nothing on errqueue, wait until something is available */
-                qio_channel_wait(ioc, G_IO_ERR);
-                continue;
+                if (block) {
+                    /*
+                     * Nothing on errqueue, wait until something is
+                     * available.
+                     */
+                    qio_channel_wait(ioc, G_IO_ERR);
+                    continue;
+                }
+                return 0;
             case EINTR:
                 continue;
             default:
@@ -843,13 +871,32 @@
         /* No errors, count successfully finished sendmsg()*/
         sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
 
-        /* If any sendmsg() succeeded using zero copy, return 0 at the end */
+        /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
         if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
-            ret = 0;
+            sioc->new_zero_copy_sent_success = TRUE;
        }
     }
 
-    return ret;
+    return 0;
+}
+
+static int qio_channel_socket_flush(QIOChannel *ioc,
+                                    Error **errp)
+{
+    QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
+    int ret;
+
+    ret = qio_channel_socket_flush_internal(ioc, true, errp);
+    if (ret < 0) {
+        return ret;
+    }
+
+    if (sioc->new_zero_copy_sent_success) {
+        sioc->new_zero_copy_sent_success = FALSE;
+        return 0;
+    }
+
+    return 1;
 }
 
 #endif /* QEMU_MSG_ZEROCOPY */
-- 
2.43.7
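Stripped of the QIOChannel plumbing, the retry policy the patch
implements reduces to the sketch below (plain sockets, illustrative
only; zerocopy_send_with_retry() and drain_error_queue() are invented
names, the latter standing in for a helper that reads
recvmsg(fd, ..., MSG_ERRQUEUE) completions until EAGAIN):

#include <errno.h>
#include <stdbool.h>
#include <sys/socket.h>

#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000
#endif

static ssize_t zerocopy_send_with_retry(int fd, struct msghdr *msg,
                                        int (*drain_error_queue)(int fd))
{
    bool flushed_once = false;
    ssize_t ret;

retry:
    ret = sendmsg(fd, msg, MSG_ZEROCOPY);
    if (ret < 0) {
        if (errno == EINTR) {
            goto retry;
        }
        if (errno == ENOBUFS && !flushed_once) {
            /* Drain completions to release OPTMEM, then retry once. */
            if (drain_error_queue(fd) < 0) {
                return -1;
            }
            flushed_once = true;
            goto retry;
        }
    }
    return ret;
}

The single-retry bound mirrors the patch's zerocopy_flushed_once flag:
if the error queue has already been drained and sendmsg() still returns
ENOBUFS, the limit is genuinely exhausted and the error is surfaced.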
* Re: [PATCH v6 2/2] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUFS
From: Tejus GK @ 2025-10-13 9:34 UTC
To: qemu-devel@nongnu.org, Daniel P. Berrangé, Peter Xu, Fabiano Rosas
Cc: Manish Mishra

+ Peter and Fabiano, since I made the mistake of not cc’ing them earlier.

regards,
tejus
* Re: [PATCH v6 2/2] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUFS
From: Daniel P. Berrangé @ 2025-10-13 9:38 UTC
To: Tejus GK; +Cc: qemu-devel, Manish Mishra

On Mon, Oct 13, 2025 at 09:21:22AM +0000, Tejus GK wrote:
> From: Manish Mishra <manish.mishra@nutanix.com>
> 
> The kernel allocates extra metadata SKBs in case of a zerocopy send,
> eventually used for zerocopy's notification mechanism. This metadata
> memory is accounted for in the OPTMEM limit. The kernel queues
> completion notifications on the socket error queue, and these entries
> are freed when userspace reads them.
> 
> Usually, in the case of in-order processing, the kernel batches the
> notifications, merging the metadata into a single SKB and freeing the
> rest. As a result, it never exceeds the OPTMEM limit. However, if there
> is any out-of-order processing or intermittent zerocopy failures, this
> error chain can grow significantly, exhausting the OPTMEM limit. As a
> result, all new sendmsg requests fail to allocate any new SKB, leading
> to an ENOBUFS error. Depending on the amount of data queued before the
> flush (e.g., large live migration iterations), even large OPTMEM limits
> are prone to failure.
> 
> To work around this, if we encounter an ENOBUFS error with a zerocopy
> sendmsg, flush the error queue and retry once more.
> 
> Co-authored-by: Manish Mishra <manish.mishra@nutanix.com>
> Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
> ---
>  include/io/channel-socket.h |  5 +++
>  io/channel-socket.c         | 75 ++++++++++++++++++++++++++++++-------
>  2 files changed, 66 insertions(+), 14 deletions(-)
> 
> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
> index 26319fa98b..fcfd489c6c 100644
> --- a/include/io/channel-socket.h
> +++ b/include/io/channel-socket.h
> @@ -50,6 +50,11 @@ struct QIOChannelSocket {
>      ssize_t zero_copy_queued;
>      ssize_t zero_copy_sent;
>      bool blocking;
> +    /**
> +     * This flag indicates whether any new data was successfully sent with
> +     * zerocopy since the last qio_channel_socket_flush() call.
> +     */
> +    bool new_zero_copy_sent_success;
>  };
> 
> 
> diff --git a/io/channel-socket.c b/io/channel-socket.c
> index 8b30d5b7f7..7cd9f3666d 100644
> --- a/io/channel-socket.c
> +++ b/io/channel-socket.c
> @@ -37,6 +37,12 @@
>  
>  #define SOCKET_MAX_FDS 16
>  
> +#ifdef QEMU_MSG_ZEROCOPY
> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> +                                             bool block,
> +                                             Error **errp);
> +#endif
> +
>  SocketAddress *
>  qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
>                                       Error **errp)
> @@ -66,6 +72,7 @@ qio_channel_socket_new(void)
>      sioc->zero_copy_queued = 0;
>      sioc->zero_copy_sent = 0;
>      sioc->blocking = false;
> +    sioc->new_zero_copy_sent_success = FALSE;
>  
>      ioc = QIO_CHANNEL(sioc);
>      qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
> @@ -618,6 +625,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>      size_t fdsize = sizeof(int) * nfds;
>      struct cmsghdr *cmsg;
>      int sflags = 0;
> +    bool blocking = sioc->blocking;
> +    bool zerocopy_flushed_once = false;
>  
>      memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
>  
> @@ -664,9 +673,24 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>              goto retry;
>          case ENOBUFS:
>              if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
> -                error_setg_errno(errp, errno,
> -                                 "Process can't lock enough memory for using MSG_ZEROCOPY");
> -                return -1;
> +                /**
> +                 * Socket error queueing may exhaust the OPTMEM limit. Try
> +                 * flushing the error queue once.
> +                 */
> +                if (!zerocopy_flushed_once) {
> +                    ret = qio_channel_socket_flush_internal(ioc, blocking,
> +                                                            errp);
> +                    if (ret < 0) {
> +                        return -1;
> +                    }
> +                    zerocopy_flushed_once = TRUE;
> +                    goto retry;
> +                } else {
> +                    error_setg_errno(errp, errno,
> +                                     "Process can't lock enough memory for "
> +                                     "using MSG_ZEROCOPY");
> +                    return -1;
> +                }
>              }
>              break;
>          }
> @@ -777,8 +801,9 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>  
>  
>  #ifdef QEMU_MSG_ZEROCOPY
> -static int qio_channel_socket_flush(QIOChannel *ioc,
> -                                    Error **errp)
> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> +                                             bool block,
> +                                             Error **errp)
>  {
>      QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
>      struct msghdr msg = {};
> @@ -786,7 +811,6 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
>      struct cmsghdr *cm;
>      char control[CMSG_SPACE(sizeof(*serr))];
>      int received;
> -    int ret;
>  
>      if (sioc->zero_copy_queued == sioc->zero_copy_sent) {
>          return 0;
> @@ -796,16 +820,20 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
>      msg.msg_controllen = sizeof(control);
>      memset(control, 0, sizeof(control));
>  
> -    ret = 1;
> -
>      while (sioc->zero_copy_sent < sioc->zero_copy_queued) {
>          received = recvmsg(sioc->fd, &msg, MSG_ERRQUEUE);
>          if (received < 0) {
>              switch (errno) {
>              case EAGAIN:
> -                /* Nothing on errqueue, wait until something is available */
> -                qio_channel_wait(ioc, G_IO_ERR);
> -                continue;
> +                if (block) {
> +                    /*
> +                     * Nothing on errqueue, wait until something is
> +                     * available.
> +                     */
> +                    qio_channel_wait(ioc, G_IO_ERR);
> +                    continue;

Why G_IO_ERR? If we're waiting for recvmsg() to become ready, then it
would need to be G_IO_IN we're waiting for.

> +                }
> +                return 0;
>              case EINTR:
>                  continue;
>              default:

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
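Some context on the question from the kernel side: the msg_zerocopy
documentation states that pending notifications on the error queue are
reported to poll() as POLLERR in revents (POLLERR is always reported
and need not be requested in events), not as POLLIN, which is
presumably why the existing code waits on G_IO_ERR. Whether
qio_channel_wait(ioc, G_IO_ERR) maps cleanly onto that semantic is the
open question for the maintainers. A raw-socket wait for errqueue
readiness would look like this sketch (illustrative helper name):

#include <poll.h>

/* Wait until the socket error queue has data, or timeout_ms elapses. */
static int wait_for_errqueue(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = 0 }; /* POLLERR is implicit */
    int ret = poll(&pfd, 1, timeout_ms);

    if (ret > 0 && (pfd.revents & POLLERR)) {
        return 0; /* recvmsg(fd, ..., MSG_ERRQUEUE) should now succeed */
    }
    return -1; /* timeout or poll failure */
}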