qemu-devel.nongnu.org archive mirror
* [PATCH v5 0/3] Add zerocopy partial flush support for live migrations
@ 2025-10-09 10:14 Tejus GK
  2025-10-09 10:14 ` [PATCH v5 1/3] QIOChannelSocket: add a "blocking" field to QIOChannelSocket Tejus GK
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Tejus GK @ 2025-10-09 10:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: Tejus GK

Hi all,

This series introduces support for partially flushing the socket error
queue during a zerocopy-enabled live migration. This helps reduce live
migration failures due to ENOBUFS in scenarios where a lot of
out-of-order processing of zerocopy completions may happen.

V5:
  1. Introduced a new write flag
     QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE, which will let callers
     decide if they want to do a partial flush on an ENOBUFS.
  2. Added a "blocking" field to QIOChannelSocket, which indicates if
     the socket is in blocking mode or not.

V4:
  1. Minor nit to rename s/zero_copy_flush_pending/zerocopy_flushed_once.

V3:
  1. Add the dirty_sync_missed_zero_copy migration stat again.

V2:
  1. Removed the dirty_sync_missed_zero_copy migration stat.
  2. Made the call to qio_channel_socket_flush_internal() from
     qio_channel_socket_writev() non-blocking.

regards,
tejus

Manish Mishra (1):
  QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure
    due to ENOBUF

Tejus GK (2):
  QIOChannelSocket: add a "blocking" field to QIOChannelSocket
  io: add a write flag for partial flushing during a zerocopy write

 include/io/channel-socket.h |  6 +++
 include/io/channel.h        |  1 +
 io/channel-socket.c         | 80 ++++++++++++++++++++++++++++++-------
 migration/multifd-nocomp.c  |  3 +-
 migration/multifd.c         |  3 +-
 5 files changed, 76 insertions(+), 17 deletions(-)

-- 
2.43.7



^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v5 1/3] QIOChannelSocket: add a "blocking" field to QIOChannelSocket
  2025-10-09 10:14 [PATCH v5 0/3] Add zerocopy partial flush support for live migrations Tejus GK
@ 2025-10-09 10:14 ` Tejus GK
  2025-10-09 10:14 ` [PATCH v5 2/3] io: add a write flag for partial flushing during a zerocopy write Tejus GK
  2025-10-09 10:14 ` [PATCH v5 3/3] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUF Tejus GK
  2 siblings, 0 replies; 8+ messages in thread
From: Tejus GK @ 2025-10-09 10:14 UTC (permalink / raw)
  To: qemu-devel, Daniel P. Berrangé; +Cc: Tejus GK

Add a 'blocking' boolean field to QIOChannelSocket to track whether the
underlying socket is in blocking or non-blocking mode.

Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
---
 include/io/channel-socket.h | 1 +
 io/channel-socket.c         | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
index a88cf8b3a9..26319fa98b 100644
--- a/include/io/channel-socket.h
+++ b/include/io/channel-socket.h
@@ -49,6 +49,7 @@ struct QIOChannelSocket {
     socklen_t remoteAddrLen;
     ssize_t zero_copy_queued;
     ssize_t zero_copy_sent;
+    bool blocking;
 };
 
 
diff --git a/io/channel-socket.c b/io/channel-socket.c
index 712b793eaf..8b30d5b7f7 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -65,6 +65,7 @@ qio_channel_socket_new(void)
     sioc->fd = -1;
     sioc->zero_copy_queued = 0;
     sioc->zero_copy_sent = 0;
+    sioc->blocking = false;
 
     ioc = QIO_CHANNEL(sioc);
     qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
@@ -859,6 +860,7 @@ qio_channel_socket_set_blocking(QIOChannel *ioc,
                                 Error **errp)
 {
     QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
+    sioc->blocking = enabled;
 
     if (!qemu_set_blocking(sioc->fd, enabled, errp)) {
         return -1;
-- 
2.43.7




* [PATCH v5 2/3] io: add a write flag for partial flushing during a zerocopy write
  2025-10-09 10:14 [PATCH v5 0/3] Add zerocopy partial flush support for live migrations Tejus GK
  2025-10-09 10:14 ` [PATCH v5 1/3] QIOChannelSocket: add a "blocking" field to QIOChannelSocket Tejus GK
@ 2025-10-09 10:14 ` Tejus GK
  2025-10-09 10:14 ` [PATCH v5 3/3] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUF Tejus GK
  2 siblings, 0 replies; 8+ messages in thread
From: Tejus GK @ 2025-10-09 10:14 UTC (permalink / raw)
  To: qemu-devel, Daniel P. Berrangé; +Cc: Tejus GK

While doing a sendmsg() on a socket with MSG_ZEROCOPY enabled, one might
encounter ENOBUFS because the socket error queue is full. To get around
this, the caller can pass the new
QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE flag, which makes the write
path partially flush the socket error queue once and retry the
sendmsg().

Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
---
 include/io/channel.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/io/channel.h b/include/io/channel.h
index 0f25ae0069..41b903d104 100644
--- a/include/io/channel.h
+++ b/include/io/channel.h
@@ -33,6 +33,7 @@ OBJECT_DECLARE_TYPE(QIOChannel, QIOChannelClass,
 #define QIO_CHANNEL_ERR_BLOCK -2
 
 #define QIO_CHANNEL_WRITE_FLAG_ZERO_COPY 0x1
+#define QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE 0x2
 
 #define QIO_CHANNEL_READ_FLAG_MSG_PEEK 0x1
 #define QIO_CHANNEL_READ_FLAG_RELAXED_EOF 0x2
-- 
2.43.7




* [PATCH v5 3/3] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUF
  2025-10-09 10:14 [PATCH v5 0/3] Add zerocopy partial flush support for live migrations Tejus GK
  2025-10-09 10:14 ` [PATCH v5 1/3] QIOChannelSocket: add a "blocking" field to QIOChannelSocket Tejus GK
  2025-10-09 10:14 ` [PATCH v5 2/3] io: add a write flag for partial flushing during a zerocopy write Tejus GK
@ 2025-10-09 10:14 ` Tejus GK
  2025-10-09 10:32   ` Daniel P. Berrangé
  2 siblings, 1 reply; 8+ messages in thread
From: Tejus GK @ 2025-10-09 10:14 UTC (permalink / raw)
  To: qemu-devel, Daniel P. Berrangé, Peter Xu, Fabiano Rosas
  Cc: Manish Mishra, Tejus GK

From: Manish Mishra <manish.mishra@nutanix.com>

The kernel allocates extra metadata SKBs in case of a zerocopy send,
eventually used for zerocopy's notification mechanism. This metadata
memory is accounted for in the OPTMEM limit. The kernel queues
completion notifications on the socket error queue and this error queue
is freed when userspace reads it.

Usually, in the case of in-order processing, the kernel will batch the
notifications and merge the metadata into a single SKB and free the
rest. As a result, it never exceeds the OPTMEM limit. However, if there
is any out-of-order processing or intermittent zerocopy failures, this
error chain can grow significantly, exhausting the OPTMEM limit. As a
result, all new sendmsg requests fail to allocate any new SKB, leading
to an ENOBUFS error. Depending on the amount of data queued before the
flush (e.g., large live migration iterations), even large OPTMEM limits
are prone to failure.

To work around this, if we encounter an ENOBUFS error from a zerocopy
sendmsg(), flush the error queue and retry once more.

Co-authored-by: Manish Mishra <manish.mishra@nutanix.com>
Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
---
 include/io/channel-socket.h |  5 +++
 io/channel-socket.c         | 78 ++++++++++++++++++++++++++++++-------
 migration/multifd-nocomp.c  |  3 +-
 migration/multifd.c         |  3 +-
 4 files changed, 72 insertions(+), 17 deletions(-)

diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
index 26319fa98b..fcfd489c6c 100644
--- a/include/io/channel-socket.h
+++ b/include/io/channel-socket.h
@@ -50,6 +50,11 @@ struct QIOChannelSocket {
     ssize_t zero_copy_queued;
     ssize_t zero_copy_sent;
     bool blocking;
+    /**
+     * This flag indicates whether any new data was successfully sent with
+     * zerocopy since the last qio_channel_socket_flush() call.
+     */
+    bool new_zero_copy_sent_success;
 };
 
 
diff --git a/io/channel-socket.c b/io/channel-socket.c
index 8b30d5b7f7..22d59cd3cf 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -37,6 +37,12 @@
 
 #define SOCKET_MAX_FDS 16
 
+#ifdef QEMU_MSG_ZEROCOPY
+static int qio_channel_socket_flush_internal(QIOChannel *ioc,
+                                             bool block,
+                                             Error **errp);
+#endif
+
 SocketAddress *
 qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
                                      Error **errp)
@@ -66,6 +72,7 @@ qio_channel_socket_new(void)
     sioc->zero_copy_queued = 0;
     sioc->zero_copy_sent = 0;
     sioc->blocking = false;
+    sioc->new_zero_copy_sent_success = false;
 
     ioc = QIO_CHANNEL(sioc);
     qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
@@ -618,6 +625,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
     size_t fdsize = sizeof(int) * nfds;
     struct cmsghdr *cmsg;
     int sflags = 0;
+    bool blocking = sioc->blocking;
+    bool zerocopy_flushed_once = false;
 
     memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
 
@@ -663,10 +672,26 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
         case EINTR:
             goto retry;
         case ENOBUFS:
-            if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
-                error_setg_errno(errp, errno,
-                                 "Process can't lock enough memory for using MSG_ZEROCOPY");
-                return -1;
+            if (flags & (QIO_CHANNEL_WRITE_FLAG_ZERO_COPY |
+                QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE)) {
+                /**
+                 * Socket error queueing may exhaust the OPTMEM limit. Try
+                 * flushing the error queue once.
+                 */
+                if (!zerocopy_flushed_once) {
+                    ret = qio_channel_socket_flush_internal(ioc, blocking,
+                                                            errp);
+                    if (ret < 0) {
+                        return -1;
+                    }
+                    zerocopy_flushed_once = true;
+                    goto retry;
+                } else {
+                    error_setg_errno(errp, errno,
+                                     "Process can't lock enough memory for "
+                                     "using MSG_ZEROCOPY");
+                    return -1;
+                }
             }
             break;
         }
@@ -777,8 +802,9 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
 
 
 #ifdef QEMU_MSG_ZEROCOPY
-static int qio_channel_socket_flush(QIOChannel *ioc,
-                                    Error **errp)
+static int qio_channel_socket_flush_internal(QIOChannel *ioc,
+                                             bool block,
+                                             Error **errp)
 {
     QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
     struct msghdr msg = {};
@@ -786,7 +812,6 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
     struct cmsghdr *cm;
     char control[CMSG_SPACE(sizeof(*serr))];
     int received;
-    int ret;
 
     if (sioc->zero_copy_queued == sioc->zero_copy_sent) {
         return 0;
@@ -796,16 +821,20 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
     msg.msg_controllen = sizeof(control);
     memset(control, 0, sizeof(control));
 
-    ret = 1;
-
     while (sioc->zero_copy_sent < sioc->zero_copy_queued) {
         received = recvmsg(sioc->fd, &msg, MSG_ERRQUEUE);
         if (received < 0) {
             switch (errno) {
             case EAGAIN:
-                /* Nothing on errqueue, wait until something is available */
-                qio_channel_wait(ioc, G_IO_ERR);
-                continue;
+                if (block) {
+                    /*
+                     * Nothing on errqueue, wait until something is
+                     * available.
+                     */
+                    qio_channel_wait(ioc, G_IO_ERR);
+                    continue;
+                }
+                return 0;
             case EINTR:
                 continue;
             default:
@@ -843,13 +872,32 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
         /* No errors, count successfully finished sendmsg()*/
         sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
 
-        /* If any sendmsg() succeeded using zero copy, return 0 at the end */
+        /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
         if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
-            ret = 0;
+            sioc->new_zero_copy_sent_success = true;
         }
     }
 
-    return ret;
+    return 0;
+}
+
+static int qio_channel_socket_flush(QIOChannel *ioc,
+                                    Error **errp)
+{
+    QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
+    int ret;
+
+    ret = qio_channel_socket_flush_internal(ioc, true, errp);
+    if (ret < 0) {
+        return ret;
+    }
+
+    if (sioc->new_zero_copy_sent_success) {
+        sioc->new_zero_copy_sent_success = false;
+        return 0;
+    }
+
+    return 1;
 }
 
 #endif /* QEMU_MSG_ZEROCOPY */
diff --git a/migration/multifd-nocomp.c b/migration/multifd-nocomp.c
index b48eae3d86..9a28ccafd6 100644
--- a/migration/multifd-nocomp.c
+++ b/migration/multifd-nocomp.c
@@ -66,7 +66,8 @@ static int multifd_nocomp_send_setup(MultiFDSendParams *p, Error **errp)
     uint32_t page_count = multifd_ram_page_count();
 
     if (migrate_zero_copy_send()) {
-        p->write_flags |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
+        p->write_flags |= (QIO_CHANNEL_WRITE_FLAG_ZERO_COPY |
+                           QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE);
     }
 
     if (!migrate_mapped_ram()) {
diff --git a/migration/multifd.c b/migration/multifd.c
index 98873cee74..ccfafa6505 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -703,7 +703,8 @@ static void *multifd_send_thread(void *opaque)
                 multifd_device_state_send_prepare(p);
 
                 /* Device state packets cannot be sent via zerocopy */
-                write_flags_masked |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
+                write_flags_masked |= (QIO_CHANNEL_WRITE_FLAG_ZERO_COPY |
+                    QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE);
             } else {
                 ret = multifd_send_state->ops->send_prepare(p, &local_err);
                 if (ret != 0) {
-- 
2.43.7




* Re: [PATCH v5 3/3] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUF
  2025-10-09 10:14 ` [PATCH v5 3/3] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUF Tejus GK
@ 2025-10-09 10:32   ` Daniel P. Berrangé
  2025-10-09 11:51     ` Tejus GK
  0 siblings, 1 reply; 8+ messages in thread
From: Daniel P. Berrangé @ 2025-10-09 10:32 UTC (permalink / raw)
  To: Tejus GK; +Cc: qemu-devel, Peter Xu, Fabiano Rosas, Manish Mishra

On Thu, Oct 09, 2025 at 10:14:18AM +0000, Tejus GK wrote:
> From: Manish Mishra <manish.mishra@nutanix.com>
> 
> The kernel allocates extra metadata SKBs in case of a zerocopy send,
> eventually used for zerocopy's notification mechanism. This metadata
> memory is accounted for in the OPTMEM limit. The kernel queues
> completion notifications on the socket error queue and this error queue
> is freed when userspace reads it.
> 
> Usually, in the case of in-order processing, the kernel will batch the
> notifications and merge the metadata into a single SKB and free the
> rest. As a result, it never exceeds the OPTMEM limit. However, if there
> is any out-of-order processing or intermittent zerocopy failures, this
> error chain can grow significantly, exhausting the OPTMEM limit. As a
> result, all new sendmsg requests fail to allocate any new SKB, leading
> to an ENOBUF error. Depending on the amount of data queued before the
> flush (i.e., large live migration iterations), even large OPTMEM limits
> are prone to failure.
> 
> To work around this, if we encounter an ENOBUF error with a zerocopy
> sendmsg, flush the error queue and retry once more.
> 
> Co-authored-by: Manish Mishra <manish.mishra@nutanix.com>
> Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
> ---
>  include/io/channel-socket.h |  5 +++
>  io/channel-socket.c         | 78 ++++++++++++++++++++++++++++++-------
>  migration/multifd-nocomp.c  |  3 +-
>  migration/multifd.c         |  3 +-
>  4 files changed, 72 insertions(+), 17 deletions(-)
> 
> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
> index 26319fa98b..fcfd489c6c 100644
> --- a/include/io/channel-socket.h
> +++ b/include/io/channel-socket.h
> @@ -50,6 +50,11 @@ struct QIOChannelSocket {
>      ssize_t zero_copy_queued;
>      ssize_t zero_copy_sent;
>      bool blocking;
> +    /**
> +     * This flag indicates whether any new data was successfully sent with
> +     * zerocopy since the last qio_channel_socket_flush() call.
> +     */
> +    bool new_zero_copy_sent_success;
>  };
>  
>  
> diff --git a/io/channel-socket.c b/io/channel-socket.c
> index 8b30d5b7f7..22d59cd3cf 100644
> --- a/io/channel-socket.c
> +++ b/io/channel-socket.c
> @@ -37,6 +37,12 @@
>  
>  #define SOCKET_MAX_FDS 16
>  
> +#ifdef QEMU_MSG_ZEROCOPY
> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> +                                             bool block,
> +                                             Error **errp);
> +#endif
> +
>  SocketAddress *
>  qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
>                                       Error **errp)
> @@ -66,6 +72,7 @@ qio_channel_socket_new(void)
>      sioc->zero_copy_queued = 0;
>      sioc->zero_copy_sent = 0;
>      sioc->blocking = false;
> +    sioc->new_zero_copy_sent_success = FALSE;
>  
>      ioc = QIO_CHANNEL(sioc);
>      qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
> @@ -618,6 +625,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>      size_t fdsize = sizeof(int) * nfds;
>      struct cmsghdr *cmsg;
>      int sflags = 0;
> +    bool blocking = sioc->blocking;
> +    bool zerocopy_flushed_once = false;
>  
>      memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
>  
> @@ -663,10 +672,26 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>          case EINTR:
>              goto retry;
>          case ENOBUFS:
> -            if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
> -                error_setg_errno(errp, errno,
> -                                 "Process can't lock enough memory for using MSG_ZEROCOPY");
> -                return -1;
> +            if (flags & (QIO_CHANNEL_WRITE_FLAG_ZERO_COPY |
> +                QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE)) {


There are only two callers where this patch has added use of
QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE.

Both of them already set QIO_CHANNEL_WRITE_FLAG_ZERO_COPY, so
this code branch was already being taken.

IOW, this new QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE looks
pointless.



> +                /**
> +                 * Socket error queueing may exhaust the OPTMEM limit. Try
> +                 * flushing the error queue once.
> +                 */
> +                if (!zerocopy_flushed_once) {
> +                    ret = qio_channel_socket_flush_internal(ioc, blocking,
> +                                                            errp);
> +                    if (ret < 0) {
> +                        return -1;
> +                    }
> +                    zerocopy_flushed_once = TRUE;
> +                    goto retry;
> +                } else {
> +                    error_setg_errno(errp, errno,
> +                                     "Process can't lock enough memory for "
> +                                     "using MSG_ZEROCOPY");
> +                    return -1;
> +                }
>              }
>              break;
>          }
> @@ -777,8 +802,9 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>  
>  
>  #ifdef QEMU_MSG_ZEROCOPY
> -static int qio_channel_socket_flush(QIOChannel *ioc,
> -                                    Error **errp)
> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> +                                             bool block,
> +                                             Error **errp)
>  {
>      QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
>      struct msghdr msg = {};
> @@ -786,7 +812,6 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
>      struct cmsghdr *cm;
>      char control[CMSG_SPACE(sizeof(*serr))];
>      int received;
> -    int ret;
>  
>      if (sioc->zero_copy_queued == sioc->zero_copy_sent) {
>          return 0;
> @@ -796,16 +821,20 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
>      msg.msg_controllen = sizeof(control);
>      memset(control, 0, sizeof(control));
>  
> -    ret = 1;
> -
>      while (sioc->zero_copy_sent < sioc->zero_copy_queued) {
>          received = recvmsg(sioc->fd, &msg, MSG_ERRQUEUE);
>          if (received < 0) {
>              switch (errno) {
>              case EAGAIN:
> -                /* Nothing on errqueue, wait until something is available */
> -                qio_channel_wait(ioc, G_IO_ERR);
> -                continue;
> +                if (block) {
> +                    /*
> +                     * Nothing on errqueue, wait until something is
> +                     * available.
> +                     */
> +                    qio_channel_wait(ioc, G_IO_ERR);
> +                    continue;
> +                }
> +                return 0;
>              case EINTR:
>                  continue;
>              default:
> @@ -843,13 +872,32 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
>          /* No errors, count successfully finished sendmsg()*/
>          sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
>  
> -        /* If any sendmsg() succeeded using zero copy, return 0 at the end */
> +        /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
>          if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
> -            ret = 0;
> +            sioc->new_zero_copy_sent_success = TRUE;
>          }
>      }
>  
> -    return ret;
> +    return 0;
> +}
> +
> +static int qio_channel_socket_flush(QIOChannel *ioc,
> +                                    Error **errp)
> +{
> +    QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
> +    int ret;
> +
> +    ret = qio_channel_socket_flush_internal(ioc, true, errp);
> +    if (ret < 0) {
> +        return ret;
> +    }
> +
> +    if (sioc->new_zero_copy_sent_success) {
> +        sioc->new_zero_copy_sent_success = FALSE;
> +        return 0;
> +    }
> +
> +    return 1;
>  }
>  
>  #endif /* QEMU_MSG_ZEROCOPY */
> diff --git a/migration/multifd-nocomp.c b/migration/multifd-nocomp.c
> index b48eae3d86..9a28ccafd6 100644
> --- a/migration/multifd-nocomp.c
> +++ b/migration/multifd-nocomp.c
> @@ -66,7 +66,8 @@ static int multifd_nocomp_send_setup(MultiFDSendParams *p, Error **errp)
>      uint32_t page_count = multifd_ram_page_count();
>  
>      if (migrate_zero_copy_send()) {
> -        p->write_flags |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
> +        p->write_flags |= (QIO_CHANNEL_WRITE_FLAG_ZERO_COPY |
> +                           QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE);
>      }
>  
>      if (!migrate_mapped_ram()) {
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 98873cee74..ccfafa6505 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -703,7 +703,8 @@ static void *multifd_send_thread(void *opaque)
>                  multifd_device_state_send_prepare(p);
>  
>                  /* Device state packets cannot be sent via zerocopy */
> -                write_flags_masked |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
> +                write_flags_masked |= (QIO_CHANNEL_WRITE_FLAG_ZERO_COPY |
> +                    QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE);
>              } else {
>                  ret = multifd_send_state->ops->send_prepare(p, &local_err);
>                  if (ret != 0) {
> -- 
> 2.43.7
> 

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v5 3/3] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUF
  2025-10-09 10:32   ` Daniel P. Berrangé
@ 2025-10-09 11:51     ` Tejus GK
  2025-10-09 16:10       ` Daniel P. Berrangé
  0 siblings, 1 reply; 8+ messages in thread
From: Tejus GK @ 2025-10-09 11:51 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: qemu-devel@nongnu.org, Peter Xu, Fabiano Rosas, Manish Mishra



> On 9 Oct 2025, at 4:02 PM, Daniel P. Berrangé <berrange@redhat.com> wrote:
> 
> On Thu, Oct 09, 2025 at 10:14:18AM +0000, Tejus GK wrote:
>> From: Manish Mishra <manish.mishra@nutanix.com>
>> 
>> The kernel allocates extra metadata SKBs in case of a zerocopy send,
>> eventually used for zerocopy's notification mechanism. This metadata
>> memory is accounted for in the OPTMEM limit. The kernel queues
>> completion notifications on the socket error queue and this error queue
>> is freed when userspace reads it.
>> 
>> Usually, in the case of in-order processing, the kernel will batch the
>> notifications and merge the metadata into a single SKB and free the
>> rest. As a result, it never exceeds the OPTMEM limit. However, if there
>> is any out-of-order processing or intermittent zerocopy failures, this
>> error chain can grow significantly, exhausting the OPTMEM limit. As a
>> result, all new sendmsg requests fail to allocate any new SKB, leading
>> to an ENOBUF error. Depending on the amount of data queued before the
>> flush (i.e., large live migration iterations), even large OPTMEM limits
>> are prone to failure.
>> 
>> To work around this, if we encounter an ENOBUF error with a zerocopy
>> sendmsg, flush the error queue and retry once more.
>> 
>> Co-authored-by: Manish Mishra <manish.mishra@nutanix.com>
>> Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
>> ---
>> include/io/channel-socket.h |  5 +++
>> io/channel-socket.c         | 78 ++++++++++++++++++++++++++++++-------
>> migration/multifd-nocomp.c  |  3 +-
>> migration/multifd.c         |  3 +-
>> 4 files changed, 72 insertions(+), 17 deletions(-)
>> 
>> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
>> index 26319fa98b..fcfd489c6c 100644
>> --- a/include/io/channel-socket.h
>> +++ b/include/io/channel-socket.h
>> @@ -50,6 +50,11 @@ struct QIOChannelSocket {
>>     ssize_t zero_copy_queued;
>>     ssize_t zero_copy_sent;
>>     bool blocking;
>> +    /**
>> +     * This flag indicates whether any new data was successfully sent with
>> +     * zerocopy since the last qio_channel_socket_flush() call.
>> +     */
>> +    bool new_zero_copy_sent_success;
>> };
>> 
>> 
>> diff --git a/io/channel-socket.c b/io/channel-socket.c
>> index 8b30d5b7f7..22d59cd3cf 100644
>> --- a/io/channel-socket.c
>> +++ b/io/channel-socket.c
>> @@ -37,6 +37,12 @@
>> 
>> #define SOCKET_MAX_FDS 16
>> 
>> +#ifdef QEMU_MSG_ZEROCOPY
>> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
>> +                                             bool block,
>> +                                             Error **errp);
>> +#endif
>> +
>> SocketAddress *
>> qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
>>                                      Error **errp)
>> @@ -66,6 +72,7 @@ qio_channel_socket_new(void)
>>     sioc->zero_copy_queued = 0;
>>     sioc->zero_copy_sent = 0;
>>     sioc->blocking = false;
>> +    sioc->new_zero_copy_sent_success = FALSE;
>> 
>>     ioc = QIO_CHANNEL(sioc);
>>     qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
>> @@ -618,6 +625,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>>     size_t fdsize = sizeof(int) * nfds;
>>     struct cmsghdr *cmsg;
>>     int sflags = 0;
>> +    bool blocking = sioc->blocking;
>> +    bool zerocopy_flushed_once = false;
>> 
>>     memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
>> 
>> @@ -663,10 +672,26 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>>         case EINTR:
>>             goto retry;
>>         case ENOBUFS:
>> -            if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
>> -                error_setg_errno(errp, errno,
>> -                                 "Process can't lock enough memory for using MSG_ZEROCOPY");
>> -                return -1;
>> +            if (flags & (QIO_CHANNEL_WRITE_FLAG_ZERO_COPY |
>> +                QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE)) {
> 
> 
> There are only two callers where this patch has added use of
> QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE.
> 
> Both of them already sett QIO_CHANNEL_WRITE_FLAG_ZERO_COPY, so
> this code branch was already being taken
> 
> IOW, this new QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE looks
> pointless.
> 

The intention in adding QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE was to let callers decide whether they want QEMU to silently flush instead of failing with ENOBUFS.
Right now it fits the migration use-case, since migration is the only caller using MSG_ZEROCOPY in the first place. But if someone else later adopts MSG_ZEROCOPY and wants the regular semantics of a write, i.e. an ENOBUFS error when the socket error queue piles up, they can choose not to pass the flag, rather than have QEMU flush unconditionally.

However, I might be overthinking this; if the use-case above doesn't make sense, I can drop the flag.


regards,
tejus



* Re: [PATCH v5 3/3] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUF
  2025-10-09 11:51     ` Tejus GK
@ 2025-10-09 16:10       ` Daniel P. Berrangé
  2025-10-10 12:06         ` Tejus GK
  0 siblings, 1 reply; 8+ messages in thread
From: Daniel P. Berrangé @ 2025-10-09 16:10 UTC (permalink / raw)
  To: Tejus GK; +Cc: qemu-devel@nongnu.org, Peter Xu, Fabiano Rosas, Manish Mishra

On Thu, Oct 09, 2025 at 11:51:51AM +0000, Tejus GK wrote:
> 
> 
> > On 9 Oct 2025, at 4:02 PM, Daniel P. Berrangé <berrange@redhat.com> wrote:
> > 
> > On Thu, Oct 09, 2025 at 10:14:18AM +0000, Tejus GK wrote:
> >> From: Manish Mishra <manish.mishra@nutanix.com>
> >> 
> >> The kernel allocates extra metadata SKBs in case of a zerocopy send,
> >> eventually used for zerocopy's notification mechanism. This metadata
> >> memory is accounted for in the OPTMEM limit. The kernel queues
> >> completion notifications on the socket error queue and this error queue
> >> is freed when userspace reads it.
> >> 
> >> Usually, in the case of in-order processing, the kernel will batch the
> >> notifications and merge the metadata into a single SKB and free the
> >> rest. As a result, it never exceeds the OPTMEM limit. However, if there
> >> is any out-of-order processing or intermittent zerocopy failures, this
> >> error chain can grow significantly, exhausting the OPTMEM limit. As a
> >> result, all new sendmsg requests fail to allocate any new SKB, leading
> >> to an ENOBUF error. Depending on the amount of data queued before the
> >> flush (i.e., large live migration iterations), even large OPTMEM limits
> >> are prone to failure.
> >> 
> >> To work around this, if we encounter an ENOBUF error with a zerocopy
> >> sendmsg, flush the error queue and retry once more.
> >> 
> >> Co-authored-by: Manish Mishra <manish.mishra@nutanix.com>
> >> Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
> >> ---
> >> include/io/channel-socket.h |  5 +++
> >> io/channel-socket.c         | 78 ++++++++++++++++++++++++++++++-------
> >> migration/multifd-nocomp.c  |  3 +-
> >> migration/multifd.c         |  3 +-
> >> 4 files changed, 72 insertions(+), 17 deletions(-)
> >> 
> >> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
> >> index 26319fa98b..fcfd489c6c 100644
> >> --- a/include/io/channel-socket.h
> >> +++ b/include/io/channel-socket.h
> >> @@ -50,6 +50,11 @@ struct QIOChannelSocket {
> >>     ssize_t zero_copy_queued;
> >>     ssize_t zero_copy_sent;
> >>     bool blocking;
> >> +    /**
> >> +     * This flag indicates whether any new data was successfully sent with
> >> +     * zerocopy since the last qio_channel_socket_flush() call.
> >> +     */
> >> +    bool new_zero_copy_sent_success;
> >> };
> >> 
> >> 
> >> diff --git a/io/channel-socket.c b/io/channel-socket.c
> >> index 8b30d5b7f7..22d59cd3cf 100644
> >> --- a/io/channel-socket.c
> >> +++ b/io/channel-socket.c
> >> @@ -37,6 +37,12 @@
> >> 
> >> #define SOCKET_MAX_FDS 16
> >> 
> >> +#ifdef QEMU_MSG_ZEROCOPY
> >> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> >> +                                             bool block,
> >> +                                             Error **errp);
> >> +#endif
> >> +
> >> SocketAddress *
> >> qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
> >>                                      Error **errp)
> >> @@ -66,6 +72,7 @@ qio_channel_socket_new(void)
> >>     sioc->zero_copy_queued = 0;
> >>     sioc->zero_copy_sent = 0;
> >>     sioc->blocking = false;
> >> +    sioc->new_zero_copy_sent_success = FALSE;
> >> 
> >>     ioc = QIO_CHANNEL(sioc);
> >>     qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
> >> @@ -618,6 +625,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> >>     size_t fdsize = sizeof(int) * nfds;
> >>     struct cmsghdr *cmsg;
> >>     int sflags = 0;
> >> +    bool blocking = sioc->blocking;
> >> +    bool zerocopy_flushed_once = false;
> >> 
> >>     memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
> >> 
> >> @@ -663,10 +672,26 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> >>         case EINTR:
> >>             goto retry;
> >>         case ENOBUFS:
> >> -            if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
> >> -                error_setg_errno(errp, errno,
> >> -                                 "Process can't lock enough memory for using MSG_ZEROCOPY");
> >> -                return -1;
> >> +            if (flags & (QIO_CHANNEL_WRITE_FLAG_ZERO_COPY |
> >> +                QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE)) {
> > 
> > 
> > There are only two callers where this patch has added use of
> > QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE.
> > 
> > Both of them already set QIO_CHANNEL_WRITE_FLAG_ZERO_COPY, so
> > this code branch was already being taken.
> > 
> > IOW, this new QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE looks
> > pointless.
> 
> The intention behind adding QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE
> was to let callers decide if they want QEMU to silently flush
> and not throw an ENOBUFS.

This doesn't make any sense. The sendmsg(2) man page says that ENOBUFS
is never returned on Linux ordinarily. So AFAICT, the only time
we'll see ENOBUFS is if we used MSG_ZEROCOPY in the sendmsg
input. The MSG_ZEROCOPY flag for sendmsg is only set if we have
the QIO_CHANNEL_WRITE_FLAG_ZERO_COPY flag set in the qio_channel_write
call. So again, QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE appears
to serve no functional purpose.

> Right now it fits the migration use case, since migration is
> the only caller using MSG_ZEROCOPY in the first place, but in case
> someone else decides to use MSG_ZEROCOPY and wants the regular
> semantics of a write, i.e., expecting an ENOBUFS on a socket error
> queue pileup, they may choose not to pass the flag, rather than
> letting QEMU do it unconditionally.

This patch was previously addressing a functional bug in the current
code.  Satisfying any new use case should be a completely separate
patch with justification in the commit message, as any single commit
should only address one problem at a time.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v5 3/3] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUF
  2025-10-09 16:10       ` Daniel P. Berrangé
@ 2025-10-10 12:06         ` Tejus GK
  0 siblings, 0 replies; 8+ messages in thread
From: Tejus GK @ 2025-10-10 12:06 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: qemu-devel@nongnu.org, Peter Xu, Fabiano Rosas, Manish Mishra



> On 9 Oct 2025, at 9:40 PM, Daniel P. Berrangé <berrange@redhat.com> wrote:
> 
> 
> On Thu, Oct 09, 2025 at 11:51:51AM +0000, Tejus GK wrote:
>> 
>> 
>>> On 9 Oct 2025, at 4:02 PM, Daniel P. Berrangé <berrange@redhat.com> wrote:
>>> 
>>> 
>>> On Thu, Oct 09, 2025 at 10:14:18AM +0000, Tejus GK wrote:
>>>> From: Manish Mishra <manish.mishra@nutanix.com>
>>>> 
>>>> The kernel allocates extra metadata SKBs in case of a zerocopy send,
>>>> eventually used for zerocopy's notification mechanism. This metadata
>>>> memory is accounted for in the OPTMEM limit. The kernel queues
>>>> completion notifications on the socket error queue and this error queue
>>>> is freed when userspace reads it.
>>>> 
>>>> Usually, in the case of in-order processing, the kernel will batch the
>>>> notifications and merge the metadata into a single SKB and free the
>>>> rest. As a result, it never exceeds the OPTMEM limit. However, if there
>>>> is any out-of-order processing or intermittent zerocopy failures, this
>>>> error chain can grow significantly, exhausting the OPTMEM limit. As a
>>>> result, all new sendmsg requests fail to allocate any new SKB, leading
>>>> to an ENOBUF error. Depending on the amount of data queued before the
>>>> flush (i.e., large live migration iterations), even large OPTMEM limits
>>>> are prone to failure.
>>>> 
>>>> To work around this, if we encounter an ENOBUF error with a zerocopy
>>>> sendmsg, flush the error queue and retry once more.
>>>> 
>>>> Co-authored-by: Manish Mishra <manish.mishra@nutanix.com>
>>>> Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
>>>> ---
>>>> include/io/channel-socket.h |  5 +++
>>>> io/channel-socket.c         | 78 ++++++++++++++++++++++++++++++-------
>>>> migration/multifd-nocomp.c  |  3 +-
>>>> migration/multifd.c         |  3 +-
>>>> 4 files changed, 72 insertions(+), 17 deletions(-)
>>>> 
>>>> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
>>>> index 26319fa98b..fcfd489c6c 100644
>>>> --- a/include/io/channel-socket.h
>>>> +++ b/include/io/channel-socket.h
>>>> @@ -50,6 +50,11 @@ struct QIOChannelSocket {
>>>>    ssize_t zero_copy_queued;
>>>>    ssize_t zero_copy_sent;
>>>>    bool blocking;
>>>> +    /**
>>>> +     * This flag indicates whether any new data was successfully sent with
>>>> +     * zerocopy since the last qio_channel_socket_flush() call.
>>>> +     */
>>>> +    bool new_zero_copy_sent_success;
>>>> };
>>>> 
>>>> 
>>>> diff --git a/io/channel-socket.c b/io/channel-socket.c
>>>> index 8b30d5b7f7..22d59cd3cf 100644
>>>> --- a/io/channel-socket.c
>>>> +++ b/io/channel-socket.c
>>>> @@ -37,6 +37,12 @@
>>>> 
>>>> #define SOCKET_MAX_FDS 16
>>>> 
>>>> +#ifdef QEMU_MSG_ZEROCOPY
>>>> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
>>>> +                                             bool block,
>>>> +                                             Error **errp);
>>>> +#endif
>>>> +
>>>> SocketAddress *
>>>> qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
>>>>                                     Error **errp)
>>>> @@ -66,6 +72,7 @@ qio_channel_socket_new(void)
>>>>    sioc->zero_copy_queued = 0;
>>>>    sioc->zero_copy_sent = 0;
>>>>    sioc->blocking = false;
>>>> +    sioc->new_zero_copy_sent_success = FALSE;
>>>> 
>>>>    ioc = QIO_CHANNEL(sioc);
>>>>    qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
>>>> @@ -618,6 +625,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>>>>    size_t fdsize = sizeof(int) * nfds;
>>>>    struct cmsghdr *cmsg;
>>>>    int sflags = 0;
>>>> +    bool blocking = sioc->blocking;
>>>> +    bool zerocopy_flushed_once = false;
>>>> 
>>>>    memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
>>>> 
>>>> @@ -663,10 +672,26 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>>>>        case EINTR:
>>>>            goto retry;
>>>>        case ENOBUFS:
>>>> -            if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
>>>> -                error_setg_errno(errp, errno,
>>>> -                                 "Process can't lock enough memory for using MSG_ZEROCOPY");
>>>> -                return -1;
>>>> +            if (flags & (QIO_CHANNEL_WRITE_FLAG_ZERO_COPY |
>>>> +                QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE)) {
>>> 
>>> 
>>> There are only two callers where this patch has added use of
>>> QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE.
>>> 
>>> Both of them already set QIO_CHANNEL_WRITE_FLAG_ZERO_COPY, so
>>> this code branch was already being taken.
>>> 
>>> IOW, this new QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE looks
>>> pointless.
>> 
>> The intention behind adding QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE
>> was to let callers decide if they want QEMU to silently flush
>> and not throw an ENOBUFS.
> 
> This doesn't make any sense. The sendmsg(2) man page says that ENOBUFS
> is never returned on Linux ordinarily. So AFAICT, the only time
> we'll see ENOBUFS is if we used MSG_ZEROCOPY in the sendmsg
> input. The MSG_ZEROCOPY flag for sendmsg is only set if we have
> the QIO_CHANNEL_WRITE_FLAG_ZERO_COPY flag set in the qio_channel_write
> call. So again, QIO_CHANNEL_WRITE_FLAG_ZERO_COPY_FLUSH_ONCE appears
> to serve no functional purpose.
> 
>> Right now it fits the migration use case, since migration is
>> the only caller using MSG_ZEROCOPY in the first place, but in case
>> someone else decides to use MSG_ZEROCOPY and wants the regular
>> semantics of a write, i.e., expecting an ENOBUFS on a socket error
>> queue pileup, they may choose not to pass the flag, rather than
>> letting QEMU do it unconditionally.
> 
> This patch was previously addressing a functional bug in the current
> code.  Satisfying any new use case should be a completely separate
> patch with justification in the commit message, as any single commit
> should only address one problem at a time.

Ack, I shall drop the flag in the next revision. 


regards,
tejus


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2025-10-10 12:08 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-09 10:14 [PATCH v5 0/3] Add zerocopy partial flush support for live migrations Tejus GK
2025-10-09 10:14 ` [PATCH v5 1/3] QIOChannelSocket: add a "blocking" field to QIOChannelSocket Tejus GK
2025-10-09 10:14 ` [PATCH v5 2/3] io: add a write flag for partial flushing during a zerocopy write Tejus GK
2025-10-09 10:14 ` [PATCH v5 3/3] QIOChannelSocket: flush zerocopy socket error queue on sendmsg failure due to ENOBUF Tejus GK
2025-10-09 10:32   ` Daniel P. Berrangé
2025-10-09 11:51     ` Tejus GK
2025-10-09 16:10       ` Daniel P. Berrangé
2025-10-10 12:06         ` Tejus GK

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).