* [PATCH] QIOChannelSocket: Flush zerocopy socket error queue on ENOBUF failure for sendmsg
@ 2025-02-21 9:44 Manish Mishra
2025-02-24 20:35 ` Peter Xu
2025-02-25 9:07 ` Daniel P. Berrangé
0 siblings, 2 replies; 7+ messages in thread
From: Manish Mishra @ 2025-02-21 9:44 UTC (permalink / raw)
To: qemu-devel; +Cc: leobras, peterx, berrange, Manish Mishra
We allocate extra metadata SKBs in the case of a zerocopy send. This metadata
memory is accounted for in the OPTMEM limit. If there is any error while sending
zerocopy data, or if zerocopy was skipped, these metadata SKBs are queued in the
socket error queue. This error queue is freed only when userspace reads it.

Usually, if there are continuous failures, we merge the metadata into a single
SKB and free the other one. However, if there is any out-of-order processing or
intermittent zerocopy failures, this error chain can grow significantly,
exhausting the OPTMEM limit. As a result, all new sendmsg() requests fail to
allocate any new SKB, leading to an ENOBUFS error.

To work around this, if we encounter an ENOBUFS error with a zerocopy sendmsg(),
we flush the error queue and retry once more.
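For reference, "flushing the error queue" here means reading the MSG_ZEROCOPY
completion notifications off the socket's error queue with recvmsg(MSG_ERRQUEUE).
A minimal sketch of that kernel interface, for illustration only (this is not
the QEMU code; error handling is trimmed and 'fd' is assumed to be a connected
TCP socket that has sent data with MSG_ZEROCOPY):

#include <string.h>
#include <sys/socket.h>
#include <linux/errqueue.h>

static int drain_zerocopy_errqueue(int fd)
{
    char control[128];
    int completed = 0;

    for (;;) {
        struct msghdr msg;
        struct cmsghdr *cm;
        struct sock_extended_err *serr;

        memset(&msg, 0, sizeof(msg));
        msg.msg_control = control;
        msg.msg_controllen = sizeof(control);

        /* Error-queue reads never block; -1/EAGAIN means it is empty */
        if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
            break;
        }

        cm = CMSG_FIRSTHDR(&msg);
        if (!cm) {
            break;
        }
        serr = (struct sock_extended_err *)CMSG_DATA(cm);
        if (serr->ee_origin != SO_EE_ORIGIN_ZEROCOPY) {
            continue;   /* some other queued socket error */
        }

        /*
         * sendmsg() calls numbered ee_info..ee_data have completed.
         * SO_EE_CODE_ZEROCOPY_COPIED in ee_code means the kernel fell
         * back to copying for that range instead of true zero copy.
         */
        completed += serr->ee_data - serr->ee_info + 1;
    }

    return completed;
}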
Signed-off-by: Manish Mishra <manish.mishra@nutanix.com>
---
include/io/channel-socket.h | 1 +
io/channel-socket.c | 52 ++++++++++++++++++++++++++++++++-----
2 files changed, 46 insertions(+), 7 deletions(-)
diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
index ab15577d38..6cfc66eb5b 100644
--- a/include/io/channel-socket.h
+++ b/include/io/channel-socket.h
@@ -49,6 +49,7 @@ struct QIOChannelSocket {
socklen_t remoteAddrLen;
ssize_t zero_copy_queued;
ssize_t zero_copy_sent;
+ bool new_zero_copy_sent_success;
};
diff --git a/io/channel-socket.c b/io/channel-socket.c
index 608bcf066e..c7f576290f 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -37,6 +37,11 @@
#define SOCKET_MAX_FDS 16
+#ifdef QEMU_MSG_ZEROCOPY
+static int qio_channel_socket_flush_internal(QIOChannel *ioc,
+ Error **errp);
+#endif
+
SocketAddress *
qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
Error **errp)
@@ -65,6 +70,7 @@ qio_channel_socket_new(void)
sioc->fd = -1;
sioc->zero_copy_queued = 0;
sioc->zero_copy_sent = 0;
+ sioc->new_zero_copy_sent_success = FALSE;
ioc = QIO_CHANNEL(sioc);
qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
@@ -566,6 +572,7 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
size_t fdsize = sizeof(int) * nfds;
struct cmsghdr *cmsg;
int sflags = 0;
+ bool zero_copy_flush_pending = TRUE;
memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
@@ -612,9 +619,21 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
goto retry;
case ENOBUFS:
if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
- error_setg_errno(errp, errno,
- "Process can't lock enough memory for using MSG_ZEROCOPY");
- return -1;
+ if (zero_copy_flush_pending) {
+ ret = qio_channel_socket_flush_internal(ioc, errp);
+ if (ret < 0) {
+ error_setg_errno(errp, errno,
+ "Zerocopy flush failed");
+ return -1;
+ }
+ zero_copy_flush_pending = FALSE;
+ goto retry;
+ } else {
+ error_setg_errno(errp, errno,
+ "Process can't lock enough memory for "
+ "using MSG_ZEROCOPY");
+ return -1;
+ }
}
break;
}
@@ -725,8 +744,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
#ifdef QEMU_MSG_ZEROCOPY
-static int qio_channel_socket_flush(QIOChannel *ioc,
- Error **errp)
+static int qio_channel_socket_flush_internal(QIOChannel *ioc,
+ Error **errp)
{
QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
struct msghdr msg = {};
@@ -791,15 +810,34 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
/* No errors, count successfully finished sendmsg()*/
sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
- /* If any sendmsg() succeeded using zero copy, return 0 at the end */
+ /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
- ret = 0;
+ sioc->new_zero_copy_sent_success = TRUE;
}
}
return ret;
}
+static int qio_channel_socket_flush(QIOChannel *ioc,
+ Error **errp)
+{
+ QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
+ int ret;
+
+ ret = qio_channel_socket_flush_internal(ioc, errp);
+ if (ret < 0) {
+ return ret;
+ }
+
+ if (sioc->new_zero_copy_sent_success) {
+ sioc->new_zero_copy_sent_success = FALSE;
+ ret = 0;
+ }
+
+ return ret;
+}
+
#endif /* QEMU_MSG_ZEROCOPY */
static int
--
2.43.0
* Re: [PATCH] QIOChannelSocket: Flush zerocopy socket error queue on ENOBUF failure for sendmsg
2025-02-21 9:44 [PATCH] QIOChannelSocket: Flush zerocopy socket error queue on ENOBUF failure for sendmsg Manish Mishra
@ 2025-02-24 20:35 ` Peter Xu
2025-02-27 16:54 ` Manish
2025-02-25 9:07 ` Daniel P. Berrangé
1 sibling, 1 reply; 7+ messages in thread
From: Peter Xu @ 2025-02-24 20:35 UTC (permalink / raw)
To: Manish Mishra; +Cc: qemu-devel, leobras, berrange, Fabiano Rosas
On Fri, Feb 21, 2025 at 04:44:48AM -0500, Manish Mishra wrote:
> We allocate extra metadata SKBs in case of zerocopy send. This metadata memory
> is accounted for in the OPTMEM limit. If there is any error with sending
> zerocopy data or if zerocopy was skipped, these metadata SKBs are queued in the
> socket error queue. This error queue is freed when userspace reads it.
>
> Usually, if there are continuous failures, we merge the metadata into a single
> SKB and free another one. However, if there is any out-of-order processing or
> an intermittent zerocopy failures, this error chain can grow significantly,
> exhausting the OPTMEM limit. As a result, all new sendmsg requests fail to
> allocate any new SKB, leading to an ENOBUF error.
>
> To workaround this, if we encounter an ENOBUF error with a zerocopy sendmsg,
> we flush the error queue and retry once more.
Could you add some more info on how this issue can be reproduced? When it
happens, it should only report zerocopy as skipped, and that's the only case
where the change below would help, am I right? (Otherwise it should fail
sendmsg() anyway in some other form.)

Please copy Fabiano when you repost.
>
> Signed-off-by: Manish Mishra <manish.mishra@nutanix.com>
> ---
> include/io/channel-socket.h | 1 +
> io/channel-socket.c | 52 ++++++++++++++++++++++++++++++++-----
> 2 files changed, 46 insertions(+), 7 deletions(-)
>
> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
> index ab15577d38..6cfc66eb5b 100644
> --- a/include/io/channel-socket.h
> +++ b/include/io/channel-socket.h
> @@ -49,6 +49,7 @@ struct QIOChannelSocket {
> socklen_t remoteAddrLen;
> ssize_t zero_copy_queued;
> ssize_t zero_copy_sent;
> + bool new_zero_copy_sent_success;
> };
>
>
> diff --git a/io/channel-socket.c b/io/channel-socket.c
> index 608bcf066e..c7f576290f 100644
> --- a/io/channel-socket.c
> +++ b/io/channel-socket.c
> @@ -37,6 +37,11 @@
>
> #define SOCKET_MAX_FDS 16
>
> +#ifdef QEMU_MSG_ZEROCOPY
> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> + Error **errp);
> +#endif
> +
> SocketAddress *
> qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
> Error **errp)
> @@ -65,6 +70,7 @@ qio_channel_socket_new(void)
> sioc->fd = -1;
> sioc->zero_copy_queued = 0;
> sioc->zero_copy_sent = 0;
> + sioc->new_zero_copy_sent_success = FALSE;
>
> ioc = QIO_CHANNEL(sioc);
> qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
> @@ -566,6 +572,7 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> size_t fdsize = sizeof(int) * nfds;
> struct cmsghdr *cmsg;
> int sflags = 0;
> + bool zero_copy_flush_pending = TRUE;
It'll be nice to add some comment above this variable explaining the issue.
>
> memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
>
> @@ -612,9 +619,21 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> goto retry;
> case ENOBUFS:
> if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
> - error_setg_errno(errp, errno,
> - "Process can't lock enough memory for using MSG_ZEROCOPY");
> - return -1;
> + if (zero_copy_flush_pending) {
> + ret = qio_channel_socket_flush_internal(ioc, errp);
Calling a socket-specific helper in the generic qiochannel path may not be a
good idea.
Maybe we could still stick with qio_channel_flush(), but instead add a new
parameter to it, e.g. qio_channel_flush(..., bool *succeeded). Here we would
pass in "succeeded == NULL" and not collect the cached result, while the
multifd use case can pass in a valid "succeeded" pointer.
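Something along these lines, perhaps (the signature and call sites below are
only illustrative):

    /* hypothetical new signature */
    int qio_channel_flush(QIOChannel *ioc, bool *succeeded, Error **errp);

    /* in the writev() ENOBUFS path: just drain the error queue */
    qio_channel_flush(ioc, NULL, errp);

    /* in the multifd sync path: also collect whether zero copy was used */
    bool zero_copy_used = false;
    if (qio_channel_flush(ioc, &zero_copy_used, errp) < 0) {
        return -1;
    }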
> + if (ret < 0) {
> + error_setg_errno(errp, errno,
> + "Zerocopy flush failed");
> + return -1;
> + }
> + zero_copy_flush_pending = FALSE;
> + goto retry;
> + } else {
> + error_setg_errno(errp, errno,
> + "Process can't lock enough memory for "
> + "using MSG_ZEROCOPY");
> + return -1;
> + }
> }
> break;
> }
> @@ -725,8 +744,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>
>
> #ifdef QEMU_MSG_ZEROCOPY
> -static int qio_channel_socket_flush(QIOChannel *ioc,
> - Error **errp)
> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> + Error **errp)
> {
> QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
> struct msghdr msg = {};
> @@ -791,15 +810,34 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
> /* No errors, count successfully finished sendmsg()*/
> sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
>
> - /* If any sendmsg() succeeded using zero copy, return 0 at the end */
> + /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
> if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
> - ret = 0;
After this change, IIUC this function will return either -1 or 1, never 0.
Not sure whether that's intended. It may be worth adding a comment above the
function.
> + sioc->new_zero_copy_sent_success = TRUE;
> }
> }
>
> return ret;
> }
>
> +static int qio_channel_socket_flush(QIOChannel *ioc,
> + Error **errp)
> +{
> + QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
> + int ret;
> +
> + ret = qio_channel_socket_flush_internal(ioc, errp);
> + if (ret < 0) {
> + return ret;
> + }
> +
> + if (sioc->new_zero_copy_sent_success) {
> + sioc->new_zero_copy_sent_success = FALSE;
> + ret = 0;
> + }
> +
> + return ret;
> +}
> +
> #endif /* QEMU_MSG_ZEROCOPY */
>
> static int
> --
> 2.43.0
>
--
Peter Xu
* Re: [PATCH] QIOChannelSocket: Flush zerocopy socket error queue on ENOBUF failure for sendmsg
2025-02-21 9:44 [PATCH] QIOChannelSocket: Flush zerocopy socket error queue on ENOBUF failure for sendmsg Manish Mishra
2025-02-24 20:35 ` Peter Xu
@ 2025-02-25 9:07 ` Daniel P. Berrangé
2025-02-27 17:00 ` Manish
1 sibling, 1 reply; 7+ messages in thread
From: Daniel P. Berrangé @ 2025-02-25 9:07 UTC (permalink / raw)
To: Manish Mishra; +Cc: qemu-devel, leobras, peterx
On Fri, Feb 21, 2025 at 04:44:48AM -0500, Manish Mishra wrote:
> We allocate extra metadata SKBs in case of zerocopy send. This metadata memory
> is accounted for in the OPTMEM limit. If there is any error with sending
> zerocopy data or if zerocopy was skipped, these metadata SKBs are queued in the
> socket error queue. This error queue is freed when userspace reads it.
>
> Usually, if there are continuous failures, we merge the metadata into a single
> SKB and free another one. However, if there is any out-of-order processing or
> an intermittent zerocopy failures, this error chain can grow significantly,
> exhausting the OPTMEM limit. As a result, all new sendmsg requests fail to
> allocate any new SKB, leading to an ENOBUF error.
IIUC, you are effectively saying that the migration code is calling
qio_channel_write() too many times before it calls qio_channel_flush().

Can you clarify what you mean by the "OPTMEM limit" here? I'm wondering
whether this is potentially triggered by suboptimal tuning of the deployment
environment, or whether we need to document the tuning better.
> To workaround this, if we encounter an ENOBUF error with a zerocopy sendmsg,
> we flush the error queue and retry once more.
>
> Signed-off-by: Manish Mishra <manish.mishra@nutanix.com>
> ---
> include/io/channel-socket.h | 1 +
> io/channel-socket.c | 52 ++++++++++++++++++++++++++++++++-----
> 2 files changed, 46 insertions(+), 7 deletions(-)
>
> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
> index ab15577d38..6cfc66eb5b 100644
> --- a/include/io/channel-socket.h
> +++ b/include/io/channel-socket.h
> @@ -49,6 +49,7 @@ struct QIOChannelSocket {
> socklen_t remoteAddrLen;
> ssize_t zero_copy_queued;
> ssize_t zero_copy_sent;
> + bool new_zero_copy_sent_success;
> };
>
>
> diff --git a/io/channel-socket.c b/io/channel-socket.c
> index 608bcf066e..c7f576290f 100644
> --- a/io/channel-socket.c
> +++ b/io/channel-socket.c
> @@ -37,6 +37,11 @@
>
> #define SOCKET_MAX_FDS 16
>
> +#ifdef QEMU_MSG_ZEROCOPY
> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> + Error **errp);
> +#endif
> +
> SocketAddress *
> qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
> Error **errp)
> @@ -65,6 +70,7 @@ qio_channel_socket_new(void)
> sioc->fd = -1;
> sioc->zero_copy_queued = 0;
> sioc->zero_copy_sent = 0;
> + sioc->new_zero_copy_sent_success = FALSE;
>
> ioc = QIO_CHANNEL(sioc);
> qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
> @@ -566,6 +572,7 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> size_t fdsize = sizeof(int) * nfds;
> struct cmsghdr *cmsg;
> int sflags = 0;
> + bool zero_copy_flush_pending = TRUE;
>
> memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
>
> @@ -612,9 +619,21 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> goto retry;
> case ENOBUFS:
> if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
> - error_setg_errno(errp, errno,
> - "Process can't lock enough memory for using MSG_ZEROCOPY");
> - return -1;
> + if (zero_copy_flush_pending) {
> + ret = qio_channel_socket_flush_internal(ioc, errp);
Calling this is problematic, because qio_channel_socket_flush is
designed to block execution until all buffers are flushed. When
ioc is in non-blocking mode, this breaks the required semantics
of qio_channel_socket_writev, which must return EAGAIN and not
block.

IOW, if we need to be able to flush at this point, we must be
able to do a partial flush, rather than blocking on a full
flush.
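E.g. a hypothetical non-blocking variant of the internal flush loop could stop
at EAGAIN instead of waiting (rough shape only; the 'block' parameter is made
up for illustration):

    while (sioc->zero_copy_sent < sioc->zero_copy_queued) {
        ret = recvmsg(sioc->fd, &msg, MSG_ERRQUEUE);
        if (ret < 0 && errno == EAGAIN) {
            if (!block) {
                /* partial flush: drained what was ready, don't wait */
                return 0;
            }
            /* full flush: wait for further completions to arrive */
            qio_channel_wait(ioc, G_IO_ERR);
            continue;
        }
        /* ... handle other errors and process the notification as today ... */
    }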
> + if (ret < 0) {
> + error_setg_errno(errp, errno,
> + "Zerocopy flush failed");
> + return -1;
> + }
> + zero_copy_flush_pending = FALSE;
> + goto retry;
> + } else {
> + error_setg_errno(errp, errno,
> + "Process can't lock enough memory for "
> + "using MSG_ZEROCOPY");
> + return -1;
> + }
> }
> break;
> }
> @@ -725,8 +744,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>
>
> #ifdef QEMU_MSG_ZEROCOPY
> -static int qio_channel_socket_flush(QIOChannel *ioc,
> - Error **errp)
> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> + Error **errp)
> {
> QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
> struct msghdr msg = {};
> @@ -791,15 +810,34 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
> /* No errors, count successfully finished sendmsg()*/
> sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
>
> - /* If any sendmsg() succeeded using zero copy, return 0 at the end */
> + /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
> if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
> - ret = 0;
> + sioc->new_zero_copy_sent_success = TRUE;
> }
> }
>
> return ret;
> }
>
> +static int qio_channel_socket_flush(QIOChannel *ioc,
> + Error **errp)
> +{
> + QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
> + int ret;
> +
> + ret = qio_channel_socket_flush_internal(ioc, errp);
> + if (ret < 0) {
> + return ret;
> + }
> +
> + if (sioc->new_zero_copy_sent_success) {
> + sioc->new_zero_copy_sent_success = FALSE;
> + ret = 0;
> + }
> +
> + return ret;
> +}
I don't see the point of these changes adding new_zero_copy_sent_success.

IIUC, you seem to be trying to make it so that qio_channel_socket_flush
will return 0 if qio_channel_socket_writev had called
qio_channel_socket_flush_internal behind the scenes.

That should already work, though, as the first thing we do in flush
is to check whether anything is pending and, if not, return zero:

    if (sioc->zero_copy_queued == sioc->zero_copy_sent) {
        return 0;
    }

    ... do the real flush logic ...
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: [PATCH] QIOChannelSocket: Flush zerocopy socket error queue on ENOBUF failure for sendmsg
2025-02-24 20:35 ` Peter Xu
@ 2025-02-27 16:54 ` Manish
0 siblings, 0 replies; 7+ messages in thread
From: Manish @ 2025-02-27 16:54 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, leobras, berrange, Fabiano Rosas
Really sorry, it looks like there is some issue with my mail filters; I was
not aware this patch had already been reviewed. I just checked the qemu-devel
web archive by chance and saw the reviews. Sorry for missing them. I will
repost by early next week.
On 25/02/25 2:05 am, Peter Xu wrote:
> On Fri, Feb 21, 2025 at 04:44:48AM -0500, Manish Mishra wrote:
>> We allocate extra metadata SKBs in case of zerocopy send. This metadata memory
>> is accounted for in the OPTMEM limit. If there is any error with sending
>> zerocopy data or if zerocopy was skipped, these metadata SKBs are queued in the
>> socket error queue. This error queue is freed when userspace reads it.
>>
>> Usually, if there are continuous failures, we merge the metadata into a single
>> SKB and free another one. However, if there is any out-of-order processing or
>> an intermittent zerocopy failures, this error chain can grow significantly,
>> exhausting the OPTMEM limit. As a result, all new sendmsg requests fail to
>> allocate any new SKB, leading to an ENOBUF error.
>>
>> To workaround this, if we encounter an ENOBUF error with a zerocopy sendmsg,
>> we flush the error queue and retry once more.
> Could you add some more info on how this issue could be reproduced? When
> it happens, it should only report zerocopy skipped and that's the only case
> below would be helpful, am I right? (otherwise it should fail sendmsg()
> anyway in other form)
We allocate some memory for the zerocopy metadata; this is not accounted in
the tcp_send_queue, but it is accounted against the optmem limit:
https://github.com/torvalds/linux/blob/dd83757f6e686a2188997cb58b5975f744bb7786/net/core/skbuff.c#L1607
Also, when the zerocopy data is sent and acked, we try to free this
allocated skb, as can be seen in the code below:
https://github.com/torvalds/linux/blob/dd83757f6e686a2188997cb58b5975f744bb7786/net/core/skbuff.c#L1751
If we get __msg_zerocopy_callback() on contiguous ranges and
skb_zerocopy_notify_extend() succeeds, we merge the ranges and free up the
current skb. But if that is not the case, we insert that skb into the error
queue, and it won't be freed until we do an error-queue flush from userspace.
This can happen when zerocopy packets are skipped intermittently, or when
zerocopy is always skipped but we get out-of-order acks on packets. As a
result this error chain keeps growing, exhausting the optmem limit, so when a
new zerocopy sendmsg request comes in, it can't allocate the metadata and
returns ENOBUFS.

I understand there is a separate question of why zerocopy packets are getting
skipped, and that could be specific to our deployment. But live migration
should not fail either way; it is fine to mark zerocopy as skipped, but not to
fail?
>
> Please copy Fabiano when repost.
sure
>
>> Signed-off-by: Manish Mishra <manish.mishra@nutanix.com>
>> ---
>> include/io/channel-socket.h | 1 +
>> io/channel-socket.c | 52 ++++++++++++++++++++++++++++++++-----
>> 2 files changed, 46 insertions(+), 7 deletions(-)
>>
>> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
>> index ab15577d38..6cfc66eb5b 100644
>> --- a/include/io/channel-socket.h
>> +++ b/include/io/channel-socket.h
>> @@ -49,6 +49,7 @@ struct QIOChannelSocket {
>> socklen_t remoteAddrLen;
>> ssize_t zero_copy_queued;
>> ssize_t zero_copy_sent;
>> + bool new_zero_copy_sent_success;
>> };
>>
>>
>> diff --git a/io/channel-socket.c b/io/channel-socket.c
>> index 608bcf066e..c7f576290f 100644
>> --- a/io/channel-socket.c
>> +++ b/io/channel-socket.c
>> @@ -37,6 +37,11 @@
>>
>> #define SOCKET_MAX_FDS 16
>>
>> +#ifdef QEMU_MSG_ZEROCOPY
>> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
>> + Error **errp);
>> +#endif
>> +
>> SocketAddress *
>> qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
>> Error **errp)
>> @@ -65,6 +70,7 @@ qio_channel_socket_new(void)
>> sioc->fd = -1;
>> sioc->zero_copy_queued = 0;
>> sioc->zero_copy_sent = 0;
>> + sioc->new_zero_copy_sent_success = FALSE;
>>
>> ioc = QIO_CHANNEL(sioc);
>> qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
>> @@ -566,6 +572,7 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>> size_t fdsize = sizeof(int) * nfds;
>> struct cmsghdr *cmsg;
>> int sflags = 0;
>> + bool zero_copy_flush_pending = TRUE;
> It'll be nice to add some comment above this variable explaining the issue.
sure
>
>>
>> memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
>>
>> @@ -612,9 +619,21 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>> goto retry;
>> case ENOBUFS:
>> if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
>> - error_setg_errno(errp, errno,
>> - "Process can't lock enough memory for using MSG_ZEROCOPY");
>> - return -1;
>> + if (zero_copy_flush_pending) {
>> + ret = qio_channel_socket_flush_internal(ioc, errp);
> Calling a socket specific helper in generic qiochannel path may not be a
> good idea.
>
> Maybe we could still stick with qio_channel_flush(), but insteadx add a new
> parameter to qio_channel_flush(..., bool *succeeded), only pass in
> "succeeded == NULL" here, not collect the cached result. Then the other
> multifd use case it can pass in a valid "succeeded" pointer.
sure
>
>> + if (ret < 0) {
>> + error_setg_errno(errp, errno,
>> + "Zerocopy flush failed");
>> + return -1;
>> + }
>> + zero_copy_flush_pending = FALSE;
>> + goto retry;
>> + } else {
>> + error_setg_errno(errp, errno,
>> + "Process can't lock enough memory for "
>> + "using MSG_ZEROCOPY");
>> + return -1;
>> + }
>> }
>> break;
>> }
>> @@ -725,8 +744,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>>
>>
>> #ifdef QEMU_MSG_ZEROCOPY
>> -static int qio_channel_socket_flush(QIOChannel *ioc,
>> - Error **errp)
>> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
>> + Error **errp)
>> {
>> QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
>> struct msghdr msg = {};
>> @@ -791,15 +810,34 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
>> /* No errors, count successfully finished sendmsg()*/
>> sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
>>
>> - /* If any sendmsg() succeeded using zero copy, return 0 at the end */
>> + /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
>> if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
>> - ret = 0;
> After this change, IIUC this function will return either -1 or 1, never 0.
> Not sure whether it's intended. May worth add a comment above the function.
Yes, this function won't return 0, but it is anyway a new, internal function,
and qio_channel_socket_flush() will still have the same behavior.
>> + sioc->new_zero_copy_sent_success = TRUE;
>> }
>> }
>>
>> return ret;
>> }
>>
>> +static int qio_channel_socket_flush(QIOChannel *ioc,
>> + Error **errp)
>> +{
>> + QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
>> + int ret;
>> +
>> + ret = qio_channel_socket_flush_internal(ioc, errp);
>> + if (ret < 0) {
>> + return ret;
>> + }
>> +
>> + if (sioc->new_zero_copy_sent_success) {
>> + sioc->new_zero_copy_sent_success = FALSE;
>> + ret = 0;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> #endif /* QEMU_MSG_ZEROCOPY */
>>
>> static int
>> --
>> 2.43.0
>>
* Re: [PATCH] QIOChannelSocket: Flush zerocopy socket error queue on ENOBUF failure for sendmsg
2025-02-25 9:07 ` Daniel P. Berrangé
@ 2025-02-27 17:00 ` Manish
2025-02-27 17:56 ` Peter Xu
0 siblings, 1 reply; 7+ messages in thread
From: Manish @ 2025-02-27 17:00 UTC (permalink / raw)
To: Daniel P. Berrangé; +Cc: qemu-devel, leobras, peterx
Again, really sorry, I missed this due to some issue with my mail filters and
only came to know about it via the qemu-devel web archive. :)
On 25/02/25 2:37 pm, Daniel P. Berrangé wrote:
> On Fri, Feb 21, 2025 at 04:44:48AM -0500, Manish Mishra wrote:
>> We allocate extra metadata SKBs in case of zerocopy send. This metadata memory
>> is accounted for in the OPTMEM limit. If there is any error with sending
>> zerocopy data or if zerocopy was skipped, these metadata SKBs are queued in the
>> socket error queue. This error queue is freed when userspace reads it.
>>
>> Usually, if there are continuous failures, we merge the metadata into a single
>> SKB and free another one. However, if there is any out-of-order processing or
>> an intermittent zerocopy failures, this error chain can grow significantly,
>> exhausting the OPTMEM limit. As a result, all new sendmsg requests fail to
>> allocate any new SKB, leading to an ENOBUF error.
> IIUC, you are effectively saying that the migration code is calling
> qio_channel_write() too many times, before it calls qio_channel_flush(.)
>
> Can you clarify what yu mean by the "OPTMEM limit" here ? I'm wondering
> if this is potentially triggered by suboptimal tuning of the deployment
> environment or we need to document tuning better.
I replied to this on the other thread; posting it again.
We allocate some memory for the zerocopy metadata; this is not accounted in
the tcp_send_queue, but it is accounted against the optmem limit:
https://github.com/torvalds/linux/blob/dd83757f6e686a2188997cb58b5975f744bb7786/net/core/skbuff.c#L1607
Also, when the zerocopy data is sent and acked, we try to free this
allocated skb, as can be seen in the code below:
https://github.com/torvalds/linux/blob/dd83757f6e686a2188997cb58b5975f744bb7786/net/core/skbuff.c#L1751
If we get __msg_zerocopy_callback() on contiguous ranges and
skb_zerocopy_notify_extend() succeeds, we merge the ranges and free up the
current skb. But if that is not the case, we insert that skb into the error
queue, and it won't be freed until we do an error-queue flush from userspace.
This can happen when zerocopy packets are skipped intermittently, or when
zerocopy is always skipped but we get out-of-order acks on packets. As a
result this error chain keeps growing, exhausting the optmem limit, so when a
new zerocopy sendmsg request comes in, it can't allocate the metadata and
returns ENOBUFS.

I understand there is a separate question of why zerocopy packets are getting
skipped, and that could be specific to our deployment. But live migration
should not fail either way; it is fine to mark zerocopy as skipped, but not to
fail?
>> To workaround this, if we encounter an ENOBUF error with a zerocopy sendmsg,
>> we flush the error queue and retry once more.
>>
>> Signed-off-by: Manish Mishra<manish.mishra@nutanix.com>
>> ---
>> include/io/channel-socket.h | 1 +
>> io/channel-socket.c | 52 ++++++++++++++++++++++++++++++++-----
>> 2 files changed, 46 insertions(+), 7 deletions(-)
>>
>> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
>> index ab15577d38..6cfc66eb5b 100644
>> --- a/include/io/channel-socket.h
>> +++ b/include/io/channel-socket.h
>> @@ -49,6 +49,7 @@ struct QIOChannelSocket {
>> socklen_t remoteAddrLen;
>> ssize_t zero_copy_queued;
>> ssize_t zero_copy_sent;
>> + bool new_zero_copy_sent_success;
>> };
>>
>>
>> diff --git a/io/channel-socket.c b/io/channel-socket.c
>> index 608bcf066e..c7f576290f 100644
>> --- a/io/channel-socket.c
>> +++ b/io/channel-socket.c
>> @@ -37,6 +37,11 @@
>>
>> #define SOCKET_MAX_FDS 16
>>
>> +#ifdef QEMU_MSG_ZEROCOPY
>> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
>> + Error **errp);
>> +#endif
>> +
>> SocketAddress *
>> qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
>> Error **errp)
>> @@ -65,6 +70,7 @@ qio_channel_socket_new(void)
>> sioc->fd = -1;
>> sioc->zero_copy_queued = 0;
>> sioc->zero_copy_sent = 0;
>> + sioc->new_zero_copy_sent_success = FALSE;
>>
>> ioc = QIO_CHANNEL(sioc);
>> qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
>> @@ -566,6 +572,7 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>> size_t fdsize = sizeof(int) * nfds;
>> struct cmsghdr *cmsg;
>> int sflags = 0;
>> + bool zero_copy_flush_pending = TRUE;
>>
>> memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
>>
>> @@ -612,9 +619,21 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>> goto retry;
>> case ENOBUFS:
>> if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
>> - error_setg_errno(errp, errno,
>> - "Process can't lock enough memory for using MSG_ZEROCOPY");
>> - return -1;
>> + if (zero_copy_flush_pending) {
>> + ret = qio_channel_socket_flush_internal(ioc, errp);
> Calling this is problematic, because qio_channel_socket_flush is
> designed to block execution until all buffers are flushed. When
> ioc is in non-blocking mode, this breaks the required semantics
> of qio_channel_socket_writev which must return EAGAIN and not
> block.
>
> IOW, if we need to be able to flush at this point, we must be
> able to do a partial flush, rather than blocking on a full
> flush
sure
>> + if (ret < 0) {
>> + error_setg_errno(errp, errno,
>> + "Zerocopy flush failed");
>> + return -1;
>> + }
>> + zero_copy_flush_pending = FALSE;
>> + goto retry;
>> + } else {
>> + error_setg_errno(errp, errno,
>> + "Process can't lock enough memory for "
>> + "using MSG_ZEROCOPY");
>> + return -1;
>> + }
>> }
>> break;
>> }
>> @@ -725,8 +744,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>>
>>
>> #ifdef QEMU_MSG_ZEROCOPY
>> -static int qio_channel_socket_flush(QIOChannel *ioc,
>> - Error **errp)
>> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
>> + Error **errp)
>> {
>> QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
>> struct msghdr msg = {};
>> @@ -791,15 +810,34 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
>> /* No errors, count successfully finished sendmsg()*/
>> sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
>>
>> - /* If any sendmsg() succeeded using zero copy, return 0 at the end */
>> + /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
>> if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
>> - ret = 0;
>> + sioc->new_zero_copy_sent_success = TRUE;
>> }
>> }
>>
>> return ret;
>> }
>>
>> +static int qio_channel_socket_flush(QIOChannel *ioc,
>> + Error **errp)
>> +{
>> + QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
>> + int ret;
>> +
>> + ret = qio_channel_socket_flush_internal(ioc, errp);
>> + if (ret < 0) {
>> + return ret;
>> + }
>> +
>> + if (sioc->new_zero_copy_sent_success) {
>> + sioc->new_zero_copy_sent_success = FALSE;
>> + ret = 0;
>> + }
>> +
>> + return ret;
>> +}
> I don't see the point in these changes adding new_zero_copy_sent_success.
>
> IIUC, you seem to be trying to make it so that qio_channel_socket_flush
> will return 0, if qio_channel_socket_writev had called
> qio_channel_socket_flush_internal behind the scenes.
> That should already be working though, as the first thing we do in flush
> is to check if anything was pending and, if not, then return zero:
>
> if (sioc->zero_copy_queued == sioc->zero_copy_sent) {
> return 0;
> }
> ....do the real flush logic....
>
>
> With regards,
> Daniel
Yes, but the current logic is: if there was any successful zerocopy send in
the iteration, return 0. I did not want to change that behavior. Now
qio_channel_socket_flush_internal() can be called at any point in time, and as
many times as needed, depending on when the optmem limit fills up. So we may
or may not have any data to process when the actual qio_channel_socket_flush()
call comes.
Thanks
Manish Mishra
* Re: [PATCH] QIOChannelSocket: Flush zerocopy socket error queue on ENOBUF failure for sendmsg
2025-02-27 17:00 ` Manish
@ 2025-02-27 17:56 ` Peter Xu
2025-02-27 18:58 ` Manish
0 siblings, 1 reply; 7+ messages in thread
From: Peter Xu @ 2025-02-27 17:56 UTC (permalink / raw)
To: Manish; +Cc: Daniel P. Berrangé, qemu-devel, leobras
On Thu, Feb 27, 2025 at 10:30:31PM +0530, Manish wrote:
> Again really sorry, missed this due to some issue with my mail filters and
> came to know about it via qemu-devel weblink. :)
>
> On 25/02/25 2:37 pm, Daniel P. Berrangé wrote:
> > On Fri, Feb 21, 2025 at 04:44:48AM -0500, Manish Mishra wrote:
> > > We allocate extra metadata SKBs in case of zerocopy send. This metadata memory
> > > is accounted for in the OPTMEM limit. If there is any error with sending
> > > zerocopy data or if zerocopy was skipped, these metadata SKBs are queued in the
> > > socket error queue. This error queue is freed when userspace reads it.
> > >
> > > Usually, if there are continuous failures, we merge the metadata into a single
> > > SKB and free another one. However, if there is any out-of-order processing or
> > > an intermittent zerocopy failures, this error chain can grow significantly,
> > > exhausting the OPTMEM limit. As a result, all new sendmsg requests fail to
> > > allocate any new SKB, leading to an ENOBUF error.
> > IIUC, you are effectively saying that the migration code is calling
> > qio_channel_write() too many times, before it calls qio_channel_flush(.)
> >
> > Can you clarify what yu mean by the "OPTMEM limit" here ? I'm wondering
> > if this is potentially triggered by suboptimal tuning of the deployment
> > environment or we need to document tuning better.
>
>
> I replied it on other thread. Posting it again.
>
> We allocate some memory for zerocopy metadata, this is not accounted in
> tcp_send_queue but it is accounted in optmem_limit.
>
> https://github.com/torvalds/linux/blob/dd83757f6e686a2188997cb58b5975f744bb7786/net/core/skbuff.c#L1607
>
> Also when the zerocopy data is sent and acked, we try to free this
> allocated skb as we can see in below code.
>
> https://github.com/torvalds/linux/blob/dd83757f6e686a2188997cb58b5975f744bb7786/net/core/skbuff.c#L1751
>
> In case, we get __msg_zerocopy_callback() on continous ranges and
> skb_zerocopy_notify_extend() passes, we merge the ranges and free up the
> current skb. But if that is not the case, we insert that skb in error
> queue and it won't be freed until we do error flush from userspace. This
> is possible when either zerocopy packets are skipped in between or it is
> always skipped but we get out of order acks on packets. As a result this
> error chain keeps growing, exhausthing the optmem_limit. As a result
> when new zerocopy sendmsg request comes, it won't be able to allocate
> the metadata and returns with ENOBUF.
>
> I understand there is another bug of why zerocopy pakcets are getting
> skipped and which could be our deployment specific. But anyway live
> migrations should not fail, it is fine to mark zerocopy skipped but not
> fail?
>
>
> > > To workaround this, if we encounter an ENOBUF error with a zerocopy sendmsg,
> > > we flush the error queue and retry once more.
> > >
> > > Signed-off-by: Manish Mishra<manish.mishra@nutanix.com>
> > > ---
> > > include/io/channel-socket.h | 1 +
> > > io/channel-socket.c | 52 ++++++++++++++++++++++++++++++++-----
> > > 2 files changed, 46 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
> > > index ab15577d38..6cfc66eb5b 100644
> > > --- a/include/io/channel-socket.h
> > > +++ b/include/io/channel-socket.h
> > > @@ -49,6 +49,7 @@ struct QIOChannelSocket {
> > > socklen_t remoteAddrLen;
> > > ssize_t zero_copy_queued;
> > > ssize_t zero_copy_sent;
> > > + bool new_zero_copy_sent_success;
> > > };
> > > diff --git a/io/channel-socket.c b/io/channel-socket.c
> > > index 608bcf066e..c7f576290f 100644
> > > --- a/io/channel-socket.c
> > > +++ b/io/channel-socket.c
> > > @@ -37,6 +37,11 @@
> > > #define SOCKET_MAX_FDS 16
> > > +#ifdef QEMU_MSG_ZEROCOPY
> > > +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> > > + Error **errp);
> > > +#endif
> > > +
> > > SocketAddress *
> > > qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
> > > Error **errp)
> > > @@ -65,6 +70,7 @@ qio_channel_socket_new(void)
> > > sioc->fd = -1;
> > > sioc->zero_copy_queued = 0;
> > > sioc->zero_copy_sent = 0;
> > > + sioc->new_zero_copy_sent_success = FALSE;
> > > ioc = QIO_CHANNEL(sioc);
> > > qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
> > > @@ -566,6 +572,7 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> > > size_t fdsize = sizeof(int) * nfds;
> > > struct cmsghdr *cmsg;
> > > int sflags = 0;
> > > + bool zero_copy_flush_pending = TRUE;
> > > memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
> > > @@ -612,9 +619,21 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> > > goto retry;
> > > case ENOBUFS:
> > > if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
> > > - error_setg_errno(errp, errno,
> > > - "Process can't lock enough memory for using MSG_ZEROCOPY");
> > > - return -1;
> > > + if (zero_copy_flush_pending) {
> > > + ret = qio_channel_socket_flush_internal(ioc, errp);
> > Calling this is problematic, because qio_channel_socket_flush is
> > designed to block execution until all buffers are flushed. When
> > ioc is in non-blocking mode, this breaks the required semantics
> > of qio_channel_socket_writev which must return EAGAIN and not
> > block.
> >
> > IOW, if we need to be able to flush at this point, we must be
> > able to do a partial flush, rather than blocking on a full
> > flush
>
> sure
>
> > > + if (ret < 0) {
> > > + error_setg_errno(errp, errno,
> > > + "Zerocopy flush failed");
> > > + return -1;
> > > + }
> > > + zero_copy_flush_pending = FALSE;
> > > + goto retry;
> > > + } else {
> > > + error_setg_errno(errp, errno,
> > > + "Process can't lock enough memory for "
> > > + "using MSG_ZEROCOPY");
> > > + return -1;
> > > + }
> > > }
> > > break;
> > > }
> > > @@ -725,8 +744,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> > > #ifdef QEMU_MSG_ZEROCOPY
> > > -static int qio_channel_socket_flush(QIOChannel *ioc,
> > > - Error **errp)
> > > +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> > > + Error **errp)
> > > {
> > > QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
> > > struct msghdr msg = {};
> > > @@ -791,15 +810,34 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
> > > /* No errors, count successfully finished sendmsg()*/
> > > sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
> > > - /* If any sendmsg() succeeded using zero copy, return 0 at the end */
> > > + /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
> > > if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
> > > - ret = 0;
> > > + sioc->new_zero_copy_sent_success = TRUE;
> > > }
> > > }
> > > return ret;
> > > }
> > > +static int qio_channel_socket_flush(QIOChannel *ioc,
> > > + Error **errp)
> > > +{
> > > + QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
> > > + int ret;
> > > +
> > > + ret = qio_channel_socket_flush_internal(ioc, errp);
> > > + if (ret < 0) {
> > > + return ret;
> > > + }
> > > +
> > > + if (sioc->new_zero_copy_sent_success) {
> > > + sioc->new_zero_copy_sent_success = FALSE;
> > > + ret = 0;
> > > + }
> > > +
> > > + return ret;
> > > +}
> > I don't see the point in these changes adding new_zero_copy_sent_success.
> >
> > IIUC, you seem to be trying to make it so that qio_channel_socket_flush
> > will return 0, if qio_channel_socket_writev had called
> > qio_channel_socket_flush_internal behind the scenes.
> > That should already be working though, as the first thing we do in flush
> > is to check if anything was pending and, if not, then return zero:
> >
> > if (sioc->zero_copy_queued == sioc->zero_copy_sent) {
> > return 0;
> > }
> > ....do the real flush logic....
> >
> >
> > With regards,
> > Daniel
>
>
> yes but current logic is, if there was any successful zerocopy send in the
> iteration, return 0. I did not want to change the behavior. Now
> qio_channel_socket_flush_internal() can be called at any point of time and
> as many times depending on when the optmem_limit is full. So we may or may
> not have any data to process when actual qio_channel_socket_flush() comes.
IIUC the whole point of that flag is to guarantee that the current stat
counter (dirty_sync_missed_zero_copy) keeps working. That counter looks like
it is pretty much for debugging purposes. If we want, I think we can drop it
and replace it with a tracepoint:
diff --git a/io/channel-socket.c b/io/channel-socket.c
index 608bcf066e..bccb464c9b 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -791,10 +791,9 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
/* No errors, count successfully finished sendmsg()*/
sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
- /* If any sendmsg() succeeded using zero copy, return 0 at the end */
- if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
- ret = 0;
- }
+ /* Enable this tracepoint if one wants to track zerocopy fallbacks */
+ trace_qio_channel_socket_zerocopy_fallback(
+ serr->ee_code == SO_EE_CODE_ZEROCOPY_COPIED);
}
Then we deprecate this entry in QAPI:
# @dirty-sync-missed-zero-copy: Number of times dirty RAM
# synchronization could not avoid copying dirty pages. This is
# between 0 and @dirty-sync-count * @multifd-channels. (since
# 7.1)
I doubt any mgmt app consumes it at all. Probably we shouldn't ever have
exposed it in QAPI in the first place...
--
Peter Xu
* Re: [PATCH] QIOChannelSocket: Flush zerocopy socket error queue on ENOBUF failure for sendmsg
2025-02-27 17:56 ` Peter Xu
@ 2025-02-27 18:58 ` Manish
0 siblings, 0 replies; 7+ messages in thread
From: Manish @ 2025-02-27 18:58 UTC (permalink / raw)
To: Peter Xu; +Cc: Daniel P. Berrangé, qemu-devel, leobras
On 27/02/25 11:26 pm, Peter Xu wrote:
> On Thu, Feb 27, 2025 at 10:30:31PM +0530, Manish wrote:
>> Again really sorry, missed this due to some issue with my mail filters and
>> came to know about it via qemu-devel weblink. :)
>>
>> On 25/02/25 2:37 pm, Daniel P. Berrangé wrote:
>>> On Fri, Feb 21, 2025 at 04:44:48AM -0500, Manish Mishra wrote:
>>>> We allocate extra metadata SKBs in case of zerocopy send. This metadata memory
>>>> is accounted for in the OPTMEM limit. If there is any error with sending
>>>> zerocopy data or if zerocopy was skipped, these metadata SKBs are queued in the
>>>> socket error queue. This error queue is freed when userspace reads it.
>>>>
>>>> Usually, if there are continuous failures, we merge the metadata into a single
>>>> SKB and free another one. However, if there is any out-of-order processing or
>>>> an intermittent zerocopy failures, this error chain can grow significantly,
>>>> exhausting the OPTMEM limit. As a result, all new sendmsg requests fail to
>>>> allocate any new SKB, leading to an ENOBUF error.
>>> IIUC, you are effectively saying that the migration code is calling
>>> qio_channel_write() too many times, before it calls qio_channel_flush(.)
>>>
>>> Can you clarify what yu mean by the "OPTMEM limit" here ? I'm wondering
>>> if this is potentially triggered by suboptimal tuning of the deployment
>>> environment or we need to document tuning better.
>>
>> I replied it on other thread. Posting it again.
>>
>> We allocate some memory for zerocopy metadata, this is not accounted in
>> tcp_send_queue but it is accounted in optmem_limit.
>>
>> https://github.com/torvalds/linux/blob/dd83757f6e686a2188997cb58b5975f744bb7786/net/core/skbuff.c#L1607
>>
>> Also when the zerocopy data is sent and acked, we try to free this
>> allocated skb as we can see in below code.
>>
>> https://github.com/torvalds/linux/blob/dd83757f6e686a2188997cb58b5975f744bb7786/net/core/skbuff.c#L1751
>>
>> In case, we get __msg_zerocopy_callback() on continous ranges and
>> skb_zerocopy_notify_extend() passes, we merge the ranges and free up the
>> current skb. But if that is not the case, we insert that skb in error
>> queue and it won't be freed until we do error flush from userspace. This
>> is possible when either zerocopy packets are skipped in between or it is
>> always skipped but we get out of order acks on packets. As a result this
>> error chain keeps growing, exhausthing the optmem_limit. As a result
>> when new zerocopy sendmsg request comes, it won't be able to allocate
>> the metadata and returns with ENOBUF.
>>
>> I understand there is another bug of why zerocopy pakcets are getting
>> skipped and which could be our deployment specific. But anyway live
>> migrations should not fail, it is fine to mark zerocopy skipped but not
>> fail?
>>
>>
>>>> To workaround this, if we encounter an ENOBUF error with a zerocopy sendmsg,
>>>> we flush the error queue and retry once more.
>>>>
>>>> Signed-off-by: Manish Mishra<manish.mishra@nutanix.com>
>>>> ---
>>>> include/io/channel-socket.h | 1 +
>>>> io/channel-socket.c | 52 ++++++++++++++++++++++++++++++++-----
>>>> 2 files changed, 46 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
>>>> index ab15577d38..6cfc66eb5b 100644
>>>> --- a/include/io/channel-socket.h
>>>> +++ b/include/io/channel-socket.h
>>>> @@ -49,6 +49,7 @@ struct QIOChannelSocket {
>>>> socklen_t remoteAddrLen;
>>>> ssize_t zero_copy_queued;
>>>> ssize_t zero_copy_sent;
>>>> + bool new_zero_copy_sent_success;
>>>> };
>>>> diff --git a/io/channel-socket.c b/io/channel-socket.c
>>>> index 608bcf066e..c7f576290f 100644
>>>> --- a/io/channel-socket.c
>>>> +++ b/io/channel-socket.c
>>>> @@ -37,6 +37,11 @@
>>>> #define SOCKET_MAX_FDS 16
>>>> +#ifdef QEMU_MSG_ZEROCOPY
>>>> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
>>>> + Error **errp);
>>>> +#endif
>>>> +
>>>> SocketAddress *
>>>> qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
>>>> Error **errp)
>>>> @@ -65,6 +70,7 @@ qio_channel_socket_new(void)
>>>> sioc->fd = -1;
>>>> sioc->zero_copy_queued = 0;
>>>> sioc->zero_copy_sent = 0;
>>>> + sioc->new_zero_copy_sent_success = FALSE;
>>>> ioc = QIO_CHANNEL(sioc);
>>>> qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
>>>> @@ -566,6 +572,7 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>>>> size_t fdsize = sizeof(int) * nfds;
>>>> struct cmsghdr *cmsg;
>>>> int sflags = 0;
>>>> + bool zero_copy_flush_pending = TRUE;
>>>> memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
>>>> @@ -612,9 +619,21 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>>>> goto retry;
>>>> case ENOBUFS:
>>>> if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
>>>> - error_setg_errno(errp, errno,
>>>> - "Process can't lock enough memory for using MSG_ZEROCOPY");
>>>> - return -1;
>>>> + if (zero_copy_flush_pending) {
>>>> + ret = qio_channel_socket_flush_internal(ioc, errp);
>>> Calling this is problematic, because qio_channel_socket_flush is
>>> designed to block execution until all buffers are flushed. When
>>> ioc is in non-blocking mode, this breaks the required semantics
>>> of qio_channel_socket_writev which must return EAGAIN and not
>>> block.
>>>
>>> IOW, if we need to be able to flush at this point, we must be
>>> able to do a partial flush, rather than blocking on a full
>>> flush
>> sure
>>
>>>> + if (ret < 0) {
>>>> + error_setg_errno(errp, errno,
>>>> + "Zerocopy flush failed");
>>>> + return -1;
>>>> + }
>>>> + zero_copy_flush_pending = FALSE;
>>>> + goto retry;
>>>> + } else {
>>>> + error_setg_errno(errp, errno,
>>>> + "Process can't lock enough memory for "
>>>> + "using MSG_ZEROCOPY");
>>>> + return -1;
>>>> + }
>>>> }
>>>> break;
>>>> }
>>>> @@ -725,8 +744,8 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
>>>> #ifdef QEMU_MSG_ZEROCOPY
>>>> -static int qio_channel_socket_flush(QIOChannel *ioc,
>>>> - Error **errp)
>>>> +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
>>>> + Error **errp)
>>>> {
>>>> QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
>>>> struct msghdr msg = {};
>>>> @@ -791,15 +810,34 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
>>>> /* No errors, count successfully finished sendmsg()*/
>>>> sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
>>>> - /* If any sendmsg() succeeded using zero copy, return 0 at the end */
>>>> + /* If any sendmsg() succeeded using zero copy, mark zerocopy success */
>>>> if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
>>>> - ret = 0;
>>>> + sioc->new_zero_copy_sent_success = TRUE;
>>>> }
>>>> }
>>>> return ret;
>>>> }
>>>> +static int qio_channel_socket_flush(QIOChannel *ioc,
>>>> + Error **errp)
>>>> +{
>>>> + QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
>>>> + int ret;
>>>> +
>>>> + ret = qio_channel_socket_flush_internal(ioc, errp);
>>>> + if (ret < 0) {
>>>> + return ret;
>>>> + }
>>>> +
>>>> + if (sioc->new_zero_copy_sent_success) {
>>>> + sioc->new_zero_copy_sent_success = FALSE;
>>>> + ret = 0;
>>>> + }
>>>> +
>>>> + return ret;
>>>> +}
>>> I don't see the point in these changes adding new_zero_copy_sent_success.
>>>
>>> IIUC, you seem to be trying to make it so that qio_channel_socket_flush
>>> will return 0, if qio_channel_socket_writev had called
>>> qio_channel_socket_flush_internal behind the scenes.
>>> That should already be working though, as the first thing we do in flush
>>> is to check if anything was pending and, if not, then return zero:
>>>
>>> if (sioc->zero_copy_queued == sioc->zero_copy_sent) {
>>> return 0;
>>> }
>>> ....do the real flush logic....
>>>
>>>
>>> With regards,
>>> Daniel
>>
>> yes but current logic is, if there was any successful zerocopy send in the
>> iteration, return 0. I did not want to change the behavior. Now
>> qio_channel_socket_flush_internal() can be called at any point of time and
>> as many times depending on when the optmem_limit is full. So we may or may
>> not have any data to process when actual qio_channel_socket_flush() comes.
> IIUC the whole point of that flag was to guarantee the current stat counter
> (dirty_sync_missed_zero_copy) keeps working. That counter looks like pretty
> much for debugging purpose. If we want, I think we can drop that but
> replacing it with a tracepoint:
>
> diff --git a/io/channel-socket.c b/io/channel-socket.c
> index 608bcf066e..bccb464c9b 100644
> --- a/io/channel-socket.c
> +++ b/io/channel-socket.c
> @@ -791,10 +791,9 @@ static int qio_channel_socket_flush(QIOChannel *ioc,
> /* No errors, count successfully finished sendmsg()*/
> sioc->zero_copy_sent += serr->ee_data - serr->ee_info + 1;
>
> - /* If any sendmsg() succeeded using zero copy, return 0 at the end */
> - if (serr->ee_code != SO_EE_CODE_ZEROCOPY_COPIED) {
> - ret = 0;
> - }
> + /* Enable this tracepoint if one wants to track zerocopy fallbacks */
> + trace_qio_channel_socket_zerocopy_fallback(
> + serr->ee_code == SO_EE_CODE_ZEROCOPY_COPIED);
> }
>
> Then we deprecate this entry in QAPI:
>
> # @dirty-sync-missed-zero-copy: Number of times dirty RAM
> # synchronization could not avoid copying dirty pages. This is
> # between 0 and @dirty-sync-count * @multifd-channels. (since
> # 7.1)
>
> I doubt any mgmt consumes it at all. Probably we shouldn't ever expose
> that to QAPI..
>
Sure, I will send an updated patch early next week.
Thanks
Manish Mishra