From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Manish <manish.mishra@nutanix.com>
Cc: Fabiano Rosas <farosas@suse.de>,
	qemu-devel@nongnu.org, peterx@redhat.com, leobras@redhat.com
Subject: Re: [PATCH v4] QIOChannelSocket: Flush zerocopy socket error queue on sendmsg failure due to ENOBUFS
Date: Tue, 15 Apr 2025 10:26:42 +0100	[thread overview]
Message-ID: <Z_4mUkuAXcTXyx5B@redhat.com> (raw)
In-Reply-To: <14f644e9-aa8a-45b0-9d0a-972d72345409@nutanix.com>

On Tue, Apr 15, 2025 at 02:50:39PM +0530, Manish wrote:
> 
> On 14/04/25 7:56 pm, Fabiano Rosas wrote:
> > 
> > Manish Mishra <manish.mishra@nutanix.com> writes:
> > 
> > > We allocate extra metadata SKBs in case of a zerocopy send. This metadata
> > > memory is accounted for in the OPTMEM limit. If there is any error while
> > > sending zerocopy packets or if zerocopy is skipped, these metadata SKBs are
> > > queued in the socket error queue. This error queue is freed when userspace
> > > reads it.
> > > 
> > > Usually, if there are continuous failures, the kernel merges the metadata
> > > into a single SKB and frees the other, so the error queue never exceeds
> > > the OPTMEM limit. However, if there is any out-of-order processing or
> > > intermittent zerocopy failures, this error chain can grow significantly,
> > > exhausting the OPTMEM limit. As a result, all new sendmsg requests fail
> > > to allocate any new SKB, leading to an ENOBUFS error. Depending on the
> > > amount of data queued before the flush (e.g., large live migration
> > > iterations), even large OPTMEM limits are prone to failure.
> > > 
> > > To work around this, if we encounter an ENOBUFS error on a zerocopy
> > > sendmsg, we flush the error queue and retry the send once.
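(For background: the "OPTMEM limit" above is the per-socket optional-memory
cap that Linux exposes as the net.core.optmem_max sysctl, and the error
queue is drained from userspace with recvmsg(MSG_ERRQUEUE). Below is a
minimal sketch of such a drain loop, with a hypothetical helper name; the
patch's real logic lives in qio_channel_socket_flush_internal(), which
additionally inspects the zerocopy completion notifications.)

    #include <errno.h>
    #include <sys/socket.h>
    #include <linux/errqueue.h>

    /* Hypothetical illustration, not the patch's code: read (and thereby
     * free) the metadata SKBs queued on the socket's error queue. */
    static int drain_zerocopy_errqueue(int fd)
    {
        char control[CMSG_SPACE(sizeof(struct sock_extended_err))];

        for (;;) {
            struct msghdr msg = { 0 };
            msg.msg_control = control;
            msg.msg_controllen = sizeof(control);

            /* Each successful recvmsg() releases one queued error SKB. */
            if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
                /* Error-queue reads do not block: EAGAIN means drained. */
                return errno == EAGAIN ? 0 : -1;
            }
        }
    }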
> > > 
> > > Signed-off-by: Manish Mishra <manish.mishra@nutanix.com>
> > > ---
> > >   include/io/channel-socket.h |  5 +++
> > >   io/channel-socket.c         | 74 ++++++++++++++++++++++++++++++-------
> > >   2 files changed, 65 insertions(+), 14 deletions(-)
> > > 
> > > V2:
> > >    1. Removed the dirty_sync_missed_zero_copy migration stat.
> > >    2. Made the call to qio_channel_socket_flush_internal() from
> > >       qio_channel_socket_writev() non-blocking.
> > > 
> > > V3:
> > >    1. Add the dirty_sync_missed_zero_copy migration stat again.
> > > 
> > > V4:
> > >    1. Minor nit to rename s/zero_copy_flush_pending/zerocopy_flushed_once.
> > > 
> > > diff --git a/include/io/channel-socket.h b/include/io/channel-socket.h
> > > index ab15577d38..2c48b972e8 100644
> > > --- a/include/io/channel-socket.h
> > > +++ b/include/io/channel-socket.h
> > > @@ -49,6 +49,11 @@ struct QIOChannelSocket {
> > >       socklen_t remoteAddrLen;
> > >       ssize_t zero_copy_queued;
> > >       ssize_t zero_copy_sent;
> > > +    /**
> > > +     * This flag indicates whether any new data was successfully sent with
> > > +     * zerocopy since the last qio_channel_socket_flush() call.
> > > +     */
> > > +    bool new_zero_copy_sent_success;
> > >   };
> > > diff --git a/io/channel-socket.c b/io/channel-socket.c
> > > index 608bcf066e..d5882c16fe 100644
> > > --- a/io/channel-socket.c
> > > +++ b/io/channel-socket.c
> > > @@ -37,6 +37,12 @@
> > >   #define SOCKET_MAX_FDS 16
> > > +#ifdef QEMU_MSG_ZEROCOPY
> > > +static int qio_channel_socket_flush_internal(QIOChannel *ioc,
> > > +                                             bool block,
> > > +                                             Error **errp);
> > > +#endif
> > > +
> > >   SocketAddress *
> > >   qio_channel_socket_get_local_address(QIOChannelSocket *ioc,
> > >                                        Error **errp)
> > > @@ -65,6 +71,7 @@ qio_channel_socket_new(void)
> > >       sioc->fd = -1;
> > >       sioc->zero_copy_queued = 0;
> > >       sioc->zero_copy_sent = 0;
> > > +    sioc->new_zero_copy_sent_success = false;
> > >       ioc = QIO_CHANNEL(sioc);
> > >       qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
> > > @@ -566,6 +573,7 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> > >       size_t fdsize = sizeof(int) * nfds;
> > >       struct cmsghdr *cmsg;
> > >       int sflags = 0;
> > > +    bool zerocopy_flushed_once = false;
> > >       memset(control, 0, CMSG_SPACE(sizeof(int) * SOCKET_MAX_FDS));
> > > @@ -612,9 +620,25 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
> > >               goto retry;
> > >           case ENOBUFS:
> > >               if (flags & QIO_CHANNEL_WRITE_FLAG_ZERO_COPY) {
> > > -                error_setg_errno(errp, errno,
> > > -                                 "Process can't lock enough memory for using MSG_ZEROCOPY");
> > > -                return -1;
> > > +                /**
> > > +                 * Socket error queueing may exhaust the OPTMEM limit. Try
> > > +                 * flushing the error queue once.
> > > +                 */
> > > +                if (!zerocopy_flushed_once) {
> > > +                    ret = qio_channel_socket_flush_internal(ioc, false, errp);
> > I'm not following this closely, so I might have missed some discussion,
> > but let me point out that the previous version had a review comment about
> > hardcoding 'false' here, and I don't see it addressed, nor any reply
> > explaining why it wasn't.
> 
> Hi Fabiano, I did reply to that in the last comment on v3. Please let me know
> if that does not make sense. https://lore.kernel.org/all/c7a86623-db04-459f-afd5-6a318475bb92@nutanix.com/T/

That comment doesn't really address the problem.

If the socket is in blocking mode, we *MUST* block until all
data has been sent. Returning early with a partial send when
the zerocopy buffers are full does not match the requested
semantics for blocking mode.
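(As an illustration of that contract, here is a minimal sketch over plain
send(2); the helper name is hypothetical and this is not QEMU's actual
writev path. In blocking mode the only valid outcomes are "all bytes sent"
or a hard error, never a silent partial send.)

    #include <errno.h>
    #include <sys/socket.h>

    static ssize_t send_all_blocking(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        size_t done = 0;

        while (done < len) {
            ssize_t n = send(fd, p + done, len - done, 0);
            if (n < 0) {
                if (errno == EINTR) {
                    continue;   /* interrupted: retry, do not return early */
                }
                return -1;      /* hard error: the only valid early exit */
            }
            done += n;
        }
        return done;
    }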

> 
> 
> > 
> > > +                    if (ret < 0) {
> > > +                        /* errp was already set by the failed flush;
> > > +                         * setting it again would trip the Error API. */
> > > +                        return -1;
> > > +                    }
> > > +                    zerocopy_flushed_once = true;
> > > +                    goto retry;
> > > +                } else {
> > > +                    error_setg_errno(errp, errno,
> > > +                                     "Process can't lock enough memory for "
> > > +                                     "using MSG_ZEROCOPY");
> > > +                    return -1;
> > > +                }
> > >               }
> > >               break;

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


