From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: netdev@vger.kernel.org, "Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
"Eugenio Pérez" <eperezma@redhat.com>,
linux-kernel@vger.kernel.org, "Simon Horman" <horms@kernel.org>,
"Paolo Abeni" <pabeni@redhat.com>,
"Jakub Kicinski" <kuba@kernel.org>,
"Jason Wang" <jasowang@redhat.com>,
kvm@vger.kernel.org, "Stefan Hajnoczi" <stefanha@redhat.com>,
virtualization@lists.linux.dev,
"Eric Dumazet" <edumazet@google.com>,
"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH net v3 1/2] vsock/virtio: reset connection on receiving queue overflow
Date: Thu, 14 May 2026 11:16:23 -0400
Message-ID: <20260514111513-mutt-send-email-mst@kernel.org>
In-Reply-To: <agXh36gUe1K6__Bk@sgarzare-redhat>
On Thu, May 14, 2026 at 04:57:16PM +0200, Stefano Garzarella wrote:
> On Wed, May 13, 2026 at 12:54:16PM +0200, Stefano Garzarella wrote:
> > From: Stefano Garzarella <sgarzare@redhat.com>
> >
> > When there is no more space to queue an incoming packet, the packet is
> > silently dropped. This causes data loss without any notification to
> > either peer, since there is no retransmission.
> >
> > Under normal circumstances, this should never happen. However, it could
> > happen if the other peer doesn't respect the credit, or if the skb
> > overhead, which we recently began to take into account with commit
> > 059b7dbd20a6 ("vsock/virtio: fix potential unbounded skb queue"),
> > is too high.
> >
> > Fix this by resetting the connection and setting the local socket error
> > to ENOBUFS when virtio_transport_recv_enqueue() can no longer queue a
> > packet, so both peers are explicitly notified of the failure rather than
> > silently losing data.
> >
> > Fixes: ae6fcfbf5f03 ("vsock/virtio: discard packets if credit is not respected")
> > Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> > ---
> > net/vmw_vsock/virtio_transport_common.c | 19 ++++++++++++++-----
> > 1 file changed, 14 insertions(+), 5 deletions(-)
> >
> > diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
> > index 989cc252d3d3..4a4ac69d1ad1 100644
> > --- a/net/vmw_vsock/virtio_transport_common.c
> > +++ b/net/vmw_vsock/virtio_transport_common.c
> > @@ -1350,7 +1350,7 @@ virtio_transport_recv_connecting(struct sock *sk,
> >  	return err;
> >  }
> >  
> > -static void
> > +static bool
> >  virtio_transport_recv_enqueue(struct vsock_sock *vsk,
> >  			      struct sk_buff *skb)
> >  {
> > @@ -1365,10 +1365,8 @@ virtio_transport_recv_enqueue(struct vsock_sock *vsk,
> >  	spin_lock_bh(&vvs->rx_lock);
> >  
> >  	can_enqueue = virtio_transport_inc_rx_pkt(vvs, len);
> > -	if (!can_enqueue) {
> > -		free_pkt = true;
> > +	if (!can_enqueue)
> >  		goto out;
> > -	}
> >  
> >  	if (le32_to_cpu(hdr->flags) & VIRTIO_VSOCK_SEQ_EOM)
> >  		vvs->msg_count++;
> > @@ -1408,6 +1406,8 @@ virtio_transport_recv_enqueue(struct vsock_sock *vsk,
> >  	spin_unlock_bh(&vvs->rx_lock);
> >  	if (free_pkt)
> >  		kfree_skb(skb);
> > +
> > +	return can_enqueue;
> >  }
> >  
> >  static int
> > @@ -1420,7 +1420,16 @@ virtio_transport_recv_connected(struct sock *sk,
> >  
> >  	switch (le16_to_cpu(hdr->op)) {
> >  	case VIRTIO_VSOCK_OP_RW:
> > -		virtio_transport_recv_enqueue(vsk, skb);
> > +		if (!virtio_transport_recv_enqueue(vsk, skb)) {
> > +			/* There is no more space to queue the packet, so let's
> > +			 * close the connection; otherwise, we'll lose data.
> > +			 */
> > +			(void)virtio_transport_reset(vsk, skb);
> > +			sk->sk_state = TCP_CLOSE;
> > +			sk->sk_err = ENOBUFS;
> > +			sk_error_report(sk);
>
> sashiko reported some issues related to setting the TCP_CLOSE state and not
> removing the socket from the connected table:
> https://sashiko.dev/#/patchset/20260513105417.56761-1-sgarzare%40redhat.com
>
> I'll change this by calling virtio_transport_do_close() and
> vsock_remove_sock() in the next version.
>
> Stefano
>
> > +			break;
> > +		}
> >  		vsock_data_ready(sk);
> >  		return err;
> >  	case VIRTIO_VSOCK_OP_CREDIT_REQUEST:
> > --
> > 2.54.0
> >
And so the bag of hacks grows. I feel this is energy not well spent.
Please, let us fix this properly *first*. And then worry about how to
backport. Maybe it will not be so terrible to backport after all.
Thread overview:
2026-05-13 10:54 [PATCH net v3 0/2] vsock/virtio: fix skb overhead accounting to preserve full buf_alloc Stefano Garzarella
2026-05-13 10:54 ` [PATCH net v3 1/2] vsock/virtio: reset connection on receiving queue overflow Stefano Garzarella
2026-05-14 14:57 ` Stefano Garzarella
2026-05-14 15:16 ` Michael S. Tsirkin [this message]
2026-05-14 16:45 ` Stefano Garzarella
2026-05-14 17:44 ` Michael S. Tsirkin
2026-05-13 10:54 ` [PATCH net v3 2/2] vsock/virtio: fix skb overhead accounting to preserve full buf_alloc Stefano Garzarella
2026-05-14 14:44 ` Stefano Garzarella