From: Stefano Garzarella <sgarzare@redhat.com>
To: netdev@vger.kernel.org
Cc: virtualization@lists.linux.dev,
"Michael S. Tsirkin" <mst@redhat.com>,
"Eugenio Pérez" <eperezma@redhat.com>,
"Stefan Hajnoczi" <stefanha@redhat.com>,
"Paolo Abeni" <pabeni@redhat.com>,
kvm@vger.kernel.org, "Eric Dumazet" <edumazet@google.com>,
linux-kernel@vger.kernel.org, "Jason Wang" <jasowang@redhat.com>,
"Claudio Imbrenda" <imbrenda@linux.vnet.ibm.com>,
"Simon Horman" <horms@kernel.org>,
"David S. Miller" <davem@davemloft.net>,
"Arseniy Krasnov" <AVKrasnov@sberdevices.ru>,
"Asias He" <asias@redhat.com>,
"Stefano Garzarella" <sgarzare@redhat.com>,
"Jakub Kicinski" <kuba@kernel.org>,
"Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
"Melbin K Mathew" <mlbnkm1@gmail.com>
Subject: [PATCH RESEND net v5 3/4] vsock/virtio: cap TX credit to local buffer size
Date: Fri, 16 Jan 2026 21:15:16 +0100
Message-ID: <20260116201517.273302-4-sgarzare@redhat.com>
In-Reply-To: <20260116201517.273302-1-sgarzare@redhat.com>

From: Melbin K Mathew <mlbnkm1@gmail.com>

The virtio transports derive their TX credit directly from peer_buf_alloc,
which is set from the remote endpoint's SO_VM_SOCKETS_BUFFER_SIZE value.
On the host side this means that the amount of data we are willing to
queue for a connection is scaled by a guest-chosen buffer size, rather
than the host's own vsock configuration. A malicious guest can advertise
a large buffer and read slowly, causing the host to allocate a
correspondingly large amount of sk_buff memory.

The same thing would happen in the guest with a malicious host, since
virtio transports share the same code base.
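
For context, every packet header carries the sender's buf_alloc and
fwd_cnt, and the receiving side simply stores them as the peer's
values; paraphrased (not a verbatim quote of the code), the update
amounts to:

  /* on every received packet: trust whatever window the peer claims */
  vvs->peer_buf_alloc = le32_to_cpu(hdr->buf_alloc);
  vvs->peer_fwd_cnt   = le32_to_cpu(hdr->fwd_cnt);

Nothing on the local side bounds peer_buf_alloc before this change.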

Introduce a small helper, virtio_transport_tx_buf_size(), that
returns min(peer_buf_alloc, buf_alloc), and use it wherever we consume
peer_buf_alloc.

This ensures the effective TX window is bounded by both the peer's
advertised buffer and our own buf_alloc (already clamped to
buffer_max_size via SO_VM_SOCKETS_BUFFER_MAX_SIZE), so a remote peer
cannot force the local endpoint to queue more data than its own vsock
settings allow.
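
For a concrete (illustrative) example: if the local socket keeps the
default buf_alloc (typically 256 KiB) while the peer advertises 2 GiB,
the credit used on the TX path becomes min(2 GiB, 256 KiB) = 256 KiB,
so at most 256 KiB can sit queued for that connection no matter what
the peer claims.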

On an unpatched Ubuntu 22.04 host (~64 GiB RAM), running a PoC with
32 guest vsock connections advertising 2 GiB each and reading slowly
drove Slab/SUnreclaim from ~0.5 GiB to ~57 GiB; the system only
recovered after killing the QEMU process. That said, if the QEMU
process's memory is capped with cgroups, the amount of memory it can
consume this way is bounded by that limit.
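
To make the scenario concrete, here is a minimal sketch of the
guest-side pattern (illustrative only: error handling is omitted, the
port number is a placeholder, and the actual PoC opens 32 such
connections in parallel):

  #include <sys/socket.h>
  #include <linux/vm_sockets.h>
  #include <unistd.h>

  int main(void)
  {
          unsigned long long big = 2ULL << 30;    /* 2 GiB */
          struct sockaddr_vm addr = {
                  .svm_family = AF_VSOCK,
                  .svm_cid = VMADDR_CID_HOST,
                  .svm_port = 1234,               /* placeholder port */
          };
          char byte;
          int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

          /* Raise the max first so the size can then be set this high;
           * the resulting buffer size is what gets advertised as
           * buf_alloc.
           */
          setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE, &big, sizeof(big));
          setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE, &big, sizeof(big));

          connect(fd, (struct sockaddr *)&addr, sizeof(addr));

          /* Read one byte per second so the sender keeps its full
           * advertised window queued in kernel memory.
           */
          for (;;) {
                  read(fd, &byte, 1);
                  sleep(1);
          }
  }

Before this patch, each such connection lets the sender queue up to the
advertised 2 GiB; with the patch, the window is clamped by the sender's
own buf_alloc.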

With this patch applied:

  Before:
    MemFree:    ~61.6 GiB
    Slab:       ~142 MiB
    SUnreclaim: ~117 MiB

  After 32 high-credit connections:
    MemFree:    ~61.5 GiB
    Slab:       ~178 MiB
    SUnreclaim: ~152 MiB

Only ~35 MiB increase in Slab/SUnreclaim, no host OOM, and the guest
remains responsive.

Compatibility with non-virtio transports:
- VMCI uses the AF_VSOCK buffer knobs to size its queue pairs per
socket based on the local vsk->buffer_* values; the remote side
cannot enlarge those queues beyond what the local endpoint
configured.
- Hyper-V's vsock transport uses fixed-size VMBus ring buffers and
an MTU bound; there is no peer-controlled credit field comparable
to peer_buf_alloc, and the remote endpoint cannot drive in-flight
kernel memory above those ring sizes.
- The loopback path reuses virtio_transport_common.c, so it
naturally follows the same semantics as the virtio transport.

This change is limited to virtio_transport_common.c and thus affects
virtio-vsock, vhost-vsock, and loopback, bringing them in line with the
"remote window intersected with local policy" behaviour that VMCI and
Hyper-V already effectively have.

Fixes: 06a8fc78367d ("VSOCK: Introduce virtio_vsock_common.ko")
Suggested-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Melbin K Mathew <mlbnkm1@gmail.com>
[Stefano: small adjustments after changing the previous patch]
[Stefano: tweak the commit message]
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---
net/vmw_vsock/virtio_transport_common.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 2fe341be6ce2..00f4cf86beac 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -821,6 +821,15 @@ virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
 }
 EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_dequeue);
 
+static u32 virtio_transport_tx_buf_size(struct virtio_vsock_sock *vvs)
+{
+	/* The peer advertises its receive buffer via peer_buf_alloc, but we
+	 * cap it to our local buf_alloc so a remote peer cannot force us to
+	 * queue more data than our own buffer configuration allows.
+	 */
+	return min(vvs->peer_buf_alloc, vvs->buf_alloc);
+}
+
 int
 virtio_transport_seqpacket_enqueue(struct vsock_sock *vsk,
 				   struct msghdr *msg,
@@ -830,7 +839,7 @@ virtio_transport_seqpacket_enqueue(struct vsock_sock *vsk,
 
 	spin_lock_bh(&vvs->tx_lock);
 
-	if (len > vvs->peer_buf_alloc) {
+	if (len > virtio_transport_tx_buf_size(vvs)) {
 		spin_unlock_bh(&vvs->tx_lock);
 		return -EMSGSIZE;
 	}
@@ -884,7 +893,8 @@ static s64 virtio_transport_has_space(struct virtio_vsock_sock *vvs)
 	 * we have bytes in flight (tx_cnt - peer_fwd_cnt), the subtraction
 	 * does not underflow.
 	 */
-	bytes = (s64)vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt);
+	bytes = (s64)virtio_transport_tx_buf_size(vvs) -
+		(vvs->tx_cnt - vvs->peer_fwd_cnt);
 	if (bytes < 0)
 		bytes = 0;
 
--
2.52.0