Date: Mon, 30 Jun 2025 14:06:52 +0100
From: Will Deacon
To: Stefano Garzarella
Cc: linux-kernel@vger.kernel.org, Keir Fraser, Steven Moreland,
	Frederick Mayle, Stefan Hajnoczi, "Michael S. Tsirkin", Jason Wang,
	Eugenio Pérez, netdev@vger.kernel.org, virtualization@lists.linux.dev
Subject: Re: [PATCH 2/5] vsock/virtio: Resize receive buffers so that each
	SKB fits in a page
References: <20250625131543.5155-1-will@kernel.org>
	<20250625131543.5155-3-will@kernel.org>

On Fri, Jun 27, 2025 at 12:41:48PM +0200, Stefano Garzarella wrote:
> On Wed, Jun 25, 2025 at 02:15:40PM +0100, Will Deacon wrote:
> > When allocating receive buffers for the vsock virtio RX virtqueue, an
> > SKB is allocated with a 4140-byte data payload (the 44-byte packet
> > header + VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE). Even when factoring in the
> > SKB overhead, the resulting 8KiB allocation, thanks to the rounding in
> > kmalloc_reserve(), is wasteful (~3700 unusable bytes) and results in a
> > higher-order page allocation for the sake of a few hundred bytes of
> > packet data.
> >
> > Limit the vsock virtio RX buffers to a page per SKB, resulting in much
> > better memory utilisation and removing the need to allocate
> > higher-order pages entirely.
> >
> > Signed-off-by: Will Deacon
> > ---
> >  include/linux/virtio_vsock.h | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
> > index 36fb3edfa403..67ffb64325ef 100644
> > --- a/include/linux/virtio_vsock.h
> > +++ b/include/linux/virtio_vsock.h
> > @@ -111,7 +111,8 @@ static inline size_t virtio_vsock_skb_len(struct sk_buff *skb)
> >  	return (size_t)(skb_end_pointer(skb) - skb->head);
> >  }
> >
> > -#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 4)
> > +#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(SKB_WITH_OVERHEAD(PAGE_SIZE) \
> > +						 - VIRTIO_VSOCK_SKB_HEADROOM)
>
> This is only used in net/vmw_vsock/virtio_transport.c:
>
> 	static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
> 	{
> 		int total_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE + VIRTIO_VSOCK_SKB_HEADROOM;
>
> What about just removing VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE and using
> `SKB_WITH_OVERHEAD(PAGE_SIZE)` there? (Maybe with a comment summarizing
> the issue we found.)

Sure, works for me. That gets rid of the funny +/- VIRTIO_VSOCK_SKB_HEADROOM
arithmetic too.

Will