From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Jan Beulich, Juergen Gross, Paul Durrant
Subject: [PATCH 5.10 208/286] xen-netback: don't produce zero-size SKB frags
Date: Mon, 22 Jan 2024 15:58:34 -0800
Message-ID: <20240122235740.115709016@linuxfoundation.org>
In-Reply-To: <20240122235732.009174833@linuxfoundation.org>
References: <20240122235732.009174833@linuxfoundation.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.10-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Jan Beulich

commit c7ec4f2d684e17d69bbdd7c4324db0ef5daac26a upstream.

While frontends may submit zero-size requests (wasting a precious slot),
core networking code as of at least 3ece782693c4b ("sock: skb_copy_ubufs
support for compound pages") can't deal with SKBs when they have all
zero-size fragments.

Respond to empty requests right when populating fragments; all further
processing is fragment-based and hence won't encounter these empty
requests anymore.

In a way this should have been that way from the beginning: when no data
is to be transferred for a particular request, there's not even a point
in validating the respective grant ref. That's no different from e.g.
passing NULL into memcpy() when at the same time the size is 0.

This is XSA-448 / CVE-2023-46838.
Cc: stable@vger.kernel.org
Signed-off-by: Jan Beulich
Reviewed-by: Juergen Gross
Reviewed-by: Paul Durrant
Signed-off-by: Greg Kroah-Hartman
---
 drivers/net/xen-netback/netback.c |   44 ++++++++++++++++++++++++++++++++------
 1 file changed, 38 insertions(+), 6 deletions(-)

--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -463,12 +463,25 @@ static void xenvif_get_requests(struct x
 	}
 
 	for (shinfo->nr_frags = 0; nr_slots > 0 && shinfo->nr_frags < MAX_SKB_FRAGS;
-	     shinfo->nr_frags++, gop++, nr_slots--) {
+	     nr_slots--) {
+		if (unlikely(!txp->size)) {
+			unsigned long flags;
+
+			spin_lock_irqsave(&queue->response_lock, flags);
+			make_tx_response(queue, txp, 0, XEN_NETIF_RSP_OKAY);
+			push_tx_responses(queue);
+			spin_unlock_irqrestore(&queue->response_lock, flags);
+			++txp;
+			continue;
+		}
+
 		index = pending_index(queue->pending_cons++);
 		pending_idx = queue->pending_ring[index];
 		xenvif_tx_create_map_op(queue, pending_idx, txp,
 					txp == first ? extra_count : 0, gop);
 		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
+		++shinfo->nr_frags;
+		++gop;
 
 		if (txp == first)
 			txp = txfrags;
@@ -481,20 +494,39 @@ static void xenvif_get_requests(struct x
 		shinfo = skb_shinfo(nskb);
 		frags = shinfo->frags;
 
-		for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots;
-		     shinfo->nr_frags++, txp++, gop++) {
+		for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots; ++txp) {
+			if (unlikely(!txp->size)) {
+				unsigned long flags;
+
+				spin_lock_irqsave(&queue->response_lock, flags);
+				make_tx_response(queue, txp, 0,
+						 XEN_NETIF_RSP_OKAY);
+				push_tx_responses(queue);
+				spin_unlock_irqrestore(&queue->response_lock,
+						       flags);
+				continue;
+			}
+
 			index = pending_index(queue->pending_cons++);
 			pending_idx = queue->pending_ring[index];
 			xenvif_tx_create_map_op(queue, pending_idx, txp, 0,
 						gop);
 			frag_set_pending_idx(&frags[shinfo->nr_frags],
 					     pending_idx);
+			++shinfo->nr_frags;
+			++gop;
 		}
 
-		skb_shinfo(skb)->frag_list = nskb;
-	} else if (nskb) {
+		if (shinfo->nr_frags) {
+			skb_shinfo(skb)->frag_list = nskb;
+			nskb = NULL;
+		}
+	}
+
+	if (nskb) {
 		/* A frag_list skb was allocated but it is no longer needed
-		 * because enough slots were converted to copy ops above.
+		 * because enough slots were converted to copy ops above or some
+		 * were empty.
 		 */
 		kfree_skb(nskb);
 	}
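
For readers following the stable review, the following standalone sketch
illustrates the pattern the hunks above introduce: a zero-size request is
acknowledged immediately and never becomes an SKB fragment, so the loop
counters only advance for requests that actually carry data. This is a
simplified userspace simulation, not xen-netback code; all types and
helpers here (tx_request, frag, ack_request, fill_frags) are hypothetical
stand-ins for the real ring structures and make_tx_response()/
push_tx_responses() machinery.

	#include <stdio.h>
	#include <stddef.h>

	#define MAX_FRAGS 4

	struct tx_request {	/* stand-in for struct xen_netif_tx_request */
		unsigned int id;
		unsigned int size;
	};

	struct frag {		/* stand-in for an skb_frag_t slot */
		unsigned int req_id;
		unsigned int size;
	};

	/* Stand-in for make_tx_response()/push_tx_responses(): complete a
	 * request back to the frontend without consuming a fragment slot. */
	static void ack_request(const struct tx_request *txp)
	{
		printf("acked empty request id=%u\n", txp->id);
	}

	/* Fill fragment slots, skipping zero-size requests the way the patch
	 * does: the fragment counter only advances for requests with data. */
	static size_t fill_frags(const struct tx_request *txp,
				 unsigned int nr_slots, struct frag *frags)
	{
		size_t nr_frags = 0;

		for (; nr_slots > 0 && nr_frags < MAX_FRAGS; nr_slots--, txp++) {
			if (txp->size == 0) {
				ack_request(txp);	/* respond right away */
				continue;		/* no frag, no map op */
			}
			frags[nr_frags].req_id = txp->id;
			frags[nr_frags].size = txp->size;
			nr_frags++;
		}
		return nr_frags;
	}

	int main(void)
	{
		const struct tx_request reqs[] = {
			{ .id = 1, .size = 1024 },
			{ .id = 2, .size = 0 },	/* wasted slot from frontend */
			{ .id = 3, .size = 512 },
		};
		struct frag frags[MAX_FRAGS];
		size_t i, n = fill_frags(reqs, 3, frags);

		/* Only the two non-empty requests become fragments, so the
		 * resulting "SKB" can never consist of zero-size frags. */
		for (i = 0; i < n; i++)
			printf("frag %zu: req id=%u size=%u\n", i,
			       frags[i].req_id, frags[i].size);
		return 0;
	}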