From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Xuan Zhuo, Jason Wang,
 "Michael S. Tsirkin", Sasha Levin
Subject: [PATCH 5.15 045/134] virtio_net: split free_unused_bufs()
Date: Mon, 15 May 2023 18:28:42 +0200
Message-Id: <20230515161704.639534608@linuxfoundation.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230515161702.887638251@linuxfoundation.org>
References: <20230515161702.887638251@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID: <stable.vger.kernel.org>
X-Mailing-List: stable@vger.kernel.org

From: Xuan Zhuo

[ Upstream commit 6e345f8c7cd029ad3aaece15ad4425ac26e4eb63 ]

This patch splits out two functions from free_unused_bufs(): one that
frees an sq buf and one that frees an rq buf. When enabling/disabling
of individual tx/rx queues is supported in the future, it will be
necessary to reclaim the buffers of a single sq or rq separately.

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
Message-Id: <20220801063902.129329-40-xuanzhuo@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin
Stable-dep-of: f8bb51043945 ("virtio_net: suppress cpu stall when free_unused_bufs")
Signed-off-by: Sasha Levin
---
 drivers/net/virtio_net.c | 41 ++++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 8a380086ac257..cff3e2a7ce7fc 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2814,6 +2814,27 @@ static void free_receive_page_frags(struct virtnet_info *vi)
 			put_page(vi->rq[i].alloc_frag.page);
 }
 
+static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
+{
+	if (!is_xdp_frame(buf))
+		dev_kfree_skb(buf);
+	else
+		xdp_return_frame(ptr_to_xdp(buf));
+}
+
+static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
+{
+	struct virtnet_info *vi = vq->vdev->priv;
+	int i = vq2rxq(vq);
+
+	if (vi->mergeable_rx_bufs)
+		put_page(virt_to_head_page(buf));
+	else if (vi->big_packets)
+		give_pages(&vi->rq[i], buf);
+	else
+		put_page(virt_to_head_page(buf));
+}
+
 static void free_unused_bufs(struct virtnet_info *vi)
 {
 	void *buf;
@@ -2821,26 +2842,14 @@ static void free_unused_bufs(struct virtnet_info *vi)
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		struct virtqueue *vq = vi->sq[i].vq;
-		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
-			if (!is_xdp_frame(buf))
-				dev_kfree_skb(buf);
-			else
-				xdp_return_frame(ptr_to_xdp(buf));
-		}
+		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
+			virtnet_sq_free_unused_buf(vq, buf);
 	}
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		struct virtqueue *vq = vi->rq[i].vq;
-
-		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
-			if (vi->mergeable_rx_bufs) {
-				put_page(virt_to_head_page(buf));
-			} else if (vi->big_packets) {
-				give_pages(&vi->rq[i], buf);
-			} else {
-				put_page(virt_to_head_page(buf));
-			}
-		}
+		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
+			virtnet_rq_free_unused_buf(vq, buf);
 	}
 }
-- 
2.39.2
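
[Note: the split makes per-queue reclamation possible, as the commit message
says. Below is a minimal sketch of how a future per-queue path might drain a
single rx queue by reusing the helper introduced here; the function name
virtnet_rq_drain_example and the idea of calling it from a queue-disable path
are assumptions for illustration only, not part of this patch.]

/*
 * Illustrative only: drain every buffer still owned by one receive queue
 * and free it according to the rq buffer type (mergeable/big/small), by
 * reusing virtnet_rq_free_unused_buf() from this patch. The function name
 * and its use from a queue-disable path are hypothetical.
 */
static void virtnet_rq_drain_example(struct virtnet_info *vi, int i)
{
	struct virtqueue *vq = vi->rq[i].vq;
	void *buf;

	/* Detach unused buffers one by one and release them. */
	while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
		virtnet_rq_free_unused_buf(vq, buf);
}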