From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 12 May 2023 12:37:12 -0400
From: "Michael S. Tsirkin"
To: Feng Liu
Cc: Xuan Zhuo, netdev@vger.kernel.org, Jiri Pirko, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, William Tu, Simon Horman,
	Bodong Wang, bpf@vger.kernel.org
Subject: Re: [PATCH net v6] virtio_net: Fix error unwinding of XDP initialization
Message-ID: <20230512123705-mutt-send-email-mst@kernel.org>
References: <20230512151812.1806-1-feliu@nvidia.com>
In-Reply-To: <20230512151812.1806-1-feliu@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
List-Id: Linux virtualization

On Fri, May 12, 2023 at 11:18:12AM -0400, Feng Liu wrote:
> When initializing XDP in virtnet_open(), the XDP initialization of
> some rq may hit an error, causing the net device open to fail.
> However, previous rqs have already initialized XDP and enabled NAPI,
> which is not the expected behavior. Roll back the previous rq
> initialization to avoid leaks in the error unwinding of the init code.
>
> Also extract helper functions to disable and enable queue pairs.
> Use the newly introduced disable helper in error unwinding and in
> virtnet_close; use the enable helper in virtnet_open.
>
> Fixes: 754b8a21a96d ("virtio_net: setup xdp_rxq_info")
> Signed-off-by: Feng Liu
> Reviewed-by: Jiri Pirko
> Reviewed-by: William Tu

Acked-by: Michael S. Tsirkin

> ---
> v5 -> v6
> feedback from Xuan Zhuo
> - add disable_delayed_refill and cancel_delayed_work_sync
>
> v4 -> v5
> feedback from Michael S. Tsirkin
> - rename helper as virtnet_disable_queue_pair
> - rename helper as virtnet_enable_queue_pair
>
> v3 -> v4
> feedback from Jiri Pirko
> - Add symmetric helper function virtnet_enable_qp to enable queues.
> - Error handling: clean up the current queue pair in virtnet_enable_qp,
>   and complete the cleanup of the rest of the queue pairs in virtnet_open.
> - Fix coding style.
> feedback from Parav Pandit
> - Remove redundant debug message and white space.
>
> v2 -> v3
> feedback from Michael S. Tsirkin
> - Remove redundant comment.
>
> v1 -> v2
> feedback from Michael S. Tsirkin
> - squash two patches together.
>
> ---
>  drivers/net/virtio_net.c | 61 +++++++++++++++++++++++++++++-----------
>  1 file changed, 44 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index a12ae26db0e2..56ca1d270304 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1868,6 +1868,38 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
>  	return received;
>  }
>
> +static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
> +{
> +	virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
> +	napi_disable(&vi->rq[qp_index].napi);
> +	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
> +}
> +
> +static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
> +{
> +	struct net_device *dev = vi->dev;
> +	int err;
> +
> +	err = xdp_rxq_info_reg(&vi->rq[qp_index].xdp_rxq, dev, qp_index,
> +			       vi->rq[qp_index].napi.napi_id);
> +	if (err < 0)
> +		return err;
> +
> +	err = xdp_rxq_info_reg_mem_model(&vi->rq[qp_index].xdp_rxq,
> +					 MEM_TYPE_PAGE_SHARED, NULL);
> +	if (err < 0)
> +		goto err_xdp_reg_mem_model;
> +
> +	virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
> +	virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi);
> +
> +	return 0;
> +
> +err_xdp_reg_mem_model:
> +	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
> +	return err;
> +}
> +
>  static int virtnet_open(struct net_device *dev)
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
> @@ -1881,22 +1913,20 @@ static int virtnet_open(struct net_device *dev)
>  			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
>  				schedule_delayed_work(&vi->refill, 0);
>
> -		err = xdp_rxq_info_reg(&vi->rq[i].xdp_rxq, dev, i, vi->rq[i].napi.napi_id);
> +		err = virtnet_enable_queue_pair(vi, i);
>  		if (err < 0)
> -			return err;
> -
> -		err = xdp_rxq_info_reg_mem_model(&vi->rq[i].xdp_rxq,
> -						 MEM_TYPE_PAGE_SHARED, NULL);
> -		if (err < 0) {
> -			xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
> -			return err;
> -		}
> -
> -		virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
> -		virtnet_napi_tx_enable(vi, vi->sq[i].vq, &vi->sq[i].napi);
> +			goto err_enable_qp;
>  	}
>
>  	return 0;
> +
> +err_enable_qp:
> +	disable_delayed_refill(vi);
> +	cancel_delayed_work_sync(&vi->refill);
> +
> +	for (i--; i >= 0; i--)
> +		virtnet_disable_queue_pair(vi, i);
> +	return err;
>  }
>
>  static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> @@ -2305,11 +2335,8 @@ static int virtnet_close(struct net_device *dev)
>  	/* Make sure refill_work doesn't re-enable napi! */
>  	cancel_delayed_work_sync(&vi->refill);
>
> -	for (i = 0; i < vi->max_queue_pairs; i++) {
> -		virtnet_napi_tx_disable(&vi->sq[i].napi);
> -		napi_disable(&vi->rq[i].napi);
> -		xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
> -	}
> +	for (i = 0; i < vi->max_queue_pairs; i++)
> +		virtnet_disable_queue_pair(vi, i);
>
>  	return 0;
>  }
> --
> 2.37.1 (Apple Git-137.1)
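
The core of the fix is the reverse-order rollback under the err_enable_qp label in
virtnet_open(): if enabling queue pair i fails, only pairs 0..i-1 were brought up, so
exactly those are torn down again, newest first. A minimal, self-contained user-space
sketch of that idiom follows; enable_pair(), disable_pair(), open_all() and NUM_PAIRS
are hypothetical stand-ins for illustration, not the driver's real helpers.

/*
 * Sketch of the rollback idiom: unwind only the steps that succeeded,
 * in reverse order of initialization.
 */
#include <stdio.h>

#define NUM_PAIRS 4

/* Pretend pair 2 fails to enable, to exercise the unwind path. */
static int enable_pair(int i)
{
	if (i == 2)
		return -1;
	printf("enabled pair %d\n", i);
	return 0;
}

static void disable_pair(int i)
{
	printf("disabled pair %d\n", i);
}

static int open_all(void)
{
	int i, err = 0;

	for (i = 0; i < NUM_PAIRS; i++) {
		err = enable_pair(i);
		if (err < 0)
			goto err_enable;
	}
	return 0;

err_enable:
	/* Roll back only the pairs that were successfully enabled. */
	for (i--; i >= 0; i--)
		disable_pair(i);
	return err;
}

int main(void)
{
	return open_all() ? 1 : 0;
}

The loop shape, for (i--; i >= 0; i--), mirrors what the patch adds to virtnet_open()
after disabling delayed refill and cancelling the refill work.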