From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 15 May 2023 00:45:49 -0400
From: "Michael S. Tsirkin"
To: Feng Liu
Cc: virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Jason Wang,
	Xuan Zhuo, Simon Horman, Bodong Wang, Jiri Pirko, William Tu
Subject: Re: [PATCH net v6] virtio_net: Fix error unwinding of XDP initialization
Message-ID: <20230515004542-mutt-send-email-mst@kernel.org>
References: <20230512151812.1806-1-feliu@nvidia.com>
In-Reply-To: <20230512151812.1806-1-feliu@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 12, 2023 at 11:18:12AM -0400, Feng Liu wrote:
> When initializing XDP in virtnet_open(), some rq xdp initialization
> may hit an error, causing the net device open to fail. However,
> previous rqs have already initialized XDP and enabled NAPI, which is
> not the expected behavior. We need to roll back the previous rq
> initialization to avoid leaks in the error unwinding of the init code.
> 
> Also extract helper functions to disable and enable queue pairs.
> Use the newly introduced disable helper function in error unwinding
> and in virtnet_close. Use the enable helper function in virtnet_open.
> 
> Fixes: 754b8a21a96d ("virtio_net: setup xdp_rxq_info")
> Signed-off-by: Feng Liu
> Reviewed-by: Jiri Pirko
> Reviewed-by: William Tu

Acked-by: Michael S. Tsirkin

> ---
> v5 -> v6
> feedback from Xuan Zhuo
> - add disable_delayed_refill and cancel_delayed_work_sync
> 
> v4 -> v5
> feedback from Michael S. Tsirkin
> - rename helper as virtnet_disable_queue_pair
> - rename helper as virtnet_enable_queue_pair
> 
> v3 -> v4
> feedback from Jiri Pirko
> - Add symmetric helper function virtnet_enable_qp to enable queues.
> - Error handling: clean up the current queue pair in virtnet_enable_qp,
>   and complete the rest of the queue pairs' cleanup in virtnet_open.
> - Fix coding style.
> feedback from Parav Pandit
> - Remove redundant debug message and white space.
> 
> v2 -> v3
> feedback from Michael S. Tsirkin
> - Remove redundant comment.
> 
> v1 -> v2
> feedback from Michael S. Tsirkin
> - squash two patches together.
> 
> ---
>  drivers/net/virtio_net.c | 61 +++++++++++++++++++++++++++++-----------
>  1 file changed, 44 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index a12ae26db0e2..56ca1d270304 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1868,6 +1868,38 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
>  	return received;
>  }
>  
> +static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
> +{
> +	virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
> +	napi_disable(&vi->rq[qp_index].napi);
> +	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
> +}
> +
> +static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
> +{
> +	struct net_device *dev = vi->dev;
> +	int err;
> +
> +	err = xdp_rxq_info_reg(&vi->rq[qp_index].xdp_rxq, dev, qp_index,
> +			       vi->rq[qp_index].napi.napi_id);
> +	if (err < 0)
> +		return err;
> +
> +	err = xdp_rxq_info_reg_mem_model(&vi->rq[qp_index].xdp_rxq,
> +					 MEM_TYPE_PAGE_SHARED, NULL);
> +	if (err < 0)
> +		goto err_xdp_reg_mem_model;
> +
> +	virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
> +	virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi);
> +
> +	return 0;
> +
> +err_xdp_reg_mem_model:
> +	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
> +	return err;
> +}
> +
>  static int virtnet_open(struct net_device *dev)
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
> @@ -1881,22 +1913,20 @@ static int virtnet_open(struct net_device *dev)
>  		if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
>  			schedule_delayed_work(&vi->refill, 0);
>  
> -		err = xdp_rxq_info_reg(&vi->rq[i].xdp_rxq, dev, i, vi->rq[i].napi.napi_id);
> +		err = virtnet_enable_queue_pair(vi, i);
>  		if (err < 0)
> -			return err;
> -
> -		err = xdp_rxq_info_reg_mem_model(&vi->rq[i].xdp_rxq,
> -						 MEM_TYPE_PAGE_SHARED, NULL);
> -		if (err < 0) {
> -			xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
> -			return err;
> -		}
> -
> -		virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
> -		virtnet_napi_tx_enable(vi, vi->sq[i].vq, &vi->sq[i].napi);
> +			goto err_enable_qp;
>  	}
>  
>  	return 0;
> +
> +err_enable_qp:
> +	disable_delayed_refill(vi);
> +	cancel_delayed_work_sync(&vi->refill);
> +
> +	for (i--; i >= 0; i--)
> +		virtnet_disable_queue_pair(vi, i);
> +	return err;
>  }
>  
>  static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> @@ -2305,11 +2335,8 @@ static int virtnet_close(struct net_device *dev)
>  	/* Make sure refill_work doesn't re-enable napi! */
>  	cancel_delayed_work_sync(&vi->refill);
>  
> -	for (i = 0; i < vi->max_queue_pairs; i++) {
> -		virtnet_napi_tx_disable(&vi->sq[i].napi);
> -		napi_disable(&vi->rq[i].napi);
> -		xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
> -	}
> +	for (i = 0; i < vi->max_queue_pairs; i++)
> +		virtnet_disable_queue_pair(vi, i);
> 
>  	return 0;
>  }
> -- 
> 2.37.1 (Apple Git-137.1)