From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jason Wang
Subject: Re: [RFC] virtio_net: add local_bh_disable() around u64_stats_update_begin
Date: Fri, 19 Oct 2018 10:17:07 +0800
Message-ID: <642ed362-6b0b-8df0-160c-fab243202e03@redhat.com>
References: <20181016165545.guksrl23ulcudxrk@linutronix.de>
 <68582c17-27bf-43f4-cbe5-96ec712a704c@lab.ntt.co.jp>
 <20181018084753.wefvsypdevbzoadg@linutronix.de>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Cc: Toshiaki Makita, netdev@vger.kernel.org,
 virtualization@lists.linux-foundation.org, tglx@linutronix.de,
 "Michael S. Tsirkin", "David S. Miller"
To: Sebastian Andrzej Siewior
In-Reply-To: <20181018084753.wefvsypdevbzoadg@linutronix.de>
Content-Language: en-US

On 2018/10/18 4:47 PM, Sebastian Andrzej Siewior wrote:
> On 2018-10-17 14:48:02 [+0800], Jason Wang wrote:
>> On 2018/10/17 9:13 AM, Toshiaki Makita wrote:
>>> I'm not sure what condition triggered this warning.
> If the seqlock is acquired once in softirq and then again in process
> context, that is enough evidence for lockdep to trigger this warning.
>
>>> Toshiaki Makita
>>
>> Or maybe NAPI is enabled unexpectedly somewhere?
>>
>> Btw, the schedule_delayed_work() in virtnet_open() is also suspicious:
>> if the work is executed before virtnet_napi_enable(), there will be a
>> deadloop in napi_disable().
> something like this? It is also likely that if it hits OOM on queue 2,
> it will hit OOM again on queue 3.
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index fbcfb4d272336..87d6ec4765270 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1263,22 +1263,22 @@ static void refill_work(struct work_struct *work)
>  {
>  	struct virtnet_info *vi =
>  		container_of(work, struct virtnet_info, refill.work);
> -	bool still_empty;
> +	int still_empty = 0;
>  	int i;
>  
>  	for (i = 0; i < vi->curr_queue_pairs; i++) {
>  		struct receive_queue *rq = &vi->rq[i];
>  
>  		napi_disable(&rq->napi);
> -		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> +		if (!try_fill_recv(vi, rq, GFP_KERNEL))
> +			still_empty++;
>  		virtnet_napi_enable(rq->vq, &rq->napi);
> -
> -		/* In theory, this can happen: if we don't get any buffers in
> -		 * we will *never* try to fill again.
> -		 */
> -		if (still_empty)
> -			schedule_delayed_work(&vi->refill, HZ/2);
>  	}
> +	/* In theory, this can happen: if we don't get any buffers in
> +	 * we will *never* try to fill again.
> +	 */
> +	if (still_empty)
> +		schedule_delayed_work(&vi->refill, HZ/2);
>  }

I think this part is not a must; it looks more like an independent
optimization?

Thanks
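Btw, to expand on the deadloop I mentioned above: netif_napi_add() leaves
NAPI_STATE_SCHED set, and only napi_enable() clears it. So if refill_work()
runs before virtnet_napi_enable(), napi_disable() can never take the bit
and spins forever. Quoting napi_disable() from net/core/dev.c of this era
from memory (simplified, so double-check):

void napi_disable(struct napi_struct *n)
{
	might_sleep();
	set_bit(NAPI_STATE_DISABLE, &n->state);

	/* If napi_enable() has not run yet, NAPI_STATE_SCHED is still
	 * set from netif_napi_add() and this loop never terminates.
	 */
	while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
		msleep(1);
	while (test_and_set_bit(NAPI_STATE_NPSVC, &n->state))
		msleep(1);

	hrtimer_cancel(&n->timer);

	clear_bit(NAPI_STATE_DISABLE, &n->state);
}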
>  
>  static int virtnet_receive(struct receive_queue *rq, int budget,
> @@ -1407,12 +1407,13 @@ static int virtnet_open(struct net_device *dev)
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
>  	int i, err;
> +	int need_refill = 0;
>  
>  	for (i = 0; i < vi->max_queue_pairs; i++) {
>  		if (i < vi->curr_queue_pairs)
>  			/* Make sure we have some buffers: if oom use wq. */
>  			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
> -				schedule_delayed_work(&vi->refill, 0);
> +				need_refill++;
>  
>  		err = xdp_rxq_info_reg(&vi->rq[i].xdp_rxq, dev, i);
>  		if (err < 0)
> @@ -1428,6 +1429,8 @@ static int virtnet_open(struct net_device *dev)
>  		virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
>  		virtnet_napi_tx_enable(vi, vi->sq[i].vq, &vi->sq[i].napi);
>  	}
> +	if (need_refill)
> +		schedule_delayed_work(&vi->refill, 0);
>  
>  	return 0;
>  }
> @@ -2236,6 +2239,7 @@ static int virtnet_restore_up(struct virtio_device *vdev)
>  {
>  	struct virtnet_info *vi = vdev->priv;
>  	int err, i;
> +	int need_refill = 0;
>  
>  	err = init_vqs(vi);
>  	if (err)
> @@ -2246,13 +2250,15 @@ static int virtnet_restore_up(struct virtio_device *vdev)
>  	if (netif_running(vi->dev)) {
>  		for (i = 0; i < vi->curr_queue_pairs; i++)
>  			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
> -				schedule_delayed_work(&vi->refill, 0);
> +				need_refill++;
>  
>  		for (i = 0; i < vi->max_queue_pairs; i++) {
>  			virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
>  			virtnet_napi_tx_enable(vi, vi->sq[i].vq,
>  					       &vi->sq[i].napi);
>  		}
> +		if (need_refill)
> +			schedule_delayed_work(&vi->refill, 0);
>  	}
>  
>  	netif_device_attach(vi->dev);
> 
>> Thanks
> Sebastian
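Btw, the $SUBJECT change itself boils down to wrapping the kick
accounting in try_fill_recv() with BHs disabled, so the process-context
writer cannot be preempted by the NAPI writer on the same CPU. A sketch
of the idea (untested, field names from memory):

	if (virtqueue_kick_prepare(rq->vq) && virtqueue_notify(rq->vq)) {
		local_bh_disable();
		u64_stats_update_begin(&rq->stats.syncp);
		rq->stats.kicks++;
		u64_stats_update_end(&rq->stats.syncp);
		local_bh_enable();
	}

But if NAPI is guaranteed to be disabled whenever try_fill_recv() runs
in process context, the warning looks like a false positive and the
local_bh_disable() should not be needed.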