netdev.vger.kernel.org archive mirror
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Heng Qi <hengqi@linux.alibaba.com>
Cc: "Jason Wang" <jasowang@redhat.com>,
	netdev@vger.kernel.org, "Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
	"Eugenio Pérez" <eperezma@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	"Eric Dumazet" <edumazet@google.com>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Paolo Abeni" <pabeni@redhat.com>
Subject: Re: [PATCH net-next] virtio_net: Prevent misidentified spurious interrupts from killing the irq
Date: Fri, 2 Aug 2024 09:08:09 -0400	[thread overview]
Message-ID: <20240802085902-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <1722584526.9355304-3-hengqi@linux.alibaba.com>

On Fri, Aug 02, 2024 at 03:42:06PM +0800, Heng Qi wrote:
> On Thu, 1 Aug 2024 10:04:05 -0400, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Thu, Aug 01, 2024 at 09:56:39PM +0800, Heng Qi wrote:
> > > Michael has effectively reduced the number of spurious interrupts in
> > > commit a7766ef18b33 ("virtio_net: disable cb aggressively") by disabling
> > > irq callbacks before cleaning old buffers.
> > > 
> > > But it is still possible that the irq is killed by mistake:
> > > 
> > >   When a delayed tx interrupt arrives, old buffers have been cleaned in
> > >   other paths (start_xmit and virtnet_poll_cleantx), then the interrupt is
> > >   mistakenly identified as a spurious interrupt in vring_interrupt.
> > > 
> > >   We should refrain from labeling it as a spurious interrupt; otherwise,
> > >   note_interrupt may inadvertently kill the legitimate irq.
> > > 
> > > Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
> > 
> > 
> > Is this a practical or theoretical issue? Do you observe an issue
> > and see that this patch fixes it? Or is this from code review?
> 
> 
> This issue was previously documented in our bugzilla, but that was in 2020.
> 
> I can't easily reproduce the issue after a7766ef18b33, but interrupt suppression
> under virtio is unreliable and out of sync, which is still a potential risk for
> DPUs where the VM and the device are not on the same host.
> 
> Thanks.

I find it hard to believe there's a real problem after a7766ef18b33.
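For context on what "killing the irq" means here: the kernel's spurious-interrupt accounting in note_interrupt() disables a line if roughly 99,900 of the last 100,000 interrupts on it go unhandled ("nobody cared"). A simplified userspace model of that accounting (illustrative struct and function names, not the kernel source):

```c
#include <stdbool.h>
#include <assert.h>

/* Simplified model of the kernel's spurious-interrupt accounting
 * (note_interrupt() in kernel/irq/spurious.c). Field names are
 * illustrative, not the real struct irq_desc. */
struct irq_desc_model {
	unsigned int irq_count;      /* interrupts since the last window reset */
	unsigned int irqs_unhandled; /* of those, how many returned IRQ_NONE */
	bool disabled;               /* line killed: "nobody cared" */
};

static void note_interrupt_model(struct irq_desc_model *desc, bool handled)
{
	if (!handled)
		desc->irqs_unhandled++;

	if (++desc->irq_count < 100000)
		return;

	/* Once per 100,000 interrupts: if ~99.9% went unhandled,
	 * the kernel reports the bad irq and disables the line. */
	if (desc->irqs_unhandled > 99900)
		desc->disabled = true;

	desc->irq_count = 0;
	desc->irqs_unhandled = 0;
}
```

So an occasional IRQ_NONE is harmless; the line only dies if nearly every interrupt in a 100,000-interrupt window is reported as unhandled, which is why the practical impact after a7766ef18b33 is in question.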



> > > ---
> > >  drivers/net/virtio_net.c     |  9 ++++++
> > >  drivers/virtio/virtio_ring.c | 53 ++++++++++++++++++++++++++++++++++++
> > >  include/linux/virtio.h       |  3 ++
> > >  3 files changed, 65 insertions(+)
> > > 
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index 0383a3e136d6..6d8739418203 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -2769,6 +2769,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq, int budget)
> > >  		do {
> > >  			virtqueue_disable_cb(sq->vq);
> > >  			free_old_xmit(sq, txq, !!budget);
> > > +			virtqueue_set_tx_oldbuf_cleaned(sq->vq, true);
> > >  		} while (unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
> > >  
> > >  		if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) {
> > > @@ -3035,6 +3036,9 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> > >  
> > >  		free_old_xmit(sq, txq, false);
> > >  
> > > +		if (use_napi)
> > > +			virtqueue_set_tx_oldbuf_cleaned(sq->vq, true);
> > > +
> > >  	} while (use_napi && !xmit_more &&
> > >  	       unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
> > >  
> > > @@ -3044,6 +3048,11 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> > >  	/* Try to transmit */
> > >  	err = xmit_skb(sq, skb, !use_napi);
> > >  
> > > +	if (use_napi) {
> > > +		virtqueue_set_tx_newbuf_sent(sq->vq, true);
> > > +		virtqueue_set_tx_oldbuf_cleaned(sq->vq, false);
> > > +	}
> > > +
> > >  	/* This should not happen! */
> > >  	if (unlikely(err)) {
> > >  		DEV_STATS_INC(dev, tx_fifo_errors);
> > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > index be7309b1e860..fb2afc716371 100644
> > > --- a/drivers/virtio/virtio_ring.c
> > > +++ b/drivers/virtio/virtio_ring.c
> > > @@ -180,6 +180,11 @@ struct vring_virtqueue {
> > >  	 */
> > >  	bool do_unmap;
> > >  
> > > +	/* Has any new data been sent? */
> > > +	bool is_tx_newbuf_sent;
> > > +	/* Have the recently sent old buffers been cleaned up? */
> > > +	bool is_tx_oldbuf_cleaned;
> > > +
> > >  	/* Head of free buffer list. */
> > >  	unsigned int free_head;
> > >  	/* Number we've added since last sync. */
> > > @@ -2092,6 +2097,9 @@ static struct virtqueue *vring_create_virtqueue_packed(
> > >  	vq->use_dma_api = vring_use_dma_api(vdev);
> > >  	vq->premapped = false;
> > >  	vq->do_unmap = vq->use_dma_api;
> > > +	vq->is_tx_newbuf_sent = false; /* Initially, no new buffer to send. */
> > > +	vq->is_tx_oldbuf_cleaned = true; /* Initially, no old buffer to clean. */
> > > +
> > >  
> > >  	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
> > >  		!context;
> > > @@ -2375,6 +2383,38 @@ bool virtqueue_notify(struct virtqueue *_vq)
> > >  }
> > >  EXPORT_SYMBOL_GPL(virtqueue_notify);
> > >  
> > > +/**
> > > + * virtqueue_set_tx_newbuf_sent - set whether a new tx buf has been sent.
> > > + * @_vq: the struct virtqueue
> > > + *
> > > + * If is_tx_newbuf_sent and is_tx_oldbuf_cleaned are both true, the
> > > + * spurious interrupt is caused by polling TX vq in other paths outside
> > > + * the tx irq callback.
> > > + */
> > > +void virtqueue_set_tx_newbuf_sent(struct virtqueue *_vq, bool val)
> > > +{
> > > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > > +
> > > +	vq->is_tx_newbuf_sent = val;
> > > +}
> > > +EXPORT_SYMBOL_GPL(virtqueue_set_tx_newbuf_sent);
> > > +
> > > +/**
> > > + * virtqueue_set_tx_oldbuf_cleaned - set whether the old tx bufs have been cleaned.
> > > + * @_vq: the struct virtqueue
> > > + *
> > > + * If is_tx_oldbuf_cleaned and is_tx_newbuf_sent are both true, the
> > > + * spurious interrupt is caused by polling TX vq in other paths outside
> > > + * the tx irq callback.
> > > + */
> > > +void virtqueue_set_tx_oldbuf_cleaned(struct virtqueue *_vq, bool val)
> > > +{
> > > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > > +
> > > +	vq->is_tx_oldbuf_cleaned = val;
> > > +}
> > > +EXPORT_SYMBOL_GPL(virtqueue_set_tx_oldbuf_cleaned);
> > > +
> > >  /**
> > >   * virtqueue_kick - update after add_buf
> > >   * @vq: the struct virtqueue
> > > @@ -2572,6 +2612,16 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
> > >  	struct vring_virtqueue *vq = to_vvq(_vq);
> > >  
> > >  	if (!more_used(vq)) {
> > > +		/* When the delayed TX interrupt arrives, the old buffers have
> > > +		 * been cleaned in other paths (start_xmit and virtnet_poll_cleantx).
> > > +		 * We'd better not identify it as a spurious interrupt,
> > > +		 * otherwise note_interrupt() may kill the interrupt.
> > > +		 */
> > > +		if (unlikely(vq->is_tx_newbuf_sent && vq->is_tx_oldbuf_cleaned)) {
> > > +			vq->is_tx_newbuf_sent = false;
> > > +			return IRQ_HANDLED;
> > > +		}
> > > +

I am not merging anything this ugly.


> > >  		pr_debug("virtqueue interrupt with no work for %p\n", vq);
> > >  		return IRQ_NONE;
> > >  	}
> > > @@ -2637,6 +2687,9 @@ static struct virtqueue *__vring_new_virtqueue(unsigned int index,
> > >  	vq->use_dma_api = vring_use_dma_api(vdev);
> > >  	vq->premapped = false;
> > >  	vq->do_unmap = vq->use_dma_api;
> > > +	vq->is_tx_newbuf_sent = false; /* Initially, no new buffer to send. */
> > > +	vq->is_tx_oldbuf_cleaned = true; /* Initially, no old buffer to clean. */
> > > +
> > >  
> > >  	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
> > >  		!context;
> > > diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> > > index ecc5cb7b8c91..ba3be9276c09 100644
> > > --- a/include/linux/virtio.h
> > > +++ b/include/linux/virtio.h
> > > @@ -103,6 +103,9 @@ int virtqueue_resize(struct virtqueue *vq, u32 num,
> > >  int virtqueue_reset(struct virtqueue *vq,
> > >  		    void (*recycle)(struct virtqueue *vq, void *buf));
> > >  
> > > +void virtqueue_set_tx_newbuf_sent(struct virtqueue *vq, bool val);
> > > +void virtqueue_set_tx_oldbuf_cleaned(struct virtqueue *vq, bool val);
> > > +
> > >  struct virtio_admin_cmd {
> > >  	__le16 opcode;
> > >  	__le16 group_type;
> > > -- 
> > > 2.32.0.3.g01195cf9f
> > 
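The two-flag handshake the patch proposes can be sketched as a userspace model (field and function names echo the patch; this models the control flow only and is not the kernel implementation):

```c
#include <stdbool.h>
#include <assert.h>

/* Userspace sketch of the patch's two-flag scheme, using the patch's
 * field names; a model of the control flow, not kernel code. */
struct vq_model {
	bool more_used;             /* device has new used buffers pending */
	bool is_tx_newbuf_sent;     /* a tx buffer was queued since the last irq */
	bool is_tx_oldbuf_cleaned;  /* old buffers were reclaimed outside the irq */
};

enum irq_result { MODEL_IRQ_NONE, MODEL_IRQ_HANDLED };

/* Mirrors the modified vring_interrupt() check from the patch. */
static enum irq_result vring_interrupt_model(struct vq_model *vq)
{
	if (!vq->more_used) {
		/* Old buffers were already cleaned by start_xmit or
		 * virtnet_poll_cleantx, so this delayed interrupt is
		 * legitimate even though no work remains: claim it so
		 * note_interrupt() does not count it as spurious. */
		if (vq->is_tx_newbuf_sent && vq->is_tx_oldbuf_cleaned) {
			vq->is_tx_newbuf_sent = false;
			return MODEL_IRQ_HANDLED;
		}
		return MODEL_IRQ_NONE;
	}
	return MODEL_IRQ_HANDLED;
}
```

In this model, a delayed tx interrupt that races with out-of-irq cleanup (both flags true) is reported as handled rather than spurious, while a genuinely idle queue still reports IRQ_NONE; the objection in the thread is to threading this tx-specific state through the generic virtio_ring layer.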



Thread overview: 13+ messages
2024-08-01 13:56 [PATCH net-next] virtio_net: Prevent misidentified spurious interrupts from killing the irq Heng Qi
2024-08-01 14:04 ` Michael S. Tsirkin
2024-08-02  7:42   ` Heng Qi
2024-08-02 13:08     ` Michael S. Tsirkin [this message]
2024-08-02  3:41 ` Jason Wang
2024-08-02 13:11   ` Michael S. Tsirkin
2024-08-05  3:26     ` Jason Wang
2024-08-05  6:29       ` Michael S. Tsirkin
2024-08-06  3:18         ` Jason Wang
2024-08-06 13:24           ` Michael S. Tsirkin
2024-08-07  4:06             ` Jason Wang
2024-08-07 10:37               ` Michael S. Tsirkin
2024-08-08  2:50                 ` Jason Wang
