virtualization.lists.linux-foundation.org archive mirror
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Albert Huang <huangjie.albert@bytedance.com>
Cc: open list <linux-kernel@vger.kernel.org>,
	"open list:VIRTIO CORE AND NET DRIVERS"
	<virtualization@lists.linux-foundation.org>
Subject: Re: [PATCH] virtio_ring: Suppress tx interrupt when napi_tx disable
Date: Fri, 24 Mar 2023 02:37:06 -0400	[thread overview]
Message-ID: <20230324023552-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20230324015954-mutt-send-email-mst@kernel.org>

Hmm I sent a bit too fast, and my testing rig is down now.
So please do send a new version; I sent comments on what to fix
in this one.

On Fri, Mar 24, 2023 at 02:08:55AM -0400, Michael S. Tsirkin wrote:
> Thanks for the patch!
> I picked it up.
> I made small changes, please look at it in my branch,
> both to see what I changed for your next submission,
> and to test that it still addresses the problem for you.
> Waiting for your confirmation to send upstream.
> Thanks!
> 
> 
> On Tue, Mar 21, 2023 at 04:59:53PM +0800, Albert Huang wrote:
> > From: "huangjie.albert" <huangjie.albert@bytedance.com>
> > 
> > Fixes: 8d622d21d248 ("virtio: fix up virtio_disable_cb")
> > 
> > If napi_tx is disabled, then when a tx interrupt is triggered,
> > vq->event_triggered is set to true and is never reset to false
> > unless we explicitly call virtqueue_enable_cb_delayed() or
> > virtqueue_enable_cb_prepare().
> > 
> > With napi_tx disabled, virtqueue_enable_cb_delayed() is only
> > called when the tx ring is running out of free descriptors:
> > virtio_net->start_xmit:
> > 	if (sq->vq->num_free < 2+MAX_SKB_FRAGS) {
> > 		netif_stop_subqueue(dev, qnum);
> > 		if (!use_napi &&
> > 		    unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
> > 			/* More just got used, free them then recheck. */
> > 			free_old_xmit_skbs(sq, false);
> > 			if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
> > 				netif_start_subqueue(dev, qnum);
> > 				virtqueue_disable_cb(sq->vq);
> > 			}
> > 		}
> > 	}
> > Because event_triggered stays true, VRING_AVAIL_F_NO_INTERRUPT or
> > VRING_PACKED_EVENT_FLAG_DISABLE is never set, so we update
> > vring_used_event(&vq->split.vring) or vq->packed.vring.driver->off_wrap
> > on every virtqueue_get_buf_ctx() call. This results in many more
> > interrupts.
> > 
> > Fix this by skipping the update of vring_used_event(&vq->split.vring)
> > or vq->packed.vring.driver->off_wrap while event_triggered is true.
> > 
> > Signed-off-by: huangjie.albert <huangjie.albert@bytedance.com>
> > ---
> >  drivers/virtio/virtio_ring.c | 6 ++++--
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 307e139cb11d..f486cccadbeb 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -795,7 +795,8 @@ static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq,
> >  	/* If we expect an interrupt for the next entry, tell host
> >  	 * by writing event index and flush out the write before
> >  	 * the read in the next get_buf call. */
> > -	if (!(vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT))
> > +	if (!(vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)
> > +			&& (vq->event_triggered == false))
> >  		virtio_store_mb(vq->weak_barriers,
> >  				&vring_used_event(&vq->split.vring),
> >  				cpu_to_virtio16(_vq->vdev, vq->last_used_idx));
> > @@ -1529,7 +1530,8 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
> >  	 * by writing event index and flush out the write before
> >  	 * the read in the next get_buf call.
> >  	 */
> > -	if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DESC)
> > +	if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DESC
> > +			&& (vq->event_triggered == false))
> >  		virtio_store_mb(vq->weak_barriers,
> >  				&vq->packed.vring.driver->off_wrap,
> >  				cpu_to_le16(vq->last_used_idx));
> > -- 
> > 2.31.1

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

Thread overview: 11+ messages
     [not found] <20230321085953.24949-1-huangjie.albert@bytedance.com>
2023-03-22  2:36 ` [PATCH] virtio_ring: Suppress tx interrupt when napi_tx disable Jason Wang
     [not found]   ` <CABKxMyNSp1-pJW11B3YuDm39mg=eT48JspDsrEePjKFrHNK8NQ@mail.gmail.com>
2023-03-24  3:41     ` [External] " Jason Wang
2023-03-24  5:59       ` Michael S. Tsirkin
2023-03-24  6:32         ` Jason Wang
2023-03-24  6:42           ` Michael S. Tsirkin
2023-03-24  6:47             ` Jason Wang
2023-03-24  7:00               ` Michael S. Tsirkin
2023-03-24  7:37                 ` Jason Wang
2023-03-24  9:05                   ` Michael S. Tsirkin
2023-03-24  6:08 ` Michael S. Tsirkin
2023-03-24  6:37   ` Michael S. Tsirkin [this message]
