From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: "Heng Qi" <hengqi@linux.alibaba.com>,
netdev@vger.kernel.org, "Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
"Eugenio Pérez" <eperezma@redhat.com>,
"David S. Miller" <davem@davemloft.net>,
"Eric Dumazet" <edumazet@google.com>,
"Jakub Kicinski" <kuba@kernel.org>,
"Paolo Abeni" <pabeni@redhat.com>
Subject: Re: [PATCH net-next] virtio_net: Prevent misidentified spurious interrupts from killing the irq
Date: Tue, 6 Aug 2024 09:24:47 -0400 [thread overview]
Message-ID: <20240806091923-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <CACGkMEsL6fyf9ecY8_LpT5_=hHKFzW7==4DBer_w9xEpGUkRtw@mail.gmail.com>
On Tue, Aug 06, 2024 at 11:18:14AM +0800, Jason Wang wrote:
> On Mon, Aug 5, 2024 at 2:29 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Mon, Aug 05, 2024 at 11:26:56AM +0800, Jason Wang wrote:
> > > On Fri, Aug 2, 2024 at 9:11 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >
> > > > On Fri, Aug 02, 2024 at 11:41:57AM +0800, Jason Wang wrote:
> > > > > On Thu, Aug 1, 2024 at 9:56 PM Heng Qi <hengqi@linux.alibaba.com> wrote:
> > > > > >
> > > > > > Michael has effectively reduced the number of spurious interrupts in
> > > > > > commit a7766ef18b33 ("virtio_net: disable cb aggressively") by disabling
> > > > > > irq callbacks before cleaning old buffers.
> > > > > >
> > > > > > But it is still possible that the irq is killed by mistake:
> > > > > >
> > > > > > When a delayed tx interrupt arrives, the old buffers have already
> > > > > > been cleaned on other paths (start_xmit and virtnet_poll_cleantx),
> > > > > > so the interrupt is mistakenly identified as spurious in
> > > > > > vring_interrupt.
> > > > > >
> > > > > > We should refrain from labeling it as a spurious interrupt; otherwise,
> > > > > > note_interrupt may inadvertently kill the legitimate irq.
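For context: the misclassification described above happens in
vring_interrupt(). A simplified sketch of the relevant logic in
drivers/virtio/virtio_ring.c (abridged, not the verbatim upstream code):

static irqreturn_t vring_interrupt(int irq, void *_vq)
{
        struct vring_virtqueue *vq = to_vvq(_vq);

        /* If start_xmit()/virtnet_poll_cleantx() already consumed the
         * used buffers, there is no work left here, so the interrupt
         * is reported as not handled.
         */
        if (!more_used(vq))
                return IRQ_NONE;

        ...
        return IRQ_HANDLED;
}

note_interrupt() in kernel/irq/spurious.c counts that IRQ_NONE return;
if nearly all of the last 100,000 interrupts on the line went
unhandled, the kernel disables the line entirely, which is how a
legitimate but late tx interrupt can get the irq killed.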
> > > > >
> > > > > I think the root of the evil is that we do free_old_xmit() in
> > > > > start_xmit(). I know it is there for performance, but we may need
> > > > > to make the code work correctly instead of adding endless hacks.
> > > > > Personally, I think the virtio-net TX path is over-complicated.
> > > > > We probably pay too much (e.g. there's netif_tx_lock in the TX
> > > > > NAPI path) to try to "optimize" the performance.
> > > > >
> > > > > How about just not doing free_old_xmit() in start_xmit(), and
> > > > > doing it solely in the TX NAPI?
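For illustration, a rough sketch of what that proposal might look like
(hypothetical, abbreviating drivers/net/virtio_net.c; not a tested
patch, and function names/signatures are approximate):

/* Hypothetical: xmit only adds buffers; reclaim happens in exactly one
 * place, the tx NAPI poll, so vring_interrupt() always finds work.
 */
static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        ...
        /* no free_old_xmit() here */
        err = virtqueue_add_outbuf(sq->vq, sq->sg, num_sg, skb, GFP_ATOMIC);
        ...
}

static int virtnet_poll_tx(struct napi_struct *napi, int budget)
{
        ...
        free_old_xmit(sq, txq, true);   /* the only reclaim site */
        ...
}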
> > > >
> > > > Not getting interrupts is always better than getting interrupts.
> > >
> > > Not sure. For example, letting one CPU do the transmission without
> > > also having to deal with freeing xmit skbs should give us better
> > > performance.
> >
> > Hmm. It's a subtle thing. I suspect that up to a certain limit
> > (e.g. a ping-pong test) free_old_xmit will win anyway.
>
> Not sure I understand here.
If you transmit 1 packet and then wait for another one anyway,
you are better off just handling the tx interrupt.
> >
> > > > This is not new code, and there are no plans to erase it all and
> > > > start anew "to make it work correctly": it's widely deployed, and
> > > > you would cause performance regressions that are hard to debug.
> > >
> > > I actually meant the TX NAPI mode: we hold the TX lock in the TX
> > > NAPI poll, which turns out to slow down both the transmission and
> > > the NAPI itself.
> > >
> > > Thanks
> >
> > We do need to synchronize anyway, though; virtio expects drivers to
> > do their own serialization of vq operations.
>
> Right, but currently add and get need to be serialized, which is a
> bottleneck. I don't see any issue with parallelizing them.
Do you see this in traces?
> > You could instead try moving skbs to some kind of array under the tx
> > lock, then freeing them all later, after unlocking tx.
> >
> > Could be helpful for batching as well?
>
> It's worth a try and see.
Why not.
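For illustration, a minimal sketch of that idea, assuming completed
skbs are collected on a private list under the tx lock and freed after
it is dropped (reclaim_old_xmit is a hypothetical helper, not existing
driver code):

/* Hypothetical helper: reclaim used tx buffers under the tx lock,
 * but defer the actual freeing until after the lock is dropped.
 */
static void reclaim_old_xmit(struct send_queue *sq, struct netdev_queue *txq)
{
        struct sk_buff_head done;
        struct sk_buff *skb;
        unsigned int len;

        __skb_queue_head_init(&done);

        __netif_tx_lock(txq, smp_processor_id());
        while ((skb = virtqueue_get_buf(sq->vq, &len)))
                __skb_queue_tail(&done, skb);
        __netif_tx_unlock(txq);

        /* Lock no longer held: free (and implicitly batch) here. */
        while ((skb = __skb_dequeue(&done)))
                dev_consume_skb_any(skb);
}

Freeing outside the lock keeps the lock hold time short and naturally
batches the frees.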
> >
> >
> > I have also always wondered whether it is an issue that free_old_xmit
> > just polls the vq until it is empty, with no limit.
>
> Did you mean scheduling NAPI again if free_old_xmit() exceeds the NAPI quota?
Yes.
> > NAPI is supposed to poll only until a limit is reached.
> > I guess not many people have very deep vqs.
>
> The current NAPI weight is 64, so I think we can hit it under a
> stressful workload.
>
> Thanks
Yes, but it's just an arbitrary number. Since we hold the tx lock,
we get at most vq-size bufs, so it's bounded.
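For illustration, a budget-limited variant might look like this
(hypothetical sketch; the in-tree virtnet_poll_tx() currently drains
the vq fully under the tx lock and returns 0, and the tx-lock handling
is omitted here for brevity):

/* Hypothetical: stop reclaiming once the NAPI budget is spent and let
 * NAPI reschedule us, instead of draining the vq in one go.
 */
static int virtnet_poll_tx(struct napi_struct *napi, int budget)
{
        struct send_queue *sq = container_of(napi, struct send_queue, napi);
        unsigned int len;
        int done = 0;
        void *buf;

        while (done < budget && (buf = virtqueue_get_buf(sq->vq, &len))) {
                napi_consume_skb(buf, budget);
                done++;
        }

        /* Budget spent: return budget so NAPI polls again; otherwise
         * complete and re-enable the callback.
         */
        if (done < budget && napi_complete_done(napi, done))
                virtqueue_enable_cb_delayed(sq->vq);

        return done;
}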
> >
> > --
> > MST
> >