From: "Michael S. Tsirkin" <mst@redhat.com>
To: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Cc: Jason Wang <jasowang@redhat.com>,
	Koichiro Den <den@klaipeden.com>,
	virtualization@lists.linux-foundation.org,
	Network Development <netdev@vger.kernel.org>
Subject: Re: [PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
Date: Wed, 6 Sep 2017 06:27:12 +0300
Message-ID: <20170906062441-mutt-send-email-mst@kernel.org>
In-Reply-To: <CAF=yD-JQ5vL8fhLwtz9yY9B4NZR3Tf0kmZHAi-9ht_-Bz0n4Sw@mail.gmail.com>

On Tue, Sep 05, 2017 at 04:09:19PM +0200, Willem de Bruijn wrote:
> On Mon, Sep 4, 2017 at 5:03 AM, Jason Wang <jasowang@redhat.com> wrote:
> >
> >
> > On 2017-09-02 00:17, Willem de Bruijn wrote:
> >>>>>
> >>>>> This is not a 50/50 split, which implies that some packets from the
> >>>>> large packet flow are still converted to copying. Without the change the rate
> >>>>> without queue was 80k zerocopy vs 80k copy, so this choice of
> >>>>> (vq->num >> 2) appears too conservative.
> >>>>>
> >>>>> However, testing with (vq->num >> 1) was not as effective at mitigating
> >>>>> stalls. I did not save that data, unfortunately. I can run more tests to
> >>>>> fine-tune this variable, if the idea sounds good.
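> >>>>>
> >>>>> For reference, the kind of check being tuned here is roughly the
> >>>>> following (an illustrative sketch only, not the exact patch; the
> >>>>> field names follow the existing vhost_net zerocopy bookkeeping):
> >>>>>
> >>>>>     /* zerocopy buffers still awaiting completion */
> >>>>>     pend = (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV;
> >>>>>
> >>>>>     /* fall back to copying once too many completions are outstanding */
> >>>>>     zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN &&
> >>>>>                  pend < (vq->num >> 2);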
> >>>>
> >>>>
> >>>> Looks like there are still two cases left:
> >>>
> >>> To be clear, this patch is not intended to fix all issues. It is a small
> >>> improvement to avoid HoL blocking due to queued zerocopy skbs.
> >
> >
> > Right, just want to see if there's anything left.
> >
> >>>
> >>> The trade-off is that reverting to copying in these cases increases
> >>> cycle cost. I think that that is a trade-off worth making compared to
> >>> the alternative drop in throughput. It probably would be good to be
> >>> able to measure this without kernel instrumentation: export
> >>> counters similar to net->tx_zcopy_err and net->tx_packets (though
> >>> without reset to zero, as in vhost_net_tx_packet).
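> >>>
> >>> As a rough sketch (the *_total names are hypothetical, and how to
> >>> expose them, e.g. via debugfs, is left open):
> >>>
> >>>     struct vhost_net {
> >>>             ...
> >>>             /* existing counters, periodically reset by
> >>>              * vhost_net_tx_packet() */
> >>>             unsigned tx_packets;
> >>>             unsigned tx_zcopy_err;
> >>>             /* hypothetical: monotonic, never reset */
> >>>             u64 tx_packets_total;
> >>>             u64 tx_zcopy_err_total;
> >>>     };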
> >
> >
> > I think it's acceptable to spend extra cycles if we detect HoL blocking anyhow.
> >
> >>>
> >>>> 1) sndbuf is not INT_MAX
> >>>
> >>> You mean the case where the device stalls, later zerocopy notifications
> >>> are queued, but these are never cleaned in free_old_xmit_skbs,
> >>> because it requires a start_xmit and by now the (only) socket is out of
> >>> descriptors?
> >>
> >> Typo, sorry. I meant out of sndbuf.
> >
> >
> > I mean e.g. for tun. If its sndbuf is smaller than e.g. (vq->num >> 1) *
> > $pkt_size and all packets are held by some module, a limit like
> > vq->num >> 1 won't work since we hit the sndbuf limit before it.
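> >
> > For example (illustrative numbers only): with vq->num = 256 and ~1500
> > byte packets, (vq->num >> 1) * $pkt_size is 128 * 1500 = 192000 bytes,
> > so any tun sndbuf below roughly 190KB is exhausted before the
> > vq->num >> 1 limit ever triggers.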
> 
> Good point.

I agree, however anyone using SNDBUF < infinity already hits head-of-queue
blocking in some scenarios.


> >>
> >>> A watchdog would help somewhat. With tx-napi, this case cannot occur,
> >>> either, as free_old_xmit_skbs no longer depends on a call to start_xmit.
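> >>>
> >>> (Simplified from virtio_net.c's tx napi handler; with tx-napi the
> >>> completion path is driven by the poll handler rather than by
> >>> start_xmit, roughly:
> >>>
> >>>     static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> >>>     {
> >>>             struct send_queue *sq = container_of(napi, struct send_queue, napi);
> >>>             struct virtnet_info *vi = sq->vq->vdev->priv;
> >>>             struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));
> >>>
> >>>             __netif_tx_lock(txq, raw_smp_processor_id());
> >>>             free_old_xmit_skbs(sq);         /* reclaim completed tx buffers */
> >>>             __netif_tx_unlock(txq);
> >>>
> >>>             virtqueue_napi_complete(napi, sq->vq, 0);
> >>>             return 0;
> >>>     }
> >>>
> >>> so stalled completions are reclaimed even if no further xmit arrives.)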
> >>>
> >>>> 2) tx napi is used for virtio-net
> >>>
> >>> I am not aware of any issue specific to the use of tx-napi?
> >
> >
> > Might not have been clear here; I mean e.g. virtio_net (tx-napi) in the
> > guest + vhost_net (zerocopy) in the host. In this case, even if we switch
> > to datacopy when the ubuf count exceeds vq->num >> 1, we still complete tx
> > buffers in order, so the tx interrupt could be delayed for an indefinite
> > time.
> 
> Copied buffers are completed immediately in handle_tx.
> 
> Do you mean when a process sends fewer packets than vq->num >> 1,
> so that all are queued? Yes, then the latency is indeed that of the last
> element leaving the qdisc.
> 
> >>>
> >>>> 1) could be a corner case, and for 2) what you suggest here may not
> >>>> solve the issue since it still does in-order completion.
> >>>
> >>> Somewhat tangential, but it might also help to break the in-order
> >>> completion processing in vhost_zerocopy_signal_used. Complete
> >>> all descriptors between done_idx and upend_idx. done_idx should
> >>> then only be advanced to the oldest not yet completed descriptor.
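> >>>
> >>> Roughly (an untested sketch; the real code would batch the used-ring
> >>> updates and coalesce signals instead of signalling per descriptor):
> >>>
> >>>     /* complete every finished entry in [done_idx, upend_idx), not
> >>>      * just the leading contiguous run */
> >>>     for (i = nvq->done_idx; i != nvq->upend_idx; i = (i + 1) % UIO_MAXIOV) {
> >>>             if (!VHOST_DMA_IS_DONE(vq->heads[i].len))
> >>>                     continue;               /* still in flight, skip */
> >>>             vhost_add_used_and_signal(&net->dev, vq, vq->heads[i].id, 0);
> >>>             vq->heads[i].len = VHOST_DMA_CLEAR_LEN;
> >>>     }
> >>>     /* advance done_idx only up to the oldest still-pending entry */
> >>>     while (nvq->done_idx != nvq->upend_idx &&
> >>>            vq->heads[nvq->done_idx].len == VHOST_DMA_CLEAR_LEN)
> >>>             nvq->done_idx = (nvq->done_idx + 1) % UIO_MAXIOV;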
> >>>
> >>> In the test I ran, where the oldest descriptors are held in a queue and
> >>> all newer ones are tail-dropped,
> >
> >
> > Do you mean the descriptors were tail-dropped by vhost?
> 
> Tail-dropped by netem. The dropped items are completed out of
> order by vhost before the held items.
