From: David Woodhouse <dwmw2@infradead.org>
To: "Jason Wang" <jasowang@redhat.com>,
	"Toke Høiland-Jørgensen" <toke@redhat.com>,
	netdev@vger.kernel.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: Tun congestion/BQL
Date: Thu, 11 Apr 2019 11:56:06 +0300
Message-ID: <79f9e78d6f653a4a4ccd2fad76d8c39622491172.camel@infradead.org>
In-Reply-To: <a037e8d3-f0b1-1680-be29-3831c9860682@redhat.com>

On Thu, 2019-04-11 at 15:17 +0800, Jason Wang wrote:
> > > Ideally we want to react when the queue starts building rather than when
> > > it starts getting full; by pushing back on upper layers (or, if
> > > forwarding, dropping packets to signal congestion).
> > 
> > This is precisely what my first accidental if (!ptr_ring_empty())
> > variant was doing, right? :)
> 
> 
> But I gave your ptr_ring_full() patch a try on a VM, and it looks
> like it works (single flow); no packets were dropped by TAP anymore.
> How many flows did you use?

Hm, I thought I was only using one. This is just a simple case of
userspace opening /dev/net/tun, TUNSETIFF, and reading/writing.
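
(For concreteness, the userspace side is essentially the sketch below;
tun_open() and its use are illustrative, not the actual test code. It
attaches to the existing device with TUNSETIFF and then just read()s
and write()s packets on the fd.)

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Attach to an existing tun device; the flags must match the ones the
 * persistent device was created with. Returns an fd on which read()
 * and write() move whole packets. */
int tun_open(const char *name)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	if (fd < 0)
		return -1;

	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TUN | IFF_NO_PI;
	strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		close(fd);
		return -1;
	}

	return fd;
}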

But if I was stopping the *wrong* queue, that might explain things.

This is a persistent tun device.
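
For reference, the backpressure side of the proof-of-concept is
roughly the sketch below. The helper names (tun_ring_xmit(),
tun_ring_consumed()) are mine for illustration; the real changes live
in tun_net_xmit() and the read path of drivers/net/tun.c. The point is
that the stop on the TX side and the wake on the read side have to hit
the *same* subqueue:

#include <linux/netdevice.h>
#include <linux/ptr_ring.h>
#include <linux/skbuff.h>

/* TX side: enqueue, then push back on the stack as soon as the
 * per-queue ring fills, so further packets stay queued in the qdisc
 * instead of being dropped by TAP. */
static netdev_tx_t tun_ring_xmit(struct net_device *dev,
				 struct ptr_ring *ring,
				 struct sk_buff *skb, u16 txq)
{
	if (ptr_ring_produce(ring, skb)) {
		/* Ring already full; shouldn't happen if we stopped in time. */
		dev_kfree_skb_any(skb);
		return NET_XMIT_DROP;
	}

	if (ptr_ring_full(ring))
		netif_stop_subqueue(dev, txq);

	return NETDEV_TX_OK;
}

/* Read side: after userspace consumes a packet, wake the matching
 * subqueue again. Waking a *different* subqueue than the one that was
 * stopped leaves the device wedged, which is the suspected bug above. */
static void tun_ring_consumed(struct net_device *dev,
			      struct ptr_ring *ring, u16 txq)
{
	if (!ptr_ring_full(ring) && __netif_subqueue_stopped(dev, txq))
		netif_wake_subqueue(dev, txq);
}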

> 
> > 
> > > In practice, this means tuning the TX ring to the *minimum* size it can
> > > be without starving (this is basically what BQL does for Ethernet), and
> > > keeping packets queued in the qdisc layer instead, where it can be
> > > managed...
> > 
> > I was going to add BQL (as $SUBJECT may have caused you to infer) but
> > trivially adding the netdev_sent_queue() in tun_net_xmit() and
> > netdev_completed_queue() for xdp vs. skb in tun_do_read() was tripping
> > the BUG in dql_completed().
> 
> 
> Something like https://lists.openwall.net/netdev/2012/11/12/6767 ?

Pretty much.

Except that, again, I was being lazy for the proof-of-concept:
ignoring 'txq' and just using the device-wide netdev_sent_queue() etc.
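
That is, roughly the sketch below (again, the helper names are mine,
and this is the single-queue shortcut; a proper version would use
netdev_tx_sent_queue()/netdev_tx_completed_queue() on the struct
netdev_queue for 'txq'). My best guess at the BUG: dql_completed()
BUG_ON()s when more bytes complete than were ever reported sent, which
is exactly what happens if tun_do_read() accounts for XDP frames that
were queued via tun_xdp_xmit() and so never went through
tun_net_xmit():

#include <linux/netdevice.h>

/* TX side, at the end of tun_net_xmit() once the skb is on the ring;
 * netdev_sent_queue() implicitly accounts against txq 0 only. */
static inline void tun_bql_sent(struct net_device *dev, struct sk_buff *skb)
{
	netdev_sent_queue(dev, skb->len);
}

/* Completion side, in tun_do_read() after userspace pulls a packet
 * off the ring. Completing bytes that were never reported as sent
 * (e.g. XDP frames enqueued via tun_xdp_xmit(), which bypasses
 * tun_net_xmit()) trips the BUG_ON() in dql_completed(). */
static inline void tun_bql_completed(struct net_device *dev, unsigned int len)
{
	netdev_completed_queue(dev, 1, len);
}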


