From: Rick Jones <rick.jones2@hp.com>
To: Vimalkumar <j.vimal@gmail.com>
Cc: davem@davemloft.net, eric.dumazet@gmail.com,
Jamal Hadi Salim <jhs@mojatatu.com>,
netdev@vger.kernel.org
Subject: Re: [PATCH] htb: improved accuracy at high rates
Date: Fri, 19 Oct 2012 16:52:08 -0700
Message-ID: <5081E7A8.2080009@hp.com>
In-Reply-To: <1350685582-65334-1-git-send-email-j.vimal@gmail.com>
On 10/19/2012 03:26 PM, Vimalkumar wrote:
> The current HTB (and TBF) uses a rate table computed by
> the "tc" userspace program, which has the following
> issue:
>
> The rate table has 256 entries mapping packet lengths
> to tokens (time units). With TSO-sized packets, the
> 256-entry granularity leads to loss/gain of rate,
> making the token bucket inaccurate.
>
> Thus, instead of relying on the rate table, this patch
> explicitly computes the time and accounts for packet
> transmission times with nanosecond granularity.
>
> This greatly improves accuracy of HTB with a wide
> range of packet sizes.
>
> Example:
>
> tc qdisc add dev $dev root handle 1: \
> htb default 1
>
> tc class add dev $dev classid 1:1 parent 1: \
> rate 1Gbit mtu 64k
>
> Ideally it should work with all intermediate-sized
> packets as well, but...
>
> Test:
> for i in {1..20}; do
> (netperf -H $host -t UDP_STREAM -l 30 -- -m $size &);
> done
>
> With size=400 bytes: achieved rate ~600Mb/s
> With size=1000 bytes: achieved rate ~835Mb/s
> With size=8000 bytes: achieved rate ~1012Mb/s
>
> With new HTB, in all cases, we achieve ~1000Mb/s.
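The quantization being described can be sketched numerically. A minimal illustration, assuming tc picks cell_log=8 for a 64k mtu (so each of the 256 table slots covers 256 bytes) and that a packet is billed at the upper edge of its slot:

```shell
# Sketch of the rounding in the old rtab scheme (assumptions: 256-entry
# table, cell_log=8 as tc would choose for mtu 64k, packet charged at
# the top of its 256-byte cell).
for size in 400 1000 8000; do
    awk -v len="$size" -v cell_log=8 'BEGIN {
        slot = int(len / 2^cell_log)        # rtab index: len >> cell_log
        charged = (slot + 1) * 2^cell_log   # bytes the token bucket bills
        printf "%dB packet charged as %dB (%.2fx the true cost)\n",
               len, charged, charged/len
    }'
done
```

The exact error depends on where within a cell tc computes the transmit time, but it is bounded by one 256-byte cell either way; nanosecond accounting removes the rounding entirely.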
First some netperf/operational kinds of questions:
Did it really take 20 concurrent netperf UDP_STREAM tests to get to
those rates? And why UDP_STREAM rather than TCP_STREAM?
I couldn't recall whether GSO did anything for UDP, so I ran some quick
and dirty tests flipping GSO on and off on a 3.2.0 kernel, and the
service demands didn't seem to change. So, with 8000 bytes of user
payload, did HTB actually see 8000ish byte packets, or did it see a
series of <= MTU sized IP datagram fragments? Or did the NIC being used
have UFO enabled?
Which reported throughput was used from the UDP_STREAM tests - send side
or receive side?
Is there much/any change in service demand on a netperf test? That is,
what is the service demand of a mumble_STREAM test running through the
old HTB versus the new HTB? And/or the performance of a TCP_RR test
(both transactions per second and service demand per transaction) before
vs. after?
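For reference, the service-demand numbers asked about above come from netperf's CPU-measurement options; a sketch of the kind of before/after run meant (flags are standard netperf; $host is the remote system, and the qdisc setup is the one quoted earlier):

```shell
# -c / -C request local / remote CPU measurement; netperf then reports
# service demand alongside throughput. Run once with the old rtab-based
# HTB and once with the patched HTB, then compare.
netperf -H $host -t TCP_STREAM -l 30 -c -C   # bulk: Mbit/s + usec/KB
netperf -H $host -t TCP_RR -l 30 -c -C       # request/response: trans/s + usec/tran
```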
happy benchmarking,
rick jones
Thread overview: 8 messages
2012-10-19 22:26 [PATCH] htb: improved accuracy at high rates Vimalkumar
2012-10-19 23:52 ` Rick Jones [this message]
2012-10-20 0:51 ` Vimal
2012-10-22 17:35 ` Rick Jones
2012-10-20 7:26 ` Eric Dumazet
2012-10-20 16:41 ` Vimal
2012-10-20 7:47 ` Eric Dumazet
2012-10-20 10:29 ` Eric Dumazet