From: John Fastabend <john.fastabend@gmail.com>
To: willemdebruijn.kernel@gmail.com, daniel@iogearbox.net,
	eric.dumazet@gmail.com
Cc: make0818@gmail.com, netdev@vger.kernel.org, jiri@resnulli.us,
	xiyou.wangcong@gmail.com
Subject: [RFC PATCH 00/17] lockless qdisc
Date: Mon, 13 Nov 2017 12:07:38 -0800	[thread overview]
Message-ID: <20171113195256.6245.64676.stgit@john-Precision-Tower-5810> (raw)

Multiple folks asked me about this series at net(dev)conf, so with a
10+ hour flight and a bit of testing once back home I think these
are ready to be submitted. Net-next is closed at the moment,

  http://vger.kernel.org/~davem/net-next.html

but once it opens up we can get these in first thing and have
plenty of time to resolve any fallout. That said, I haven't seen any
issues in my latest testing.

My first test case uses multiple containers (via cilium), where
several client containers use 'wrk' to benchmark connections against
a server container running lighttpd. lighttpd is configured to use
multiple threads, one per core. Additionally, this test has a proxy
agent running, so all traffic takes an extra hop through a proxy
container. In this setup each TCP packet traverses the egress qdisc
layer at least four times and the ingress qdisc layer an additional
four times. This makes for a good stress test IMO; perf details
below.
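
For anyone reproducing this, a quick way to keep an eye on the qdiscs
while the stress test runs is something like the following (the veth
names are placeholders for whatever the container runtime created):

  # dump per-qdisc counters (packets, drops, requeues) once a second
  watch -n1 'tc -s qdisc show dev veth_client; tc -s qdisc show dev veth_proxy'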

The other micro-benchmark I run injects packets directly into the
qdisc layer using pktgen. This uses the benchmark script,

 ./pktgen_bench_xmit_mode_queue_xmit.sh 

Benchmarks were taken in two cases: "base", running the latest
net-next with no changes to the qdisc layer, and "qdisc", running
with the lockless qdisc updates. The wrk numbers are reported in
req/sec. All virtual 'veth' devices run pfifo_fast in the qdisc test
case.
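
If the veth devices don't already default to pfifo_fast, it can be
pinned explicitly with a sketch like this (the veth name is a
placeholder):

  # force pfifo_fast as the root qdisc on a container's veth
  tc qdisc replace dev veth_server root pfifo_fast
  tc qdisc show dev veth_server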

`wrk -t16 -c $conns -d30 "http://[$SERVER_IP4]:80"`

conns    16      32     64   1024
-----------------------------------------------
base:   18831  20201  21393  29151
qdisc:  19309  21063  23899  29265

Notice that in all cases we see a performance improvement when
running with the lockless qdisc.

Microbenchmarks using pktgen are as follows,

`pktgen_bench_xmit_mode_queue_xmit.sh -t 1 -i eth2 -c 20000000`

base(mq):          2.1Mpps
base(pfifo_fast):  2.1Mpps
qdisc(mq):         2.6Mpps
qdisc(pfifo_fast): 2.6Mpps

Notice the numbers are the same for mq and pfifo_fast because we are
only testing a single thread here.
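
Switching the interface between the two qdisc cases before each
pktgen run can be done roughly as follows (eth2 as in the command
above):

  # mq case: one default qdisc (pfifo_fast by default) per tx queue
  tc qdisc replace dev eth2 root mq
  # pfifo_fast case: a single pfifo_fast root qdisc
  tc qdisc replace dev eth2 root pfifo_fast
  # then re-run the benchmark
  ./pktgen_bench_xmit_mode_queue_xmit.sh -t 1 -i eth2 -c 20000000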

Comments and feedback welcome. Anyone willing to do additional testing
would be greatly appreciated. The patches can be pulled here,

  https://github.com/cilium/linux/tree/qdisc

Thanks,
John

---

John Fastabend (17):
      net: sched: cleanup qdisc_run and __qdisc_run semantics
      net: sched: allow qdiscs to handle locking
      net: sched: remove remaining uses for qdisc_qlen in xmit path
      net: sched: provide per cpu qstat helpers
      net: sched: a dflt qdisc may be used with per cpu stats
      net: sched: explicit locking in gso_cpu fallback
      net: sched: drop qdisc_reset from dev_graft_qdisc
      net: sched: use skb list for skb_bad_tx
      net: sched: check for frozen queue before skb_bad_txq check
      net: sched: qdisc_qlen for per cpu logic
      net: sched: helper to sum qlen
      net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mq
      net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mqprio
      net: skb_array: expose peek API
      net: sched: pfifo_fast use skb_array
      net: skb_array additions for unlocked consumer
      net: sched: lock once per bulk dequeue


 0 files changed

Thread overview: 31+ messages
2017-11-13 20:07 John Fastabend [this message]
2017-11-13 20:07 ` [RFC PATCH 01/17] net: sched: cleanup qdisc_run and __qdisc_run semantics John Fastabend
2017-11-13 20:08 ` [RFC PATCH 02/17] net: sched: allow qdiscs to handle locking John Fastabend
2017-11-13 20:08 ` [RFC PATCH 03/17] net: sched: remove remaining uses for qdisc_qlen in xmit path John Fastabend
2017-11-15  0:11   ` Willem de Bruijn
2017-11-15  1:56     ` Willem de Bruijn
2017-11-15 15:00       ` John Fastabend
2017-11-13 20:08 ` [RFC PATCH 04/17] net: sched: provide per cpu qstat helpers John Fastabend
2017-11-13 20:09 ` [RFC PATCH 05/17] net: sched: a dflt qdisc may be used with per cpu stats John Fastabend
2017-11-13 20:09 ` [RFC PATCH 06/17] net: sched: explicit locking in gso_cpu fallback John Fastabend
2017-11-15  0:41   ` Willem de Bruijn
2017-11-15  2:04     ` Willem de Bruijn
2017-11-15 15:11     ` John Fastabend
2017-11-15 17:51       ` Willem de Bruijn
2017-11-16 13:31         ` John Fastabend
2017-11-13 20:09 ` [RFC PATCH 07/17] net: sched: drop qdisc_reset from dev_graft_qdisc John Fastabend
2017-11-13 20:10 ` [RFC PATCH 08/17] net: sched: use skb list for skb_bad_tx John Fastabend
2017-11-13 20:10 ` [RFC PATCH 09/17] net: sched: check for frozen queue before skb_bad_txq check John Fastabend
2017-11-13 20:10 ` [RFC PATCH 10/17] net: sched: qdisc_qlen for per cpu logic John Fastabend
2017-11-15  1:16   ` Willem de Bruijn
2017-11-15 15:18     ` John Fastabend
2017-11-13 20:11 ` [RFC PATCH 11/17] net: sched: helper to sum qlen John Fastabend
2017-11-13 20:11 ` [RFC PATCH 12/17] net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mq John Fastabend
2017-11-15  1:22   ` Willem de Bruijn
2017-11-13 20:11 ` [RFC PATCH 13/17] net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mqprio John Fastabend
2017-11-13 20:12 ` [RFC PATCH 14/17] net: skb_array: expose peek API John Fastabend
2017-11-13 20:12 ` [RFC PATCH 15/17] net: sched: pfifo_fast use skb_array John Fastabend
2017-11-14 23:34   ` Willem de Bruijn
2017-11-15 14:57     ` John Fastabend
2017-11-13 20:12 ` [RFC PATCH 16/17] net: skb_array additions for unlocked consumer John Fastabend
2017-11-13 20:13 ` [RFC PATCH 17/17] net: sched: lock once per bulk dequeue John Fastabend
