From: "Toke Høiland-Jørgensen" <toke@redhat.com>
To: Eric Dumazet <edumazet@google.com>,
	"David S . Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: Simon Horman <horms@kernel.org>,
	Jamal Hadi Salim <jhs@mojatatu.com>,
	Cong Wang <xiyou.wangcong@gmail.com>,
	Jiri Pirko <jiri@resnulli.us>,
	Kuniyuki Iwashima <kuniyu@google.com>,
	Willem de Bruijn <willemb@google.com>,
	netdev@vger.kernel.org, eric.dumazet@gmail.com,
	Eric Dumazet <edumazet@google.com>
Subject: Re: [PATCH v1 net-next 5/5] net: dev_queue_xmit() llist adoption
Date: Fri, 07 Nov 2025 16:28:16 +0100	[thread overview]
Message-ID: <877bw1ooa7.fsf@toke.dk> (raw)
In-Reply-To: <20251013145416.829707-6-edumazet@google.com>

Eric Dumazet <edumazet@google.com> writes:

> Remove the busylock spinlock and use a lockless list (llist)
> to reduce spinlock contention to a minimum.
>
> The idea is that only one CPU might spin on the qdisc spinlock,
> while the others simply add their skb to the llist.
>
> After this patch, we get a 300% improvement on heavy TX workloads:
> - sending twice the number of packets per second,
> - while consuming 50% fewer cycles.
>
> Note that this also allows submitting batches to the various
> qdisc->enqueue() methods in the future.
>
> Tested:
>
> - Dual Intel(R) Xeon(R) 6985P-C  (480 hyper threads).
> - 100Gbit NIC, 30 TX queues with FQ packet scheduler.
> - echo 64 >/sys/kernel/slab/skbuff_small_head/cpu_partial (avoid contention in mm)
> - 240 concurrent "netperf -t UDP_STREAM -- -m 120 -n"

Hi Eric

While testing this with sch_cake (to get a new baseline for the mq_cake
patches as Jamal suggested), I found that this patch completely destroys
the performance of cake in particular.

I ran a small UDP test (64-byte packets across 16 flows through
xdp-trafficgen, offered load ~5 Mpps) with a single cake instance
installed as the root qdisc on the interface.

With a stock Fedora (6.17.7) kernel, this gets me around 630 Kpps across
8 queues (on an E810-C, ice driver):

Ethtool(ice0p1  ) stat:     40321218 (     40,321,218) <= tx_bytes /sec
Ethtool(ice0p1  ) stat:     42841424 (     42,841,424) <= tx_bytes.nic /sec
Ethtool(ice0p1  ) stat:      5248505 (      5,248,505) <= tx_queue_0_bytes /sec
Ethtool(ice0p1  ) stat:        82008 (         82,008) <= tx_queue_0_packets /sec
Ethtool(ice0p1  ) stat:      3425984 (      3,425,984) <= tx_queue_1_bytes /sec
Ethtool(ice0p1  ) stat:        53531 (         53,531) <= tx_queue_1_packets /sec
Ethtool(ice0p1  ) stat:      5277496 (      5,277,496) <= tx_queue_2_bytes /sec
Ethtool(ice0p1  ) stat:        82461 (         82,461) <= tx_queue_2_packets /sec
Ethtool(ice0p1  ) stat:      5285736 (      5,285,736) <= tx_queue_3_bytes /sec
Ethtool(ice0p1  ) stat:        82590 (         82,590) <= tx_queue_3_packets /sec
Ethtool(ice0p1  ) stat:      5280731 (      5,280,731) <= tx_queue_4_bytes /sec
Ethtool(ice0p1  ) stat:        82511 (         82,511) <= tx_queue_4_packets /sec
Ethtool(ice0p1  ) stat:      5275665 (      5,275,665) <= tx_queue_5_bytes /sec
Ethtool(ice0p1  ) stat:        82432 (         82,432) <= tx_queue_5_packets /sec
Ethtool(ice0p1  ) stat:      5276398 (      5,276,398) <= tx_queue_6_bytes /sec
Ethtool(ice0p1  ) stat:        82444 (         82,444) <= tx_queue_6_packets /sec
Ethtool(ice0p1  ) stat:      5250946 (      5,250,946) <= tx_queue_7_bytes /sec
Ethtool(ice0p1  ) stat:        82046 (         82,046) <= tx_queue_7_packets /sec
Ethtool(ice0p1  ) stat:            1 (              1) <= tx_restart /sec
Ethtool(ice0p1  ) stat:       630023 (        630,023) <= tx_size_127.nic /sec
Ethtool(ice0p1  ) stat:       630019 (        630,019) <= tx_unicast /sec
Ethtool(ice0p1  ) stat:       630020 (        630,020) <= tx_unicast.nic /sec

However, running the same test on a net-next kernel, performance drops
to around 10 Kpps(!):

Ethtool(ice0p1  ) stat:       679003 (        679,003) <= tx_bytes /sec
Ethtool(ice0p1  ) stat:       721440 (        721,440) <= tx_bytes.nic /sec
Ethtool(ice0p1  ) stat:       123539 (        123,539) <= tx_queue_0_bytes /sec
Ethtool(ice0p1  ) stat:         1930 (          1,930) <= tx_queue_0_packets /sec
Ethtool(ice0p1  ) stat:         1776 (          1,776) <= tx_queue_1_bytes /sec
Ethtool(ice0p1  ) stat:           28 (             28) <= tx_queue_1_packets /sec
Ethtool(ice0p1  ) stat:         1837 (          1,837) <= tx_queue_2_bytes /sec
Ethtool(ice0p1  ) stat:           29 (             29) <= tx_queue_2_packets /sec
Ethtool(ice0p1  ) stat:         1776 (          1,776) <= tx_queue_3_bytes /sec
Ethtool(ice0p1  ) stat:           28 (             28) <= tx_queue_3_packets /sec
Ethtool(ice0p1  ) stat:         1654 (          1,654) <= tx_queue_4_bytes /sec
Ethtool(ice0p1  ) stat:           26 (             26) <= tx_queue_4_packets /sec
Ethtool(ice0p1  ) stat:       222026 (        222,026) <= tx_queue_5_bytes /sec
Ethtool(ice0p1  ) stat:         3469 (          3,469) <= tx_queue_5_packets /sec
Ethtool(ice0p1  ) stat:       183072 (        183,072) <= tx_queue_6_bytes /sec
Ethtool(ice0p1  ) stat:         2861 (          2,861) <= tx_queue_6_packets /sec
Ethtool(ice0p1  ) stat:       143322 (        143,322) <= tx_queue_7_bytes /sec
Ethtool(ice0p1  ) stat:         2239 (          2,239) <= tx_queue_7_packets /sec
Ethtool(ice0p1  ) stat:        10609 (         10,609) <= tx_size_127.nic /sec
Ethtool(ice0p1  ) stat:        10609 (         10,609) <= tx_unicast /sec
Ethtool(ice0p1  ) stat:        10609 (         10,609) <= tx_unicast.nic /sec

Reverting commit 100dfa74cad9 ("net: dev_queue_xmit() llist adoption")
(and the follow-on f8a55d5e71e6 ("net: add a fast path in
__netif_schedule()"), though reverting that one alone makes no
difference) gets me back to the previous 630-650 Kpps range.

I couldn't find any other qdisc that suffers in the same way (I tried
fq_codel, sfq and netem as single root qdiscs), so this seems to be some
specific interaction between the llist implementation and sch_cake. Any
idea what could be causing this?
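
For reference, my mental model of the new enqueue path is roughly the
following (paraphrased from my reading of the patch description above,
not the actual code; field and helper names are approximate, and drop
handling, qlen accounting and the qdisc_run_begin() dance are omitted):

  struct llist_node *ll;
  struct sk_buff *next, *to_free = NULL;

  /* Every CPU pushes its skb onto the lockless list; llist_add()
   * returns true only for the CPU that found the list empty, and
   * that CPU becomes responsible for flushing the batch.
   * (Assuming an llist_node embedded in struct sk_buff; the actual
   * field name may differ.)
   */
  if (!llist_add(&skb->ll_node, &q->defer_list))
          return NET_XMIT_SUCCESS;        /* owner CPU will flush us */

  spin_lock(qdisc_lock(q));
  /* Grab everything queued so far and feed it to q->enqueue()
   * (cake_enqueue() in my test) in one batch under the lock.
   */
  ll = llist_del_all(&q->defer_list);
  ll = llist_reverse_order(ll);           /* restore FIFO order */
  llist_for_each_entry_safe(skb, next, ll, ll_node)
          q->enqueue(skb, q, &to_free);
  qdisc_run(q);
  spin_unlock(qdisc_lock(q));

If that reading is right, cake now sees its enqueues in bursts from a
single CPU holding the qdisc lock, rather than one enqueue per lock
acquisition, so maybe something in cake's per-enqueue work reacts badly
to that pattern.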

-Toke


Thread overview: 24+ messages
2025-10-13 14:54 [PATCH v1 net-next 0/5] net: optimize TX throughput and efficiency Eric Dumazet
2025-10-13 14:54 ` [PATCH v1 net-next 1/5] net: add add indirect call wrapper in skb_release_head_state() Eric Dumazet
2025-10-13 14:54 ` [PATCH v1 net-next 2/5] net/sched: act_mirred: add loop detection Eric Dumazet
2025-10-13 14:54 ` [PATCH v1 net-next 3/5] Revert "net/sched: Fix mirred deadlock on device recursion" Eric Dumazet
2025-10-13 14:54 ` [PATCH v1 net-next 4/5] net: sched: claim one cache line in Qdisc Eric Dumazet
2025-10-13 14:54 ` [PATCH v1 net-next 5/5] net: dev_queue_xmit() llist adoption Eric Dumazet
2025-11-07 15:28   ` Toke Høiland-Jørgensen [this message]
2025-11-07 15:37     ` Eric Dumazet
2025-11-07 15:46       ` Eric Dumazet
2025-11-09 10:09         ` Eric Dumazet
2025-11-09 12:54           ` Eric Dumazet
2025-11-09 16:33             ` Toke Høiland-Jørgensen
2025-11-09 17:14               ` Eric Dumazet
2025-11-09 19:18               ` Jonas Köppeler
2025-11-09 19:28                 ` Eric Dumazet
2025-11-09 20:18                   ` Eric Dumazet
2025-11-09 20:29                     ` Eric Dumazet
2025-11-10 11:31                       ` Toke Høiland-Jørgensen
2025-11-10 13:26                         ` Eric Dumazet
2025-11-10 14:49                           ` Toke Høiland-Jørgensen
2025-11-10 17:34                             ` Eric Dumazet
2025-11-11 13:44                               ` Jonas Köppeler
2025-11-11 16:42                                 ` Toke Høiland-Jørgensen
2025-10-13 16:23 ` [PATCH v1 net-next 0/5] net: optimize TX throughput and efficiency Toke Høiland-Jørgensen
