bpf.vger.kernel.org archive mirror
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: "Alexei Starovoitov" <alexei.starovoitov@gmail.com>,
	"Toke Høiland-Jørgensen" <toke@redhat.com>,
	LKML <linux-kernel@vger.kernel.org>,
	"Network Development" <netdev@vger.kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	"Boqun Feng" <boqun.feng@gmail.com>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"Eric Dumazet" <edumazet@google.com>,
	"Frederic Weisbecker" <frederic@kernel.org>,
	"Ingo Molnar" <mingo@redhat.com>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Paolo Abeni" <pabeni@redhat.com>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Waiman Long" <longman@redhat.com>,
	"Will Deacon" <will@kernel.org>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Andrii Nakryiko" <andrii@kernel.org>,
	"Eduard Zingerman" <eddyz87@gmail.com>,
	"Hao Luo" <haoluo@google.com>, "Jiri Olsa" <jolsa@kernel.org>,
	"John Fastabend" <john.fastabend@gmail.com>,
	"KP Singh" <kpsingh@kernel.org>,
	"Martin KaFai Lau" <martin.lau@linux.dev>,
	"Song Liu" <song@kernel.org>,
	"Stanislav Fomichev" <sdf@google.com>,
	"Yonghong Song" <yonghong.song@linux.dev>,
	bpf <bpf@vger.kernel.org>
Subject: Re: [PATCH net-next 14/15 v2] net: Reference bpf_redirect_info via task_struct on PREEMPT_RT.
Date: Fri, 17 May 2024 18:15:53 +0200	[thread overview]
Message-ID: <20240517161553.SSh4BNQO@linutronix.de> (raw)
In-Reply-To: <e4123697-3e6e-4d4a-8b06-f69e1c453225@kernel.org>

On 2024-05-14 14:20:03 [+0200], Jesper Dangaard Brouer wrote:
> Trick for CPU-map to do early drop on remote CPU:
> 
>  # ./xdp-bench redirect-cpu --cpu 3 --remote-action drop ixgbe1
> 
> I recommend pressing Ctrl+\ while it runs to show more info, like which
> CPUs are being used and how much the kthread consumes. This helps catch
> issues, e.g. if you are CPU-redirecting to the same CPU that RX happens
> to run on.

Okay. So I reworked the last two patches to make the struct part of
task_struct and then did as you suggested:

Unpatched:
|Sending:
|Show adapter(s) (eno2np1) statistics (ONLY that changed!)
|Ethtool(eno2np1 ) stat:    952102520 (    952,102,520) <= port.tx_bytes /sec
|Ethtool(eno2np1 ) stat:     14876602 (     14,876,602) <= port.tx_size_64 /sec
|Ethtool(eno2np1 ) stat:     14876602 (     14,876,602) <= port.tx_unicast /sec
|Ethtool(eno2np1 ) stat:    446045897 (    446,045,897) <= tx-0.bytes /sec
|Ethtool(eno2np1 ) stat:      7434098 (      7,434,098) <= tx-0.packets /sec
|Ethtool(eno2np1 ) stat:    446556042 (    446,556,042) <= tx-1.bytes /sec
|Ethtool(eno2np1 ) stat:      7442601 (      7,442,601) <= tx-1.packets /sec
|Ethtool(eno2np1 ) stat:    892592523 (    892,592,523) <= tx_bytes /sec
|Ethtool(eno2np1 ) stat:     14876542 (     14,876,542) <= tx_packets /sec
|Ethtool(eno2np1 ) stat:            2 (              2) <= tx_restart /sec
|Ethtool(eno2np1 ) stat:            2 (              2) <= tx_stopped /sec
|Ethtool(eno2np1 ) stat:     14876622 (     14,876,622) <= tx_unicast /sec
|
|Receive:
|eth1->?                 8,732,508 rx/s                  0 err,drop/s
|  receive total         8,732,508 pkt/s                 0 drop/s                0 error/s
|    cpu:10              8,732,508 pkt/s                 0 drop/s                0 error/s
|  enqueue to cpu 3      8,732,510 pkt/s                 0 drop/s             7.00 bulk-avg
|    cpu:10->3           8,732,510 pkt/s                 0 drop/s             7.00 bulk-avg
|  kthread total         8,732,506 pkt/s                 0 drop/s          205,650 sched
|    cpu:3               8,732,506 pkt/s                 0 drop/s          205,650 sched
|    xdp_stats                   0 pass/s        8,732,506 drop/s                0 redir/s
|      cpu:3                     0 pass/s        8,732,506 drop/s                0 redir/s
|  redirect_err                  0 error/s
|  xdp_exception                 0 hit/s

I verified that the "drop only" case hits 14M packets/s while this
redirect part reports 8M packets/s.

Patched:
|Sending:
|Show adapter(s) (eno2np1) statistics (ONLY that changed!)
|Ethtool(eno2np1 ) stat:    952635404 (    952,635,404) <= port.tx_bytes /sec
|Ethtool(eno2np1 ) stat:     14884934 (     14,884,934) <= port.tx_size_64 /sec
|Ethtool(eno2np1 ) stat:     14884928 (     14,884,928) <= port.tx_unicast /sec
|Ethtool(eno2np1 ) stat:    446496117 (    446,496,117) <= tx-0.bytes /sec
|Ethtool(eno2np1 ) stat:      7441602 (      7,441,602) <= tx-0.packets /sec
|Ethtool(eno2np1 ) stat:    446603461 (    446,603,461) <= tx-1.bytes /sec
|Ethtool(eno2np1 ) stat:      7443391 (      7,443,391) <= tx-1.packets /sec
|Ethtool(eno2np1 ) stat:    893086506 (    893,086,506) <= tx_bytes /sec
|Ethtool(eno2np1 ) stat:     14884775 (     14,884,775) <= tx_packets /sec
|Ethtool(eno2np1 ) stat:           14 (             14) <= tx_restart /sec
|Ethtool(eno2np1 ) stat:           14 (             14) <= tx_stopped /sec
|Ethtool(eno2np1 ) stat:     14884937 (     14,884,937) <= tx_unicast /sec
|
|Receive:
|eth1->?                 8,735,198 rx/s                  0 err,drop/s
|  receive total         8,735,198 pkt/s                 0 drop/s                0 error/s
|    cpu:6               8,735,198 pkt/s                 0 drop/s                0 error/s
|  enqueue to cpu 3      8,735,193 pkt/s                 0 drop/s             7.00 bulk-avg
|    cpu:6->3            8,735,193 pkt/s                 0 drop/s             7.00 bulk-avg
|  kthread total         8,735,191 pkt/s                 0 drop/s          208,054 sched
|    cpu:3               8,735,191 pkt/s                 0 drop/s          208,054 sched
|    xdp_stats                   0 pass/s        8,735,191 drop/s                0 redir/s
|      cpu:3                     0 pass/s        8,735,191 drop/s                0 redir/s
|  redirect_err                  0 error/s
|  xdp_exception                 0 hit/s

This looks to be in the same range / noise level. In top I have
ksoftirqd at 100% and cpumap/./map at ~60%, so I hit the CPU speed limit
on a 10G link. perf top shows
|   18.37%  bpf_prog_4f0ffbb35139c187_cpumap_l4_hash         [k] bpf_prog_4f0ffbb35139c187_cpumap_l4_hash
|   13.15%  [kernel]                                         [k] cpu_map_kthread_run
|   12.96%  [kernel]                                         [k] ixgbe_poll
|    6.78%  [kernel]                                         [k] page_frag_free
|    5.62%  [kernel]                                         [k] xdp_do_redirect

for the top 5. Does this look reasonable?

Sebastian

