* Re: [PATCH net-next 03/21] ethtool, stats: introduce standard XDP statistics
From: David Ahern @ 2021-08-04 16:17 UTC (permalink / raw)
To: Jakub Kicinski, Saeed Mahameed
Cc: Michal Kubecek, Andrew Lunn, Song Liu, Vladyslav Tarasiuk,
Sameeh Jubran, Michael S. Tsirkin, YueHaibing, Alexei Starovoitov,
Zheng Yongjun, Thomas Petazzoni, Ioana Ciornei, Petr Vorel,
Alexander Duyck, Jian Shen, Arthur Kiyanovski, Daniel Borkmann,
Jonathan Corbet, linux-doc, John Fastabend, Russell King,
Michal Kubiak, Martin Habets, virtualization, Guy Tzalik,
Jesper Dangaard Brouer, Arnd Bergmann, Ido Schimmel,
Lukasz Czapnik, KP Singh, Andrii Nakryiko, Claudiu Manoil,
Alexander Lobakin, Dan Murphy, Yonghong Song, Shay Agroskin,
Marcin Wojtas, Johannes Berg, Danielle Ratson, Michal Swiatkowski,
netdev, bpf, linux-kernel, Martin KaFai Lau, Edward Cree,
Netanel Belgazal, Marcin Kubiak, Yangbo Lu, Saeed Bishara,
Andrew Morton, David S. Miller, Heiner Kallweit
On 8/4/21 6:36 AM, Jakub Kicinski wrote:
>> XDP is going to always be eBPF based ! why not just report such stats
>> to a special BPF_MAP ? BPF stack can collect the stats from the driver
>> and report them to this special MAP upon user request.
> Do you mean replacing the ethtool-netlink / rtnetlink etc. with
> a new BPF_MAP? I don't think adding another category of uAPI thru
> which netdevice stats are exposed would do much good :( Plus it
> doesn't address the "yet another cacheline" concern.
>
> To my understanding the need for stats recognizes the fact that (in
> large organizations) fleet monitoring is done by different teams than
> XDP development. So XDP team may have all the stats they need, but the
> team doing fleet monitoring has no idea how to get to them.
>
> To bridge the two worlds we need a way for the infra team to ask the
> XDP for well-defined stats. Maybe we should take a page from the BPF
> iterators book and create a program type for bridging the two worlds?
> Called by networking core when duping stats to extract from the
> existing BPF maps all the relevant stats and render them into a well
> known struct? Users' XDP design can still use a single per-cpu map with
> all the stats if they so choose, but there's a way to implement more
> optimal designs and still expose well-defined stats.
>
> Maybe that's too complex, IDK.
I was just explaining to someone internally how to get stats at all of
the different points in the stack to track down reasons for dropped
packets:

  ethtool -S              for h/w and driver
  tc -s                   for drops by the qdisc
  /proc/net/softnet_stat  for drops at the backlog layer
  netstat -s              for network and transport layer

Yet another command and API just adds to the nightmare of explaining and
understanding these stats.
There is real value in continuing to use ethtool API for XDP stats. Not
saying this reorg of the XDP stats is the right thing to do, only that
the existing API has real user benefits.
Does anyone have data that shows bumping a properly implemented counter
causes a noticeable performance degradation, and if so, by how much? You
mention 'yet another cacheline', but collecting stats on the stack and
incrementing the driver structs at the end of the napi loop should not
have a huge impact versus the value the stats provide.
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
* Re: [PATCH net-next 03/21] ethtool, stats: introduce standard XDP statistics
From: David Ahern @ 2021-08-04 17:28 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Michal Kubecek, Andrew Lunn, Song Liu, Vladyslav Tarasiuk,
Sameeh Jubran, Michael S. Tsirkin, YueHaibing, Alexei Starovoitov,
Zheng Yongjun, Thomas Petazzoni, Ioana Ciornei, Petr Vorel,
Alexander Duyck, Jian Shen, Arthur Kiyanovski, Daniel Borkmann,
Jonathan Corbet, linux-doc, John Fastabend, Russell King,
Michal Kubiak, Andrii Nakryiko, Martin Habets, virtualization,
Guy Tzalik, Jesper Dangaard Brouer, Arnd Bergmann, Ido Schimmel,
Lukasz Czapnik, KP Singh, Saeed Mahameed, Claudiu Manoil,
Alexander Lobakin, Dan Murphy, Yonghong Song, Shay Agroskin,
Marcin Wojtas, Johannes Berg, Danielle Ratson, Michal Swiatkowski,
netdev, bpf, linux-kernel, Martin KaFai Lau, Edward Cree,
Netanel Belgazal, Marcin Kubiak, Yangbo Lu, Saeed Bishara,
Andrew Morton, David S. Miller, Heiner Kallweit
On 8/4/21 10:44 AM, Jakub Kicinski wrote:
> On Wed, 4 Aug 2021 10:17:56 -0600 David Ahern wrote:
>> On 8/4/21 6:36 AM, Jakub Kicinski wrote:
>>>> XDP is going to always be eBPF based ! why not just report such stats
>>>> to a special BPF_MAP ? BPF stack can collect the stats from the driver
>>>> and report them to this special MAP upon user request.
>>> Do you mean replacing the ethtool-netlink / rtnetlink etc. with
>>> a new BPF_MAP? I don't think adding another category of uAPI thru
>>> which netdevice stats are exposed would do much good :( Plus it
>>> doesn't address the "yet another cacheline" concern.
>>>
>>> To my understanding the need for stats recognizes the fact that (in
>>> large organizations) fleet monitoring is done by different teams than
>>> XDP development. So XDP team may have all the stats they need, but the
>>> team doing fleet monitoring has no idea how to get to them.
>>>
>>> To bridge the two worlds we need a way for the infra team to ask the
>>> XDP for well-defined stats. Maybe we should take a page from the BPF
>>> iterators book and create a program type for bridging the two worlds?
>>> Called by networking core when duping stats to extract from the
>>> existing BPF maps all the relevant stats and render them into a well
>>> known struct? Users' XDP design can still use a single per-cpu map with
>>> all the stats if they so choose, but there's a way to implement more
>>> optimal designs and still expose well-defined stats.
>>>
>>> Maybe that's too complex, IDK.
>>
>> I was just explaining to someone internally how to get stats at all of
>> the different points in the stack to track down reasons for dropped packets:
>>
>> ethtool -S for h/w and driver
>> tc -s for drops by the qdisc
>> /proc/net/softnet_stat for drops at the backlog layer
>> netstat -s for network and transport layer
>>
>> yet another command and API just adds to the nightmare of explaining and
>> understanding these stats.
>
> Are you referring to RTM_GETSTATS when you say "yet another command"?
> RTM_GETSTATS exists and is used by offloads today.
>
> I'd expect ip -s (-s) to be extended to run GETSTATS and display the xdp
> stats. (Not sure why ip -s was left out of your list :))
It's on my diagram; yes, I forgot to add it here.
>
>> There is real value in continuing to use ethtool API for XDP stats. Not
>> saying this reorg of the XDP stats is the right thing to do, only that
>> the existing API has real user benefits.
>
> RTM_GETSTATS is an existing API. New ethtool stats are intended to be HW
> stats. I don't want to go back to ethtool being a dumping ground for all
> stats because that's what the old interface encouraged.
Driver stats are important too - e.g., mlx5's cache stats and per-queue
stats.
>
>> Does anyone have data that shows bumping a properly implemented counter
>> causes a noticeable performance degradation and if so by how much? You
>> mention 'yet another cacheline' but collecting stats on stack and
>> incrementing the driver structs at the end of the napi loop should not
>> have a huge impact versus the value the stats provide.
>
> Not sure, maybe Jesper has some numbers. Maybe Intel folks do?
I just ran some quick tests with my setup and measured about 1.2% worst
case. Certainly not exhaustive. Perhaps Intel or Mellanox can provide
numbers for their high-speed NICs - e.g., ConnectX-6 on a saturated host.
>
> I'm just allergic to situations when there is a decision made and
> then months later patches are posted disregarding the decision,
> without analysis on why that decision was wrong. And while the
> maintainer who made the decision is on vacation.
>
Stats are one of the many sensitive topics. I have been consistent in
defending the need to use existing APIs and tooling rather than relying
on XDP program writers to add the relevant stats and then provide
whatever tool is needed to extract and print them. Standardization
matters for fundamental analysis tools.
* Re: [PATCH net-next 03/21] ethtool, stats: introduce standard XDP statistics
From: David Ahern @ 2021-08-05 0:43 UTC (permalink / raw)
To: Saeed Mahameed, Jakub Kicinski, Tariq Toukan, Tariq Toukan
Cc: Michal Kubecek, Andrew Lunn, Song Liu, Vladyslav Tarasiuk,
Sameeh Jubran, Michael S. Tsirkin, YueHaibing, Alexei Starovoitov,
Zheng Yongjun, Thomas Petazzoni, Ioana Ciornei, Petr Vorel,
Alexander Duyck, Jian Shen, Arthur Kiyanovski, Daniel Borkmann,
Jonathan Corbet, linux-doc, John Fastabend, Russell King,
Michal Kubiak, Martin Habets, virtualization, Guy Tzalik,
Jesper Dangaard Brouer, Arnd Bergmann, Ido Schimmel,
Lukasz Czapnik, KP Singh, Andrii Nakryiko, Claudiu Manoil,
Alexander Lobakin, Dan Murphy, Yonghong Song, Shay Agroskin,
Marcin Wojtas, Johannes Berg, Danielle Ratson, Michal Swiatkowski,
netdev, bpf, linux-kernel, Martin KaFai Lau, Edward Cree,
Netanel Belgazal, Marcin Kubiak, Yangbo Lu, Saeed Bishara,
Andrew Morton, David S. Miller, Heiner Kallweit
On 8/4/21 12:27 PM, Saeed Mahameed wrote:
>
>> I just ran some quick tests with my setup and measured about 1.2%
>> worst
>
> 1.2% is a lot! What was the test? What is the change?
I did say "quick test ... not exhaustive" and it was definitely
eyeballing a pps change over a small time window.
If multiple counters are bumped 20-25 million times a second (e.g., the
XDP drop case), how measurable is it? I was just trying to ballpark the
overhead - 1%, 5%, more? If it is <~1%, then there is no performance
argument, in which case let's do the right thing for users and export
via existing APIs.
* Re: [PATCH net-next 03/21] ethtool, stats: introduce standard XDP statistics
From: Toke Høiland-Jørgensen @ 2021-11-08 11:37 UTC (permalink / raw)
To: Alexander Lobakin, Saeed Mahameed
Cc: Michal Kubecek, Andrew Lunn, Song Liu, Vladyslav Tarasiuk,
Sameeh Jubran, Michael S. Tsirkin, Yonghong Song,
Alexei Starovoitov, Zheng Yongjun, Thomas Petazzoni,
Ioana Ciornei, Petr Vorel, Alexander Duyck, Jian Shen,
Arthur Kiyanovski, Daniel Borkmann, Jonathan Corbet, linux-doc,
John Fastabend, Russell King, Michal Kubiak, Martin Habets,
virtualization, Guy Tzalik, YueHaibing, Maciej Fijalkowski,
Jesper Dangaard Brouer, Arnd Bergmann, Ido Schimmel,
Lukasz Czapnik, KP Singh, Andrii Nakryiko, Claudiu Manoil,
Alexander Lobakin, Dan Murphy, Jakub Kicinski, Shay Agroskin,
Marcin Wojtas, Johannes Berg, Danielle Ratson, Michal Swiatkowski,
netdev, bpf, linux-kernel, Martin KaFai Lau, Edward Cree,
Netanel Belgazal, Marcin Kubiak, Yangbo Lu, Saeed Bishara,
Andrew Morton, David S. Miller, Heiner Kallweit
Alexander Lobakin <alexandr.lobakin@intel.com> writes:
> From: Alexander Lobakin <alexandr.lobakin@intel.com>
> Date: Tue, 26 Oct 2021 11:23:23 +0200
>
>> From: Saeed Mahameed <saeed@kernel.org>
>> Date: Tue, 03 Aug 2021 16:57:22 -0700
>>
>> [ snip ]
>>
>> > XDP is going to always be eBPF based ! why not just report such stats
>> > to a special BPF_MAP ? BPF stack can collect the stats from the driver
>> > and report them to this special MAP upon user request.
>>
>> I really dig this idea now. How do you see it?
>> <ifindex:channel:stat_id> as a key and its value as a value or ...?
>
> Ideas, suggestions, anyone?
I don't like the idea of putting statistics in a map instead of the
regular statistics counters. Sure, for bespoke things people want to put
into their XDP programs, use a map, but for regular packet/byte
counters, update the regular counters so XDP isn't "invisible".
As Jesper pointed out, batching the updates so the global counters are
only updated once per NAPI cycle is the way to avoid a huge performance
overhead of this...
-Toke
* Re: [PATCH net-next 03/21] ethtool, stats: introduce standard XDP statistics
From: Toke Høiland-Jørgensen @ 2021-11-08 18:09 UTC (permalink / raw)
To: Alexander Lobakin
Cc: Michal Kubecek, Andrew Lunn, Song Liu, Vladyslav Tarasiuk,
Sameeh Jubran, Michael S. Tsirkin, Yonghong Song,
Alexei Starovoitov, Zheng Yongjun, Thomas Petazzoni,
Ioana Ciornei, Petr Vorel, Alexander Duyck, Jian Shen,
Arthur Kiyanovski, Daniel Borkmann, Jonathan Corbet, linux-doc,
John Fastabend, Russell King, Michal Kubiak, Andrii Nakryiko,
Martin Habets, virtualization, Guy Tzalik, YueHaibing,
Maciej Fijalkowski, Jesper Dangaard Brouer, Arnd Bergmann,
Ido Schimmel, Lukasz Czapnik, KP Singh, Saeed Mahameed,
Claudiu Manoil, Alexander Lobakin, Dan Murphy, Jakub Kicinski,
Shay Agroskin, Marcin Wojtas, Johannes Berg, Danielle Ratson,
Michal Swiatkowski, netdev, bpf, linux-kernel, Martin KaFai Lau,
Edward Cree, Netanel Belgazal, Marcin Kubiak, Yangbo Lu,
Saeed Bishara, Andrew Morton, David S. Miller, Heiner Kallweit
Alexander Lobakin <alexandr.lobakin@intel.com> writes:
> From: Toke Høiland-Jørgensen <toke@redhat.com>
> Date: Mon, 08 Nov 2021 12:37:54 +0100
>
>> Alexander Lobakin <alexandr.lobakin@intel.com> writes:
>>
>> > From: Alexander Lobakin <alexandr.lobakin@intel.com>
>> > Date: Tue, 26 Oct 2021 11:23:23 +0200
>> >
>> >> From: Saeed Mahameed <saeed@kernel.org>
>> >> Date: Tue, 03 Aug 2021 16:57:22 -0700
>> >>
>> >> [ snip ]
>> >>
>> >> > XDP is going to always be eBPF based ! why not just report such stats
>> >> > to a special BPF_MAP ? BPF stack can collect the stats from the driver
>> >> > and report them to this special MAP upon user request.
>> >>
>> >> I really dig this idea now. How do you see it?
>> >> <ifindex:channel:stat_id> as a key and its value as a value or ...?
>> >
>> > Ideas, suggestions, anyone?
>>
>> I don't like the idea of putting statistics in a map instead of the
>> regular statistics counters. Sure, for bespoke things people want to put
>> into their XDP programs, use a map, but for regular packet/byte
>> counters, update the regular counters so XDP isn't "invisible".
>
> I wanted to provide an `ip link` command for getting these stats
> from maps and printing them in a usual format as well, but it seems
> like that's an unneeded overcomplication of things, since using
> maps for "regular"/"generic" XDP stats really has no reason except
> for "XDP means eBPF means maps".
Yeah, don't really see why it would have to: to me, one of the benefits
of XDP is being integrated closely with the kernel so we can have a
"fast path" *without* reinventing everything...
>> As Jesper pointed out, batching the updates so the global counters are
>> only updated once per NAPI cycle is the way to avoid a huge performance
>> overhead of this...
>
> That's how I do things currently, seems to work just fine.
Awesome!
-Toke