* Stacks leading into skb:kfree_skb

From: Ivan Babrou @ 2023-07-14 22:13 UTC
To: Linux Kernel Network Developers
Cc: kernel-team, Eric Dumazet, David S. Miller, Paolo Abeni, Steven Rostedt,
    Masami Hiramatsu, David Ahern, Jakub Kicinski

As requested by Jakub Kicinski and David Ahern here:

* https://lore.kernel.org/netdev/20230713201427.2c50fc7b@kernel.org/

I made some aggregations for the stacks we see leading into the
skb:kfree_skb tracepoint. There's a lot of data and it is not easily
digestible, so I lightly massaged it and added flamegraphs in addition
to the raw stack counts. Here's the gist link:

* https://gist.github.com/bobrik/0e57671c732d9b13ac49fed85a2b2290

Let me know if any other format works better for you. I have the perf
script output stashed just in case.

As a reminder (also mentioned in the gist), we're on v6.1, which is the
latest LTS.

I can't explain the reasons for all the network paths we have, but our
kernel / network people are CC'd if you have any questions.
* Re: Stacks leading into skb:kfree_skb

From: David Ahern @ 2023-07-15 0:54 UTC
To: Ivan Babrou, Linux Kernel Network Developers
Cc: kernel-team, Eric Dumazet, David S. Miller, Paolo Abeni, Steven Rostedt,
    Masami Hiramatsu, Jakub Kicinski

On 7/14/23 4:13 PM, Ivan Babrou wrote:
> As requested by Jakub Kicinski and David Ahern here:
>
> * https://lore.kernel.org/netdev/20230713201427.2c50fc7b@kernel.org/
>
> I made some aggregations for the stacks we see leading into the
> skb:kfree_skb tracepoint. There's a lot of data and it is not easily
> digestible, so I lightly massaged it and added flamegraphs in addition
> to the raw stack counts. Here's the gist link:
>
> * https://gist.github.com/bobrik/0e57671c732d9b13ac49fed85a2b2290

I see a lot of packet_rcv as the tip before kfree_skb. How many packet
sockets do you have running on that box? Can you accumulate the total
packet_rcv -> kfree_skb_reason counts into one -- regardless of the
remaining stacktrace?

> Let me know if any other format works better for you. I have the perf
> script output stashed just in case.

I was expecting something more like perf report, which consolidates
similar stack traces, but the flamegraphs worked.
* Re: Stacks leading into skb:kfree_skb

From: Ivan Babrou @ 2023-07-18 22:33 UTC
To: David Ahern
Cc: Linux Kernel Network Developers, kernel-team, Eric Dumazet,
    David S. Miller, Paolo Abeni, Steven Rostedt, Masami Hiramatsu,
    Jakub Kicinski

On Fri, Jul 14, 2023 at 5:54 PM David Ahern <dsahern@kernel.org> wrote:
> I see a lot of packet_rcv as the tip before kfree_skb. How many packet
> sockets do you have running on that box? Can you accumulate the total
> packet_rcv -> kfree_skb_reason counts into one -- regardless of the
> remaining stacktrace?

Yan will respond regarding the packet sockets later in the day; he knows
this stuff better than I do.

In the meantime, here are the aggregations you requested:

* Normal: https://gist.githubusercontent.com/bobrik/0e57671c732d9b13ac49fed85a2b2290/raw/ae8aa1bc3b22fad6cf541afeb51aa8049d122d02/flamegraph.normal.packet_rcv.aggregated.svg
* Spike: https://gist.githubusercontent.com/bobrik/0e57671c732d9b13ac49fed85a2b2290/raw/ae8aa1bc3b22fad6cf541afeb51aa8049d122d02/flamegraph.spike.packet_rcv.aggregated.svg

I just realized that GitHub links make the flamegraphs non-interactive.
If you download them and open a local copy, they should work better:

* Expand to your screen width
* Working search with highlights
* Tooltips with counts and percentages
* Working zoom
* Re: Stacks leading into skb:kfree_skb

From: David Ahern @ 2023-07-18 22:43 UTC
To: Ivan Babrou
Cc: Linux Kernel Network Developers, kernel-team, Eric Dumazet,
    David S. Miller, Paolo Abeni, Steven Rostedt, Masami Hiramatsu,
    Jakub Kicinski

On 7/18/23 4:33 PM, Ivan Babrou wrote:
> In the meantime, here are the aggregations you requested:
>
> * Normal: https://gist.githubusercontent.com/bobrik/0e57671c732d9b13ac49fed85a2b2290/raw/ae8aa1bc3b22fad6cf541afeb51aa8049d122d02/flamegraph.normal.packet_rcv.aggregated.svg
> * Spike: https://gist.githubusercontent.com/bobrik/0e57671c732d9b13ac49fed85a2b2290/raw/ae8aa1bc3b22fad6cf541afeb51aa8049d122d02/flamegraph.spike.packet_rcv.aggregated.svg

For the spike, 97% are drops in packet_rcv. Each raw packet socket causes
every packet to be cloned, which puts an N-factor on the number of skbs
to be freed. If this is tcpdump or lldp with a filter, that would be what
Jakub mentioned in his response.

> I just realized that GitHub links make the flamegraphs non-interactive.
> If you download them and open a local copy, they should work better:

Firefox shows the graphs just fine.
* Re: Stacks leading into skb:kfree_skb

From: Jakub Kicinski @ 2023-07-18 22:36 UTC
To: David Ahern, Ivan Babrou
Cc: Linux Kernel Network Developers, kernel-team, Eric Dumazet,
    David S. Miller, Paolo Abeni, Steven Rostedt, Masami Hiramatsu,
    Willem de Bruijn

On Fri, 14 Jul 2023 18:54:14 -0600 David Ahern wrote:
> I see a lot of packet_rcv as the tip before kfree_skb. How many packet
> sockets do you have running on that box? Can you accumulate the total
> packet_rcv -> kfree_skb_reason counts into one -- regardless of the
> remaining stacktrace?

On a quick look we have three branches which can get us to kfree_skb from
packet_rcv:

	if (skb->pkt_type == PACKET_LOOPBACK)
		goto drop;
	...
	if (!net_eq(dev_net(dev), sock_net(sk)))
		goto drop;
	...
	res = run_filter(skb, sk, snaplen);
	if (!res)
		goto drop_n_restore;

I'd guess it's the last one? Which we should mark with the SOCKET_FILTER
drop reason?
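Something along these lines, perhaps -- an untested sketch only. It reuses
the SKB_DROP_REASON_NOT_SPECIFIED / SKB_DROP_REASON_SOCKET_FILTER values
and kfree_skb_reason() that already exist in v6.1, but the label names and
the existing consume_skb()/kfree_skb() split in net/packet/af_packet.c may
not match exactly, so treat it as the shape of the change rather than a
patch:

	static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
			      struct packet_type *pt, struct net_device *orig_dev)
	{
		enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
		...
		res = run_filter(skb, sk, snaplen);
		if (!res) {
			reason = SKB_DROP_REASON_SOCKET_FILTER;
			goto drop_n_restore;
		}
		...
	drop_n_restore:
		... /* restore skb->data / skb->len if the header was pushed */
	drop:
		/* keep the benign free quiet, report a reason for real drops */
		if (reason == SKB_DROP_REASON_NOT_SPECIFIED)
			consume_skb(skb);
		else
			kfree_skb_reason(skb, reason);
		return 0;
	}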
* Re: Stacks leading into skb:kfree_skb

From: Yan Zhai @ 2023-07-19 3:10 UTC
To: Jakub Kicinski
Cc: David Ahern, Ivan Babrou, Linux Kernel Network Developers, kernel-team,
    Eric Dumazet, David S. Miller, Paolo Abeni, Steven Rostedt,
    Masami Hiramatsu, Willem de Bruijn

On Tue, Jul 18, 2023 at 5:36 PM Jakub Kicinski <kuba@kernel.org> wrote:
> 	res = run_filter(skb, sk, snaplen);
> 	if (!res)
> 		goto drop_n_restore;
>
> I'd guess it's the last one? Which we should mark with the SOCKET_FILTER
> drop reason?

So we have multiple packet socket consumers on our edge:

* systemd-networkd: listens on ETH_P_LLDP. This is the role model that
  does not do excessive things.
* lldpd: I am not sure why we need this one in the presence of
  systemd-networkd, but it is running atm, which contributes to constant
  packet_rcv calls. It listens on ETH_P_ALL because of
  https://github.com/lldpd/lldpd/pull/414. But its filter is doing the
  correct work, so packets hitting this one are mostly "consumed".

Now the bad kids:

* arping: listens on ETH_P_ALL. This one contributes all the
  skb:kfree_skb spikes, and the reason is that sk_rmem_alloc overflows
  the socket's rcvbuf. I suspect it is due to a poorly constructed
  filter, so too many packets get queued too fast.
* conduit-watcher: a health checker, sending packets on ETH_P_IP in a
  non-init netns. The majority of packet_rcv calls for this one go to a
  direct drop due to the netns difference.

So to conclude, it might be useful to set a reason for rcvbuf-related
drops at least. On the other hand, almost all packets entering
packet_rcv are shared, so clone failure could also be a factor under
memory pressure.

--
Yan
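For the rcvbuf case, the annotation would presumably sit on the
sk_rmem_alloc check, following the same pattern as the sketch above. The
SKB_DROP_REASON_SOCKET_RCVBUFF reason already exists in v6.1 (UDP uses
it); the exact shape of the check and the label it jumps to are assumed
here, not copied from the source:

	if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf) {
		/* receive queue is full: this is the branch behind the
		 * arping-driven spikes */
		reason = SKB_DROP_REASON_SOCKET_RCVBUFF;
		goto drop_n_acct;
	}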
* Re: Stacks leading into skb:kfree_skb

From: David Ahern @ 2023-07-19 3:50 UTC
To: Yan Zhai, Jakub Kicinski
Cc: Ivan Babrou, Linux Kernel Network Developers, kernel-team, Eric Dumazet,
    David S. Miller, Paolo Abeni, Steven Rostedt, Masami Hiramatsu,
    Willem de Bruijn

On 7/18/23 9:10 PM, Yan Zhai wrote:
> So we have multiple packet socket consumers on our edge:
>
> * systemd-networkd: listens on ETH_P_LLDP. This is the role model that
>   does not do excessive things.

ETH level means a raw packet socket, which means *all* packets are
duplicated.

> * lldpd: I am not sure why we need this one in the presence of
>   systemd-networkd, but it is running atm, which contributes to constant
>   packet_rcv calls. It listens on ETH_P_ALL because of
>   https://github.com/lldpd/lldpd/pull/414. But its filter is doing the
>   correct work, so packets hitting this one are mostly "consumed".

This one I am familiar with, and its filter -- the fact that the filter
applies *after* the clone means it still contributes to the packet load.

Together these two sockets might explain why the filter drop shows up in
packet_rcv.

> Now the bad kids:
>
> * arping: listens on ETH_P_ALL. This one contributes all the
>   skb:kfree_skb spikes, and the reason is that sk_rmem_alloc overflows
>   the socket's rcvbuf. I suspect it is due to a poorly constructed
>   filter, so too many packets get queued too fast.

Any packet socket is the problem because the filter is applied to the
clone: clone the packet, run the filter, kfree the packet.

> * conduit-watcher: a health checker, sending packets on ETH_P_IP in a
>   non-init netns. The majority of packet_rcv calls for this one go to a
>   direct drop due to the netns difference.

So this is the raw packet socket at L3 that shows up. This one should not
be as large a contributor to the increased packet load.

> So to conclude, it might be useful to set a reason for rcvbuf-related
> drops at least. On the other hand, almost all packets entering
> packet_rcv are shared, so clone failure could also be a factor under
> memory pressure.
* Re: Stacks leading into skb:kfree_skb

From: Willem de Bruijn @ 2023-07-19 13:18 UTC
To: David Ahern, Yan Zhai, Jakub Kicinski
Cc: Ivan Babrou, Linux Kernel Network Developers, kernel-team, Eric Dumazet,
    David S. Miller, Paolo Abeni, Steven Rostedt, Masami Hiramatsu,
    Willem de Bruijn

David Ahern wrote:
> On 7/18/23 9:10 PM, Yan Zhai wrote:
> > Now the bad kids:
> >
> > * arping: listens on ETH_P_ALL. This one contributes all the
> >   skb:kfree_skb spikes, and the reason is that sk_rmem_alloc overflows
> >   the socket's rcvbuf. I suspect it is due to a poorly constructed
> >   filter, so too many packets get queued too fast.
>
> Any packet socket is the problem because the filter is applied to the
> clone: clone the packet, run the filter, kfree the packet.

Small clarification: on receive in __netif_receive_skb_core, the skb is
only cloned if accepted by packet_rcv. deliver_skb increases skb->users
to ensure that the skb is not freed if a filter declines.

On transmit, dev_queue_xmit_nit does create an initial clone, but then
passes this one clone to all sockets, again using deliver_skb.

A packet socket whose filter accepts the skb is worse, then, as that
clones the initially shared skb.
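For reference, deliver_skb in net/core/dev.c is roughly the following
(quoted from memory, so treat it as a sketch rather than the exact v6.1
source) -- note that it only bumps the refcount, it does not clone:

	static inline int deliver_skb(struct sk_buff *skb,
				      struct packet_type *pt_prev,
				      struct net_device *orig_dev)
	{
		if (unlikely(skb_orphan_frags_rx(skb, GFP_ATOMIC)))
			return -ENOMEM;
		/* one shared skb handed to the packet_type handler */
		refcount_inc(&skb->users);
		return pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
	}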
> > * conduit-watcher: a health checker, sending packets on ETH_P_IP in a
> >   non-init netns. The majority of packet_rcv calls for this one go to a
> >   direct drop due to the netns difference.
>
> So this is the raw packet socket at L3 that shows up. This one should not
> be as large a contributor to the increased packet load.
>
> > So to conclude, it might be useful to set a reason for rcvbuf-related
> > drops at least. On the other hand, almost all packets entering
> > packet_rcv are shared, so clone failure could also be a factor under
> > memory pressure.

kfree_skb is being changed across the stack into kfree_skb_reason. Doing
the same for PF_PACKET sounds entirely reasonable to me.

Just be careful about false positives where only the filter does not
match and the shared skb is dereferenced. That is WAI (working as
intended) and not cause for a report.
* Re: Stacks leading into skb:kfree_skb

From: Eric Dumazet @ 2023-07-19 13:26 UTC
To: Willem de Bruijn
Cc: David Ahern, Yan Zhai, Jakub Kicinski, Ivan Babrou,
    Linux Kernel Network Developers, kernel-team, David S. Miller,
    Paolo Abeni, Steven Rostedt, Masami Hiramatsu

On Wed, Jul 19, 2023 at 3:18 PM Willem de Bruijn
<willemdebruijn.kernel@gmail.com> wrote:
> kfree_skb is being changed across the stack into kfree_skb_reason. Doing
> the same for PF_PACKET sounds entirely reasonable to me.
>
> Just be careful about false positives where only the filter does not
> match and the shared skb is dereferenced. That is WAI (working as
> intended) and not cause for a report.

Relevant prior work was:

commit da37845fdce24e174f44d020bc4085ddd1c8a6bd
Author: Weongyo Jeong <weongyo.linux@gmail.com>
Date:   Thu Apr 14 14:10:04 2016 -0700

    packet: uses kfree_skb() for errors.

    consume_skb() isn't for error cases that kfree_skb() is more proper
    one. At this patch, it fixed tpacket_rcv() and packet_rcv() to be
    consistent for error or non-error cases letting perf trace its event
    properly.
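For context, the exit path that commit left behind looks roughly like the
following in packet_rcv (again from memory, so a sketch rather than the
exact v6.1 code). Only the accounted drops go through kfree_skb(), which
is why the filter-miss / wrong-netns / loopback paths do not fire the
skb:kfree_skb tracepoint, while the rcvbuf-overflow and clone-failure
paths do:

	drop_n_acct:
		is_drop_n_account = true;
		atomic_inc(&po->tp_drops);
		atomic_inc(&sk->sk_drops);

	drop_n_restore:
		... /* restore skb->data / skb->len if the header was pushed */
	drop:
		if (!is_drop_n_account)
			consume_skb(skb);	/* benign: filter miss, wrong netns, loopback */
		else
			kfree_skb(skb);		/* accounted: rcvbuf overflow, clone failure */
		return 0;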