netdev.vger.kernel.org archive mirror
* [PATCH net-next] net: flush_backlog() small changes
@ 2025-02-04 14:48 Eric Dumazet
  2025-02-05 15:22 ` Jason Xing
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Eric Dumazet @ 2025-02-04 14:48 UTC (permalink / raw)
  To: David S . Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, Kuniyuki Iwashima, Simon Horman, eric.dumazet,
	Eric Dumazet

Add READ_ONCE() around reads of skb->dev->reg_state, because
this field can be changed from other threads/cpus.

Instead of calling dev_kfree_skb_irq() and kfree_skb()
while interrupts are masked and locks held,
use a temporary list and use __skb_queue_purge_reason()

Use SKB_DROP_REASON_DEV_READY drop reason to better
describe why these skbs are dropped.

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 net/core/dev.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index c0021cbd28fc11e4c4eb6184d98a2505fa674871..cd31e78a7d8a2229e3dc17d08bb638f862148823 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6119,16 +6119,18 @@ EXPORT_SYMBOL(netif_receive_skb_list);
 static void flush_backlog(struct work_struct *work)
 {
 	struct sk_buff *skb, *tmp;
+	struct sk_buff_head list;
 	struct softnet_data *sd;
 
+	__skb_queue_head_init(&list);
 	local_bh_disable();
 	sd = this_cpu_ptr(&softnet_data);
 
 	backlog_lock_irq_disable(sd);
 	skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
-		if (skb->dev->reg_state == NETREG_UNREGISTERING) {
+		if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
 			__skb_unlink(skb, &sd->input_pkt_queue);
-			dev_kfree_skb_irq(skb);
+			__skb_queue_tail(&list, skb);
 			rps_input_queue_head_incr(sd);
 		}
 	}
@@ -6136,14 +6138,16 @@ static void flush_backlog(struct work_struct *work)
 
 	local_lock_nested_bh(&softnet_data.process_queue_bh_lock);
 	skb_queue_walk_safe(&sd->process_queue, skb, tmp) {
-		if (skb->dev->reg_state == NETREG_UNREGISTERING) {
+		if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
 			__skb_unlink(skb, &sd->process_queue);
-			kfree_skb(skb);
+			__skb_queue_tail(&list, skb);
 			rps_input_queue_head_incr(sd);
 		}
 	}
 	local_unlock_nested_bh(&softnet_data.process_queue_bh_lock);
 	local_bh_enable();
+
+	__skb_queue_purge_reason(&list, SKB_DROP_REASON_DEV_READY);
 }
 
 static bool flush_required(int cpu)
-- 
2.48.1.362.g079036d154-goog


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH net-next] net: flush_backlog() small changes
  2025-02-04 14:48 [PATCH net-next] net: flush_backlog() small changes Eric Dumazet
@ 2025-02-05 15:22 ` Jason Xing
  2025-02-05 16:00   ` Eric Dumazet
  2025-02-05 16:43 ` Jason Xing
  2025-02-06  2:50 ` patchwork-bot+netdevbpf
  2 siblings, 1 reply; 7+ messages in thread
From: Jason Xing @ 2025-02-05 15:22 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, netdev,
	Kuniyuki Iwashima, Simon Horman, eric.dumazet

Hi Eric,

On Tue, Feb 4, 2025 at 10:49 PM Eric Dumazet <edumazet@google.com> wrote:
>
> Add READ_ONCE() around reads of skb->dev->reg_state, because
> this field can be changed from other threads/cpus.
>
> Instead of calling dev_kfree_skb_irq() and kfree_skb()
> while interrupts are masked and locks held,
> use a temporary list and use __skb_queue_purge_reason()
>
> Use SKB_DROP_REASON_DEV_READY drop reason to better
> describe why these skbs are dropped.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> ---
>  net/core/dev.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index c0021cbd28fc11e4c4eb6184d98a2505fa674871..cd31e78a7d8a2229e3dc17d08bb638f862148823 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -6119,16 +6119,18 @@ EXPORT_SYMBOL(netif_receive_skb_list);
>  static void flush_backlog(struct work_struct *work)
>  {
>         struct sk_buff *skb, *tmp;
> +       struct sk_buff_head list;
>         struct softnet_data *sd;
>
> +       __skb_queue_head_init(&list);
>         local_bh_disable();
>         sd = this_cpu_ptr(&softnet_data);
>
>         backlog_lock_irq_disable(sd);
>         skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
> -               if (skb->dev->reg_state == NETREG_UNREGISTERING) {
> +               if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
>                         __skb_unlink(skb, &sd->input_pkt_queue);
> -                       dev_kfree_skb_irq(skb);
> +                       __skb_queue_tail(&list, skb);

I wonder why we cannot simply replace the above function with
'dev_kfree_skb_irq_reason(skb, SKB_DROP_REASON_DEV_READY);'?

>                         rps_input_queue_head_incr(sd);
>                 }
>         }
> @@ -6136,14 +6138,16 @@ static void flush_backlog(struct work_struct *work)
>
>         local_lock_nested_bh(&softnet_data.process_queue_bh_lock);
>         skb_queue_walk_safe(&sd->process_queue, skb, tmp) {
> -               if (skb->dev->reg_state == NETREG_UNREGISTERING) {
> +               if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
>                         __skb_unlink(skb, &sd->process_queue);
> -                       kfree_skb(skb);
> +                       __skb_queue_tail(&list, skb);

Same here.

>                         rps_input_queue_head_incr(sd);
>                 }
>         }
>         local_unlock_nested_bh(&softnet_data.process_queue_bh_lock);
>         local_bh_enable();
> +
> +       __skb_queue_purge_reason(&list, SKB_DROP_REASON_DEV_READY);

I'm also worried that dev_kfree_skb_irq() is not the same as
kfree_skb_reason() because of the following commit:
commit 7df5cb75cfb8acf96c7f2342530eb41e0c11f4c3
Author: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
Date:   Thu Jul 23 11:31:48 2020 -0600

    dev: Defer free of skbs in flush_backlog

    IRQs are disabled when freeing skbs in input queue.
    Use the IRQ safe variant to free skbs here.

    Fixes: 145dd5f9c88f ("net: flush the softnet backlog in process context")
    Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>

Thanks,
Jason

>  }
>
>  static bool flush_required(int cpu)
> --
> 2.48.1.362.g079036d154-goog
>
>


* Re: [PATCH net-next] net: flush_backlog() small changes
  2025-02-05 15:22 ` Jason Xing
@ 2025-02-05 16:00   ` Eric Dumazet
  2025-02-05 16:17     ` Jason Xing
  0 siblings, 1 reply; 7+ messages in thread
From: Eric Dumazet @ 2025-02-05 16:00 UTC (permalink / raw)
  To: Jason Xing
  Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, netdev,
	Kuniyuki Iwashima, Simon Horman, eric.dumazet

On Wed, Feb 5, 2025 at 4:22 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> Hi Eric,
>
> On Tue, Feb 4, 2025 at 10:49 PM Eric Dumazet <edumazet@google.com> wrote:
> >
> > Add READ_ONCE() around reads of skb->dev->reg_state, because
> > this field can be changed from other threads/cpus.
> >
> > Instead of calling dev_kfree_skb_irq() and kfree_skb()
> > while interrupts are masked and locks held,
> > use a temporary list and use __skb_queue_purge_reason()
> >
> > Use SKB_DROP_REASON_DEV_READY drop reason to better
> > describe why these skbs are dropped.
> >
> > Signed-off-by: Eric Dumazet <edumazet@google.com>
> > ---
> >  net/core/dev.c | 12 ++++++++----
> >  1 file changed, 8 insertions(+), 4 deletions(-)
> >
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index c0021cbd28fc11e4c4eb6184d98a2505fa674871..cd31e78a7d8a2229e3dc17d08bb638f862148823 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -6119,16 +6119,18 @@ EXPORT_SYMBOL(netif_receive_skb_list);
> >  static void flush_backlog(struct work_struct *work)
> >  {
> >         struct sk_buff *skb, *tmp;
> > +       struct sk_buff_head list;
> >         struct softnet_data *sd;
> >
> > +       __skb_queue_head_init(&list);
> >         local_bh_disable();
> >         sd = this_cpu_ptr(&softnet_data);
> >
> >         backlog_lock_irq_disable(sd);
> >         skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
> > -               if (skb->dev->reg_state == NETREG_UNREGISTERING) {
> > +               if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
> >                         __skb_unlink(skb, &sd->input_pkt_queue);
> > -                       dev_kfree_skb_irq(skb);
> > +                       __skb_queue_tail(&list, skb);
>
> I wonder why we cannot simply replace the above function with
> 'dev_kfree_skb_irq_reason(skb, SKB_DROP_REASON_DEV_READY);'?

Because this horribly expensive thing pushes packets to another per-cpu list,
and raises a softirq to perform the freeing later from BH.


>
> >                         rps_input_queue_head_incr(sd);
> >                 }
> >         }
> > @@ -6136,14 +6138,16 @@ static void flush_backlog(struct work_struct *work)
> >
> >         local_lock_nested_bh(&softnet_data.process_queue_bh_lock);
> >         skb_queue_walk_safe(&sd->process_queue, skb, tmp) {
> > -               if (skb->dev->reg_state == NETREG_UNREGISTERING) {
> > +               if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
> >                         __skb_unlink(skb, &sd->process_queue);
> > -                       kfree_skb(skb);
> > +                       __skb_queue_tail(&list, skb);
>
> Same here.

Please read the changelog, I think you missed the point.

>
> >                         rps_input_queue_head_incr(sd);
> >                 }
> >         }
> >         local_unlock_nested_bh(&softnet_data.process_queue_bh_lock);
> >         local_bh_enable();
> > +
> > +       __skb_queue_purge_reason(&list, SKB_DROP_REASON_DEV_READY);
>
> I'm also worried that dev_kfree_skb_irq() is not the same as
> kfree_skb_reason() because of the following commit:
> commit 7df5cb75cfb8acf96c7f2342530eb41e0c11f4c3
> Author: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
> Date:   Thu Jul 23 11:31:48 2020 -0600
>
>     dev: Defer free of skbs in flush_backlog
>
>     IRQs are disabled when freeing skbs in input queue.
>     Use the IRQ safe variant to free skbs here.
>
>     Fixes: 145dd5f9c88f ("net: flush the softnet backlog in process context")
>     Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
>     Signed-off-by: David S. Miller <davem@davemloft.net>

The point of this patch is to no longer attempt the kfree_skb() while being
in hard or soft irq blocking sections.

Therefore call the efficient kfree_skb() instead of the expensive fallbacks
that were designed for callers in hard irq contexts.


* Re: [PATCH net-next] net: flush_backlog() small changes
  2025-02-05 16:00   ` Eric Dumazet
@ 2025-02-05 16:17     ` Jason Xing
  2025-02-05 16:29       ` Eric Dumazet
  0 siblings, 1 reply; 7+ messages in thread
From: Jason Xing @ 2025-02-05 16:17 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, netdev,
	Kuniyuki Iwashima, Simon Horman, eric.dumazet

On Thu, Feb 6, 2025 at 12:00 AM Eric Dumazet <edumazet@google.com> wrote:
>
> On Wed, Feb 5, 2025 at 4:22 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
> >
> > Hi Eric,
> >
> > On Tue, Feb 4, 2025 at 10:49 PM Eric Dumazet <edumazet@google.com> wrote:
> > >
> > > Add READ_ONCE() around reads of skb->dev->reg_state, because
> > > this field can be changed from other threads/cpus.
> > >
> > > Instead of calling dev_kfree_skb_irq() and kfree_skb()
> > > while interrupts are masked and locks held,
> > > use a temporary list and use __skb_queue_purge_reason()
> > >
> > > Use SKB_DROP_REASON_DEV_READY drop reason to better
> > > describe why these skbs are dropped.
> > >
> > > Signed-off-by: Eric Dumazet <edumazet@google.com>
> > > ---
> > >  net/core/dev.c | 12 ++++++++----
> > >  1 file changed, 8 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > index c0021cbd28fc11e4c4eb6184d98a2505fa674871..cd31e78a7d8a2229e3dc17d08bb638f862148823 100644
> > > --- a/net/core/dev.c
> > > +++ b/net/core/dev.c
> > > @@ -6119,16 +6119,18 @@ EXPORT_SYMBOL(netif_receive_skb_list);
> > >  static void flush_backlog(struct work_struct *work)
> > >  {
> > >         struct sk_buff *skb, *tmp;
> > > +       struct sk_buff_head list;
> > >         struct softnet_data *sd;
> > >
> > > +       __skb_queue_head_init(&list);
> > >         local_bh_disable();
> > >         sd = this_cpu_ptr(&softnet_data);
> > >
> > >         backlog_lock_irq_disable(sd);
> > >         skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
> > > -               if (skb->dev->reg_state == NETREG_UNREGISTERING) {
> > > +               if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
> > >                         __skb_unlink(skb, &sd->input_pkt_queue);
> > > -                       dev_kfree_skb_irq(skb);
> > > +                       __skb_queue_tail(&list, skb);
> >
> > I wonder why we cannot simply replace the above function with
> > 'dev_kfree_skb_irq_reason(skb, SKB_DROP_REASON_DEV_READY);'?
>
> Because this horribly expensive thing pushes packets to another per-cpu list,
> and raises a softirq to perform the freeing later from BH.

Agreed about this case. How about changing kfree_skb_reason(skb, ...)?

>
>
> >
> > >                         rps_input_queue_head_incr(sd);
> > >                 }
> > >         }
> > > @@ -6136,14 +6138,16 @@ static void flush_backlog(struct work_struct *work)
> > >
> > >         local_lock_nested_bh(&softnet_data.process_queue_bh_lock);
> > >         skb_queue_walk_safe(&sd->process_queue, skb, tmp) {
> > > -               if (skb->dev->reg_state == NETREG_UNREGISTERING) {
> > > +               if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
> > >                         __skb_unlink(skb, &sd->process_queue);
> > > -                       kfree_skb(skb);
> > > +                       __skb_queue_tail(&list, skb);
> >
> > Same here.
>
> Please read the changelog, I think you missed the point.

I meant why not directly use kfree_skb_reason(skb, ...) here? It's
simpler, right? Then don't bother to use a temporary list.

>
> >
> > >                         rps_input_queue_head_incr(sd);
> > >                 }
> > >         }
> > >         local_unlock_nested_bh(&softnet_data.process_queue_bh_lock);
> > >         local_bh_enable();
> > > +
> > > +       __skb_queue_purge_reason(&list, SKB_DROP_REASON_DEV_READY);
> >
> > I'm also worried that dev_kfree_skb_irq() is not the same as
> > kfree_skb_reason() because of the following commit:
> > commit 7df5cb75cfb8acf96c7f2342530eb41e0c11f4c3
> > Author: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
> > Date:   Thu Jul 23 11:31:48 2020 -0600
> >
> >     dev: Defer free of skbs in flush_backlog
> >
> >     IRQs are disabled when freeing skbs in input queue.
> >     Use the IRQ safe variant to free skbs here.
> >
> >     Fixes: 145dd5f9c88f ("net: flush the softnet backlog in process context")
> >     Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
> >     Signed-off-by: David S. Miller <davem@davemloft.net>
>
> The point of this patch is to no longer attempt the kfree_skb() while being
> in hard or soft irq blocking sections.
>
> Therefore call the efficient kfree_skb() instead of the expensive fallbacks
> that were designed for callers in hard irq contexts.

Thanks for the explanation!

Thanks,
Jason


* Re: [PATCH net-next] net: flush_backlog() small changes
  2025-02-05 16:17     ` Jason Xing
@ 2025-02-05 16:29       ` Eric Dumazet
  0 siblings, 0 replies; 7+ messages in thread
From: Eric Dumazet @ 2025-02-05 16:29 UTC (permalink / raw)
  To: Jason Xing
  Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, netdev,
	Kuniyuki Iwashima, Simon Horman, eric.dumazet

On Wed, Feb 5, 2025 at 5:17 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> On Thu, Feb 6, 2025 at 12:00 AM Eric Dumazet <edumazet@google.com> wrote:
> >
> > On Wed, Feb 5, 2025 at 4:22 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
> > >
> > > Hi Eric,
> > >
> > > On Tue, Feb 4, 2025 at 10:49 PM Eric Dumazet <edumazet@google.com> wrote:
> > > >
> > > > Add READ_ONCE() around reads of skb->dev->reg_state, because
> > > > this field can be changed from other threads/cpus.
> > > >
> > > > Instead of calling dev_kfree_skb_irq() and kfree_skb()
> > > > while interrupts are masked and locks held,
> > > > use a temporary list and use __skb_queue_purge_reason()
> > > >
> > > > Use SKB_DROP_REASON_DEV_READY drop reason to better
> > > > describe why these skbs are dropped.
> > > >
> > > > Signed-off-by: Eric Dumazet <edumazet@google.com>
> > > > ---
> > > >  net/core/dev.c | 12 ++++++++----
> > > >  1 file changed, 8 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > > index c0021cbd28fc11e4c4eb6184d98a2505fa674871..cd31e78a7d8a2229e3dc17d08bb638f862148823 100644
> > > > --- a/net/core/dev.c
> > > > +++ b/net/core/dev.c
> > > > @@ -6119,16 +6119,18 @@ EXPORT_SYMBOL(netif_receive_skb_list);
> > > >  static void flush_backlog(struct work_struct *work)
> > > >  {
> > > >         struct sk_buff *skb, *tmp;
> > > > +       struct sk_buff_head list;
> > > >         struct softnet_data *sd;
> > > >
> > > > +       __skb_queue_head_init(&list);
> > > >         local_bh_disable();
> > > >         sd = this_cpu_ptr(&softnet_data);
> > > >
> > > >         backlog_lock_irq_disable(sd);
> > > >         skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
> > > > -               if (skb->dev->reg_state == NETREG_UNREGISTERING) {
> > > > +               if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
> > > >                         __skb_unlink(skb, &sd->input_pkt_queue);
> > > > -                       dev_kfree_skb_irq(skb);
> > > > +                       __skb_queue_tail(&list, skb);
> > >
> > > I wonder why we cannot simply replace the above function with
> > > 'dev_kfree_skb_irq_reason(skb, SKB_DROP_REASON_DEV_READY);'?
> >
> > Because this horribly expensive thing pushes packets to another per-cpu list,
> > and raises a softirq to perform the freeing later from BH.
>
> Agreed about this case. How about changing kfree_skb_reason(skb, ...)?
>
> >
> >
> > >
> > > >                         rps_input_queue_head_incr(sd);
> > > >                 }
> > > >         }
> > > > @@ -6136,14 +6138,16 @@ static void flush_backlog(struct work_struct *work)
> > > >
> > > >         local_lock_nested_bh(&softnet_data.process_queue_bh_lock);
> > > >         skb_queue_walk_safe(&sd->process_queue, skb, tmp) {
> > > > -               if (skb->dev->reg_state == NETREG_UNREGISTERING) {
> > > > +               if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
> > > >                         __skb_unlink(skb, &sd->process_queue);
> > > > -                       kfree_skb(skb);
> > > > +                       __skb_queue_tail(&list, skb);
> > >
> > > Same here.
> >
> > Please read the changelog, I think you missed the point.
>
> I meant why not directly use kfree_skb_reason(skb, ...) here? It's
> simpler, right? Then don't bother to use a temporary list.

Because we are blocking BH here, for a potentially long time.

This was mentioned in the changelog. Please read it again.


* Re: [PATCH net-next] net: flush_backlog() small changes
  2025-02-04 14:48 [PATCH net-next] net: flush_backlog() small changes Eric Dumazet
  2025-02-05 15:22 ` Jason Xing
@ 2025-02-05 16:43 ` Jason Xing
  2025-02-06  2:50 ` patchwork-bot+netdevbpf
  2 siblings, 0 replies; 7+ messages in thread
From: Jason Xing @ 2025-02-05 16:43 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, netdev,
	Kuniyuki Iwashima, Simon Horman, eric.dumazet

On Tue, Feb 4, 2025 at 10:49 PM Eric Dumazet <edumazet@google.com> wrote:
>
> Add READ_ONCE() around reads of skb->dev->reg_state, because
> this field can be changed from other threads/cpus.
>
> Instead of calling dev_kfree_skb_irq() and kfree_skb()
> while interrupts are masked and locks held,
> use a temporary list and use __skb_queue_purge_reason()
>
> Use SKB_DROP_REASON_DEV_READY drop reason to better
> describe why these skbs are dropped.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Thanks for the optimization!

Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>


* Re: [PATCH net-next] net: flush_backlog() small changes
  2025-02-04 14:48 [PATCH net-next] net: flush_backlog() small changes Eric Dumazet
  2025-02-05 15:22 ` Jason Xing
  2025-02-05 16:43 ` Jason Xing
@ 2025-02-06  2:50 ` patchwork-bot+netdevbpf
  2 siblings, 0 replies; 7+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-02-06  2:50 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: davem, kuba, pabeni, netdev, kuniyu, horms, eric.dumazet

Hello:

This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Tue,  4 Feb 2025 14:48:25 +0000 you wrote:
> Add READ_ONCE() around reads of skb->dev->reg_state, because
> this field can be changed from other threads/cpus.
> 
> Instead of calling dev_kfree_skb_irq() and kfree_skb()
> while interrupts are masked and locks held,
> use a temporary list and use __skb_queue_purge_reason()
> 
> [...]

Here is the summary with links:
  - [net-next] net: flush_backlog() small changes
    https://git.kernel.org/netdev/net-next/c/cbe08724c180

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




