netdev.vger.kernel.org archive mirror
* [PATCH net-next] net: use bulk free in kfree_skb_list
@ 2019-03-24  6:58 Felix Fietkau
  2019-03-24 11:31 ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 3+ messages in thread
From: Felix Fietkau @ 2019-03-24  6:58 UTC (permalink / raw)
  To: netdev; +Cc: davem

Since we're freeing multiple skbs, we might as well use bulk free to save a
few cycles. Use the same conditions for bulk free as in napi_consume_skb.

Signed-off-by: Felix Fietkau <nbd@nbd.name>
---
 net/core/skbuff.c | 35 +++++++++++++++++++++++++++++++----
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 2415d9cb9b89..ec030ab7f1e7 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -666,12 +666,39 @@ EXPORT_SYMBOL(kfree_skb);
 
 void kfree_skb_list(struct sk_buff *segs)
 {
-	while (segs) {
-		struct sk_buff *next = segs->next;
+	struct sk_buff *next = segs;
+	void *skbs[16];
+	int n_skbs = 0;
 
-		kfree_skb(segs);
-		segs = next;
+	while ((segs = next) != NULL) {
+		next = segs->next;
+
+		if (!skb_unref(segs))
+			continue;
+
+		if (segs->fclone != SKB_FCLONE_UNAVAILABLE ||
+		    n_skbs >= ARRAY_SIZE(skbs)) {
+			kfree_skb(segs);
+			continue;
+		}
+
+		trace_kfree_skb(segs, __builtin_return_address(0));
+
+		/* drop skb->head and call any destructors for packet */
+		skb_release_all(segs);
+
+#ifdef CONFIG_SLUB
+		/* SLUB writes into objects when freeing */
+		prefetchw(segs);
+#endif
+
+		skbs[n_skbs++] = segs;
 	}
+
+	if (!n_skbs)
+		return;
+
+	kmem_cache_free_bulk(skbuff_head_cache, n_skbs, skbs);
 }
 EXPORT_SYMBOL(kfree_skb_list);
 
-- 
2.17.0



* Re: [PATCH net-next] net: use bulk free in kfree_skb_list
  2019-03-24  6:58 [PATCH net-next] net: use bulk free in kfree_skb_list Felix Fietkau
@ 2019-03-24 11:31 ` Jesper Dangaard Brouer
  2019-03-24 16:54   ` Felix Fietkau
  0 siblings, 1 reply; 3+ messages in thread
From: Jesper Dangaard Brouer @ 2019-03-24 11:31 UTC (permalink / raw)
  To: Felix Fietkau; +Cc: brouer, netdev, davem, Florian Westphal

On Sun, 24 Mar 2019 07:58:34 +0100
Felix Fietkau <nbd@nbd.name> wrote:

> Since we're freeing multiple skbs, we might as well use bulk free to save a
> few cycles. Use the same conditions for bulk free as in napi_consume_skb.
> 
> Signed-off-by: Felix Fietkau <nbd@nbd.name>

Thanks for working on this; it's been on my todo list for a very long
time. I just discussed this with Florian at NetDevconf.

> ---
>  net/core/skbuff.c | 35 +++++++++++++++++++++++++++++++----
>  1 file changed, 31 insertions(+), 4 deletions(-)
> 
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 2415d9cb9b89..ec030ab7f1e7 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -666,12 +666,39 @@ EXPORT_SYMBOL(kfree_skb);
>  
>  void kfree_skb_list(struct sk_buff *segs)
>  {
> -	while (segs) {
> -		struct sk_buff *next = segs->next;
> +	struct sk_buff *next = segs;
> +	void *skbs[16];
> +	int n_skbs = 0;
>  
> -		kfree_skb(segs);
> -		segs = next;
> +	while ((segs = next) != NULL) {
> +		next = segs->next;
> +
> +		if (!skb_unref(segs))
> +			continue;
> +
> +		if (segs->fclone != SKB_FCLONE_UNAVAILABLE ||
> +		    n_skbs >= ARRAY_SIZE(skbs)) {

You could call kmem_cache_free_bulk() here and reset n_skbs=0.

> +			kfree_skb(segs);
> +			continue;
> +		}
> +
> +		trace_kfree_skb(segs, __builtin_return_address(0));
> +
> +		/* drop skb->head and call any destructors for packet */
> +		skb_release_all(segs);
> +
> +#ifdef CONFIG_SLUB
> +		/* SLUB writes into objects when freeing */
> +		prefetchw(segs);
> +#endif
> +
> +		skbs[n_skbs++] = segs;
>  	}
> +
> +	if (!n_skbs)
> +		return;
> +
> +	kmem_cache_free_bulk(skbuff_head_cache, n_skbs, skbs);
>  }
>  EXPORT_SYMBOL(kfree_skb_list);
>  



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


* Re: [PATCH net-next] net: use bulk free in kfree_skb_list
  2019-03-24 11:31 ` Jesper Dangaard Brouer
@ 2019-03-24 16:54   ` Felix Fietkau
  0 siblings, 0 replies; 3+ messages in thread
From: Felix Fietkau @ 2019-03-24 16:54 UTC (permalink / raw)
  To: Jesper Dangaard Brouer; +Cc: netdev, davem, Florian Westphal

On 2019-03-24 12:31, Jesper Dangaard Brouer wrote:
> On Sun, 24 Mar 2019 07:58:34 +0100
> Felix Fietkau <nbd@nbd.name> wrote:
> 
>> Since we're freeing multiple skbs, we might as well use bulk free to save a
>> few cycles. Use the same conditions for bulk free as in napi_consume_skb.
>> 
>> Signed-off-by: Felix Fietkau <nbd@nbd.name>
> 
> Thanks for working on this, it's been on my todo list for a very long
> time. I just discussed this with Florian at NetDevconf.
No problem. It was showing up on my perf traces while improving
mac80211/mt76 performance, so I decided to deal with it now :)

>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>> index 2415d9cb9b89..ec030ab7f1e7 100644
>> --- a/net/core/skbuff.c
>> +++ b/net/core/skbuff.c
>> @@ -666,12 +666,39 @@ EXPORT_SYMBOL(kfree_skb);
>>  
>>  void kfree_skb_list(struct sk_buff *segs)
>>  {
>> -	while (segs) {
>> -		struct sk_buff *next = segs->next;
>> +	struct sk_buff *next = segs;
>> +	void *skbs[16];
>> +	int n_skbs = 0;
>>  
>> -		kfree_skb(segs);
>> -		segs = next;
>> +	while ((segs = next) != NULL) {
>> +		next = segs->next;
>> +
>> +		if (!skb_unref(segs))
>> +			continue;
>> +
>> +		if (segs->fclone != SKB_FCLONE_UNAVAILABLE ||
>> +		    n_skbs >= ARRAY_SIZE(skbs)) {
> 
> You could call kmem_cache_free_bulk() here and reset n_skbs=0.
Sure, good idea. I'll send v2 shortly.

- Felix

