* [PATCH] net: avoid one atomic op per cloned skb
@ 2010-05-18 13:40 Eric Dumazet
From: Eric Dumazet @ 2010-05-18 13:40 UTC (permalink / raw)
To: David Miller; +Cc: netdev
Hi David
I know you said 'only patches', but I found the following patch small
enough?
I have a followup patch to avoid two atomic ops per cloned skb on
dataref (helps TCP tx path) but will submit it for 2.6.36, since its
diffstat is a bit more than 3++- :)
Thanks
[PATCH] net: avoid one atomic op per cloned skb
skb_clone() can use atomic_set(clone_ref, 2) safely, because only
current thread can possibly touch clone_ref at this point.
Add a WARN_ON_ONCE() for a while, to catch wrong assumptions.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
net/core/skbuff.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index c543dd2..4444f15 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -628,7 +628,8 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
n->fclone == SKB_FCLONE_UNAVAILABLE) {
atomic_t *fclone_ref = (atomic_t *) (n + 1);
n->fclone = SKB_FCLONE_CLONE;
- atomic_inc(fclone_ref);
+ WARN_ON_ONCE(atomic_read(fclone_ref) != 1);
+ atomic_set(fclone_ref, 2);
} else {
n = kmem_cache_alloc(skbuff_head_cache, gfp_mask);
if (!n)
* Re: [PATCH] net: avoid one atomic op per cloned skb
From: Eric Dumazet @ 2010-05-18 18:58 UTC (permalink / raw)
To: David Miller; +Cc: netdev
On Tuesday 18 May 2010 at 15:40 +0200, Eric Dumazet wrote:
> Hi David
>
> I know you said 'only patches', but I found the following patch small
> enough?
>
> Thanks
>
> [PATCH] net: avoid one atomic op per cloned skb
>
> skb_clone() can use atomic_set(clone_ref, 2) safely, because only
> current thread can possibly touch clone_ref at this point.
>
> Add a WARN_ON_ONCE() for a while, to catch wrong assumptions.
>
> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
> ---
> net/core/skbuff.c | 3 ++-
> 1 files changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index c543dd2..4444f15 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -628,7 +628,8 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
> n->fclone == SKB_FCLONE_UNAVAILABLE) {
> atomic_t *fclone_ref = (atomic_t *) (n + 1);
> n->fclone = SKB_FCLONE_CLONE;
> - atomic_inc(fclone_ref);
> + WARN_ON_ONCE(atomic_read(fclone_ref) != 1);
> + atomic_set(fclone_ref, 2);
> } else {
> n = kmem_cache_alloc(skbuff_head_cache, gfp_mask);
> if (!n)
>
>
Oops, it needs more thinking, definitely not a 2.6.35 thing...
There would be a race between skb_clone() and kfree_skbmem()
kfree_skbmem() must perform the atomic_dec_and_test() before setting
skb->fclone to SKB_FCLONE_UNAVAILABLE.
Doing so avoids dirtying skb->fclone right before kmem_cache_free()...
V2 would be :
[RFC v2] net: avoid one atomic op per cloned skb
skb_clone() can use atomic_set(clone_ref, 2) safely, because only
current thread can possibly touch clone_ref at this point.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index c543dd2..77d5a6b 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -370,13 +370,13 @@ static void kfree_skbmem(struct sk_buff *skb)
fclone_ref = (atomic_t *) (skb + 1);
other = skb - 1;
- /* The clone portion is available for
- * fast-cloning again.
- */
- skb->fclone = SKB_FCLONE_UNAVAILABLE;
-
if (atomic_dec_and_test(fclone_ref))
kmem_cache_free(skbuff_fclone_cache, other);
+ else
+ /* The clone portion is available for fast-cloning.
+ * Note this must be done after the fclone_ref change.
+ */
+ skb->fclone = SKB_FCLONE_UNAVAILABLE;
break;
}
}
@@ -628,7 +628,7 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
n->fclone == SKB_FCLONE_UNAVAILABLE) {
atomic_t *fclone_ref = (atomic_t *) (n + 1);
n->fclone = SKB_FCLONE_CLONE;
- atomic_inc(fclone_ref);
+ atomic_set(fclone_ref, 2);
} else {
n = kmem_cache_alloc(skbuff_head_cache, gfp_mask);
if (!n)
* Re: [PATCH] net: avoid one atomic op per cloned skb
From: Eric Dumazet @ 2010-05-19 5:18 UTC (permalink / raw)
To: David Miller; +Cc: netdev
On Tuesday 18 May 2010 at 20:58 +0200, Eric Dumazet wrote:
> Oops, it needs more thinking, definitely not a 2.6.35 thing...
>
> There would be a race between skb_clone() and kfree_skbmem()
>
> kfree_skbmem() must perform the atomic_dec_and_test() before setting
> skb->fclone to SKB_FCLONE_UNAVAILABLE.
>
> Doing so avoids dirtying skb->fclone right before kmem_cache_free()...
>
>
> V2 would be :
>
> [RFC v2] net: avoid one atomic op per cloned skb
>
> skb_clone() can use atomic_set(clone_ref, 2) safely, because only
> current thread can possibly touch clone_ref at this point.
>
> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
> ---
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index c543dd2..77d5a6b 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -370,13 +370,13 @@ static void kfree_skbmem(struct sk_buff *skb)
> fclone_ref = (atomic_t *) (skb + 1);
> other = skb - 1;
>
> - /* The clone portion is available for
> - * fast-cloning again.
> - */
> - skb->fclone = SKB_FCLONE_UNAVAILABLE;
> -
> if (atomic_dec_and_test(fclone_ref))
> kmem_cache_free(skbuff_fclone_cache, other);
> + else
> + /* The clone portion is available for fast-cloning.
> + * Note this must be done after the fclone_ref change.
> + */
> + skb->fclone = SKB_FCLONE_UNAVAILABLE;
This is still racy, because we are not allowed to access skb after the
atomic_dec_and_test():
Another thread can now go past the final refcount decrement and free
the skb under us.
Hmm...
> break;
> }
> }
> @@ -628,7 +628,7 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
> n->fclone == SKB_FCLONE_UNAVAILABLE) {
> atomic_t *fclone_ref = (atomic_t *) (n + 1);
> n->fclone = SKB_FCLONE_CLONE;
> - atomic_inc(fclone_ref);
> + atomic_set(fclone_ref, 2);
> } else {
> n = kmem_cache_alloc(skbuff_head_cache, gfp_mask);
> if (!n)
>