public inbox for netdev@vger.kernel.org
From: Eric Dumazet <eric.dumazet@gmail.com>
To: Shawn Bohrer <sbohrer@rgmadvisors.com>
Cc: netdev@vger.kernel.org
Subject: Re: Heavy spin_lock contention in __udp4_lib_mcast_deliver increase
Date: Thu, 26 Apr 2012 18:21:29 +0200
Message-ID: <1335457289.2775.52.camel@edumazet-glaptop>
In-Reply-To: <1335457112.2775.50.camel@edumazet-glaptop>

On Thu, 2012-04-26 at 18:18 +0200, Eric Dumazet wrote:
> On Thu, 2012-04-26 at 17:53 +0200, Eric Dumazet wrote:
> 
> > Let me make sure I understand:
> > 
> > You have 300 sockets bound to the same port, so a single message must be
> > copied 300 times and delivered to each of those sockets?
> > 
> > 
> 
> Please try the following patch. It should allow up to 512 socket
> pointers (on x86_64) to be stored in the stack[] array, with delivery
> performed outside the locked section.
> 
>  net/ipv4/udp.c |   16 ++++++++++++----
>  net/ipv6/udp.c |   15 +++++++++++----
>  2 files changed, 23 insertions(+), 8 deletions(-)
> 
> diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> index 279fd08..beb9ea6 100644
> --- a/net/ipv4/udp.c
> +++ b/net/ipv4/udp.c
> @@ -1539,13 +1539,20 @@ static void flush_stack(struct sock **stack, unsigned int count,
>  static int __udp4_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
>  				    struct udphdr  *uh,
>  				    __be32 saddr, __be32 daddr,
> -				    struct udp_table *udptable)
> +				    struct udp_table *udptable,
> +				   int proto)
>  {
> -	struct sock *sk, *stack[256 / sizeof(struct sock *)];
> +	struct sock *sk, **stack;
>  	struct udp_hslot *hslot = udp_hashslot(udptable, net, ntohs(uh->dest));
>  	int dif;
>  	unsigned int i, count = 0;
>  
> +	stack = kmalloc(PAGE_SIZE, GFP_ATOMIC);
> +	if (unlikely(!stack)) {
> +		UDP_INC_STATS_BH(net, UDP_MIB_RCVBUFERRORS, proto == IPPROTO_UDPLITE);
> +		kfree_skb(skb);
> +		return 0;
> +	}
>  	spin_lock(&hslot->lock);
>  	sk = sk_nulls_head(&hslot->head);
>  	dif = skb->dev->ifindex;
> @@ -1554,7 +1561,7 @@ static int __udp4_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
>  		stack[count++] = sk;
>  		sk = udp_v4_mcast_next(net, sk_nulls_next(sk), uh->dest,
>  				       daddr, uh->source, saddr, dif);
> -		if (unlikely(count == ARRAY_SIZE(stack))) {
> +		if (unlikely(count == PAGE_SIZE/sizeof(*sk))) {

Oops, should be PAGE_SIZE/sizeof(sk) of course

(same problem in ipv6/udp.c)
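
For reference, a minimal sketch of the loop with the corrected bound (not
the final patch, just the intended condition in context). Since sk is a
struct sock *, sizeof(sk) is the pointer size, so PAGE_SIZE / sizeof(sk)
is 4096 / 8 = 512 entries on x86_64:

	while (sk) {
		stack[count++] = sk;
		sk = udp_v4_mcast_next(net, sk_nulls_next(sk), uh->dest,
				       daddr, uh->source, saddr, dif);
		/* stack[] is full; if more sockets follow, this batch must
		 * be flushed while still holding the lock (the final batch
		 * is delivered after the loop, outside the lock)
		 */
		if (unlikely(count == PAGE_SIZE / sizeof(sk))) {
			if (!sk)
				break;
			flush_stack(stack, count, skb, ~0);
			count = 0;
		}
	}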

>  			if (!sk)
>  				break;
>  			flush_stack(stack, count, skb, ~0);
> @@ -1580,6 +1587,7 @@ static int __udp4_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
>  	} else {
>  		kfree_skb(skb);
>  	}
> +	kfree(stack);
>  	return 0;
>  }
>  
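
To make the original scenario concrete, here is a minimal userspace sketch
(my own illustration with a made-up group and port, not taken from Shawn's
setup) of many UDP sockets sharing one multicast port via SO_REUSEADDR and
IP_ADD_MEMBERSHIP. Every datagram sent to the group then has to be
delivered to each of them, which is what makes hslot->lock so hot in
__udp4_lib_mcast_deliver():

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NSOCKS		300		/* the 300 sockets mentioned above */
#define MCAST_GROUP	"239.1.1.1"	/* hypothetical group, for illustration */
#define MCAST_PORT	12345		/* hypothetical port, for illustration */

int main(void)
{
	struct sockaddr_in addr;
	struct ip_mreq mreq;
	int fds[NSOCKS];
	int one = 1;
	int i;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(MCAST_PORT);

	memset(&mreq, 0, sizeof(mreq));
	mreq.imr_multiaddr.s_addr = inet_addr(MCAST_GROUP);
	mreq.imr_interface.s_addr = htonl(INADDR_ANY);

	for (i = 0; i < NSOCKS; i++) {
		fds[i] = socket(AF_INET, SOCK_DGRAM, 0);
		if (fds[i] < 0 ||
		    setsockopt(fds[i], SOL_SOCKET, SO_REUSEADDR,
			       &one, sizeof(one)) < 0 ||
		    /* every socket binds the same port, so they all land in
		     * the same udp_hslot and contend on its spinlock */
		    bind(fds[i], (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
		    setsockopt(fds[i], IPPROTO_IP, IP_ADD_MEMBERSHIP,
			       &mreq, sizeof(mreq)) < 0) {
			perror("socket setup");
			exit(1);
		}
	}

	/* each datagram sent to the group:port is now copied into all
	 * NSOCKS receive queues by the multicast delivery path */
	pause();
	return 0;
}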


Thread overview: 7+ messages
2012-04-26 15:15 Heavy spin_lock contention in __udp4_lib_mcast_deliver increase Shawn Bohrer
2012-04-26 15:53 ` Eric Dumazet
2012-04-26 16:18   ` Eric Dumazet
2012-04-26 16:21     ` Eric Dumazet [this message]
2012-04-26 16:28   ` Shawn Bohrer
2012-04-26 16:31     ` Eric Dumazet
2012-04-26 21:44       ` Shawn Bohrer
