public inbox for netdev@vger.kernel.org
From: Eric Dumazet <eric.dumazet@gmail.com>
To: David Miller <davem@davemloft.net>
Cc: anton@samba.org, netdev@vger.kernel.org, miltonm@bga.com
Subject: Re: Spurious "TCP: too many of orphaned sockets", unable to allocate sockets
Date: Wed, 25 Aug 2010 10:47:37 +0200	[thread overview]
Message-ID: <1282726057.2487.1.camel@edumazet-laptop> (raw)
In-Reply-To: <20100825.012058.116362511.davem@davemloft.net>

On Wednesday 25 August 2010 at 01:20 -0700, David Miller wrote:
> From: David Miller <davem@davemloft.net>
> Date: Wed, 25 Aug 2010 00:59:29 -0700 (PDT)
> 
> > Solution seems simple, if the too many orphan check triggers, simply
> > redo the check using the expensive but more accurate per-cpu counter
> > read (which avoids the skew) to make sure.
> 
> Something like this:
> 
> tcp: Combat per-cpu skew in orphan tests.
> 
> As reported by Anton Blanchard, when we use
> percpu_counter_read_positive() for our orphan socket limit checks,
> the result can be off by up to num_cpus_online() * batch (32 by
> default), which on a 128 cpu machine can be as large as the default
> orphan limit itself.
> 
> Fix this by doing the full expensive sum check if the optimized check
> triggers.
> 
> Reported-by: Anton Blanchard <anton@samba.org>
> Signed-off-by: David S. Miller <davem@davemloft.net>

Very nice!

tcp_too_many_orphans() might now be a bit large to still be inlined...

Acked-by: Eric Dumazet <eric.dumazet@gmail.com>


> 
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index df6a2eb..eaa9582 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -268,11 +268,21 @@ static inline int between(__u32 seq1, __u32 seq2, __u32 seq3)
>  	return seq3 - seq2 >= seq1 - seq2;
>  }
>  
> -static inline int tcp_too_many_orphans(struct sock *sk, int num)
> +static inline bool tcp_too_many_orphans(struct sock *sk, int shift)
>  {
> -	return (num > sysctl_tcp_max_orphans) ||
> -		(sk->sk_wmem_queued > SOCK_MIN_SNDBUF &&
> -		 atomic_read(&tcp_memory_allocated) > sysctl_tcp_mem[2]);
> +	struct percpu_counter *ocp = sk->sk_prot->orphan_count;
> +	int orphans = percpu_counter_read_positive(ocp);
> +
> +	if (orphans << shift > sysctl_tcp_max_orphans) {
> +		orphans = percpu_counter_sum_positive(ocp);
> +		if (orphans << shift > sysctl_tcp_max_orphans)
> +			return true;
> +	}
> +
> +	if (sk->sk_wmem_queued > SOCK_MIN_SNDBUF &&
> +	    atomic_read(&tcp_memory_allocated) > sysctl_tcp_mem[2])
> +		return true;
> +	return false;
>  }
>  
>  /* syncookies: remember time of last synqueue overflow */
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 176e11a..197b9b7 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -2011,11 +2011,8 @@ adjudge_to_death:
>  		}
>  	}
>  	if (sk->sk_state != TCP_CLOSE) {
> -		int orphan_count = percpu_counter_read_positive(
> -						sk->sk_prot->orphan_count);
> -
>  		sk_mem_reclaim(sk);
> -		if (tcp_too_many_orphans(sk, orphan_count)) {
> +		if (tcp_too_many_orphans(sk, 0)) {
>  			if (net_ratelimit())
>  				printk(KERN_INFO "TCP: too many of orphaned "
>  				       "sockets\n");
> diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
> index 808bb92..c35b469 100644
> --- a/net/ipv4/tcp_timer.c
> +++ b/net/ipv4/tcp_timer.c
> @@ -66,18 +66,18 @@ static void tcp_write_err(struct sock *sk)
>  static int tcp_out_of_resources(struct sock *sk, int do_reset)
>  {
>  	struct tcp_sock *tp = tcp_sk(sk);
> -	int orphans = percpu_counter_read_positive(&tcp_orphan_count);
> +	int shift = 0;
>  
>  	/* If peer does not open window for long time, or did not transmit
>  	 * anything for long time, penalize it. */
>  	if ((s32)(tcp_time_stamp - tp->lsndtime) > 2*TCP_RTO_MAX || !do_reset)
> -		orphans <<= 1;
> +		shift++;
>  
>  	/* If some dubious ICMP arrived, penalize even more. */
>  	if (sk->sk_err_soft)
> -		orphans <<= 1;
> +		shift++;
>  
> -	if (tcp_too_many_orphans(sk, orphans)) {
> +	if (tcp_too_many_orphans(sk, shift)) {
>  		if (net_ratelimit())
>  			printk(KERN_INFO "Out of socket memory\n");
>  




Thread overview: 17+ messages
2010-08-25  7:16 Spurious "TCP: too many of orphaned sockets", unable to allocate sockets Anton Blanchard
2010-08-25  7:17 ` [PATCH] tcp: Fix sysctl_tcp_max_orphans when PAGE_SIZE != 4k Anton Blanchard
2010-08-25  7:39   ` Eric Dumazet
2010-08-25  7:59     ` David Miller
2010-08-25 17:50   ` Eric Dumazet
2010-08-25 23:57     ` David Miller
2010-08-26  0:38       ` Anton Blanchard
2010-08-26  3:53         ` David Miller
2010-08-26  6:36           ` Anton Blanchard
2010-08-26  4:45         ` Eric Dumazet
2010-08-26  5:15         ` [PATCH] tcp: fix three tcp sysctls tuning Eric Dumazet
2010-08-26  6:02           ` David Miller
2010-08-26  6:21             ` Eric Dumazet
2010-08-25  7:59 ` Spurious "TCP: too many of orphaned sockets", unable to allocate sockets David Miller
2010-08-25  8:20   ` David Miller
2010-08-25  8:47     ` Eric Dumazet [this message]
2010-08-25  9:28       ` David Miller
