From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Florian Westphal <fw@strlen.de>
Cc: "liujian (CE)" <liujian56@huawei.com>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"edumazet@google.com" <edumazet@google.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"Wangkefeng (Kevin)" <wangkefeng.wang@huawei.com>,
	"weiyongjun (A)" <weiyongjun1@huawei.com>,
	brouer@redhat.com
Subject: Re: Question about ip_defrag
Date: Wed, 30 Aug 2017 12:58:43 +0200
Message-ID: <20170830125843.250c91c1@redhat.com>
In-Reply-To: <20170829075315.GA9993@breakpoint.cc>

(trimmed CC list a bit)

On Tue, 29 Aug 2017 09:53:15 +0200 Florian Westphal <fw@strlen.de> wrote:

> Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> > On Mon, 28 Aug 2017 16:00:32 +0200
> > Florian Westphal <fw@strlen.de> wrote:
> >   
> > > liujian (CE) <liujian56@huawei.com> wrote:  
> > > > Hi
> > > > 
> > > > I checked our 3.10 kernel; we had backported all the percpu_counter
> > > > bug fixes in lib/percpu_counter.c and include/linux/percpu_counter.h.
> > > > I also checked 4.13-rc6, which has the issue too if the NIC's rx CPU
> > > > count is big enough.
> > > >     
> > > > > > > > the issue:
> > > > > > > > ip_defrag fails because frag_mem_limit reached 4M (frags.high_thresh).
> > > > > > > > At this moment, sum_frag_mem_limit is about 10K.
> > > > 
> > > > So should we change the ipfrag high/low thresholds to a reasonable value?
> > > > And if so, is there a standard for choosing the value?
> > > 
> > > Each cpu can have frag_percpu_counter_batch bytes that the rest doesn't
> > > know about, so with 64 cpus that is ~8 MByte.
> > > 
> > > possible solutions:
> > > 1. reduce frag_percpu_counter_batch to 16k or so
> > > 2. make both low and high thresh depend on NR_CPUS  
> 
> I take 2) back.  It's wrong to do this; for large NR_CPUS values it
> would even overflow.

Alternative solution 3:
Why do we want to maintain a (4 MBytes) memory limit across all CPUs?
Couldn't we just allow each CPU to have its own memory limit?
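
Purely as a hypothetical sketch (these helpers do not exist in the kernel,
the names are invented), a per-CPU-only limit could look something like:

  #include <linux/percpu.h>

  /* Hypothetical sketch only -- not existing kernel code.  A purely per-CPU
   * limit drops the shared counter and compares each CPU's own usage against
   * its own share of the budget, so the fast path never touches shared state. */
  static DEFINE_PER_CPU(long, frag_mem_cpu);

  static inline void frag_mem_add_cpu(long bytes)
  {
  	this_cpu_add(frag_mem_cpu, bytes);
  }

  static inline bool frag_mem_over_limit_cpu(long per_cpu_thresh)
  {
  	return this_cpu_read(frag_mem_cpu) > per_cpu_thresh;
  }

The obvious trade-off is that the effective global limit then varies with
how the fragments are spread across the RX CPUs.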


> > To me it looks like we/I have been using the wrong API for comparing
> > against percpu_counters.  I guess we should have used __percpu_counter_compare().  
> 
> Are you sure?  For liujian's use case (64 cores) it looks like we would
> always fall through to percpu_counter_sum(), so we eat the
> spinlock_irqsave cost for all compares.
> 
> Before we entertain this we should consider reducing frag_percpu_counter_batch
> to a smaller value.

Yes, I agree, we really need to reduce frag_percpu_counter_batch.
Otherwise, as you say, the __percpu_counter_compare() call will be
useless (on systems with >= 32 CPUs).
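
For reference, a simplified sketch of the batching logic (the real code is
__percpu_counter_add() -- percpu_counter_add_batch() in newer trees -- in
lib/percpu_counter.c, which the frag mem accounting uses with
frag_percpu_counter_batch; locking and preemption handling omitted here):

  #include <linux/percpu_counter.h>

  /* Simplified, illustrative only: each CPU accumulates a private delta and
   * only folds it into the shared fbc->count once the delta reaches 'batch'.
   * So the shared count can lag reality by up to ~nr_cpu_ids * batch bytes. */
  static void counter_add_batch_sketch(struct percpu_counter *fbc,
  				     s64 amount, s32 batch)
  {
  	s64 count = __this_cpu_read(*fbc->counters) + amount;

  	if (count >= batch || count <= -batch) {
  		fbc->count += count;		/* under fbc->lock in the real code */
  		__this_cpu_sub(*fbc->counters, count - amount);
  	} else {
  		this_cpu_add(*fbc->counters, amount);
  	}
  }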

I think the bug is in frag_mem_limit().  It just reads the global
counter (fbc->count), without considering that other CPUs can each hold
up to 130K bytes that haven't been subtracted yet (with the 3M low
threshold this becomes dangerous at >= 24 CPUs).
__percpu_counter_compare() does the right thing and takes the number of
(online) CPUs and the batch size into account.
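
Roughly, the difference between the two checks would look like this
(helper names are mine, just to illustrate; the real frag_mem_limit()
lives in include/net/inet_frag.h):

  #include <net/inet_frag.h>
  #include <linux/percpu_counter.h>

  /* Current behaviour: read only the shared fbc->count, ignoring the
   * per-CPU deltas that have not been folded back yet. */
  static inline bool frag_over_limit_fast(struct netns_frags *nf, long thresh)
  {
  	return percpu_counter_read(&nf->mem) > thresh;
  }

  /* Proposed behaviour: __percpu_counter_compare() only does the expensive
   * exact sum when the shared count is within nr_cpus * batch of 'thresh',
   * and stays cheap otherwise. */
  static inline bool frag_over_limit_accurate(struct netns_frags *nf,
  					    long thresh, s32 batch)
  {
  	return __percpu_counter_compare(&nf->mem, thresh, batch) > 0;
  }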

If we choose 16K (16384) and use __percpu_counter_compare(), then we
can scale to systems with 256 CPUs (4*1024*1024/16384 = 256) before this
memory accounting becomes more expensive than not using percpu_counters.
But Liujian reports he has a 384 CPU system, so he would still need to
increase the low and high thresholds.
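
Back-of-envelope, with the same numbers (my own arithmetic, userspace
snippet just to show the calculation):

  #include <stdio.h>

  int main(void)
  {
  	const long high_thresh = 4 * 1024 * 1024;	/* ipfrag_high_thresh */
  	const long batch = 16384;			/* proposed smaller batch */
  	const long cpus = 384;				/* Liujian's machine */

  	/* CPUs that fit under the threshold before the accurate (summing)
  	 * path of __percpu_counter_compare() is always taken: 256. */
  	printf("CPUs covered by 4M/16K batch: %ld\n", high_thresh / batch);

  	/* Worst-case bytes the shared count can miss on 384 CPUs: ~6M,
  	 * well above the 4M high threshold, hence the need to raise it. */
  	printf("max unaccounted on %ld CPUs: %ld bytes\n", cpus, cpus * batch);
  	return 0;
  }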


$ grep -H .  /proc/sys/net/ipv*/ip*frag_*_thresh /proc/sys/net/netfilter/nf_conntrack_frag6_*_thresh
/proc/sys/net/ipv4/ipfrag_high_thresh:4194304
/proc/sys/net/ipv4/ipfrag_low_thresh:3145728
/proc/sys/net/ipv6/ip6frag_high_thresh:4194304
/proc/sys/net/ipv6/ip6frag_low_thresh:3145728
/proc/sys/net/netfilter/nf_conntrack_frag6_high_thresh:4194304
/proc/sys/net/netfilter/nf_conntrack_frag6_low_thresh:3145728

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
