From mboxrd@z Thu Jan 1 00:00:00 1970
From: Eric Dumazet
Subject: Re: [RFC PATCH] ip: re-introduce fragments cache worker
Date: Fri, 6 Jul 2018 05:09:59 -0700
Message-ID: <1df6b0ea-885b-7d5e-a0c9-e01a5a33a4f2@gmail.com>
References: <6512d94713d40f1d572d2023168c48990f0d9cf0.1530798211.git.pabeni@redhat.com>
 <51ef14ac-1d98-ad75-d282-eb6cb177fe7a@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Cc: "David S. Miller", Eric Dumazet, Florian Westphal, NeilBrown
To: Paolo Abeni, Eric Dumazet, netdev@vger.kernel.org
Return-path:
Received: from mail-pl0-f66.google.com ([209.85.160.66]:44316 "EHLO
 mail-pl0-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
 with ESMTP id S932451AbeGFMKF (ORCPT ); Fri, 6 Jul 2018 08:10:05 -0400
Received: by mail-pl0-f66.google.com with SMTP id m16-v6so2813720pls.11
 for ; Fri, 06 Jul 2018 05:10:05 -0700 (PDT)
In-Reply-To:
Content-Language: en-US
Sender: netdev-owner@vger.kernel.org
List-ID:

On 07/06/2018 04:56 AM, Paolo Abeni wrote:
> Hi,
>
> On Fri, 2018-07-06 at 04:23 -0700, Eric Dumazet wrote:
>> Ho hum. No, please.
>>
>> I do not think adding back a GC is wise, since my patches were going in the
>> direction of allowing us to increase limits on current hardware.
>>
>> Meaning that the amount of frags to evict would be quite big under DDoS.
>> (One inet_frag_queue allocated for every incoming tiny frame :/ )
>>
>> A GC is a _huge_ problem, burning one cpu (you would have to provision for
>> this CPU) compared to letting the normal per-frag timer do its job.
>>
>> My plan was to reduce the per-frag timer under load (the default is 30
>> seconds), since this is exactly what your patch is indirectly doing, by
>> aggressively pruning frags under stress.
>>
>> That would be a much simpler heuristic.
>>
>> BTW my own results (before patch) are:
>>
>> lpaa5:/export/hda3/google/edumazet# ./super_netperf 10 -H 10.246.7.134 -t UDP_STREAM -l 60
>> 9602
>> lpaa5:/export/hda3/google/edumazet# ./super_netperf 200 -H 10.246.7.134 -t UDP_STREAM -l 60
>> 9557
>>
>> On the receiver (normal settings here) I had:
>>
>> lpaa6:/export/hda3/google/edumazet# grep . /proc/sys/net/ipv4/ipfrag_*
>> /proc/sys/net/ipv4/ipfrag_high_thresh:104857600
>> /proc/sys/net/ipv4/ipfrag_low_thresh:78643200
>> /proc/sys/net/ipv4/ipfrag_max_dist:0
>> /proc/sys/net/ipv4/ipfrag_secret_interval:0
>> /proc/sys/net/ipv4/ipfrag_time:30
>>
>> lpaa6:/export/hda3/google/edumazet# grep FRAG /proc/net/sockstat
>> FRAG: inuse 824 memory 53125312

> Thank you for the feedback.
>
> With your settings, you need a few more concurrent connections (400?)
> to saturate the ipfrag cache. Above that number, performance will
> still sink.

Maybe, but IP defrag can not be 'perfect'.

For this particular use case, I could still bump high_thresh to 6 GB and all
would be good :)

> This looks nice. I'll try to test it in my use case and I'll report
> here.
>
> Perhaps we can use the default timeout when usage < low_thresh, to
> avoid some maths in a possibly common scenario?

On current 64-bit hardware, a divide here is not a big cost (compared to the
rest of the frag setup), and I would rather start having smaller timeouts
sooner rather than later ;)

(low_thresh is typically set to 75% of high_thresh)

> I have a doubt: under DDoS we will trigger a timeout per
> jiffy; can that keep a CPU busy, too?

Yes, the cpu(s) handling the RX queue(s), which are already provisioned for
networking work ;)

Even without any frags being received, those cpus can be 100% busy.