From: Greg KH <gregkh@linux-foundation.org>
To: Michal Kubecek <mkubecek@suse.cz>
Cc: maowenan <maowenan@huawei.com>,
dwmw2@infradead.org, netdev@vger.kernel.org,
eric.dumazet@gmail.com, edumazet@google.com, davem@davemloft.net,
ycheng@google.com, jdw@amazon.de, stable@vger.kernel.org,
Takashi Iwai <tiwai@suse.de>
Subject: Re: [PATCH stable 4.4 0/9] fix SegmentSmack in stable branch (CVE-2018-5390)
Date: Thu, 13 Sep 2018 14:32:38 +0200
Message-ID: <20180913123238.GI2268@kroah.com>
In-Reply-To: <20180816152409.GK10648@kroah.com>
On Thu, Aug 16, 2018 at 05:24:09PM +0200, Greg KH wrote:
> On Thu, Aug 16, 2018 at 02:33:56PM +0200, Michal Kubecek wrote:
> > On Thu, Aug 16, 2018 at 08:05:50PM +0800, maowenan wrote:
> > > On 2018/8/16 19:39, Michal Kubecek wrote:
> > > >
> > > > I suspect you may be doing something wrong with your tests. I checked
> > > > the segmentsmack testcase and the CPU utilization on receiving side
> > > > (with sending 10 times as many packets as default) went down from ~100%
> > > > to ~3% even when comparing what is in stable 4.4 now against older 4.4
> > > > kernel.
> > >
> > > There seems to be no obvious problem when you send packets with the
> > > default parameters in the SegmentSmack POC, which also depends heavily
> > > on your server's hardware configuration. Please try the parameters
> > > below to generate OFO (out-of-order) packets:
> >
> > I did and even with these (questionable, see below) changes, I did not
> > get more than 10% (of one core) by receiving ksoftirqd.
> >
> > > for (i = 0; i < 1024; i++) // 128->1024
> > ...
> > > usleep(10*1000); // Adjust this and the packet count to match the target! (sleep 100 ms -> 10 ms)
> >
> > The comment in the testcase source suggests to do _one_ of these two
> > changes so that you generate 10 times as many packets as the original
> > testcase. You did both so that you end up sending 102400 packets per
> > second. With 55-byte packets, this kind of attack requires at least
> > 5.5 MB/s (44 Mb/s) of throughput. This is no longer a "low packet rate
> > DoS", I'm afraid.
> >
> > Anyway, even at this rate, I only get ~10% of one core (Intel E5-2697).
> >
> > What I can see, though, is that with the current stable 4.4 code, a
> > modified testcase which sends something like
> >
> > 2:3, 3:4, ..., 3001:3002, 3003:3004, 3004:3005, ... 6001:6002, ...
> >
> > quickly consumes 6 MB of memory for the receive queue of one socket,
> > while earlier 4.4 kernels only used 200-300 KB. I haven't tested the
> > latest 4.4 with Takashi's follow-up yet, but I'm pretty sure it will
> > help while preserving good performance with the original segmentsmack
> > testcase (with the increased packet rate).
>
> Ok, for now I've applied Takashi's fix to the 4.4 stable queue and will
> push out a new 4.4-rc later tonight. Can everyone standardize on that
> and test and let me know if it does, or does not, fix the reported
> issues?
>
> If not, we can go from there and evaluate this much larger patch series.
> But let's try the simple thing first.
So, is the issue still present on the latest 4.4 release? Has anyone
tested it? If not, I'm more than willing to look at backported patches,
but I want to ensure that they really are needed here.
thanks,
greg k-h
Thread overview: 28+ messages
2018-08-16 2:50 [PATCH stable 4.4 0/9] fix SegmentSmack in stable branch (CVE-2018-5390) Mao Wenan
2018-08-16 2:50 ` [PATCH stable 4.4 1/9] Revert "tcp: detect malicious patterns in tcp_collapse_ofo_queue()" Mao Wenan
2018-08-16 2:50 ` [PATCH stable 4.4 2/9] Revert "tcp: avoid collapses in tcp_prune_queue() if possible" Mao Wenan
2018-08-16 2:50 ` [PATCH stable 4.4 3/9] tcp: increment sk_drops for dropped rx packets Mao Wenan
2018-08-16 2:50 ` [PATCH stable 4.4 4/9] tcp: use an RB tree for ooo receive queue Mao Wenan
2018-08-16 2:50 ` [PATCH stable 4.4 5/9] tcp: free batches of packets in tcp_prune_ofo_queue() Mao Wenan
2018-08-16 2:50 ` [PATCH stable 4.4 6/9] tcp: avoid collapses in tcp_prune_queue() if possible Mao Wenan
2018-08-16 2:50 ` [PATCH stable 4.4 7/9] tcp: detect malicious patterns in tcp_collapse_ofo_queue() Mao Wenan
2018-08-16 2:50 ` [PATCH stable 4.4 8/9] tcp: call tcp_drop() from tcp_data_queue_ofo() Mao Wenan
2018-08-16 2:50 ` [PATCH stable 4.4 9/9] tcp: add tcp_ooo_try_coalesce() helper Mao Wenan
2018-08-16 6:16 ` [PATCH stable 4.4 0/9] fix SegmentSmack in stable branch (CVE-2018-5390) Michal Kubecek
2018-08-16 6:42 ` maowenan
2018-08-16 6:52 ` Michal Kubecek
2018-08-16 7:19 ` maowenan
2018-08-16 7:23 ` Michal Kubecek
2018-08-16 7:39 ` maowenan
2018-08-16 7:44 ` Michal Kubecek
2018-08-16 7:55 ` maowenan
2018-08-16 11:39 ` Michal Kubecek
2018-08-16 12:05 ` maowenan
2018-08-16 12:33 ` Michal Kubecek
2018-08-16 15:24 ` Greg KH
2018-08-16 16:06 ` Michal Kubecek
2018-08-16 16:20 ` Greg KH
2018-08-17 2:48 ` maowenan
2018-09-13 12:32 ` Greg KH [this message]
2018-09-13 12:44 ` Eric Dumazet
2018-09-14 2:24 ` maowenan