XDP Newbie developer discussions
From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Federico Parola <fede.parola@hotmail.it>
Cc: xdp-newbies@vger.kernel.org, brouer@redhat.com
Subject: Re: Multi-core scalability problems
Date: Thu, 15 Oct 2020 15:22:52 +0200	[thread overview]
Message-ID: <20201015152252.4360cf9a@carbon> (raw)
In-Reply-To: <VI1PR04MB3104487E7F503BEEE13AE7999E020@VI1PR04MB3104.eurprd04.prod.outlook.com>

On Thu, 15 Oct 2020 14:04:51 +0200
Federico Parola <fede.parola@hotmail.it> wrote:

> On 14/10/20 16:26, Jesper Dangaard Brouer wrote:
> > On Wed, 14 Oct 2020 14:17:46 +0200
> > Federico Parola <fede.parola@hotmail.it> wrote:
> >   
> >> On 14/10/20 11:15, Jesper Dangaard Brouer wrote:  
> >>> On Wed, 14 Oct 2020 08:56:43 +0200
> >>> Federico Parola <fede.parola@hotmail.it> wrote:
> >>>
> >>> [...]  
> >>>>> Can you try to use this[2] tool:
> >>>>>     ethtool_stats.pl --dev enp101s0f0
> >>>>>
> >>>>> And notice if there are any strange counters.
> >>>>>
> >>>>>
> >>>>> [2]https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
[...]

> >> The only solution I've found so far is to reduce the size of the rx ring
> >> as I mentioned in my former post. However I still see a decrease in
> >> performance when exceeding 4 cores.  
> > 
> > What is happening when you are reducing the size of the RX ring is two
> > things. (1) The i40e driver has a page reuse/recycle trick that gets
> > less efficient, but because you are dropping packets early you are not
> > affected. (2) The total size of L3 cache memory you need to touch is
> > also decreased.
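To put rough numbers on the L3 footprint: with large rings on every queue, the packet-buffer memory the NIC DMAs into can easily exceed the L3 cache. A back-of-the-envelope sketch (ring size, queue count and per-descriptor buffer size are example values, not measured from this system):

```shell
# Example values only; substitute your NIC's actual settings.
RING_SIZE=4096      # descriptors per RX ring (i40e maximum)
NUM_QUEUES=6        # one RX queue per core in use
BUF_SIZE=2048       # bytes of packet-buffer memory per descriptor

# Total packet-buffer memory the NIC can DMA into across all rings:
FOOTPRINT_MIB=$(( RING_SIZE * NUM_QUEUES * BUF_SIZE / 1024 / 1024 ))
echo "${FOOTPRINT_MIB} MiB"   # 48 MiB with these numbers
```

With these example numbers the working set is 48 MiB, typically larger than the L3 cache, which is consistent with the CPU backing off DDIO until the rings shrink.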
> > 
> > I think you are hitting case (2).  Intel CPUs have a cool feature
> > called DDIO (Data-Direct I/O) or DCA (Direct Cache Access), which can
> > deliver packet data directly into L3 cache memory (if the NIC is
> > PCIe-connected directly to the CPU).  The CPU is in charge when this
> > feature is enabled, and it will try to avoid L3 thrashing and disable
> > it in certain cases.  When you reduce the size of the RX rings, you
> > also need less L3 cache memory, so the CPU will allow this DDIO
> > feature.
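For reference, the ring resize itself is done with ethtool; a sketch (the device name enp101s0f0 is taken from earlier in the thread, and 512 is just an example size to experiment with, not a recommendation):

```shell
# Show current and maximum RX/TX ring sizes for the device
ethtool -g enp101s0f0

# Shrink the RX ring so its buffers fit comfortably in L3
# (512 is an example value; measure to find the sweet spot)
ethtool -G enp101s0f0 rx 512
```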
> > 
> > You can use the 'perf stat' tool to check if this is happening, by
> > monitoring L3 (and L2) cache usage.  
> 
> What events should I monitor? LLC-load-misses/LLC-loads?

Looking at my own results from the xdp-paper[1], it looks like it
results in real 'cache-misses' (perf stat -e cache-misses).

E.g. I ran:
 sudo ~/perf stat -C3 -e cycles -e instructions -e cache-references -e cache-misses -r 3 sleep 1

Notice how the 'insn per cycle' gets less efficient when we experience
these cache-misses.
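To answer the LLC question directly: the same perf invocation works with the LLC events you mention. An untested command sketch (-C3 assumes the RX queue is processed on CPU 3, as in my run above):

```shell
# Monitor last-level-cache behaviour on the core processing the RX queue
sudo perf stat -C3 \
     -e LLC-loads -e LLC-load-misses \
     -e cache-references -e cache-misses \
     -r 3 sleep 1
```

If LLC-load-misses climbs together with the throughput drop when you add cores, that points at the DDIO/L3 explanation above.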

Also notice how the RX-size of the queues affects XDP-redirect in [2].


[1] https://github.com/xdp-project/xdp-paper/blob/master/benchmarks/bench01_baseline.org
[2] https://github.com/xdp-project/xdp-paper/blob/master/benchmarks/bench05_xdp_redirect.org
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Thread overview: 14+ messages
2020-10-13 13:49 Multi-core scalability problems Federico Parola
2020-10-13 16:41 ` Jesper Dangaard Brouer
2020-10-13 16:44 ` Toke Høiland-Jørgensen
2020-10-14  6:56   ` Federico Parola
2020-10-14  9:15     ` Jesper Dangaard Brouer
2020-10-14 12:17       ` Federico Parola
2020-10-14 14:26         ` Jesper Dangaard Brouer
2020-10-15 12:04           ` Federico Parola
2020-10-15 13:22             ` Jesper Dangaard Brouer [this message]
2020-10-19 15:23               ` Federico Parola
2020-10-19 18:26                 ` Jesper Dangaard Brouer
2020-10-24 13:57                   ` Federico Parola
2020-10-26  8:14                     ` Jesper Dangaard Brouer
     [not found] <VI1PR04MB3104C1D86BDC113F4AC0CF4A9E050@VI1PR04MB3104.eurprd04.prod.outlook.com>
2020-10-14  8:35 ` Federico Parola
