netdev.vger.kernel.org archive mirror
From: Paolo Abeni <pabeni@redhat.com>
To: Tom Herbert <tom@herbertland.com>
Cc: Linux Kernel Network Developers <netdev@vger.kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>
Subject: Re: [RFC PATCH 0/3] udp: scalability improvements
Date: Mon, 08 May 2017 09:29:10 +0200	[thread overview]
Message-ID: <1494228550.2397.3.camel@redhat.com> (raw)
In-Reply-To: <CALx6S349K0_61uKs0Aje4f-8yjkQdLgFTxgF=Xd1jOyS0uddVQ@mail.gmail.com>

On Sat, 2017-05-06 at 16:09 -0700, Tom Herbert wrote:
> On Sat, May 6, 2017 at 1:42 PM, Paolo Abeni <pabeni@redhat.com> wrote:
> > This patch series implement an idea suggested by Eric Dumazet to
> > reduce the contention of the udp sk_receive_queue lock when the socket is
> > under flood.
> > 
> > An ancillary queue is added to the udp socket, and the socket always
> > tries first to read packets from such queue. If it's empty, we splice
> > the content from sk_receive_queue into the ancillary queue.
> > 
> > The first patch introduces some helpers to keep the udp code small, and the
> > following two implement the ancillary queue strategy. The code is split
> > to hopefully help the reviewing process.
> > 
> > The measured overall gain under udp flood is in the 20-35% range depending on
> > the numa layout and the number of ingress queue used by the relevant nic.
> > 
> 
> Certainly sounds good, but can you give real reproducible performance
> numbers including the test that was run?

You are right, and I'm sorry, the cover letter was too terse.

I used pktgen as the sender, with 64-byte packets and random source
ports, on a host connected back-to-back to the DUT via a 10Gbps link.
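For reproducibility, the sender setup can be expressed as a pktgen
configuration along these lines (interface name, destination IP and MAC
are placeholders for the actual b2b link, not the ones I used):

```shell
# Sender-side pktgen sketch: 64-byte UDP packets, randomized source port.
modprobe pktgen
echo "rem_device_all"  > /proc/net/pktgen/kpktgend_0
echo "add_device eth0" > /proc/net/pktgen/kpktgend_0
echo "count 0"         > /proc/net/pktgen/eth0   # run until stopped
echo "pkt_size 64"     > /proc/net/pktgen/eth0   # 64-byte packets
echo "dst 192.168.1.2" > /proc/net/pktgen/eth0
echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth0
echo "flag UDPSRC_RND" > /proc/net/pktgen/eth0   # randomize UDP source port
echo "udp_src_min 1024"  > /proc/net/pktgen/eth0
echo "udp_src_max 65535" > /proc/net/pktgen/eth0
echo "start" > /proc/net/pktgen/pgctrl
```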

On the receiver I used Jesper's udp_sink program
(https://github.com/netoptimizer/network-testing/blob/master/src/udp_sink.c)
and configured a hardware L4 RX hash, so that I could control the
number of ingress NIC RX queues hit by the UDP traffic via ethtool -L.
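Concretely, the receiver-side knobs look like this (interface name is a
placeholder; the exact hash options depend on the NIC driver):

```shell
# Hash UDP flows on the 4-tuple so the randomized source ports spread
# across RX queues, then set the queue count under test.
ethtool -N eth0 rx-flow-hash udp4 sdfn   # include L4 ports in the RSS hash
ethtool -L eth0 combined 2               # e.g. 2 ingress RX queues
```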

The udp_sink program was bound to the first idle CPU, to get more
stable numbers.

Using a single NUMA node as receiver, I got the following:

nic rx queues		vanilla			patched kernel
1			1820 kpps		1900 kpps
2			1950 kpps		2500 kpps
16			1670 kpps		2120 kpps

When using a single NIC RX queue I also enabled busy polling;
otherwise, in my scenario, the bh processing becomes the bottleneck,
which produces large artifacts in the measured performance (e.g.
improving the udp_sink run time decreases the overall throughput, since
more scheduler activity comes into play).

Cheers,

Paolo


Thread overview: 6+ messages
2017-05-06 20:42 [RFC PATCH 0/3] udp: scalability improvements Paolo Abeni
2017-05-06 20:42 ` [RFC PATCH 1/3] net/sock: factor out dequeue/peek with offset code Paolo Abeni
2017-05-06 20:42 ` [RFC PATCH 2/3] udp: use a separate rx queue for packet reception Paolo Abeni
2017-05-06 20:42 ` [RFC PATCH 3/3] udp: keep the sk_receive_queue held when splicing Paolo Abeni
2017-05-06 23:09 ` [RFC PATCH 0/3] udp: scalability improvements Tom Herbert
2017-05-08  7:29   ` Paolo Abeni [this message]
