From: Paolo Abeni <pabeni@redhat.com>
To: Eric Dumazet <edumazet@google.com>
Cc: netdev <netdev@vger.kernel.org>,
David Miller <davem@davemloft.net>,
Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
intel-wired-lan@lists.osuosl.org,
Alexander Duyck <alexander.h.duyck@intel.com>
Subject: Re: [RFC PATCH 0/2] net:setup XPS mapping for each online CPU
Date: Thu, 15 Mar 2018 16:51:22 +0100
Message-ID: <1521129082.2681.13.camel@redhat.com>
In-Reply-To: <CANn89iKcHzs2enS-CjNx3xJWwXwjX9Uu_7JYihOR4=HPsQLj6Q@mail.gmail.com>
Hi,
On Thu, 2018-03-15 at 15:31 +0000, Eric Dumazet wrote:
> On Thu, Mar 15, 2018 at 8:08 AM Paolo Abeni <pabeni@redhat.com> wrote:
>
> > Currently, most MQ netdevices set up the default XPS configuration
> > mapping 1-1 the first real_num_tx_queues queues and CPUs, and no
> > mapping is created for the CPUs with an id greater than
> > real_num_tx_queues, if any.
> > As a result, the xmit path for unconnected sockets on such cores
> > experiences a relevant overhead in netdev_pick_tx(), which needs to
> > dissect each packet and compute its hash.
> > Such a scenario is easily triggered, e.g. by a DNS server under
> > relevant load, as the user-space process is moved away from the CPUs
> > serving the softirqs (note: this is beneficial for the overall DNS
> > server performance).
> > This series introduces a helper to easily set up the XPS mapping for
> > all the online CPUs, and uses it in the ixgbe driver, demonstrating a
> > relevant performance improvement in the above scenario.
> >
> > Paolo Abeni (2):
> >   net: introduce netif_set_xps()
> >   ixgbe: setup XPS via netif_set_xps()
>
> Resent, not HTML this time, sorry for duplication.
>
> I truly believe XPS should not be set up by devices.
>
> XPS is policy, and policy does belong to user space.
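[For reference, the user-space knob referred to here is the per-TX-queue
xps_cpus sysfs file, which takes a hexadecimal CPU mask. Below is a
minimal sketch of driving it from C; the device name, queue ids and
masks are illustrative only:

#include <stdio.h>

/* Write a hex CPU mask to /sys/class/net/<dev>/queues/tx-<q>/xps_cpus.
 * Writing this file normally requires root privileges. */
static int set_xps_cpus(const char *dev, int queue, const char *hexmask)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/class/net/%s/queues/tx-%d/xps_cpus", dev, queue);
	f = fopen(path, "w");
	if (!f)
		return -1;
	/* the mask is a hex bitmap, e.g. "f" = CPUs 0-3 */
	fprintf(f, "%s\n", hexmask);
	return fclose(f);
}

int main(void)
{
	/* e.g. steer transmissions from CPUs 0-3 to tx-0, CPUs 4-7 to tx-1 */
	set_xps_cpus("eth0", 0, "0f");
	set_xps_cpus("eth0", 1, "f0");
	return 0;
}
]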
Thank you for your comments!

As a general principle, I agree that policy should live in user space,
but I also think the kernel should provide a reasonable default. Many MQ
devices already configure XPS, and their default is, AFAICS, sub-optimal.
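
For context, here is a minimal sketch of the kind of all-online-CPUs
helper the series proposes; the actual patch body is not quoted in this
message, so the implementation below is an assumption built on the
existing netif_set_xps_queue() API:

#include <linux/cpumask.h>
#include <linux/netdevice.h>

/* Spread all online CPUs round-robin across the device's real TX
 * queues, so that no online CPU is left without an XPS mapping. */
static int netif_set_xps(struct net_device *dev)
{
	cpumask_var_t mask;
	int cpu, err = 0;
	u16 qid;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	for (qid = 0; qid < dev->real_num_tx_queues; qid++) {
		cpumask_clear(mask);

		/* CPU c serves queue c % real_num_tx_queues */
		for_each_online_cpu(cpu)
			if (cpu % dev->real_num_tx_queues == qid)
				cpumask_set_cpu(cpu, mask);

		err = netif_set_xps_queue(dev, mask, qid);
		if (err)
			break;
	}

	free_cpumask_var(mask);
	return err;
}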
> Note that if XPS is not set up, MQ queue selection is just fine by default;
I'm sorry, I do not follow. AFAICS, with unconnected sockets and no XPS
mapping we always hit the netdev_pick_tx()/skb_tx_hash()/
skb_flow_dissect() overhead in the xmit path.
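
To make that concrete, here is the rough shape of the fallback path,
loosely modeled on net/core/dev.c of that era (get_xps_queue() is the
in-kernel static helper; this is a sketch, not a verbatim copy):

static u16 pick_tx_sketch(struct net_device *dev, struct sk_buff *skb)
{
	int queue = -1;

#ifdef CONFIG_XPS
	/* returns -1 when the running CPU has no XPS mapping */
	queue = get_xps_queue(dev, skb);
#endif
	if (queue < 0)
		/* skb_tx_hash() -> skb_get_hash(): full flow dissection
		 * for every packet sent on an unmapped CPU */
		queue = skb_tx_hash(dev, skb);

	return queue;
}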
Cheers,
Paolo