From: Martin Habets <habetsm.xilinx@gmail.com>
To: Ivan Babrou <ivan@cloudflare.com>
Cc: Linux Kernel Network Developers <netdev@vger.kernel.org>,
	kernel-team <kernel-team@cloudflare.com>,
	Edward Cree <ecree.xilinx@gmail.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Subject: Re: [PATCH net-next] sfc: backport XDP EV queue sharing from the out-of-tree driver
Date: Mon, 14 Dec 2020 13:10:28 +0000	[thread overview]
Message-ID: <20201214131028.f4fey3yhjugfcftr@gmail.com> (raw)
In-Reply-To: <CABWYdi1VWaOOhOx6wOAd0DjSXMGaPvL_x6d=M0jtX15naecBWA@mail.gmail.com>

On Sun, Dec 13, 2020 at 10:44:56AM -0800, Ivan Babrou wrote:
> On Sun, Dec 13, 2020 at 4:23 AM Martin Habets <habetsm.xilinx@gmail.com> wrote:
> >
> > On Thu, Dec 10, 2020 at 04:18:53PM -0800, Ivan Babrou wrote:
> > > Queue sharing behaviour already exists in the out-of-tree sfc driver,
> > > available under xdp_alloc_tx_resources module parameter.
> >
> > This comment is not relevant for in-tree patches. I'd also like to
> > make clear that we never intend to upstream any module parameters.
> 
> Would the following commit message be acceptable?
> 
> sfc: reduce the number of requested xdp ev queues
> 
> Without this change the driver tries to allocate too many queues,
> exceeding the number of available MSI-X interrupts on machines
> with many logical CPUs and default adapter settings:
> 
> Insufficient resources for 12 XDP event queues (24 other channels, max 32)
> 
> Which in turn triggers EINVAL on XDP processing:
> 
> sfc 0000:86:00.0 ext0: XDP TX failed (-22)

Yes, that looks fine to me.

> > > This avoids the following issue on machines with many cpus:
> > >
> > > Insufficient resources for 12 XDP event queues (24 other channels, max 32)
> > >
> > > Which in turn triggers EINVAL on XDP processing:
> > >
> > > sfc 0000:86:00.0 ext0: XDP TX failed (-22)
> >
> > The code changes themselves are good.
> > The real limit that is hit here is with the number of MSI-X interrupts.
> > Reducing the number of event queues needed also reduces the number of
> > interrupts required, so this is a good thing.
> > Another way to get around this issue is to increase the number of
> > MSI-X interrupts allowed by the NIC using the sfboot tool.
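
To make the interrupt budget concrete, here is a minimal standalone
sketch (not driver code): the channel count and vector limit are the
illustrative numbers from the error message above, and msix_budget_ok()
is a hypothetical helper, not anything in the sfc driver.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical helper: does the requested channel layout fit within
 * the MSI-X vectors the NIC exposes?
 */
static bool msix_budget_ok(unsigned int other_channels,
			   unsigned int xdp_ev_queues,
			   unsigned int max_vectors)
{
	return other_channels + xdp_ev_queues <= max_vectors;
}

int main(void)
{
	/* From the quoted error: 24 other channels, max 32 vectors. */
	printf("12 XDP event queues fit: %s\n",
	       msix_budget_ok(24, 12, 32) ? "yes" : "no");	/* no  */
	printf(" 6 XDP event queues fit: %s\n",
	       msix_budget_ok(24, 6, 32) ? "yes" : "no");	/* yes */
	return 0;
}
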
> 
> I've tried that, but on 5.10-rc7 with the in-tree driver both ethtool -l
> and sfboot fail to work with the sfc adapter for some reason.
> 
> The docs for the setting itself say you need to contact support
> to figure out the right values to use to make sure it works properly.

Indeed, our support may be better placed to help with this.

> What is your overall verdict on the patch? Should it be in the kernel,
> or should users change the msix-limit configuration? The configuration
> change also requires bypassing PCIe lockdown measures, which is
> why I'd prefer things to work out of the box.

The patch itself is good, as it saves on resources.
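
For reference, a minimal userspace sketch of the arithmetic the patch
changes; all constants here are assumed, illustrative values rather
than the real EFX_* definitions, picked so the "before" case matches
the 12 XDP event queues in the quoted error:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int n_xdp_tx = 48;		/* assumed num_possible_cpus()     */
	unsigned int txq_per_channel = 4;	/* assumed EFX_MAX_TXQ_PER_CHANNEL */
	unsigned int max_evq_size = 32768;	/* assumed EFX_MAX_EVQ_SIZE        */
	unsigned int txq_max_ent = 4096;	/* assumed EFX_TXQ_MAX_ENT(efx)    */
	unsigned int tx_per_ev = max_evq_size / txq_max_ent;	/* 8 */

	printf("before: %u XDP event queues\n",
	       DIV_ROUND_UP(n_xdp_tx, txq_per_channel));	/* 12 */
	printf("after:  %u XDP event queues\n",
	       DIV_ROUND_UP(n_xdp_tx, tx_per_ev));		/* 6 */
	return 0;
}

With one event queue able to serve several TX queues, the same number
of CPUs needs far fewer event queues, and therefore fewer MSI-X vectors.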

Thanks,
Martin

> Thanks!
> 
> >
> > Best regards,
> > Martin
> >
> > > Signed-off-by: Ivan Babrou <ivan@cloudflare.com>
> > > ---
> > >  drivers/net/ethernet/sfc/efx_channels.c | 6 ++++--
> > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
> > > index a4a626e9cd9a..1bfeee283ea9 100644
> > > --- a/drivers/net/ethernet/sfc/efx_channels.c
> > > +++ b/drivers/net/ethernet/sfc/efx_channels.c
> > > @@ -17,6 +17,7 @@
> > >  #include "rx_common.h"
> > >  #include "nic.h"
> > >  #include "sriov.h"
> > > +#include "workarounds.h"
> > >
> > >  /* This is the first interrupt mode to try out of:
> > >   * 0 => MSI-X
> > > @@ -137,6 +138,7 @@ static int efx_allocate_msix_channels(struct efx_nic *efx,
> > >  {
> > >   unsigned int n_channels = parallelism;
> > >   int vec_count;
> > > + int tx_per_ev;
> > >   int n_xdp_tx;
> > >   int n_xdp_ev;
> > >
> > > @@ -149,9 +151,9 @@ static int efx_allocate_msix_channels(struct efx_nic *efx,
> > >   * multiple tx queues, assuming tx and ev queues are both
> > >   * maximum size.
> > >   */
> > > -
> > > + tx_per_ev = EFX_MAX_EVQ_SIZE / EFX_TXQ_MAX_ENT(efx);
> > >   n_xdp_tx = num_possible_cpus();
> > > - n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, EFX_MAX_TXQ_PER_CHANNEL);
> > > + n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, tx_per_ev);
> > >
> > >   vec_count = pci_msix_vec_count(efx->pci_dev);
> > >   if (vec_count < 0)
> > > --
> > > 2.29.2
> >
> > --
> > Martin Habets <habetsm.xilinx@gmail.com>

-- 
Martin Habets <habetsm.xilinx@gmail.com>

Thread overview: 5+ messages
2020-12-11  0:18 [PATCH net-next] sfc: backport XDP EV queue sharing from the out-of-tree driver Ivan Babrou
2020-12-12  3:36 ` Jakub Kicinski
2020-12-13 12:23 ` Martin Habets
2020-12-13 18:44   ` Ivan Babrou
2020-12-14 13:10     ` Martin Habets [this message]
