From: Jakub Kicinski <kuba@kernel.org>
To: "Wilczynski, Michal" <michal.wilczynski@intel.com>
Cc: Jiri Pirko <jiri@resnulli.us>,
	Tony Nguyen <anthony.l.nguyen@intel.com>, <davem@davemloft.net>,
	<pabeni@redhat.com>, <edumazet@google.com>,
	<netdev@vger.kernel.org>, <lukasz.czapnik@intel.com>,
	<przemyslaw.kitszel@intel.com>
Subject: Re: [PATCH net-next 0/5][pull request] ice: Support 5 layer Tx scheduler topology
Date: Thu, 25 May 2023 08:41:39 -0700	[thread overview]
Message-ID: <20230525084139.7e381557@kernel.org> (raw)
In-Reply-To: <e5a3edb9-1f6b-d7af-3f3a-4c80ee567c6b@intel.com>

On Thu, 25 May 2023 09:49:53 +0200 Wilczynski, Michal wrote:
> On 5/24/2023 10:02 PM, Jakub Kicinski wrote:
> > On Wed, 24 May 2023 18:59:20 +0200 Wilczynski, Michal wrote:  
> >> Sorry about that, I gave examples off the top of my head, since those are the
> >> features that could potentially modify the scheduler tree; it seemed obvious to me
> >> at the time. Lowering the number of layers in the scheduling tree increases performance,
> >> but only allows you to create a much simpler scheduling tree. I agree that mentioning the
> >> features that actually modify the scheduling tree could be helpful to the reviewer.  
> > The reviewer is one thing, but so is the user. The documentation needs to be
> > clear enough for the user to be able to confidently make a choice one
> > way or the other. I'm not sure 5- vs 9-layer is meaningful to the user
> > at all.  
> 
> It is especially relevant if the number of VFs/queues is not a multiple of 8, as described
> in the first commit of this series - that's the real-world user problem. Performance was
> not consistent among queues if you had, for example, 9 queues.
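> 
> As a rough sketch of that imbalance (hypothetical numbers; this just
> assumes queues attach to branch nodes in groups of up to 8 and that
> the scheduler splits bandwidth equally between sibling nodes):
> 
>     import math
> 
>     def per_group_queue_share(num_queues, group_size=8):
>         """Per-queue bandwidth share within each group, when queues
>         are grouped in eights and every group gets an equal share."""
>         groups = math.ceil(num_queues / group_size)
>         shares = []
>         for g in range(groups):
>             queues_in_group = min(group_size, num_queues - g * group_size)
>             shares.append(1.0 / groups / queues_in_group)
>         return shares
> 
>     # 9 queues -> two groups (8 + 1): each of the eight grouped queues
>     # gets ~6.25% of the bandwidth while the lone ninth queue gets ~50%.
>     print(per_group_queue_share(9))  # [0.0625, 0.5]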
> 
> But in the answers above I was also trying to provide some background on why
> we don't want to make the 5-layer topology the default.

What I'm saying is that 5- vs 9-layer is not meaningful as 
a description. The user has to (somehow?!) know that the number 
of layers in the hierarchy implies the grouping problem.
The documentation doesn't mention the grouping problem!

+     - This parameter gives user flexibility to choose the 5-layer
+       transmit scheduler topology, which helps to smooth out the transmit
+       performance. The default topology is 9-layer. Each layer represents
+       a physical junction in the network. Decreased number of layers
+       improves performance, but at the same time number of network junctions
+       is reduced, which might not be desirable depending on the use case.
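
If the knob stays, the documentation needs to spell out the grouping
trade-off and show the actual invocation. For a permanent (NVM) param
I'd expect something along these lines (hypothetical device address;
the exact value type and cmode are whatever patch 4/5 defines):

  devlink dev param set pci/0000:01:00.0 name txbalancing \
      value true cmode permanent
  # the new topology only takes effect once the NVM change is
  # activated, i.e. after a reboot
  devlink dev param show pci/0000:01:00.0 name txbalancing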

> >  In fact, the entire configuration would be better defined as
> > a choice of features the user wants to be available, and the FW || driver
> > makes the decision on how to implement that most efficiently.  
> 
> The user can change the number of queues/VFs on the fly, but a change in topology
> basically requires a reboot, since the contents of the NVM are changed.
> 
> So to accomplish that we would need to perform a topology change after each
> change to the number of queues, and it's not feasible to reboot every time the
> user changes the queue count.
> 
> Additionally, the 5-layer topology doesn't disable any of the features mentioned
> (i.e. DCB/devlink-rate); it just makes them work a bit differently, but they
> should still work.
> 
> To summarize: I would say that this series addresses a specific performance problem
> users might have if their queue count is not a multiple of 8. I can't see how this
> can be solved by a choice of features, as the decision regarding the number of
> queues can be made on the fly.

Well, think among yourselves. "txbalancing" and enigmatic
documentation talking about topology and junctions are a no-go.


Thread overview: 16+ messages
2023-05-23 17:40 [PATCH net-next 0/5][pull request] ice: Support 5 layer Tx scheduler topology Tony Nguyen
2023-05-23 17:40 ` [PATCH net-next 1/5] ice: Support 5 layer topology Tony Nguyen
2023-05-23 17:40 ` [PATCH net-next 2/5] ice: Adjust the VSI/Aggregator layers Tony Nguyen
2023-05-23 17:40 ` [PATCH net-next 3/5] ice: Enable switching default tx scheduler topology Tony Nguyen
2023-05-23 17:40 ` [PATCH net-next 4/5] ice: Add txbalancing devlink param Tony Nguyen
2023-05-24 11:57   ` Jiri Pirko
2023-05-25 16:01     ` Lukasz Czapnik
2023-05-23 17:40 ` [PATCH net-next 5/5] ice: Document txbalancing parameter Tony Nguyen
2023-05-24 11:54 ` [PATCH net-next 0/5][pull request] ice: Support 5 layer Tx scheduler topology Jiri Pirko
2023-05-24 13:25   ` Wilczynski, Michal
2023-05-24 16:26     ` Jakub Kicinski
2023-05-24 16:59       ` Wilczynski, Michal
2023-05-24 20:02         ` Jakub Kicinski
2023-05-25  7:49           ` Wilczynski, Michal
2023-05-25 15:41             ` Jakub Kicinski [this message]
2023-05-26  7:43               ` Wilczynski, Michal
