From: Vladimir Oltean <vladimir.oltean@nxp.com>
To: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Cc: netdev@vger.kernel.org, John Fastabend <john.fastabend@gmail.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Claudiu Manoil <claudiu.manoil@nxp.com>,
Camelia Groza <camelia.groza@nxp.com>,
Xiaoliang Yang <xiaoliang.yang_1@nxp.com>,
Gerhard Engleder <gerhard@engleder-embedded.com>,
Alexander Duyck <alexander.duyck@gmail.com>,
Kurt Kanzenbach <kurt@linutronix.de>,
Ferenc Fejes <ferenc.fejes@ericsson.com>,
Tony Nguyen <anthony.l.nguyen@intel.com>,
Jesse Brandeburg <jesse.brandeburg@intel.com>,
Jacob Keller <jacob.e.keller@intel.com>
Subject: Re: [RFC PATCH net-next 00/11] ENETC mqprio/taprio cleanup
Date: Thu, 26 Jan 2023 22:39:49 +0200
Message-ID: <20230126203949.vd2mptdxmbbz55r2@skbuf>
In-Reply-To: <87h6wegrjz.fsf@intel.com>
On Wed, Jan 25, 2023 at 02:47:28PM -0800, Vinicius Costa Gomes wrote:
> > The problem with gates per TXQ is that it doesn't answer the obvious
> > question of how does that work out when there is >1 TXQ per TC.
> > With the clarification that "gates per TXQ" requires that there is a
> > single TXQ per TC, this effectively becomes just a matter of changing
> > the indices of set bits in the gate mask (TC 3 may correspond to TXQ
> > offset 5), which is essentially what Gerhard seems to want to see with
> > tsnep. That is something I don't have a problem with.
> >
> > But I may want, as a sanity measure, to enforce that the mqprio queue
> > count for each TC is no more than 1 ;) Otherwise, we fall into that
> > problem I keep repeating: skb_tx_hash() arbitrarily hashes between 2
> > TXQs, both have an open gate in software (allowing traffic to pass),
> > but in hardware, one TXQ has an open gate and the other has a closed gate.
> > So half the traffic goes into the bitbucket, because software doesn't
> > know what hardware does/expects.
> >
> > So please ACK this issue and my proposal to break your "popular" mqprio
> > configuration.
>
> I am afraid that I cannot give my ACK for that: it is, by some
> definition, a breaking change. A config that has been working for many
> years is going to stop working.
>
> I know that is not ideal; perhaps we could use the capabilities "trick"
> to help minimize the breakage? I.e. add a capability indicating whether
> it makes sense for the device to have multiple TXQs handling a single
> TC?
>
> Would it help?
The issue is not having multiple TXQs handle a single TC (that is fine),
but having multiple TXQs of different priorities handle a single TC...

So how does that work with igc? What exactly are we keeping alive?
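For context, the gate-mask translation discussed earlier in the thread
(moving the set bits from TC indices to TXQ indices, e.g. TC 3 mapping to
TXQ offset 5) can be sketched roughly as below. This is plain Python, not
kernel code; the helper name and the example offsets are made up for
illustration:

```python
def tc_gate_mask_to_txq_mask(tc_mask, offsets, counts):
    """Translate a per-TC gate mask into a per-TXQ gate mask.

    offsets[tc] and counts[tc] describe the mqprio TXQ range of each TC.
    The translation is only unambiguous when counts[tc] == 1 for every
    TC with an open gate; with counts[tc] > 1, every TXQ of that TC gets
    the same gate state in software, while hardware may disagree per
    TXQ. That disagreement is the skb_tx_hash() problem described above.
    """
    txq_mask = 0
    for tc, (offset, count) in enumerate(zip(offsets, counts)):
        if tc_mask & (1 << tc):
            for q in range(offset, offset + count):
                txq_mask |= 1 << q
    return txq_mask

# TC 3 open, and TC 3 mapped to the single TXQ at offset 5:
# tc_gate_mask_to_txq_mask(0b1000, [0, 1, 2, 5], [1, 1, 1, 1]) == 0b100000
```

With one TXQ per TC this is a pure reindexing of the set bits, which is
why enforcing a queue count of 1 per TC makes the per-TXQ interpretation
well-defined.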
Thread overview: 25+ messages
2023-01-20 14:15 [RFC PATCH net-next 00/11] ENETC mqprio/taprio cleanup Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 01/11] net/sched: mqprio: refactor nlattr parsing to a separate function Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 02/11] net/sched: mqprio: refactor offloading and unoffloading to dedicated functions Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 03/11] net/sched: move struct tc_mqprio_qopt_offload from pkt_cls.h to pkt_sched.h Vladimir Oltean
2023-01-25 13:09 ` Kurt Kanzenbach
2023-01-25 13:16 ` Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 04/11] net/sched: mqprio: allow offloading drivers to request queue count validation Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 05/11] net/sched: mqprio: add extack messages for " Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 06/11] net: enetc: request mqprio to validate the queue counts Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 07/11] net: enetc: act upon the requested mqprio queue configuration Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 08/11] net/sched: taprio: pass mqprio queue configuration to ndo_setup_tc() Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 09/11] net: enetc: act upon mqprio queue config in taprio offload Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 10/11] net/sched: taprio: validate that gate mask does not exceed number of TCs Vladimir Oltean
2023-01-20 14:15 ` [RFC PATCH net-next 11/11] net/sched: taprio: only calculate gate mask per TXQ for igc Vladimir Oltean
2023-01-25 1:11 ` Vinicius Costa Gomes
2023-01-23 18:22 ` [RFC PATCH net-next 00/11] ENETC mqprio/taprio cleanup Jacob Keller
2023-01-24 14:26 ` Vladimir Oltean
2023-01-24 22:30 ` Jacob Keller
2023-01-23 21:21 ` Gerhard Engleder
2023-01-23 21:31 ` Vladimir Oltean
2023-01-23 22:20 ` Gerhard Engleder
2023-01-25 1:11 ` Vinicius Costa Gomes
2023-01-25 13:10 ` Vladimir Oltean
2023-01-25 22:47 ` Vinicius Costa Gomes
2023-01-26 20:39 ` Vladimir Oltean [this message]