From: Vinicius Costa Gomes <vinicius.gomes@intel.com>
To: Vladimir Oltean <vladimir.oltean@nxp.com>,
	netdev@vger.kernel.org, John Fastabend <john.fastabend@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Claudiu Manoil <claudiu.manoil@nxp.com>,
	Camelia Groza <camelia.groza@nxp.com>,
	Xiaoliang Yang <xiaoliang.yang_1@nxp.com>,
	Gerhard Engleder <gerhard@engleder-embedded.com>,
	Alexander Duyck <alexander.duyck@gmail.com>,
	Kurt Kanzenbach <kurt@linutronix.de>,
	Ferenc Fejes <ferenc.fejes@ericsson.com>,
	Tony Nguyen <anthony.l.nguyen@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	Jacob Keller <jacob.e.keller@intel.com>
Subject: Re: [RFC PATCH net-next 11/11] net/sched: taprio: only calculate gate mask per TXQ for igc
Date: Tue, 24 Jan 2023 17:11:36 -0800
Message-ID: <87r0vjh0zb.fsf@intel.com>
In-Reply-To: <20230120141537.1350744-12-vladimir.oltean@nxp.com>

Vladimir Oltean <vladimir.oltean@nxp.com> writes:

> Vinicius has repeated a couple of times in our discussion that it was a
> mistake for the taprio UAPI to take as input the Qbv gate mask per TC
> rather than per TXQ. In the Frame Preemption RFC thread:
> https://patchwork.kernel.org/project/netdevbpf/patch/20220816222920.1952936-3-vladimir.oltean@nxp.com/#25011225
>
> I had this unanswered question:
>
> | > And even that it works out because taprio "translates" from traffic
> | > classes to queues when it sends the offload information to the driver,
> | > i.e. the driver knows the schedule of queues, not traffic classes.
> |
> | Which is incredibly strange to me, since the standard clearly defines
> | Qbv gates to be per traffic class, and in ENETC, even if we have 2 TX
> | queues for the same traffic class (one per CPU), the hardware schedule
> | is still per traffic class and not per independent TX queue (BD ring).
> |
> | How does this work for i225/i226, if 2 queues are configured for the
> | same dequeue priority? Do the taprio gates still take effect per
> | queue?

Sorry that I haven't answered this before.

Two things, for i225/i226:
  - The gate open/close registers are per queue, i.e. I explicitly
  control when each gate opens and closes within each cycle (yes, this
  design does have limitations);
  - Looking at the datasheet, there's also this: "Each queue must be
  assigned with a unique priority level". I'm not sure what happens if
  two queues are given the same priority; I would expect the ordering
  to be undefined, but I never tested that.
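
To make the per-TC vs per-TXQ distinction concrete, here is a rough
sketch of what the translation in taprio amounts to (the function name
is made up for the sketch; the in-tree helper is tc_map_to_queue_mask()
in sch_taprio.c, whose details may differ): for every TC whose gate is
open, open the gates of all TXQs mapped to that TC via the netdev's
tc_to_txq table.

static u32 taprio_tc_mask_to_queue_mask(struct net_device *dev,
					u32 tc_mask)
{
	u32 i, queue_mask = 0;

	for (i = 0; i < netdev_get_num_tc(dev); i++) {
		u32 offset, count;

		/* Skip traffic classes whose gate is closed */
		if (!(tc_mask & BIT(i)))
			continue;

		/* Open the gates of every TXQ this TC maps to */
		offset = dev->tc_to_txq[i].offset;
		count = dev->tc_to_txq[i].count;
		queue_mask |= GENMASK(offset + count - 1, offset);
	}

	return queue_mask;
}

With a 1:1 TC:TXQ mapping the two masks coincide; the difference only
shows up when a TC spans multiple queues. For example, with "num_tc 3
... queues 1@0 1@1 2@2", a per-TC gate mask of 0x4 (only TC 2 open)
becomes a per-queue mask of 0xc (TXQs 2 and 3 open).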

>
> I haven't gotten an answer, and some things are still unclear, but I
> suspect that igc is the outlier, and all the other hardware actually has
> the gate mask per TC and not per TXQ, just like the standard says.
>
> For example, in ENETC up until now, we weren't passed the mqprio queue
> configuration via struct tc_taprio_qopt_offload, and hence, we needed to
> assume that the TC:TXQ mapping was 1:1. So "per TC" or "per TXQ" did not
> make a practical difference. I suspect that other drivers are in the
> same position.
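
Right. And a driver in that position wouldn't have to do anything new
after this series: if it doesn't report gate_mask_per_txq, taprio keeps
handing it the gate mask per TC (see the else branch in the
sch_taprio.c hunk below). A hypothetical TC_QUERY_CAPS handler for such
a driver (driver name made up, not taken from any in-tree driver) could
be as small as:

static int foo_tc_query_caps(struct tc_query_caps_base *base)
{
	switch (base->type) {
	case TC_SETUP_QDISC_TAPRIO: {
		struct tc_taprio_caps *caps = base->caps;

		/* caps is zero-initialized by the core before the query,
		 * so simply not touching gate_mask_per_txq keeps the
		 * standard per-TC gate mask semantics.
		 */
		caps->supports_queue_max_sdu = true; /* optional, for shape */
		return 0;
	}
	default:
		return -EOPNOTSUPP;
	}
}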
>
> Take advantage of the TC_QUERY_CAPS feature that Jakub suggested we
> add, and query the device driver before the actual ndo_setup_tc()
> offload call, to figure out whether it expects the gate mask to be
> per TC or per TXQ.
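
For anyone following along, the qdisc side of that query, before
building the offload, looks roughly like this sketch (based on the
qdisc_offload_query_caps() helper that the TC_QUERY_CAPS series added;
details may differ from the actual sch_taprio.c code):

	struct tc_taprio_caps caps;

	/* Ask the driver what it supports via TC_QUERY_CAPS. Drivers
	 * that don't handle TC_QUERY_CAPS leave caps at its zeroed
	 * defaults, i.e. gate_mask_per_txq == false.
	 */
	qdisc_offload_query_caps(dev, TC_SETUP_QDISC_TAPRIO,
				 &caps, sizeof(caps));

	/* ... build the offload structure ... */

	taprio_sched_to_offload(dev, sched, offload,
				caps.gate_mask_per_txq);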
>
> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
> ---
>  drivers/net/ethernet/intel/igc/igc_main.c | 17 +++++++++++++++++
>  include/net/pkt_sched.h                   |  1 +
>  net/sched/sch_taprio.c                    | 11 ++++++++---
>  3 files changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
> index e86b15efaeb8..9b6f2aaf78c2 100644
> --- a/drivers/net/ethernet/intel/igc/igc_main.c
> +++ b/drivers/net/ethernet/intel/igc/igc_main.c
> @@ -6205,12 +6205,29 @@ static int igc_tsn_enable_cbs(struct igc_adapter *adapter,
>  	return igc_tsn_offload_apply(adapter);
>  }
>  
> +static int igc_tc_query_caps(struct tc_query_caps_base *base)
> +{
> +	switch (base->type) {
> +	case TC_SETUP_QDISC_TAPRIO: {
> +		struct tc_taprio_caps *caps = base->caps;
> +
> +		caps->gate_mask_per_txq = true;
> +
> +		return 0;
> +	}
> +	default:
> +		return -EOPNOTSUPP;
> +	}
> +}
> +
>  static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type,
>  			void *type_data)
>  {
>  	struct igc_adapter *adapter = netdev_priv(dev);
>  
>  	switch (type) {
> +	case TC_QUERY_CAPS:
> +		return igc_tc_query_caps(type_data);
>  	case TC_SETUP_QDISC_TAPRIO:
>  		return igc_tsn_enable_qbv_scheduling(adapter, type_data);
>  
> diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
> index ace8be520fb0..fd889fc4912b 100644
> --- a/include/net/pkt_sched.h
> +++ b/include/net/pkt_sched.h
> @@ -176,6 +176,7 @@ struct tc_mqprio_qopt_offload {
>  
>  struct tc_taprio_caps {
>  	bool supports_queue_max_sdu:1;
> +	bool gate_mask_per_txq:1;
>  };
>  
>  struct tc_taprio_sched_entry {
> diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
> index a3fa5debe513..58efa982db65 100644
> --- a/net/sched/sch_taprio.c
> +++ b/net/sched/sch_taprio.c
> @@ -1212,7 +1212,8 @@ static u32 tc_map_to_queue_mask(struct net_device *dev, u32 tc_mask)
>  
>  static void taprio_sched_to_offload(struct net_device *dev,
>  				    struct sched_gate_list *sched,
> -				    struct tc_taprio_qopt_offload *offload)
> +				    struct tc_taprio_qopt_offload *offload,
> +				    bool gate_mask_per_txq)
>  {
>  	struct sched_entry *entry;
>  	int i = 0;
> @@ -1226,7 +1227,11 @@ static void taprio_sched_to_offload(struct net_device *dev,
>  
>  		e->command = entry->command;
>  		e->interval = entry->interval;
> -		e->gate_mask = tc_map_to_queue_mask(dev, entry->gate_mask);
> +		if (gate_mask_per_txq)
> +			e->gate_mask = tc_map_to_queue_mask(dev,
> +							    entry->gate_mask);
> +		else
> +			e->gate_mask = entry->gate_mask;
>  
>  		i++;
>  	}
> @@ -1273,7 +1278,7 @@ static int taprio_enable_offload(struct net_device *dev,
>  	offload->enable = 1;
>  	if (mqprio)
>  		offload->mqprio.qopt = *mqprio;
> -	taprio_sched_to_offload(dev, sched, offload);
> +	taprio_sched_to_offload(dev, sched, offload, caps.gate_mask_per_txq);
>  
>  	for (tc = 0; tc < TC_MAX_QUEUE; tc++)
>  		offload->max_sdu[tc] = q->max_sdu[tc];
> -- 
> 2.34.1
>

-- 
Vinicius
