From: rohan.g.thomas@intel.com
To: esben@geanix.com
Cc: alexandre.torgue@foss.st.com, conor+dt@kernel.org,
davem@davemloft.net, devicetree@vger.kernel.org,
edumazet@google.com, fancer.lancer@gmail.com,
joabreu@synopsys.com, krzysztof.kozlowski+dt@linaro.org,
kuba@kernel.org, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org,
linux-stm32@st-md-mailman.stormreply.com,
mcoquelin.stm32@gmail.com, netdev@vger.kernel.org,
pabeni@redhat.com, peppe.cavallaro@st.com, robh+dt@kernel.org,
rohan.g.thomas@intel.com
Subject: RE: [PATCH net-next 2/2] net: stmmac: TBS support for platform driver
Date: Sat, 27 Jan 2024 01:39:25 +0800
Message-ID: <20240126173925.16794-1-rohan.g.thomas@intel.com>
In-Reply-To: <87v87g4hmy.fsf@geanix.com>
From: Rohan G Thomas <rohan.g.thomas@intel.com>
On Fri, 26 Jan 2024 09:35:01 +0100, Esben Haabendal wrote:
Hi Esben,
Thanks for your comments. Please find my reply inline.
> > +	/* If TBS feature is supported (i.e. tbssel is true), then at least 1 Tx
> > +	 * DMA channel supports TBS. So if tbs_ch_num is 0 and tbssel is true,
> > +	 * assume all Tx DMA channels support TBS. TBS_CH field, which gives the
> > +	 * number of Tx DMA channels with TBS support, is available only for the
> > +	 * DW xGMAC IP. For other DWMAC IPs all Tx DMA channels can support TBS.
>
> The Ethernet QOS controllers found in various i.MX socs does not support TBS
> on TX queue 0. I believe this patch would break the dwmac driver for these
> platforms.
AFAIU from the Synopsys DWMAC5 databook, all queues support TBS, but TBS
cannot coexist with TSO. So all glue drivers that enable the TBS feature
avoid enabling it on queue 0 so that TSO still works, and packets
requesting TSO are always directed to queue 0 by the stmmac driver.
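For context, the usual glue-driver convention looks roughly like the
sketch below. It is modelled on the pattern used by existing glue
drivers such as dwmac-intel (queue 0 left without TBS for TSO); the
function name is made up for this illustration and it is not the code
of any specific driver:

	#include <linux/stmmac.h>

	/* Illustrative glue-driver helper: keep Tx queue 0 free of TBS so it
	 * can still carry TSO traffic, and request TBS on the remaining Tx
	 * queues via the per-queue tbs_en flag in the platform data.
	 */
	static void example_enable_tbs_except_q0(struct plat_stmmacenet_data *plat)
	{
		u32 i;

		/* Queue 0 is where the stmmac core steers TSO packets */
		for (i = 1; i < plat->tx_queues_to_use; i++)
			plat->tx_queues_cfg[i].tbs_en = 1;
	}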
>
> > +	 */
> > +	if (priv->dma_cap.tbssel && !priv->dma_cap.tbs_ch_num)
> > +		priv->dma_cap.tbs_ch_num = priv->dma_cap.number_tx_channel;
> > +
I don't think updating tbs_ch_num with number_tx_channel would break
the dwmac driver for i.MX SoCs: tbs_ch_num is only used by the 'Earlier
check for TBS' loop to decide which queues may have their tbs flag set.
> > --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> > +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> > @@ -3773,12 +3773,18 @@ stmmac_setup_dma_desc(struct stmmac_priv *priv, unsigned int mtu)
> >  	dma_conf->dma_rx_size = DMA_DEFAULT_RX_SIZE;
> >
> >  	/* Earlier check for TBS */
> > -	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) {
> > -		struct stmmac_tx_queue *tx_q = &dma_conf->tx_queue[chan];
> > -		int tbs_en = priv->plat->tx_queues_cfg[chan].tbs_en;
> > +	if (priv->dma_cap.tbssel) {
> > +		/* TBS is available only for tbs_ch_num of Tx DMA channels,
> > +		 * starting from the highest Tx DMA channel.
> > +		 */
> > +		chan = priv->dma_cap.number_tx_channel - priv->dma_cap.tbs_ch_num;
For IPs that don't have the TBS_CH field (tbs_ch_num is 0), this loop
still goes from 0 to number_tx_channel and only checks whether tbs_en
was set by the glue driver, which is the same as the existing logic.
Unless the glue driver or the DTS configuration sets the tbs_en flag
for queue 0, this patch doesn't set the tbs flag for queue 0. The
restricted channel range is only a sanity check to avoid a wrong
configuration on IPs that support TBS on just a subset of their queues
(a small worked example follows the quoted hunk below).
> > +		for (; chan < priv->plat->tx_queues_to_use; chan++) {
> > +			struct stmmac_tx_queue *tx_q = &dma_conf->tx_queue[chan];
> > +			int tbs_en = priv->plat->tx_queues_cfg[chan].tbs_en;
> >
> > -		/* Setup per-TXQ tbs flag before TX descriptor alloc */
> > -		tx_q->tbs |= tbs_en ? STMMAC_TBS_AVAIL : 0;
> > +			/* Setup per-TXQ tbs flag before TX descriptor alloc */
> > +			tx_q->tbs |= tbs_en ? STMMAC_TBS_AVAIL : 0;
> > +		}
> >  	}
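To make the channel range concrete, here is a small worked sketch of
the same arithmetic. The numbers are made up (8 Tx DMA channels, upper
4 reporting TBS support) and it assumes tx_queues_to_use equals
number_tx_channel; it is an illustration, not driver code:

	#include <stdio.h>

	/* With number_tx_channel = 8 and tbs_ch_num = 4 (TBS_CH on DW xGMAC),
	 * the 'Earlier check for TBS' loop starts at channel 8 - 4 = 4, so
	 * only channels 4..7 can ever get STMMAC_TBS_AVAIL; channels 0..3 are
	 * skipped regardless of their tbs_en setting.
	 */
	int main(void)
	{
		unsigned int number_tx_channel = 8;	/* dma_cap.number_tx_channel */
		unsigned int tbs_ch_num = 4;		/* dma_cap.tbs_ch_num */
		unsigned int chan;

		for (chan = number_tx_channel - tbs_ch_num; chan < number_tx_channel; chan++)
			printf("Tx channel %u is eligible for TBS\n", chan);

		return 0;
	}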
Please correct me if I've misstated anything.
BR,
Rohan