From: Roger Quadros <rogerq@kernel.org>
To: Vladimir Oltean <vladimir.oltean@nxp.com>
Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, horms@kernel.org, s-vadapalli@ti.com,
srk@ti.com, vigneshr@ti.com, p-varis@ti.com,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
rogerq@kernel.org
Subject: Re: [PATCH v2] net: ethernet: ti: am65-cpsw: add mqprio qdisc offload in channel mode
Date: Wed, 20 Sep 2023 10:19:12 +0300
Message-ID: <79bd4b5b-7ea8-4a3b-d098-9aecd43b1675@kernel.org>
In-Reply-To: <20230919124703.hj2bvqeogfhv36qy@skbuf>

Hi Vladimir,

On 19/09/2023 15:47, Vladimir Oltean wrote:
> Hi Roger,
>
> On Mon, Sep 18, 2023 at 10:53:58AM +0300, Roger Quadros wrote:
>> -int am65_cpsw_qos_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type,
>> - void *type_data)
>> -{
>> - switch (type) {
>> - case TC_QUERY_CAPS:
>> - return am65_cpsw_tc_query_caps(ndev, type_data);
>> - case TC_SETUP_QDISC_TAPRIO:
>> - return am65_cpsw_setup_taprio(ndev, type_data);
>> - case TC_SETUP_BLOCK:
>> - return am65_cpsw_qos_setup_tc_block(ndev, type_data);
>> - default:
>> - return -EOPNOTSUPP;
>> - }
>> -}
>> -
>> -void am65_cpsw_qos_link_up(struct net_device *ndev, int link_speed)
>> -{
>> - struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
>> -
>> - if (!IS_ENABLED(CONFIG_TI_AM65_CPSW_TAS))
>> - return;
>> -
>> - am65_cpsw_est_link_up(ndev, link_speed);
>> - port->qos.link_down_time = 0;
>> -}
>> -
>> -void am65_cpsw_qos_link_down(struct net_device *ndev)
>> -{
>> - struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
>> -
>> - if (!IS_ENABLED(CONFIG_TI_AM65_CPSW_TAS))
>> - return;
>> -
>> - if (!port->qos.link_down_time)
>> - port->qos.link_down_time = ktime_get();
>> -
>> - port->qos.link_speed = SPEED_UNKNOWN;
>> -}
>> -
>
> Could you split the code movement to a separate change?
OK.
>
>> + if (port->qos.link_speed != SPEED_UNKNOWN) {
>> + if (min_rate_total > port->qos.link_speed) {
>> + NL_SET_ERR_MSG_FMT_MOD(extack, "TX rate min %llu exceeds link speed %d\n",
>> + min_rate_total, port->qos.link_speed);
>> + return -EINVAL;
>> + }
>> +
>> + if (max_rate_total > port->qos.link_speed) {
>> + NL_SET_ERR_MSG_FMT_MOD(extack, "TX rate max %llu exceeds link speed %d\n",
>> + max_rate_total, port->qos.link_speed);
>> + return -EINVAL;
>> + }
>> + }
>
> Link speeds can be renegotiated, and the mqprio offload can be installed
> while the link is down. So this restriction, while honorable, has limited
> usefulness.
For the link-down case these checks are not run at all, but I get your
point. I'll drop these checks.
>
>> +
>> + p_mqprio->shaper_en = 1;
>
> s/1/true/
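OK, I'll make shaper_en a bool. A minimal sketch of what I have in mind
(only the field's type changes; field order and the other members are
just as assumed from this patch):

	struct am65_cpsw_mqprio {
		struct tc_mqprio_qopt_offload mqprio_hw; /* copy of the last offload request */
		u64 max_rate_total;
		bool shaper_en;
		/* any other members stay as in the patch */
	};

with 'p_mqprio->shaper_en = true;' here and 'false' in
am65_cpsw_reset_tc_mqprio().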
>
>> + p_mqprio->max_rate_total = max_t(u64, min_rate_total, max_rate_total);
>> +
>> + return 0;
>> +}
>> +
>> +static void am65_cpsw_reset_tc_mqprio(struct net_device *ndev)
>> +{
>> + struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
>> + struct am65_cpsw_mqprio *p_mqprio = &port->qos.mqprio;
>> + struct am65_cpsw_common *common = port->common;
>> +
>> + p_mqprio->shaper_en = 0;
>
> s/0/false/
>
>> + p_mqprio->max_rate_total = 0;
>> +
>> + am65_cpsw_tx_pn_shaper_reset(port);
>> + netdev_reset_tc(ndev);
>> + netif_set_real_num_tx_queues(ndev, common->tx_ch_num);
>> +
>> + /* Reset all Queue priorities to 0 */
>> + writel(0,
>> + port->port_base + AM65_CPSW_PN_REG_TX_PRI_MAP);
>
> What exactly needs pm_runtime_get_sync()? This writel() doesn't?
Good catch. In my tests the network interface was up, so the controller
was already active. But we will need a pm_runtime_get_sync() if all
network interfaces of the controller are down.
So I will need to move the pm_runtime_get_sync() call before
am65_cpsw_reset_tc_mqprio().
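
Something like this, as an untested sketch (the exit_put label and the
final pm_runtime_put() are my assumption about how the exit path will
end up looking):

	static int am65_cpsw_setup_mqprio(struct net_device *ndev, void *type_data)
	{
		...
		/* Take the PM runtime reference first, so the register write in
		 * am65_cpsw_reset_tc_mqprio() always hits an active controller,
		 * even when all interfaces of the controller are down.
		 */
		ret = pm_runtime_get_sync(common->dev);
		if (ret < 0) {
			pm_runtime_put_noidle(common->dev);
			return ret;
		}

		if (!num_tc) {
			am65_cpsw_reset_tc_mqprio(ndev);
			ret = 0;
			goto exit_put;
		}
		...
	exit_put:
		pm_runtime_put(common->dev);
		return ret;
	}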
>
>> +}
>> +
>> +static int am65_cpsw_setup_mqprio(struct net_device *ndev, void *type_data)
>> +{
>> + struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
>> + struct am65_cpsw_mqprio *p_mqprio = &port->qos.mqprio;
>> + struct tc_mqprio_qopt_offload *mqprio = type_data;
>> + struct am65_cpsw_common *common = port->common;
>> + struct tc_mqprio_qopt *qopt = &mqprio->qopt;
>> + int tc, offset, count, ret, prio;
>> + u8 num_tc = qopt->num_tc;
>> + u32 tx_prio_map = 0;
>> + int i;
>> +
>> + memcpy(&p_mqprio->mqprio_hw, mqprio, sizeof(*mqprio));
>> +
>> + if (!num_tc) {
>> + am65_cpsw_reset_tc_mqprio(ndev);
>> + return 0;
>> + }
>> +
>> + ret = pm_runtime_get_sync(common->dev);
>> + if (ret < 0) {
>> + pm_runtime_put_noidle(common->dev);
>> + return ret;
>> + }
--
cheers,
-roger
Thread overview: 5+ messages
2023-09-18 7:53 [PATCH v2] net: ethernet: ti: am65-cpsw: add mqprio qdisc offload in channel mode Roger Quadros
2023-09-19 11:32 ` Paolo Abeni
2023-09-20 7:09 ` Roger Quadros
2023-09-19 12:47 ` Vladimir Oltean
2023-09-20 7:19 ` Roger Quadros [this message]