From: Saeed Mahameed <saeed@kernel.org>
To: tanhuazhong <tanhuazhong@huawei.com>, davem@davemloft.net
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
kuba@kernel.org, huangdaode@huawei.com,
Jian Shen <shenjian15@huawei.com>
Subject: Re: [PATCH net-next 2/7] net: hns3: add support for tc mqprio offload
Date: Thu, 10 Dec 2020 12:24:46 -0800
Message-ID: <d2c14bd14daabcd7f589e17b14b2ffeebc0d8a15.camel@kernel.org>
In-Reply-To: <42c9fd63-3e51-543e-bbbd-01e7face7c9c@huawei.com>
On Thu, 2020-12-10 at 20:27 +0800, tanhuazhong wrote:
>
> On 2020/12/10 12:50, Saeed Mahameed wrote:
> > On Thu, 2020-12-10 at 11:42 +0800, Huazhong Tan wrote:
> > > From: Jian Shen <shenjian15@huawei.com>
> > >
> > > Currently, the HNS3 driver only supports offload of the tc
> > > number and prio_tc. This patch adds support for the other
> > > qopts, including the queue count and offset for each tc.
> > >
> > > When tc mqprio offload is enabled, it is not allowed to change
> > > the queue number via ethtool. Due to a hardware limitation, the
> > > queue number of each tc must be a power of 2.
> > >
> > > Since the queues are not assigned to each tc by average,
> > > hclge_get_max_channels() should return vport->alloc_tqps.
> > >
> >
> > The commit message needs some improvement; it is not really
> > clear what the last two sentences are about.
> >
>
> hclge_get_max_channels() returns the max queue number of each TC
> that the user can set with 'ethtool -L'. In the previous
> implementation, the queues are assigned to each TC by average, so
> we returned vport->alloc_tqps / num_tc. Now we can assign a
> different queue number to each TC, so it shouldn't be divided by
> num_tc.
What do you mean by "queues assigned to each tc by average"?
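If it means the queues used to be split evenly across the TCs, then
the before/after would be roughly (a hedged sketch based on your
description, variable names approximate, not the actual hns3 code):

	/* old: queues split evenly across TCs */
	max_channels = vport->alloc_tqps / num_tc;

	/* new: per-TC counts can differ, so report the vport total */
	max_channels = vport->alloc_tqps;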
[...]
>
> > > +	}
> > > +	if (hdev->vport[0].alloc_tqps < queue_sum) {
> >
> > Can't you just allocate new tqps according to the new mqprio
> > input like other drivers do? How does the user allocate those
> > tqps?
> >
>
> Maybe the name 'alloc_tqps' is a little misleading; this field
> means the total number of available tqps in this vport.
>
From your driver code it seems alloc_tqps is the number of rings
allocated via 'ethtool -L'.
My point is, in this patch you seem to require the queues to be
preallocated, whereas what other drivers do on setup tc is simply
duplicate whatever number of queues was configured prior to setup
tc, num_tc times.
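Roughly this pattern (a sketch only; the driver and 'example_priv'
struct are made up for illustration, this is not any real driver's
code):

	#include <linux/netdevice.h>

	struct example_priv {
		unsigned int num_channels;	/* set earlier via ethtool -L */
	};

	static int example_setup_tc(struct net_device *ndev, u8 num_tc)
	{
		struct example_priv *priv = netdev_priv(ndev);
		unsigned int qcount = priv->num_channels;
		int i;

		/* replicate the pre-configured queue count once per TC */
		netdev_set_num_tc(ndev, num_tc);
		for (i = 0; i < num_tc; i++)
			netdev_set_tc_queue(ndev, i, qcount, i * qcount);

		return netif_set_real_num_tx_queues(ndev, num_tc * qcount);
	}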
> > > +		dev_err(&hdev->pdev->dev,
> > > +			"qopt queue count sum should be less than %u\n",
> > > +			hdev->vport[0].alloc_tqps);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static void hclge_sync_mqprio_qopt(struct hnae3_tc_info *tc_info,
> > > +				   struct tc_mqprio_qopt_offload *mqprio_qopt)
> > > +{
> > > +	int i;
> > > +
> > > +	memset(tc_info, 0, sizeof(*tc_info));
> > > +	tc_info->num_tc = mqprio_qopt->qopt.num_tc;
> > > +	memcpy(tc_info->prio_tc, mqprio_qopt->qopt.prio_tc_map,
> > > +	       sizeof_field(struct hnae3_tc_info, prio_tc));
> > > +	memcpy(tc_info->tqp_count, mqprio_qopt->qopt.count,
> > > +	       sizeof_field(struct hnae3_tc_info, tqp_count));
> > > +	memcpy(tc_info->tqp_offset, mqprio_qopt->qopt.offset,
> > > +	       sizeof_field(struct hnae3_tc_info, tqp_offset));
> > > +
> >
> > Isn't it much easier to just store a copy of tc_mqprio_qopt in
> > your tc_info and then just:
> > 	tc_info->qopt = mqprio->qopt;
> >
> > [...]
>
> struct tc_mqprio_qopt_offload still contains a lot of options the
> hns3 driver does not use yet. Even if hns3 used all of them, I
> still think it is better to create our own struct; if struct
> tc_mqprio_qopt_offload changes in the future, we can limit the
> change to the tc_mqprio_qopt_offload conversion.
>
ok.