From: Simon Horman <horms@kernel.org>
To: Xin Tian <tianx@yunsilicon.com>
Cc: netdev@vger.kernel.org, leon@kernel.org, andrew+netdev@lunn.ch,
kuba@kernel.org, pabeni@redhat.com, edumazet@google.com,
davem@davemloft.net, jeff.johnson@oss.qualcomm.com,
przemyslaw.kitszel@intel.com, weihg@yunsilicon.com,
wanry@yunsilicon.com, jacky@yunsilicon.com,
parthiban.veerasooran@microchip.com, masahiroy@kernel.org,
kalesh-anakkur.purayil@broadcom.com, geert+renesas@glider.be,
geert@linux-m68k.org
Subject: Re: [PATCH net-next v9 11/14] xsc: ndo_open and ndo_stop
Date: Wed, 26 Mar 2025 10:14:19 +0000
Message-ID: <20250326101419.GZ892515@horms.kernel.org>
In-Reply-To: <20250318151515.1376756-12-tianx@yunsilicon.com>
On Tue, Mar 18, 2025 at 11:15:16PM +0800, Xin Tian wrote:
...
> diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/main.c b/drivers/net/ethernet/yunsilicon/xsc/net/main.c
...
> +static int xsc_eth_open_rss_qp_rqs(struct xsc_adapter *adapter,
> + struct xsc_rq_param *prq_param,
> + struct xsc_eth_channels *chls,
> + unsigned int num_chl)
> +{
> + u8 q_log_size = prq_param->rq_attr.q_log_size;
> + struct xsc_create_multiqp_mbox_in *in;
> + struct xsc_create_qp_request *req;
> + unsigned int hw_npages;
> + struct xsc_channel *c;
> + int ret = 0, err = 0;
> + struct xsc_rq *prq;
> + int paslen = 0;
> + int entry_len;
> + u32 rqn_base;
> + int i, j, n;
> + int inlen;
> +
> + for (i = 0; i < num_chl; i++) {
> + c = &chls->c[i];
> +
> + for (j = 0; j < c->qp.rq_num; j++) {
> + prq = &c->qp.rq[j];
> + ret = xsc_eth_alloc_rq(c, prq, prq_param);
> + if (ret)
> + goto err_alloc_rqs;
> +
> + hw_npages = DIV_ROUND_UP(prq->wq_ctrl.buf.size,
> + PAGE_SIZE_4K);
> + /*support different npages number smoothly*/
> + entry_len = sizeof(struct xsc_create_qp_request) +
> + sizeof(__be64) * hw_npages;
Hi Xin Tian,

Here entry_len is calculated separately for each entry, prq, of c->qp.rq,
based on prq->wq_ctrl.buf.size.
> +
> + paslen += entry_len;
> + }
> + }
> +
> + inlen = sizeof(struct xsc_create_multiqp_mbox_in) + paslen;
> + in = kvzalloc(inlen, GFP_KERNEL);
> + if (!in) {
> + ret = -ENOMEM;
> + goto err_create_rss_rqs;
> + }
> +
> + in->qp_num = cpu_to_be16(num_chl);
> + in->qp_type = XSC_QUEUE_TYPE_RAW;
> + in->req_len = cpu_to_be32(inlen);
> +
> + req = (struct xsc_create_qp_request *)&in->data[0];
> + n = 0;
> + for (i = 0; i < num_chl; i++) {
> + c = &chls->c[i];
> + for (j = 0; j < c->qp.rq_num; j++) {
> + prq = &c->qp.rq[j];
> +
> + hw_npages = DIV_ROUND_UP(prq->wq_ctrl.buf.size,
> + PAGE_SIZE_4K);
> + /* no use for eth */
> + req->input_qpn = cpu_to_be16(0);
> + req->qp_type = XSC_QUEUE_TYPE_RAW;
> + req->log_rq_sz = ilog2(adapter->xdev->caps.recv_ds_num)
> + + q_log_size;
> + req->pa_num = cpu_to_be16(hw_npages);
> + req->cqn_recv = cpu_to_be16(prq->cq.xcq.cqn);
> + req->cqn_send = req->cqn_recv;
> + req->glb_funcid =
> + cpu_to_be16(adapter->xdev->glb_func_id);
> +
> + xsc_core_fill_page_frag_array(&prq->wq_ctrl.buf,
> + &req->pas[0],
> + hw_npages);
> + n++;
> + req = (struct xsc_create_qp_request *)
> + (&in->data[0] + entry_len * n);
But here entry_len still holds the value that was calculated for the last
entry of c->qp.rq of the last channel processed by the previous for loop,
and that single value is used as the stride when advancing req for every
entry.

Is this correct?

Flagged by Smatch.
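If entry_len is indeed expected to differ between entries, then perhaps it
needs to be recalculated inside this loop as well, before advancing req.
An entirely untested sketch of what I mean, reusing the identifiers from
the patch above:

			/* Recompute the size of this entry rather than
			 * reusing the value left over from the first loop.
			 */
			entry_len = sizeof(struct xsc_create_qp_request) +
				    sizeof(__be64) * hw_npages;
			req = (struct xsc_create_qp_request *)
				((void *)req + entry_len);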
> + }
> + }
> +
> + ret = xsc_core_eth_create_rss_qp_rqs(adapter->xdev, in, inlen,
> + &rqn_base);
> + kvfree(in);
> + if (ret)
> + goto err_create_rss_rqs;
> +
> + n = 0;
> + for (i = 0; i < num_chl; i++) {
> + c = &chls->c[i];
> + for (j = 0; j < c->qp.rq_num; j++) {
> + prq = &c->qp.rq[j];
> + prq->rqn = rqn_base + n;
> + prq->cqp.qpn = prq->rqn;
> + prq->cqp.event = xsc_eth_qp_event;
> + prq->cqp.eth_queue_type = XSC_RES_RQ;
> + ret = xsc_core_create_resource_common(adapter->xdev,
> + &prq->cqp);
> + if (ret) {
> + err = ret;
> + netdev_err(adapter->netdev,
> + "create resource common error qp:%d errno:%d\n",
> + prq->rqn, ret);
> + continue;
> + }
> +
> + n++;
> + }
> + }
> + if (err)
> + return err;
> +
> + adapter->channels.rqn_base = rqn_base;
> + return 0;
> +
> +err_create_rss_rqs:
> + i = num_chl;
> +err_alloc_rqs:
> + for (--i; i >= 0; i--) {
> + c = &chls->c[i];
> + for (j = 0; j < c->qp.rq_num; j++) {
> + prq = &c->qp.rq[j];
> + xsc_free_qp_rq(prq);
> + }
> + }
> + return ret;
> +}