From: Jacob Keller <jacob.e.keller@intel.com>
To: netdev@vger.kernel.org, David Miller <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>
Cc: Michal Kubiak <michal.kubiak@intel.com>,
Alexander Lobakin <aleksander.lobakin@intel.com>,
Alan Brady <alan.brady@intel.com>,
Przemek Kitszel <przemyslaw.kitszel@intel.com>,
Krishneil Singh <krishneil.k.singh@intel.com>,
Jacob Keller <jacob.e.keller@intel.com>
Subject: [PATCH next 1/2] idpf: set scheduling mode for completion queue
Date: Fri, 20 Oct 2023 10:55:59 -0700
Message-ID: <20231020175600.24412-2-jacob.e.keller@intel.com>
In-Reply-To: <20231020175600.24412-1-jacob.e.keller@intel.com>
From: Michal Kubiak <michal.kubiak@intel.com>

The HW must be programmed differently for queue-based scheduling mode.
To program the completion queue context correctly, the control plane
must know the scheduling mode not only for the Tx queue but also for
the completion queue.

Currently, however, the driver sets the scheduling mode only for the
Tx queues.

Propagate the scheduling mode to the completion queue as well when
sending the queue configuration messages.
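
In driver terms, the fix mirrors the Tx flow-scheduling capability onto
the completion queue and then reports it to the control plane over
virtchnl. Condensed from the hunks below (illustrative excerpt only,
not standalone code):

	/* idpf_txq_group_alloc(): carry the scheduling mode over to the
	 * completion queue as well
	 */
	flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS,
				       VIRTCHNL2_CAP_SPLITQ_QSCHED);
	if (flow_sch_en)
		__set_bit(__IDPF_Q_FLOW_SCH_EN, tx_qgrp->complq->flags);

	/* idpf_send_config_tx_queues_msg(): report it to the control plane */
	if (test_bit(__IDPF_Q_FLOW_SCH_EN, tx_qgrp->complq->flags))
		sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
	else
		sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
	qi[k].sched_mode = cpu_to_le16(sched_mode);
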
Fixes: 1c325aac10a8 ("idpf: configure resources for TX queues")
Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
Reviewed-by: Alan Brady <alan.brady@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
---
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 10 ++++++++--
drivers/net/ethernet/intel/idpf/idpf_virtchnl.c | 8 +++++++-
2 files changed, 15 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 6fa79898c42c..58c5412d3173 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -1160,6 +1160,7 @@ static void idpf_rxq_set_descids(struct idpf_vport *vport, struct idpf_queue *q)
*/
static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
{
+ bool flow_sch_en;
int err, i;
vport->txq_grps = kcalloc(vport->num_txq_grp,
@@ -1167,6 +1168,9 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
if (!vport->txq_grps)
return -ENOMEM;
+ flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS,
+ VIRTCHNL2_CAP_SPLITQ_QSCHED);
+
for (i = 0; i < vport->num_txq_grp; i++) {
struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
struct idpf_adapter *adapter = vport->adapter;
@@ -1195,8 +1199,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
q->txq_grp = tx_qgrp;
hash_init(q->sched_buf_hash);
- if (!idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS,
- VIRTCHNL2_CAP_SPLITQ_QSCHED))
+ if (flow_sch_en)
set_bit(__IDPF_Q_FLOW_SCH_EN, q->flags);
}
@@ -1215,6 +1218,9 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
tx_qgrp->complq->desc_count = vport->complq_desc_count;
tx_qgrp->complq->vport = vport;
tx_qgrp->complq->txq_grp = tx_qgrp;
+
+ if (flow_sch_en)
+ __set_bit(__IDPF_Q_FLOW_SCH_EN, tx_qgrp->complq->flags);
}
return 0;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 9bc85b2f1709..e276b5360c2e 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -1473,7 +1473,7 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
/* Populate the queue info buffer with all queue context info */
for (i = 0; i < vport->num_txq_grp; i++) {
struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
- int j;
+ int j, sched_mode;
for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
qi[k].queue_id =
@@ -1514,6 +1514,12 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
qi[k].ring_len = cpu_to_le16(tx_qgrp->complq->desc_count);
qi[k].dma_ring_addr = cpu_to_le64(tx_qgrp->complq->dma);
+ if (test_bit(__IDPF_Q_FLOW_SCH_EN, tx_qgrp->complq->flags))
+ sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+ else
+ sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
+ qi[k].sched_mode = cpu_to_le16(sched_mode);
+
k++;
}
--
2.41.0