* [Intel-wired-lan] [PATCH iwl-next v6 0/9] refactor IDPF resource access
From: Pavan Kumar Linga @ 2025-07-08 0:58 UTC
To: intel-wired-lan; +Cc: Pavan Kumar Linga
Queue and vector resources for a given vport are stored in the
idpf_vport structure. At the time of configuration, these resources are
accessed through the vport pointer, meaning all the config path
functions are tied to the default queue and vector resources of the
vport.
There are use cases which can make use of the config path functions to
configure queue and vector resources that are not tied to any vport.
One such use case is PTP secondary mailbox creation (to be added in a
followup series). To configure queue and interrupt resources for such
cases, we can make use of the existing config infrastructure by passing
the necessary queue and vector resource info.
To achieve this, group the existing queue and vector resources into a
default resource group and refactor the code to pass the resource
pointer to the config path functions.
This series also includes patches which generalize the send virtchnl
message APIs and the mailbox API that are necessary for the
implementation of the PTP secondary mailbox.
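As a rough sketch of the resulting pattern (signatures illustrative;
see the individual patches for the real ones):

  /* Before: helpers implicitly use the vport's only set of resources. */
  int idpf_vport_intr_alloc(struct idpf_vport *vport);

  /* After: resources are grouped and passed explicitly, so the same
   * helper can serve the vport's default group or, later, a group that
   * is not tied to any vport (e.g. for the PTP secondary mailbox).
   */
  int idpf_vport_intr_alloc(struct idpf_vport *vport,
                            struct idpf_q_vec_rsrc *rsrc);

Callers in the default data path pass &vport->dflt_qv_rsrc.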
---
v6:
* rebase on top of XDP series
v4:
* avoid returning an error in idpf_vport_init if PTP timestamp caps are
not supported
* re-flow the commit messages and cover letter to ~72 chars per line
based on off-list feedback from Paul Menzel <pmenzel@molgen.mpg.de>
v3:
* rebase on top of libeth XDP and other patches
v2:
* rebase on top of PTP patch series
Pavan Kumar Linga (9):
idpf: introduce local idpf structure to store virtchnl queue chunks
idpf: use existing queue chunk info instead of preparing it
idpf: introduce idpf_q_vec_rsrc struct and move vector resources to it
idpf: move queue resources to idpf_q_vec_rsrc structure
idpf: reshuffle idpf_vport struct members to avoid holes
idpf: add rss_data field to RSS function parameters
idpf: generalize send virtchnl message API
idpf: avoid calling get_rx_ptypes for each vport
idpf: generalize mailbox API
drivers/net/ethernet/intel/idpf/idpf.h | 175 ++-
drivers/net/ethernet/intel/idpf/idpf_dev.c | 18 +-
.../net/ethernet/intel/idpf/idpf_ethtool.c | 87 +-
drivers/net/ethernet/intel/idpf/idpf_lib.c | 237 ++--
drivers/net/ethernet/intel/idpf/idpf_ptp.c | 17 +-
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 653 +++++-----
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 37 +-
drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 21 +-
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 1118 ++++++++---------
.../net/ethernet/intel/idpf/idpf_virtchnl.h | 81 +-
drivers/net/ethernet/intel/idpf/xdp.c | 33 +-
drivers/net/ethernet/intel/idpf/xdp.h | 6 +-
12 files changed, 1293 insertions(+), 1190 deletions(-)
--
2.43.0
* [Intel-wired-lan] [PATCH net-next v6 1/9] idpf: introduce local idpf structure to store virtchnl queue chunks
From: Pavan Kumar Linga @ 2025-07-08 0:58 UTC
To: intel-wired-lan; +Cc: Pavan Kumar Linga, Milena Olech, Anton Nadezhdin
Queue ID and register info received from the device Control Plane is
stored locally in the same little-endian format. As the queue chunks
are retrieved in 3 functions, leXX_to_cpu conversions are done each
time. Instead, introduce a new idpf structure to store the received
queue info in CPU-endian format. This also avoids the conditional
check (req_qs_chunks vs. the create_vport parameters) when retrieving
the queue chunks.
With this change, there is no need to store the queue chunks in the
'req_qs_chunks' field, so remove it.
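The resulting access pattern, roughly (compare idpf_vport_get_q_reg()
in the diff below):

  /* Before: wire-format (little-endian) fields, converted on each use. */
  if (le32_to_cpu(chunk->type) != q_type)
          continue;
  num_q = le32_to_cpu(chunk->num_queues);

  /* After: chunks are converted once when received, so consumers read
   * plain CPU-endian fields from struct idpf_queue_id_reg_chunk.
   */
  if (chunk->type != q_type)
          continue;
  num_q = chunk->num_queues;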
Suggested-by: Milena Olech <milena.olech@intel.com>
Reviewed-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
---
drivers/net/ethernet/intel/idpf/idpf.h | 31 +++-
drivers/net/ethernet/intel/idpf/idpf_lib.c | 35 ++--
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 155 +++++++++---------
.../net/ethernet/intel/idpf/idpf_virtchnl.h | 11 +-
4 files changed, 140 insertions(+), 92 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index 847d00930941..1be111d04467 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -514,18 +514,45 @@ struct idpf_vector_lifo {
u16 *vec_idx;
};
+/**
+ * struct idpf_queue_id_reg_chunk - individual queue ID and register chunk
+ * @qtail_reg_start: queue tail register offset
+ * @qtail_reg_spacing: queue tail register spacing
+ * @type: queue type of the queues in the chunk
+ * @start_queue_id: starting queue ID in the chunk
+ * @num_queues: number of queues in the chunk
+ */
+struct idpf_queue_id_reg_chunk {
+ u64 qtail_reg_start;
+ u32 qtail_reg_spacing;
+ u32 type;
+ u32 start_queue_id;
+ u32 num_queues;
+};
+
+/**
+ * struct idpf_queue_id_reg_info - queue ID and register chunk
+ * info received over the mailbox
+ * @num_chunks: number of chunks
+ * @queue_chunks: array of chunks
+ */
+struct idpf_queue_id_reg_info {
+ u16 num_chunks;
+ struct idpf_queue_id_reg_chunk *queue_chunks;
+};
+
/**
* struct idpf_vport_config - Vport configuration data
* @user_config: see struct idpf_vport_user_config_data
* @max_q: Maximum possible queues
- * @req_qs_chunks: Queue chunk data for requested queues
+ * @qid_reg_info: Struct to store the queue ID and register info
* @mac_filter_list_lock: Lock to protect mac filters
* @flags: See enum idpf_vport_config_flags
*/
struct idpf_vport_config {
struct idpf_vport_user_config_data user_config;
struct idpf_vport_max_q max_q;
- struct virtchnl2_add_queues *req_qs_chunks;
+ struct idpf_queue_id_reg_info qid_reg_info;
spinlock_t mac_filter_list_lock;
DECLARE_BITMAP(flags, IDPF_VPORT_CONFIG_FLAGS_NBITS);
};
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 573424e186a3..2579e751370b 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -841,6 +841,7 @@ static void idpf_remove_features(struct idpf_vport *vport)
static void idpf_vport_stop(struct idpf_vport *vport)
{
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+ struct idpf_queue_id_reg_info *chunks;
if (np->state <= __IDPF_VPORT_DOWN)
return;
@@ -848,6 +849,8 @@ static void idpf_vport_stop(struct idpf_vport *vport)
netif_carrier_off(vport->netdev);
netif_tx_disable(vport->netdev);
+ chunks = &vport->adapter->vport_config[vport->idx]->qid_reg_info;
+
idpf_send_disable_vport_msg(vport);
idpf_send_disable_queues_msg(vport);
idpf_send_map_unmap_queue_vector_msg(vport, false);
@@ -857,7 +860,7 @@ static void idpf_vport_stop(struct idpf_vport *vport)
* instead of deleting and reallocating the vport.
*/
if (test_and_clear_bit(IDPF_VPORT_DEL_QUEUES, vport->flags))
- idpf_send_delete_queues_msg(vport);
+ idpf_send_delete_queues_msg(vport, chunks);
idpf_remove_features(vport);
@@ -956,15 +959,14 @@ static void idpf_vport_rel(struct idpf_vport *vport)
kfree(vport->q_vector_idxs);
vport->q_vector_idxs = NULL;
+ kfree(vport_config->qid_reg_info.queue_chunks);
+ vport_config->qid_reg_info.queue_chunks = NULL;
kfree(adapter->vport_params_recvd[idx]);
adapter->vport_params_recvd[idx] = NULL;
kfree(adapter->vport_params_reqd[idx]);
adapter->vport_params_reqd[idx] = NULL;
- if (adapter->vport_config[idx]) {
- kfree(adapter->vport_config[idx]->req_qs_chunks);
- adapter->vport_config[idx]->req_qs_chunks = NULL;
- }
+
kfree(vport);
adapter->num_alloc_vports--;
}
@@ -1081,6 +1083,7 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
u16 idx = adapter->next_vport;
struct idpf_vport *vport;
u16 num_max_q;
+ int err;
if (idx == IDPF_NO_FREE_SLOT)
return NULL;
@@ -1129,7 +1132,9 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
if (!vport->q_vector_idxs)
goto free_vport;
- idpf_vport_init(vport, max_q);
+ err = idpf_vport_init(vport, max_q);
+ if (err)
+ goto free_vector_idxs;
/* This alloc is done separate from the LUT because it's not strictly
* dependent on how many queues we have. If we change number of queues
@@ -1139,7 +1144,7 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
rss_data = &adapter->vport_config[idx]->user_config.rss_data;
rss_data->rss_key = kzalloc(rss_data->rss_key_size, GFP_KERNEL);
if (!rss_data->rss_key)
- goto free_vector_idxs;
+ goto free_qreg_chunks;
/* Initialize default rss key */
netdev_rss_key_fill((void *)rss_data->rss_key, rss_data->rss_key_size);
@@ -1154,6 +1159,8 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
return vport;
+free_qreg_chunks:
+ kfree(adapter->vport_config[idx]->qid_reg_info.queue_chunks);
free_vector_idxs:
kfree(vport->q_vector_idxs);
free_vport:
@@ -1330,6 +1337,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
struct idpf_adapter *adapter = vport->adapter;
struct idpf_vport_config *vport_config;
+ struct idpf_queue_id_reg_info *chunks;
int err;
if (np->state != __IDPF_VPORT_DOWN)
@@ -1349,7 +1357,10 @@ static int idpf_vport_open(struct idpf_vport *vport)
if (err)
goto intr_rel;
- err = idpf_vport_queue_ids_init(vport);
+ vport_config = adapter->vport_config[vport->idx];
+ chunks = &vport_config->qid_reg_info;
+
+ err = idpf_vport_queue_ids_init(vport, chunks);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize queue ids for vport %u: %d\n",
vport->vport_id, err);
@@ -1370,7 +1381,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
goto queues_rel;
}
- err = idpf_queue_reg_init(vport);
+ err = idpf_queue_reg_init(vport, chunks);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n",
vport->vport_id, err);
@@ -1420,7 +1431,6 @@ static int idpf_vport_open(struct idpf_vport *vport)
idpf_restore_features(vport);
- vport_config = adapter->vport_config[vport->idx];
if (vport_config->user_config.rss_data.rss_lut)
err = idpf_config_rss(vport);
else
@@ -1866,6 +1876,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
enum idpf_vport_state current_state = np->state;
struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_vport_config *vport_config;
struct idpf_vport *new_vport;
int err;
@@ -1912,8 +1923,10 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
goto free_vport;
}
+ vport_config = adapter->vport_config[vport->idx];
+
if (current_state <= __IDPF_VPORT_DOWN) {
- idpf_send_delete_queues_msg(vport);
+ idpf_send_delete_queues_msg(vport, &vport_config->qid_reg_info);
} else {
set_bit(IDPF_VPORT_DEL_QUEUES, vport->flags);
idpf_vport_stop(vport);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 793bc38077d7..89d2c9d7236d 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -1003,6 +1003,42 @@ static void idpf_init_avail_queues(struct idpf_adapter *adapter)
avail_queues->avail_complq = le16_to_cpu(caps->max_tx_complq);
}
+/**
+ * idpf_vport_init_queue_reg_chunks - initialize queue register chunks
+ * @vport_config: persistent vport structure to store the queue register info
+ * @schunks: source chunks to copy data from
+ *
+ * Return: %0 on success, -%errno on failure.
+ */
+static int
+idpf_vport_init_queue_reg_chunks(struct idpf_vport_config *vport_config,
+ struct virtchnl2_queue_reg_chunks *schunks)
+{
+ struct idpf_queue_id_reg_info *q_info = &vport_config->qid_reg_info;
+ u16 num_chunks = le16_to_cpu(schunks->num_chunks);
+
+ kfree(q_info->queue_chunks);
+
+ q_info->num_chunks = num_chunks;
+ q_info->queue_chunks = kcalloc(num_chunks, sizeof(*q_info->queue_chunks),
+ GFP_KERNEL);
+ if (!q_info->queue_chunks)
+ return -ENOMEM;
+
+ for (u16 i = 0; i < num_chunks; i++) {
+ struct idpf_queue_id_reg_chunk *dchunk = &q_info->queue_chunks[i];
+ struct virtchnl2_queue_reg_chunk *schunk = &schunks->chunks[i];
+
+ dchunk->qtail_reg_start = le64_to_cpu(schunk->qtail_reg_start);
+ dchunk->qtail_reg_spacing = le32_to_cpu(schunk->qtail_reg_spacing);
+ dchunk->type = le32_to_cpu(schunk->type);
+ dchunk->start_queue_id = le32_to_cpu(schunk->start_queue_id);
+ dchunk->num_queues = le32_to_cpu(schunk->num_queues);
+ }
+
+ return 0;
+}
+
/**
* idpf_get_reg_intr_vecs - Get vector queue register offset
* @vport: virtual port structure
@@ -1063,25 +1099,25 @@ int idpf_get_reg_intr_vecs(struct idpf_vport *vport,
* are filled.
*/
static int idpf_vport_get_q_reg(u32 *reg_vals, int num_regs, u32 q_type,
- struct virtchnl2_queue_reg_chunks *chunks)
+ struct idpf_queue_id_reg_info *chunks)
{
- u16 num_chunks = le16_to_cpu(chunks->num_chunks);
+ u16 num_chunks = chunks->num_chunks;
int reg_filled = 0, i;
u32 reg_val;
while (num_chunks--) {
- struct virtchnl2_queue_reg_chunk *chunk;
+ struct idpf_queue_id_reg_chunk *chunk;
u16 num_q;
- chunk = &chunks->chunks[num_chunks];
- if (le32_to_cpu(chunk->type) != q_type)
+ chunk = &chunks->queue_chunks[num_chunks];
+ if (chunk->type != q_type)
continue;
- num_q = le32_to_cpu(chunk->num_queues);
- reg_val = le64_to_cpu(chunk->qtail_reg_start);
+ num_q = chunk->num_queues;
+ reg_val = chunk->qtail_reg_start;
for (i = 0; i < num_q && reg_filled < num_regs ; i++) {
reg_vals[reg_filled++] = reg_val;
- reg_val += le32_to_cpu(chunk->qtail_reg_spacing);
+ reg_val += chunk->qtail_reg_spacing;
}
}
@@ -1151,15 +1187,13 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals,
/**
* idpf_queue_reg_init - initialize queue registers
* @vport: virtual port structure
+ * @chunks: queue registers received over mailbox
*
* Return 0 on success, negative on failure
*/
-int idpf_queue_reg_init(struct idpf_vport *vport)
+int idpf_queue_reg_init(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks)
{
- struct virtchnl2_create_vport *vport_params;
- struct virtchnl2_queue_reg_chunks *chunks;
- struct idpf_vport_config *vport_config;
- u16 vport_idx = vport->idx;
int num_regs, ret = 0;
u32 *reg_vals;
@@ -1168,16 +1202,6 @@ int idpf_queue_reg_init(struct idpf_vport *vport)
if (!reg_vals)
return -ENOMEM;
- vport_config = vport->adapter->vport_config[vport_idx];
- if (vport_config->req_qs_chunks) {
- struct virtchnl2_add_queues *vc_aq =
- (struct virtchnl2_add_queues *)vport_config->req_qs_chunks;
- chunks = &vc_aq->chunks;
- } else {
- vport_params = vport->adapter->vport_params_recvd[vport_idx];
- chunks = &vport_params->chunks;
- }
-
/* Initialize Tx queue tail register address */
num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q,
VIRTCHNL2_QUEUE_TYPE_TX,
@@ -2032,46 +2056,36 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport)
* @num_chunks: number of chunks to copy
*/
static void idpf_convert_reg_to_queue_chunks(struct virtchnl2_queue_chunk *dchunks,
- struct virtchnl2_queue_reg_chunk *schunks,
+ struct idpf_queue_id_reg_chunk *schunks,
u16 num_chunks)
{
u16 i;
for (i = 0; i < num_chunks; i++) {
- dchunks[i].type = schunks[i].type;
- dchunks[i].start_queue_id = schunks[i].start_queue_id;
- dchunks[i].num_queues = schunks[i].num_queues;
+ dchunks[i].type = cpu_to_le32(schunks[i].type);
+ dchunks[i].start_queue_id = cpu_to_le32(schunks[i].start_queue_id);
+ dchunks[i].num_queues = cpu_to_le32(schunks[i].num_queues);
}
}
/**
* idpf_send_delete_queues_msg - send delete queues virtchnl message
- * @vport: Virtual port private data structure
+ * @vport: virtual port private data structure
+ * @chunks: queue ids received over mailbox
*
* Will send delete queues virtchnl message. Return 0 on success, negative on
* failure.
*/
-int idpf_send_delete_queues_msg(struct idpf_vport *vport)
+int idpf_send_delete_queues_msg(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks)
{
struct virtchnl2_del_ena_dis_queues *eq __free(kfree) = NULL;
- struct virtchnl2_create_vport *vport_params;
- struct virtchnl2_queue_reg_chunks *chunks;
struct idpf_vc_xn_params xn_params = {};
- struct idpf_vport_config *vport_config;
- u16 vport_idx = vport->idx;
ssize_t reply_sz;
u16 num_chunks;
int buf_size;
- vport_config = vport->adapter->vport_config[vport_idx];
- if (vport_config->req_qs_chunks) {
- chunks = &vport_config->req_qs_chunks->chunks;
- } else {
- vport_params = vport->adapter->vport_params_recvd[vport_idx];
- chunks = &vport_params->chunks;
- }
-
- num_chunks = le16_to_cpu(chunks->num_chunks);
+ num_chunks = chunks->num_chunks;
buf_size = struct_size(eq, chunks.chunks, num_chunks);
eq = kzalloc(buf_size, GFP_KERNEL);
@@ -2081,7 +2095,7 @@ int idpf_send_delete_queues_msg(struct idpf_vport *vport)
eq->vport_id = cpu_to_le32(vport->vport_id);
eq->chunks.num_chunks = cpu_to_le16(num_chunks);
- idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->chunks,
+ idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->queue_chunks,
num_chunks);
xn_params.vc_op = VIRTCHNL2_OP_DEL_QUEUES;
@@ -2138,8 +2152,6 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
return -ENOMEM;
vport_config = vport->adapter->vport_config[vport_idx];
- kfree(vport_config->req_qs_chunks);
- vport_config->req_qs_chunks = NULL;
aq.vport_id = cpu_to_le32(vport->vport_id);
aq.num_tx_q = cpu_to_le16(num_tx_q);
@@ -2169,11 +2181,7 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
if (reply_sz < size)
return -EIO;
- vport_config->req_qs_chunks = kmemdup(vc_msg, size, GFP_KERNEL);
- if (!vport_config->req_qs_chunks)
- return -ENOMEM;
-
- return 0;
+ return idpf_vport_init_queue_reg_chunks(vport_config, &vc_msg->chunks);
}
/**
@@ -3172,8 +3180,10 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport)
* @max_q: vport max queue info
*
* Will initialize vport with the info received through MB earlier
+ *
+ * Return: %0 on success, -%errno on failure.
*/
-void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q)
+int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q)
{
struct idpf_adapter *adapter = vport->adapter;
struct virtchnl2_create_vport *vport_msg;
@@ -3188,6 +3198,11 @@ void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q)
rss_data = &vport_config->user_config.rss_data;
vport_msg = adapter->vport_params_recvd[idx];
+ err = idpf_vport_init_queue_reg_chunks(vport_config,
+ &vport_msg->chunks);
+ if (err)
+ return err;
+
vport_config->max_q.max_txq = max_q->max_txq;
vport_config->max_q.max_rxq = max_q->max_rxq;
vport_config->max_q.max_complq = max_q->max_complq;
@@ -3220,15 +3235,17 @@ void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q)
if (!(vport_msg->vport_flags &
cpu_to_le16(VIRTCHNL2_VPORT_UPLINK_PORT)))
- return;
+ return 0;
err = idpf_ptp_get_vport_tstamps_caps(vport);
if (err) {
pci_dbg(vport->adapter->pdev, "Tx timestamping not supported\n");
- return;
+ return err == -EOPNOTSUPP ? 0 : err;
}
INIT_WORK(&vport->tstamp_task, idpf_tstamp_task);
+
+ return 0;
}
/**
@@ -3287,21 +3304,21 @@ int idpf_get_vec_ids(struct idpf_adapter *adapter,
* Returns number of ids filled
*/
static int idpf_vport_get_queue_ids(u32 *qids, int num_qids, u16 q_type,
- struct virtchnl2_queue_reg_chunks *chunks)
+ struct idpf_queue_id_reg_info *chunks)
{
- u16 num_chunks = le16_to_cpu(chunks->num_chunks);
+ u16 num_chunks = chunks->num_chunks;
u32 num_q_id_filled = 0, i;
u32 start_q_id, num_q;
while (num_chunks--) {
- struct virtchnl2_queue_reg_chunk *chunk;
+ struct idpf_queue_id_reg_chunk *chunk;
- chunk = &chunks->chunks[num_chunks];
- if (le32_to_cpu(chunk->type) != q_type)
+ chunk = &chunks->queue_chunks[num_chunks];
+ if (chunk->type != q_type)
continue;
- num_q = le32_to_cpu(chunk->num_queues);
- start_q_id = le32_to_cpu(chunk->start_queue_id);
+ num_q = chunk->num_queues;
+ start_q_id = chunk->start_queue_id;
for (i = 0; i < num_q; i++) {
if ((num_q_id_filled + i) < num_qids) {
@@ -3394,30 +3411,18 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
/**
* idpf_vport_queue_ids_init - Initialize queue ids from Mailbox parameters
* @vport: virtual port for which the queues ids are initialized
+ * @chunks: queue ids received over mailbox
*
* Will initialize all queue ids with ids received as mailbox parameters.
* Returns 0 on success, negative if all the queues are not initialized.
*/
-int idpf_vport_queue_ids_init(struct idpf_vport *vport)
+int idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks)
{
- struct virtchnl2_create_vport *vport_params;
- struct virtchnl2_queue_reg_chunks *chunks;
- struct idpf_vport_config *vport_config;
- u16 vport_idx = vport->idx;
int num_ids, err = 0;
u16 q_type;
u32 *qids;
- vport_config = vport->adapter->vport_config[vport_idx];
- if (vport_config->req_qs_chunks) {
- struct virtchnl2_add_queues *vc_aq =
- (struct virtchnl2_add_queues *)vport_config->req_qs_chunks;
- chunks = &vc_aq->chunks;
- } else {
- vport_params = vport->adapter->vport_params_recvd[vport_idx];
- chunks = &vport_params->chunks;
- }
-
qids = kcalloc(IDPF_MAX_QIDS, sizeof(u32), GFP_KERNEL);
if (!qids)
return -ENOMEM;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
index 87b193ce6ed7..27fc406a424c 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
@@ -102,8 +102,10 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter);
int idpf_get_reg_intr_vecs(struct idpf_vport *vport,
struct idpf_vec_regs *reg_vals);
-int idpf_queue_reg_init(struct idpf_vport *vport);
-int idpf_vport_queue_ids_init(struct idpf_vport *vport);
+int idpf_queue_reg_init(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks);
+int idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks);
bool idpf_vport_is_cap_ena(struct idpf_vport *vport, u16 flag);
bool idpf_sideband_flow_type_ena(struct idpf_vport *vport, u32 flow_type);
@@ -115,7 +117,7 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter);
int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
u16 msg_size, u8 *msg, u16 cookie);
-void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q);
+int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q);
u32 idpf_get_vport_id(struct idpf_vport *vport);
int idpf_send_create_vport_msg(struct idpf_adapter *adapter,
struct idpf_vport_max_q *max_q);
@@ -130,7 +132,8 @@ void idpf_vport_dealloc_max_qs(struct idpf_adapter *adapter,
struct idpf_vport_max_q *max_q);
int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
u16 num_complq, u16 num_rx_q, u16 num_rx_bufq);
-int idpf_send_delete_queues_msg(struct idpf_vport *vport);
+int idpf_send_delete_queues_msg(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks);
int idpf_send_enable_queues_msg(struct idpf_vport *vport);
int idpf_send_disable_queues_msg(struct idpf_vport *vport);
int idpf_send_config_queues_msg(struct idpf_vport *vport);
--
2.43.0
* [Intel-wired-lan] [PATCH net-next v6 2/9] idpf: use existing queue chunk info instead of preparing it
From: Pavan Kumar Linga @ 2025-07-08 0:58 UTC
To: intel-wired-lan; +Cc: Pavan Kumar Linga, Anton Nadezhdin
Queue chunk info received from the device Control Plane is stored in
the persistent data section. Necessary info from these chunks is parsed
and stored in the queue structure. While sending the enable/disable
queues virtchnl message, queue chunk info is prepared using the stored
queue info. Instead of that, use the stored queue chunks directly,
which have info about all the queues that need to be enabled/disabled.
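The net effect (sketch assembled from the hunks below) is that the
queue-group walk and per-message chunking collapse into a single copy
of the stored chunks:

  num_chunks = chunks->num_chunks;
  buf_sz = struct_size(eq, chunks.chunks, num_chunks);
  eq = kzalloc(buf_sz, GFP_KERNEL);
  if (!eq)
          return -ENOMEM;

  eq->vport_id = cpu_to_le32(vport->vport_id);
  eq->chunks.num_chunks = cpu_to_le16(num_chunks);
  idpf_convert_reg_to_queue_chunks(eq->chunks.chunks,
                                   chunks->queue_chunks, num_chunks);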
Reviewed-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
---
drivers/net/ethernet/intel/idpf/idpf_lib.c | 6 +-
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 188 +++++-------------
.../net/ethernet/intel/idpf/idpf_virtchnl.h | 6 +-
3 files changed, 52 insertions(+), 148 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 2579e751370b..6d9be0b69f45 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -852,7 +852,7 @@ static void idpf_vport_stop(struct idpf_vport *vport)
chunks = &vport->adapter->vport_config[vport->idx]->qid_reg_info;
idpf_send_disable_vport_msg(vport);
- idpf_send_disable_queues_msg(vport);
+ idpf_send_disable_queues_msg(vport, chunks);
idpf_send_map_unmap_queue_vector_msg(vport, false);
/* Normally we ask for queues in create_vport, but if the number of
* initially requested queues have changed, for example via ethtool
@@ -1414,7 +1414,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
goto rxq_deinit;
}
- err = idpf_send_enable_queues_msg(vport);
+ err = idpf_send_enable_queues_msg(vport, chunks);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to enable queues for vport %u: %d\n",
vport->vport_id, err);
@@ -1455,7 +1455,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
disable_vport:
idpf_send_disable_vport_msg(vport);
disable_queues:
- idpf_send_disable_queues_msg(vport);
+ idpf_send_disable_queues_msg(vport, chunks);
unmap_queue_vectors:
idpf_send_map_unmap_queue_vector_msg(vport, false);
rxq_deinit:
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 89d2c9d7236d..1b3f9296dd93 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -1003,6 +1003,24 @@ static void idpf_init_avail_queues(struct idpf_adapter *adapter)
avail_queues->avail_complq = le16_to_cpu(caps->max_tx_complq);
}
+/**
+ * idpf_convert_reg_to_queue_chunks - copy queue chunk information to the right
+ * structure
+ * @dchunks: destination chunks to store data to
+ * @schunks: source chunks to copy data from
+ * @num_chunks: number of chunks to copy
+ */
+static void idpf_convert_reg_to_queue_chunks(struct virtchnl2_queue_chunk *dchunks,
+ struct idpf_queue_id_reg_chunk *schunks,
+ u16 num_chunks)
+{
+ for (u16 i = 0; i < num_chunks; i++) {
+ dchunks[i].type = cpu_to_le32(schunks[i].type);
+ dchunks[i].start_queue_id = cpu_to_le32(schunks[i].start_queue_id);
+ dchunks[i].num_queues = cpu_to_le32(schunks[i].num_queues);
+ }
+}
+
/**
* idpf_vport_init_queue_reg_chunks - initialize queue register chunks
* @vport_config: persistent vport structure to store the queue register info
@@ -1735,116 +1753,20 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
* idpf_send_ena_dis_queues_msg - Send virtchnl enable or disable
* queues message
* @vport: virtual port data structure
+ * @chunks: queue register info
* @ena: if true enable, false disable
*
* Send enable or disable queues virtchnl message. Returns 0 on success,
* negative on failure.
*/
-static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool ena)
+static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks,
+ bool ena)
{
struct virtchnl2_del_ena_dis_queues *eq __free(kfree) = NULL;
- struct virtchnl2_queue_chunk *qc __free(kfree) = NULL;
- u32 num_msgs, num_chunks, num_txq, num_rxq, num_q;
struct idpf_vc_xn_params xn_params = {};
- struct virtchnl2_queue_chunks *qcs;
- u32 config_sz, chunk_sz, buf_sz;
+ u32 num_chunks, buf_sz;
ssize_t reply_sz;
- int i, j, k = 0;
-
- num_txq = vport->num_txq + vport->num_complq;
- num_rxq = vport->num_rxq + vport->num_bufq;
- num_q = num_txq + num_rxq;
- buf_sz = sizeof(struct virtchnl2_queue_chunk) * num_q;
- qc = kzalloc(buf_sz, GFP_KERNEL);
- if (!qc)
- return -ENOMEM;
-
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
-
- for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
- qc[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX);
- qc[k].start_queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id);
- qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
- }
- }
- if (vport->num_txq != k)
- return -EINVAL;
-
- if (!idpf_is_queue_model_split(vport->txq_model))
- goto setup_rx;
-
- for (i = 0; i < vport->num_txq_grp; i++, k++) {
- struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
-
- qc[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION);
- qc[k].start_queue_id = cpu_to_le32(tx_qgrp->complq->q_id);
- qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
- }
- if (vport->num_complq != (k - vport->num_txq))
- return -EINVAL;
-
-setup_rx:
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
-
- if (idpf_is_queue_model_split(vport->rxq_model))
- num_rxq = rx_qgrp->splitq.num_rxq_sets;
- else
- num_rxq = rx_qgrp->singleq.num_rxq;
-
- for (j = 0; j < num_rxq; j++, k++) {
- if (idpf_is_queue_model_split(vport->rxq_model)) {
- qc[k].start_queue_id =
- cpu_to_le32(rx_qgrp->splitq.rxq_sets[j]->rxq.q_id);
- qc[k].type =
- cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX);
- } else {
- qc[k].start_queue_id =
- cpu_to_le32(rx_qgrp->singleq.rxqs[j]->q_id);
- qc[k].type =
- cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX);
- }
- qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
- }
- }
- if (vport->num_rxq != k - (vport->num_txq + vport->num_complq))
- return -EINVAL;
-
- if (!idpf_is_queue_model_split(vport->rxq_model))
- goto send_msg;
-
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
-
- for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) {
- const struct idpf_buf_queue *q;
-
- q = &rx_qgrp->splitq.bufq_sets[j].bufq;
- qc[k].type =
- cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX_BUFFER);
- qc[k].start_queue_id = cpu_to_le32(q->q_id);
- qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
- }
- }
- if (vport->num_bufq != k - (vport->num_txq +
- vport->num_complq +
- vport->num_rxq))
- return -EINVAL;
-
-send_msg:
- /* Chunk up the queue info into multiple messages */
- config_sz = sizeof(struct virtchnl2_del_ena_dis_queues);
- chunk_sz = sizeof(struct virtchnl2_queue_chunk);
-
- num_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz),
- num_q);
- num_msgs = DIV_ROUND_UP(num_q, num_chunks);
-
- buf_sz = struct_size(eq, chunks.chunks, num_chunks);
- eq = kzalloc(buf_sz, GFP_KERNEL);
- if (!eq)
- return -ENOMEM;
if (ena) {
xn_params.vc_op = VIRTCHNL2_OP_ENABLE_QUEUES;
@@ -1854,27 +1776,23 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, bool ena)
xn_params.timeout_ms = IDPF_VC_XN_MIN_TIMEOUT_MSEC;
}
- for (i = 0, k = 0; i < num_msgs; i++) {
- memset(eq, 0, buf_sz);
- eq->vport_id = cpu_to_le32(vport->vport_id);
- eq->chunks.num_chunks = cpu_to_le16(num_chunks);
- qcs = &eq->chunks;
- memcpy(qcs->chunks, &qc[k], chunk_sz * num_chunks);
+ num_chunks = chunks->num_chunks;
+ buf_sz = struct_size(eq, chunks.chunks, num_chunks);
+ eq = kzalloc(buf_sz, GFP_KERNEL);
+ if (!eq)
+ return -ENOMEM;
- xn_params.send_buf.iov_base = eq;
- xn_params.send_buf.iov_len = buf_sz;
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
- if (reply_sz < 0)
- return reply_sz;
+ eq->vport_id = cpu_to_le32(vport->vport_id);
+ eq->chunks.num_chunks = cpu_to_le16(num_chunks);
- k += num_chunks;
- num_q -= num_chunks;
- num_chunks = min(num_chunks, num_q);
- /* Recalculate buffer size */
- buf_sz = struct_size(eq, chunks.chunks, num_chunks);
- }
+ idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->queue_chunks,
+ num_chunks);
- return 0;
+ xn_params.send_buf.iov_base = eq;
+ xn_params.send_buf.iov_len = buf_sz;
+ reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+
+ return reply_sz < 0 ? reply_sz : 0;
}
/**
@@ -2021,53 +1939,37 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
/**
* idpf_send_enable_queues_msg - send enable queues virtchnl message
* @vport: Virtual port private data structure
+ * @chunks: queue ids received over mailbox
*
* Will send enable queues virtchnl message. Returns 0 on success, negative on
* failure.
*/
-int idpf_send_enable_queues_msg(struct idpf_vport *vport)
+int idpf_send_enable_queues_msg(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks)
{
- return idpf_send_ena_dis_queues_msg(vport, true);
+ return idpf_send_ena_dis_queues_msg(vport, chunks, true);
}
/**
* idpf_send_disable_queues_msg - send disable queues virtchnl message
* @vport: Virtual port private data structure
+ * @chunks: queue ids received over mailbox
*
* Will send disable queues virtchnl message. Returns 0 on success, negative
* on failure.
*/
-int idpf_send_disable_queues_msg(struct idpf_vport *vport)
+int idpf_send_disable_queues_msg(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks)
{
int err;
- err = idpf_send_ena_dis_queues_msg(vport, false);
+ err = idpf_send_ena_dis_queues_msg(vport, chunks, false);
if (err)
return err;
return idpf_wait_for_marker_event(vport);
}
-/**
- * idpf_convert_reg_to_queue_chunks - Copy queue chunk information to the right
- * structure
- * @dchunks: Destination chunks to store data to
- * @schunks: Source chunks to copy data from
- * @num_chunks: number of chunks to copy
- */
-static void idpf_convert_reg_to_queue_chunks(struct virtchnl2_queue_chunk *dchunks,
- struct idpf_queue_id_reg_chunk *schunks,
- u16 num_chunks)
-{
- u16 i;
-
- for (i = 0; i < num_chunks; i++) {
- dchunks[i].type = cpu_to_le32(schunks[i].type);
- dchunks[i].start_queue_id = cpu_to_le32(schunks[i].start_queue_id);
- dchunks[i].num_queues = cpu_to_le32(schunks[i].num_queues);
- }
-}
-
/**
* idpf_send_delete_queues_msg - send delete queues virtchnl message
* @vport: virtual port private data structure
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
index 27fc406a424c..3f7e5e6b235d 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
@@ -134,8 +134,10 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
u16 num_complq, u16 num_rx_q, u16 num_rx_bufq);
int idpf_send_delete_queues_msg(struct idpf_vport *vport,
struct idpf_queue_id_reg_info *chunks);
-int idpf_send_enable_queues_msg(struct idpf_vport *vport);
-int idpf_send_disable_queues_msg(struct idpf_vport *vport);
+int idpf_send_enable_queues_msg(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks);
+int idpf_send_disable_queues_msg(struct idpf_vport *vport,
+ struct idpf_queue_id_reg_info *chunks);
int idpf_send_config_queues_msg(struct idpf_vport *vport);
int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport);
--
2.43.0
* [Intel-wired-lan] [PATCH net-next v6 3/9] idpf: introduce idpf_q_vec_rsrc struct and move vector resources to it
From: Pavan Kumar Linga @ 2025-07-08 0:58 UTC
To: intel-wired-lan; +Cc: Pavan Kumar Linga, Anton Nadezhdin
To group all the vector and queue resources, introduce the
idpf_q_vec_rsrc structure. This allows other features to reuse the same
config path functions. For example, the PTP implementation can use the
existing config infrastructure to configure a secondary mailbox by
passing its queue and vector info. It also avoids duplicating code.
Existing queue and vector resources are grouped as the default
resources. This patch moves the vector info to the newly introduced
structure; the following patch moves the queue resources.
While at it, declare the loop iterators for 'num_q_vectors' loops
inside the loops and use the correct type (u16).
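For illustration, the intended calling convention (the second caller is
hypothetical and not part of this series):

  /* Default path: pass the vport's default resource group. */
  err = idpf_vport_intr_alloc(vport, &vport->dflt_qv_rsrc);
  if (err)
          return err;

  /* A future feature (e.g. the PTP secondary mailbox) could pass its
   * own vport-independent group instead; 'ptp_rsrc' is a hypothetical
   * example.
   */
  err = idpf_vport_intr_alloc(vport, &ptp_rsrc);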
Reviewed-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
---
drivers/net/ethernet/intel/idpf/idpf.h | 39 ++--
drivers/net/ethernet/intel/idpf/idpf_dev.c | 16 +-
drivers/net/ethernet/intel/idpf/idpf_lib.c | 39 ++--
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 195 +++++++++---------
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 14 +-
drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 16 +-
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 24 ++-
.../net/ethernet/intel/idpf/idpf_virtchnl.h | 4 +-
8 files changed, 193 insertions(+), 154 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index 1be111d04467..ccc872441908 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -8,6 +8,7 @@
struct idpf_adapter;
struct idpf_vport;
struct idpf_vport_max_q;
+struct idpf_q_vec_rsrc;
#include <net/pkt_sched.h>
#include <linux/aer.h>
@@ -196,7 +197,8 @@ struct idpf_vport_max_q {
*/
struct idpf_reg_ops {
void (*ctlq_reg_init)(struct idpf_ctlq_create_info *cq);
- int (*intr_reg_init)(struct idpf_vport *vport);
+ int (*intr_reg_init)(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
void (*mb_intr_reg_init)(struct idpf_adapter *adapter);
void (*reset_reg_init)(struct idpf_adapter *adapter);
void (*trigger_reset)(struct idpf_adapter *adapter,
@@ -255,8 +257,28 @@ struct idpf_fsteer_fltr {
u32 q_index;
};
+/**
+ * struct idpf_q_vec_rsrc - handle for queue and vector resources
+ * @q_vectors: array of queue vectors
+ * @q_vector_idxs: starting index of queue vectors
+ * @num_q_vectors: number of IRQ vectors allocated
+ * @noirq_v_idx: ID of the NOIRQ vector
+ * @noirq_dyn_ctl_ena: value to write to @noirq_dyn_ctl to enable it
+ * @noirq_dyn_ctl: register to enable/disable the vector for NOIRQ queues
+ */
+struct idpf_q_vec_rsrc {
+ struct idpf_q_vector *q_vectors;
+ u16 *q_vector_idxs;
+ u16 num_q_vectors;
+ u16 noirq_v_idx;
+ u32 noirq_dyn_ctl_ena;
+ void __iomem *noirq_dyn_ctl;
+};
+
/**
* struct idpf_vport - Handle for netdevices and queue resources
+ * @dflt_qv_rsrc: contains default queue and vector resources
* @num_txq: Number of allocated TX queues
* @num_complq: Number of allocated completion queues
* @txq_desc_count: TX queue descriptor count
@@ -292,12 +314,6 @@ struct idpf_fsteer_fltr {
* @idx: Software index in adapter vports struct
* @default_vport: Use this vport if one isn't specified
* @base_rxd: True if the driver should use base descriptors instead of flex
- * @num_q_vectors: Number of IRQ vectors allocated
- * @q_vectors: Array of queue vectors
- * @q_vector_idxs: Starting index of queue vectors
- * @noirq_dyn_ctl: register to enable/disable the vector for NOIRQ queues
- * @noirq_dyn_ctl_ena: value to write to the above to enable it
- * @noirq_v_idx: ID of the NOIRQ vector
* @max_mtu: device given max possible MTU
* @default_mac_addr: device will give a default MAC to use
* @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation
@@ -309,6 +325,7 @@ struct idpf_fsteer_fltr {
* @tstamp_task: Tx timestamping task
*/
struct idpf_vport {
+ struct idpf_q_vec_rsrc dflt_qv_rsrc;
u16 num_txq;
u16 num_complq;
u32 txq_desc_count;
@@ -344,14 +361,6 @@ struct idpf_vport {
bool default_vport;
bool base_rxd;
- u16 num_q_vectors;
- struct idpf_q_vector *q_vectors;
- u16 *q_vector_idxs;
-
- void __iomem *noirq_dyn_ctl;
- u32 noirq_dyn_ctl_ena;
- u16 noirq_v_idx;
-
u16 max_mtu;
u8 default_mac_addr[ETH_ALEN];
u16 rx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c
index 1aad22903b9d..457fb314926c 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_dev.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c
@@ -67,11 +67,13 @@ static void idpf_mb_intr_reg_init(struct idpf_adapter *adapter)
/**
* idpf_intr_reg_init - Initialize interrupt registers
* @vport: virtual port structure
+ * @rsrc: pointer to queue and vector resources
*/
-static int idpf_intr_reg_init(struct idpf_vport *vport)
+static int idpf_intr_reg_init(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct idpf_adapter *adapter = vport->adapter;
- int num_vecs = vport->num_q_vectors;
+ u16 num_vecs = rsrc->num_q_vectors;
struct idpf_vec_regs *reg_vals;
int num_regs, i, err = 0;
u32 rx_itr, tx_itr, val;
@@ -90,8 +92,8 @@ static int idpf_intr_reg_init(struct idpf_vport *vport)
}
for (i = 0; i < num_vecs; i++) {
- struct idpf_q_vector *q_vector = &vport->q_vectors[i];
- u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC;
+ struct idpf_q_vector *q_vector = &rsrc->q_vectors[i];
+ u16 vec_id = rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC;
struct idpf_intr_reg *intr = &q_vector->intr_reg;
u32 spacing;
@@ -120,12 +122,12 @@ static int idpf_intr_reg_init(struct idpf_vport *vport)
/* Data vector for NOIRQ queues */
- val = reg_vals[vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC].dyn_ctl_reg;
- vport->noirq_dyn_ctl = idpf_get_reg_addr(adapter, val);
+ val = reg_vals[rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC].dyn_ctl_reg;
+ rsrc->noirq_dyn_ctl = idpf_get_reg_addr(adapter, val);
val = PF_GLINT_DYN_CTL_WB_ON_ITR_M | PF_GLINT_DYN_CTL_INTENA_MSK_M |
FIELD_PREP(PF_GLINT_DYN_CTL_ITR_INDX_M, IDPF_NO_ITR_UPDATE_IDX);
- vport->noirq_dyn_ctl_ena = val;
+ rsrc->noirq_dyn_ctl_ena = val;
free_reg_vals:
kfree(reg_vals);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 6d9be0b69f45..c4ec081a9917 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -841,6 +841,7 @@ static void idpf_remove_features(struct idpf_vport *vport)
static void idpf_vport_stop(struct idpf_vport *vport)
{
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+ struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
struct idpf_queue_id_reg_info *chunks;
if (np->state <= __IDPF_VPORT_DOWN)
@@ -852,7 +853,7 @@ static void idpf_vport_stop(struct idpf_vport *vport)
chunks = &vport->adapter->vport_config[vport->idx]->qid_reg_info;
idpf_send_disable_vport_msg(vport);
- idpf_send_disable_queues_msg(vport, chunks);
+ idpf_send_disable_queues_msg(vport, rsrc, chunks);
idpf_send_map_unmap_queue_vector_msg(vport, false);
/* Normally we ask for queues in create_vport, but if the number of
* initially requested queues have changed, for example via ethtool
@@ -865,10 +866,10 @@ static void idpf_vport_stop(struct idpf_vport *vport)
idpf_remove_features(vport);
vport->link_up = false;
- idpf_vport_intr_deinit(vport);
+ idpf_vport_intr_deinit(vport, rsrc);
idpf_xdp_rxq_info_deinit_all(vport);
idpf_vport_queues_rel(vport);
- idpf_vport_intr_rel(vport);
+ idpf_vport_intr_rel(rsrc);
np->state = __IDPF_VPORT_DOWN;
}
@@ -928,6 +929,7 @@ static void idpf_decfg_netdev(struct idpf_vport *vport)
*/
static void idpf_vport_rel(struct idpf_vport *vport)
{
+ struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
struct idpf_adapter *adapter = vport->adapter;
struct idpf_vport_config *vport_config;
struct idpf_vector_info vec_info;
@@ -952,13 +954,13 @@ static void idpf_vport_rel(struct idpf_vport *vport)
/* Release all the allocated vectors on the stack */
vec_info.num_req_vecs = 0;
- vec_info.num_curr_vecs = vport->num_q_vectors;
+ vec_info.num_curr_vecs = rsrc->num_q_vectors;
vec_info.default_vport = vport->default_vport;
- idpf_req_rel_vector_indexes(adapter, vport->q_vector_idxs, &vec_info);
+ idpf_req_rel_vector_indexes(adapter, rsrc->q_vector_idxs, &vec_info);
- kfree(vport->q_vector_idxs);
- vport->q_vector_idxs = NULL;
+ kfree(rsrc->q_vector_idxs);
+ rsrc->q_vector_idxs = NULL;
kfree(vport_config->qid_reg_info.queue_chunks);
vport_config->qid_reg_info.queue_chunks = NULL;
@@ -1081,6 +1083,7 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
{
struct idpf_rss_data *rss_data;
u16 idx = adapter->next_vport;
+ struct idpf_q_vec_rsrc *rsrc;
struct idpf_vport *vport;
u16 num_max_q;
int err;
@@ -1128,8 +1131,9 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
vport->default_vport = adapter->num_alloc_vports <
idpf_get_default_vports(adapter);
- vport->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL);
- if (!vport->q_vector_idxs)
+ rsrc = &vport->dflt_qv_rsrc;
+ rsrc->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL);
+ if (!rsrc->q_vector_idxs)
goto free_vport;
err = idpf_vport_init(vport, max_q);
@@ -1162,7 +1166,7 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
free_qreg_chunks:
kfree(adapter->vport_config[idx]->qid_reg_info.queue_chunks);
free_vector_idxs:
- kfree(vport->q_vector_idxs);
+ kfree(rsrc->q_vector_idxs);
free_vport:
kfree(vport);
@@ -1335,6 +1339,7 @@ static void idpf_rx_init_buf_tail(struct idpf_vport *vport)
static int idpf_vport_open(struct idpf_vport *vport)
{
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+ struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
struct idpf_adapter *adapter = vport->adapter;
struct idpf_vport_config *vport_config;
struct idpf_queue_id_reg_info *chunks;
@@ -1346,7 +1351,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
/* we do not allow interface up just yet */
netif_carrier_off(vport->netdev);
- err = idpf_vport_intr_alloc(vport);
+ err = idpf_vport_intr_alloc(vport, rsrc);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to allocate interrupts for vport %u: %d\n",
vport->vport_id, err);
@@ -1367,7 +1372,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
goto queues_rel;
}
- err = idpf_vport_intr_init(vport);
+ err = idpf_vport_intr_init(vport, rsrc);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize interrupts for vport %u: %d\n",
vport->vport_id, err);
@@ -1398,7 +1403,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
goto intr_deinit;
}
- idpf_vport_intr_ena(vport);
+ idpf_vport_intr_ena(vport, rsrc);
err = idpf_send_config_queues_msg(vport);
if (err) {
@@ -1455,17 +1460,17 @@ static int idpf_vport_open(struct idpf_vport *vport)
disable_vport:
idpf_send_disable_vport_msg(vport);
disable_queues:
- idpf_send_disable_queues_msg(vport, chunks);
+ idpf_send_disable_queues_msg(vport, rsrc, chunks);
unmap_queue_vectors:
idpf_send_map_unmap_queue_vector_msg(vport, false);
rxq_deinit:
idpf_xdp_rxq_info_deinit_all(vport);
intr_deinit:
- idpf_vport_intr_deinit(vport);
+ idpf_vport_intr_deinit(vport, rsrc);
queues_rel:
idpf_vport_queues_rel(vport);
intr_rel:
- idpf_vport_intr_rel(vport);
+ idpf_vport_intr_rel(rsrc);
return err;
}
@@ -1952,7 +1957,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
memcpy(vport, new_vport, offsetof(struct idpf_vport, link_up));
if (reset_cause == IDPF_SR_Q_CHANGE)
- idpf_vport_alloc_vec_indexes(vport);
+ idpf_vport_alloc_vec_indexes(vport, &vport->dflt_qv_rsrc);
err = idpf_set_real_num_queues(vport);
if (err)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index b63203963cb4..7bd6a8e9ef42 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -3672,39 +3672,34 @@ static irqreturn_t idpf_vport_intr_clean_queues(int __always_unused irq,
/**
* idpf_vport_intr_napi_del_all - Unregister napi for all q_vectors in vport
- * @vport: virtual port structure
- *
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_vport_intr_napi_del_all(struct idpf_vport *vport)
+static void idpf_vport_intr_napi_del_all(struct idpf_q_vec_rsrc *rsrc)
{
- u16 v_idx;
-
- for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++)
- netif_napi_del(&vport->q_vectors[v_idx].napi);
+ for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++)
+ netif_napi_del(&rsrc->q_vectors[v_idx].napi);
}
/**
* idpf_vport_intr_napi_dis_all - Disable NAPI for all q_vectors in the vport
- * @vport: main vport structure
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_vport_intr_napi_dis_all(struct idpf_vport *vport)
+static void idpf_vport_intr_napi_dis_all(struct idpf_q_vec_rsrc *rsrc)
{
- int v_idx;
-
- for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++)
- napi_disable(&vport->q_vectors[v_idx].napi);
+ for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++)
+ napi_disable(&rsrc->q_vectors[v_idx].napi);
}
/**
* idpf_vport_intr_rel - Free memory allocated for interrupt vectors
- * @vport: virtual port
+ * @rsrc: pointer to queue and vector resources
*
* Free the memory allocated for interrupt vectors associated to a vport
*/
-void idpf_vport_intr_rel(struct idpf_vport *vport)
+void idpf_vport_intr_rel(struct idpf_q_vec_rsrc *rsrc)
{
- for (u32 v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
- struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx];
+ for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) {
+ struct idpf_q_vector *q_vector = &rsrc->q_vectors[v_idx];
kfree(q_vector->complq);
q_vector->complq = NULL;
@@ -3716,8 +3711,8 @@ void idpf_vport_intr_rel(struct idpf_vport *vport)
q_vector->rx = NULL;
}
- kfree(vport->q_vectors);
- vport->q_vectors = NULL;
+ kfree(rsrc->q_vectors);
+ rsrc->q_vectors = NULL;
}
static void idpf_q_vector_set_napi(struct idpf_q_vector *q_vector, bool link)
@@ -3737,21 +3732,22 @@ static void idpf_q_vector_set_napi(struct idpf_q_vector *q_vector, bool link)
/**
* idpf_vport_intr_rel_irq - Free the IRQ association with the OS
* @vport: main vport structure
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_vport_intr_rel_irq(struct idpf_vport *vport)
+static void idpf_vport_intr_rel_irq(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct idpf_adapter *adapter = vport->adapter;
- int vector;
- for (vector = 0; vector < vport->num_q_vectors; vector++) {
- struct idpf_q_vector *q_vector = &vport->q_vectors[vector];
+ for (u16 vector = 0; vector < rsrc->num_q_vectors; vector++) {
+ struct idpf_q_vector *q_vector = &rsrc->q_vectors[vector];
int irq_num, vidx;
/* free only the irqs that were actually requested */
if (!q_vector)
continue;
- vidx = vport->q_vector_idxs[vector];
+ vidx = rsrc->q_vector_idxs[vector];
irq_num = adapter->msix_entries[vidx].vector;
idpf_q_vector_set_napi(q_vector, false);
@@ -3761,16 +3757,15 @@ static void idpf_vport_intr_rel_irq(struct idpf_vport *vport)
/**
* idpf_vport_intr_dis_irq_all - Disable all interrupt
- * @vport: main vport structure
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_vport_intr_dis_irq_all(struct idpf_vport *vport)
+static void idpf_vport_intr_dis_irq_all(struct idpf_q_vec_rsrc *rsrc)
{
- struct idpf_q_vector *q_vector = vport->q_vectors;
- int q_idx;
+ struct idpf_q_vector *q_vector = rsrc->q_vectors;
- writel(0, vport->noirq_dyn_ctl);
+ writel(0, rsrc->noirq_dyn_ctl);
- for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++)
+ for (u16 q_idx = 0; q_idx < rsrc->num_q_vectors; q_idx++)
writel(0, q_vector[q_idx].intr_reg.dyn_ctl);
}
@@ -3908,8 +3903,10 @@ void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector)
/**
* idpf_vport_intr_req_irq - get MSI-X vectors from the OS for the vport
* @vport: main vport structure
+ * @rsrc: pointer to queue and vector resources
*/
-static int idpf_vport_intr_req_irq(struct idpf_vport *vport)
+static int idpf_vport_intr_req_irq(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct idpf_adapter *adapter = vport->adapter;
const char *drv_name, *if_name, *vec_name;
@@ -3918,11 +3915,11 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport)
drv_name = dev_driver_string(&adapter->pdev->dev);
if_name = netdev_name(vport->netdev);
- for (vector = 0; vector < vport->num_q_vectors; vector++) {
- struct idpf_q_vector *q_vector = &vport->q_vectors[vector];
+ for (vector = 0; vector < rsrc->num_q_vectors; vector++) {
+ struct idpf_q_vector *q_vector = &rsrc->q_vectors[vector];
char *name;
- vidx = vport->q_vector_idxs[vector];
+ vidx = rsrc->q_vector_idxs[vector];
irq_num = adapter->msix_entries[vidx].vector;
if (q_vector->num_rxq && q_vector->num_txq)
@@ -3952,9 +3949,9 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport)
free_q_irqs:
while (--vector >= 0) {
- vidx = vport->q_vector_idxs[vector];
+ vidx = rsrc->q_vector_idxs[vector];
irq_num = adapter->msix_entries[vidx].vector;
- kfree(free_irq(irq_num, &vport->q_vectors[vector]));
+ kfree(free_irq(irq_num, &rsrc->q_vectors[vector]));
}
return err;
@@ -3983,15 +3980,16 @@ void idpf_vport_intr_write_itr(struct idpf_q_vector *q_vector, u16 itr, bool tx)
/**
* idpf_vport_intr_ena_irq_all - Enable IRQ for the given vport
* @vport: main vport structure
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_vport_intr_ena_irq_all(struct idpf_vport *vport)
+static void idpf_vport_intr_ena_irq_all(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
bool dynamic;
- int q_idx;
u16 itr;
- for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) {
- struct idpf_q_vector *qv = &vport->q_vectors[q_idx];
+ for (u16 q_idx = 0; q_idx < rsrc->num_q_vectors; q_idx++) {
+ struct idpf_q_vector *qv = &rsrc->q_vectors[q_idx];
/* Set the initial ITR values */
if (qv->num_txq) {
@@ -4014,19 +4012,21 @@ static void idpf_vport_intr_ena_irq_all(struct idpf_vport *vport)
idpf_vport_intr_update_itr_ena_irq(qv);
}
- writel(vport->noirq_dyn_ctl_ena, vport->noirq_dyn_ctl);
+ writel(rsrc->noirq_dyn_ctl_ena, rsrc->noirq_dyn_ctl);
}
/**
* idpf_vport_intr_deinit - Release all vector associations for the vport
* @vport: main vport structure
+ * @rsrc: pointer to queue and vector resources
*/
-void idpf_vport_intr_deinit(struct idpf_vport *vport)
+void idpf_vport_intr_deinit(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
- idpf_vport_intr_dis_irq_all(vport);
- idpf_vport_intr_napi_dis_all(vport);
- idpf_vport_intr_napi_del_all(vport);
- idpf_vport_intr_rel_irq(vport);
+ idpf_vport_intr_dis_irq_all(rsrc);
+ idpf_vport_intr_napi_dis_all(rsrc);
+ idpf_vport_intr_napi_del_all(rsrc);
+ idpf_vport_intr_rel_irq(vport, rsrc);
}
/**
@@ -4098,14 +4098,12 @@ static void idpf_init_dim(struct idpf_q_vector *qv)
/**
* idpf_vport_intr_napi_ena_all - Enable NAPI for all q_vectors in the vport
- * @vport: main vport structure
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_vport_intr_napi_ena_all(struct idpf_vport *vport)
+static void idpf_vport_intr_napi_ena_all(struct idpf_q_vec_rsrc *rsrc)
{
- int q_idx;
-
- for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) {
- struct idpf_q_vector *q_vector = &vport->q_vectors[q_idx];
+ for (u16 q_idx = 0; q_idx < rsrc->num_q_vectors; q_idx++) {
+ struct idpf_q_vector *q_vector = &rsrc->q_vectors[q_idx];
idpf_init_dim(q_vector);
napi_enable(&q_vector->napi);
@@ -4224,10 +4222,12 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
/**
* idpf_vport_intr_map_vector_to_qs - Map vectors to queues
* @vport: virtual port
+ * @rsrc: pointer to queue and vector resources
*
* Mapping for vectors to queues
*/
-static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
+static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
u16 num_txq_grp = vport->num_txq_grp - vport->num_xdp_txq;
bool split = idpf_is_queue_model_split(vport->rxq_model);
@@ -4238,7 +4238,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
for (i = 0, qv_idx = 0; i < vport->num_rxq_grp; i++) {
u16 num_rxq;
- if (qv_idx >= vport->num_q_vectors)
+ if (qv_idx >= rsrc->num_q_vectors)
qv_idx = 0;
rx_qgrp = &vport->rxq_grps[i];
@@ -4254,7 +4254,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
else
q = rx_qgrp->singleq.rxqs[j];
- q->q_vector = &vport->q_vectors[qv_idx];
+ q->q_vector = &rsrc->q_vectors[qv_idx];
q_index = q->q_vector->num_rxq;
q->q_vector->rx[q_index] = q;
q->q_vector->num_rxq++;
@@ -4268,7 +4268,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
struct idpf_buf_queue *bufq;
bufq = &rx_qgrp->splitq.bufq_sets[j].bufq;
- bufq->q_vector = &vport->q_vectors[qv_idx];
+ bufq->q_vector = &rsrc->q_vectors[qv_idx];
q_index = bufq->q_vector->num_bufq;
bufq->q_vector->bufq[q_index] = bufq;
bufq->q_vector->num_bufq++;
@@ -4283,7 +4283,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
for (i = 0, qv_idx = 0; i < num_txq_grp; i++) {
u16 num_txq;
- if (qv_idx >= vport->num_q_vectors)
+ if (qv_idx >= rsrc->num_q_vectors)
qv_idx = 0;
tx_qgrp = &vport->txq_grps[i];
@@ -4293,14 +4293,14 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
struct idpf_tx_queue *q;
q = tx_qgrp->txqs[j];
- q->q_vector = &vport->q_vectors[qv_idx];
+ q->q_vector = &rsrc->q_vectors[qv_idx];
q->q_vector->tx[q->q_vector->num_txq++] = q;
}
if (split) {
struct idpf_compl_queue *q = tx_qgrp->complq;
- q->q_vector = &vport->q_vectors[qv_idx];
+ q->q_vector = &rsrc->q_vectors[qv_idx];
q->q_vector->complq[q->q_vector->num_complq++] = q;
}
@@ -4311,10 +4311,12 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
/**
* idpf_vport_intr_init_vec_idx - Initialize the vector indexes
* @vport: virtual port
+ * @rsrc: pointer to queue and vector resources
*
* Initialize vector indexes with values returned over mailbox
*/
-static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport)
+static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct idpf_adapter *adapter = vport->adapter;
struct virtchnl2_alloc_vectors *ac;
@@ -4323,10 +4325,10 @@ static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport)
ac = adapter->req_vec_chunks;
if (!ac) {
- for (i = 0; i < vport->num_q_vectors; i++)
- vport->q_vectors[i].v_idx = vport->q_vector_idxs[i];
+ for (i = 0; i < rsrc->num_q_vectors; i++)
+ rsrc->q_vectors[i].v_idx = rsrc->q_vector_idxs[i];
- vport->noirq_v_idx = vport->q_vector_idxs[i];
+ rsrc->noirq_v_idx = rsrc->q_vector_idxs[i];
return 0;
}
@@ -4338,10 +4340,10 @@ static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport)
idpf_get_vec_ids(adapter, vecids, total_vecs, &ac->vchunks);
- for (i = 0; i < vport->num_q_vectors; i++)
- vport->q_vectors[i].v_idx = vecids[vport->q_vector_idxs[i]];
+ for (i = 0; i < rsrc->num_q_vectors; i++)
+ rsrc->q_vectors[i].v_idx = vecids[rsrc->q_vector_idxs[i]];
- vport->noirq_v_idx = vecids[vport->q_vector_idxs[i]];
+ rsrc->noirq_v_idx = vecids[rsrc->q_vector_idxs[i]];
kfree(vecids);
@@ -4351,21 +4353,24 @@ static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport)
/**
* idpf_vport_intr_napi_add_all - Register NAPI handler for all q_vectors
* @vport: virtual port structure
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport)
+static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
int (*napi_poll)(struct napi_struct *napi, int budget);
- u16 v_idx, qv_idx;
int irq_num;
+ u16 qv_idx;
if (idpf_is_queue_model_split(vport->txq_model))
napi_poll = idpf_vport_splitq_napi_poll;
else
napi_poll = idpf_vport_singleq_napi_poll;
- for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
- struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx];
- qv_idx = vport->q_vector_idxs[v_idx];
+ for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) {
+ struct idpf_q_vector *q_vector = &rsrc->q_vectors[v_idx];
+
+ qv_idx = rsrc->q_vector_idxs[v_idx];
irq_num = vport->adapter->msix_entries[qv_idx].vector;
netif_napi_add_config(vport->netdev, &q_vector->napi,
@@ -4377,37 +4382,40 @@ static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport)
/**
* idpf_vport_intr_alloc - Allocate memory for interrupt vectors
* @vport: virtual port
+ * @rsrc: pointer to queue and vector resources
*
* We allocate one q_vector per queue interrupt. If allocation fails we
* return -ENOMEM.
*/
-int idpf_vport_intr_alloc(struct idpf_vport *vport)
+int idpf_vport_intr_alloc(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
u16 txqs_per_vector, rxqs_per_vector, bufqs_per_vector;
struct idpf_vport_user_config_data *user_config;
struct idpf_q_vector *q_vector;
struct idpf_q_coalesce *q_coal;
- u32 complqs_per_vector, v_idx;
+ u32 complqs_per_vector;
u16 idx = vport->idx;
user_config = &vport->adapter->vport_config[idx]->user_config;
- vport->q_vectors = kcalloc(vport->num_q_vectors,
- sizeof(struct idpf_q_vector), GFP_KERNEL);
- if (!vport->q_vectors)
+
+ rsrc->q_vectors = kcalloc(rsrc->num_q_vectors,
+ sizeof(struct idpf_q_vector), GFP_KERNEL);
+ if (!rsrc->q_vectors)
return -ENOMEM;
txqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp,
- vport->num_q_vectors);
+ rsrc->num_q_vectors);
rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq_grp,
- vport->num_q_vectors);
+ rsrc->num_q_vectors);
bufqs_per_vector = vport->num_bufqs_per_qgrp *
DIV_ROUND_UP(vport->num_rxq_grp,
- vport->num_q_vectors);
+ rsrc->num_q_vectors);
complqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp,
- vport->num_q_vectors);
+ rsrc->num_q_vectors);
- for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
- q_vector = &vport->q_vectors[v_idx];
+ for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) {
+ q_vector = &rsrc->q_vectors[v_idx];
q_coal = &user_config->q_coalesce[v_idx];
q_vector->vport = vport;
@@ -4448,7 +4456,7 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport)
return 0;
error:
- idpf_vport_intr_rel(vport);
+ idpf_vport_intr_rel(rsrc);
return -ENOMEM;
}
@@ -4456,40 +4464,41 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport)
/**
* idpf_vport_intr_init - Setup all vectors for the given vport
* @vport: virtual port
+ * @rsrc: pointer to queue and vector resources
*
* Returns 0 on success or negative on failure
*/
-int idpf_vport_intr_init(struct idpf_vport *vport)
+int idpf_vport_intr_init(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc)
{
int err;
- err = idpf_vport_intr_init_vec_idx(vport);
+ err = idpf_vport_intr_init_vec_idx(vport, rsrc);
if (err)
return err;
- idpf_vport_intr_map_vector_to_qs(vport);
- idpf_vport_intr_napi_add_all(vport);
+ idpf_vport_intr_map_vector_to_qs(vport, rsrc);
+ idpf_vport_intr_napi_add_all(vport, rsrc);
- err = vport->adapter->dev_ops.reg_ops.intr_reg_init(vport);
+ err = vport->adapter->dev_ops.reg_ops.intr_reg_init(vport, rsrc);
if (err)
goto unroll_vectors_alloc;
- err = idpf_vport_intr_req_irq(vport);
+ err = idpf_vport_intr_req_irq(vport, rsrc);
if (err)
goto unroll_vectors_alloc;
return 0;
unroll_vectors_alloc:
- idpf_vport_intr_napi_del_all(vport);
+ idpf_vport_intr_napi_del_all(rsrc);
return err;
}
-void idpf_vport_intr_ena(struct idpf_vport *vport)
+void idpf_vport_intr_ena(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc)
{
- idpf_vport_intr_napi_ena_all(vport);
- idpf_vport_intr_ena_irq_all(vport);
+ idpf_vport_intr_napi_ena_all(rsrc);
+ idpf_vport_intr_ena_irq_all(vport, rsrc);
}
/**
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 0f87e4de02cd..675d9e1ed561 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -1076,12 +1076,16 @@ int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_index,
void idpf_vport_calc_num_q_groups(struct idpf_vport *vport);
int idpf_vport_queues_alloc(struct idpf_vport *vport);
void idpf_vport_queues_rel(struct idpf_vport *vport);
-void idpf_vport_intr_rel(struct idpf_vport *vport);
-int idpf_vport_intr_alloc(struct idpf_vport *vport);
+void idpf_vport_intr_rel(struct idpf_q_vec_rsrc *rsrc);
+int idpf_vport_intr_alloc(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector);
-void idpf_vport_intr_deinit(struct idpf_vport *vport);
-int idpf_vport_intr_init(struct idpf_vport *vport);
-void idpf_vport_intr_ena(struct idpf_vport *vport);
+void idpf_vport_intr_deinit(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
+int idpf_vport_intr_init(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
+void idpf_vport_intr_ena(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
int idpf_config_rss(struct idpf_vport *vport);
int idpf_init_rss(struct idpf_vport *vport);
void idpf_deinit_rss(struct idpf_vport *vport);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
index a6993a01a9b0..574dadc6e190 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
@@ -66,11 +66,13 @@ static void idpf_vf_mb_intr_reg_init(struct idpf_adapter *adapter)
/**
* idpf_vf_intr_reg_init - Initialize interrupt registers
* @vport: virtual port structure
+ * @rsrc: pointer to queue and vector resources
*/
-static int idpf_vf_intr_reg_init(struct idpf_vport *vport)
+static int idpf_vf_intr_reg_init(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct idpf_adapter *adapter = vport->adapter;
- int num_vecs = vport->num_q_vectors;
+ u16 num_vecs = rsrc->num_q_vectors;
struct idpf_vec_regs *reg_vals;
int num_regs, i, err = 0;
u32 rx_itr, tx_itr, val;
@@ -89,8 +91,8 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport)
}
for (i = 0; i < num_vecs; i++) {
- struct idpf_q_vector *q_vector = &vport->q_vectors[i];
- u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC;
+ struct idpf_q_vector *q_vector = &rsrc->q_vectors[i];
+ u16 vec_id = rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC;
struct idpf_intr_reg *intr = &q_vector->intr_reg;
u32 spacing;
@@ -119,12 +121,12 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport)
/* Data vector for NOIRQ queues */
- val = reg_vals[vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC].dyn_ctl_reg;
- vport->noirq_dyn_ctl = idpf_get_reg_addr(adapter, val);
+ val = reg_vals[rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC].dyn_ctl_reg;
+ rsrc->noirq_dyn_ctl = idpf_get_reg_addr(adapter, val);
val = VF_INT_DYN_CTLN_WB_ON_ITR_M | VF_INT_DYN_CTLN_INTENA_MSK_M |
FIELD_PREP(VF_INT_DYN_CTLN_ITR_INDX_M, IDPF_NO_ITR_UPDATE_IDX);
- vport->noirq_dyn_ctl_ena = val;
+ rsrc->noirq_dyn_ctl_ena = val;
free_reg_vals:
kfree(reg_vals);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 1b3f9296dd93..65516fb77b20 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -1808,6 +1808,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
{
struct virtchnl2_queue_vector_maps *vqvm __free(kfree) = NULL;
struct virtchnl2_queue_vector *vqv __free(kfree) = NULL;
+ struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
struct idpf_vc_xn_params xn_params = {};
u32 config_sz, chunk_sz, buf_sz;
u32 num_msgs, num_chunks, num_q;
@@ -1846,7 +1847,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
v_idx = vec->v_idx;
tx_itr_idx = vec->tx_itr_idx;
} else {
- v_idx = vport->noirq_v_idx;
+ v_idx = rsrc->noirq_v_idx;
tx_itr_idx = VIRTCHNL2_ITR_IDX_1;
}
@@ -1878,7 +1879,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
vqv[k].queue_id = cpu_to_le32(rxq->q_id);
if (idpf_queue_has(NOIRQ, rxq)) {
- v_idx = vport->noirq_v_idx;
+ v_idx = rsrc->noirq_v_idx;
rx_itr_idx = VIRTCHNL2_ITR_IDX_0;
} else {
v_idx = rxq->q_vector->v_idx;
@@ -1938,7 +1939,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
/**
* idpf_send_enable_queues_msg - send enable queues virtchnl message
- * @vport: Virtual port private data structure
+ * @vport: virtual port private data structure
* @chunks: queue ids received over mailbox
*
* Will send enable queues virtchnl message. Returns 0 on success, negative on
@@ -1952,13 +1953,15 @@ int idpf_send_enable_queues_msg(struct idpf_vport *vport,
/**
* idpf_send_disable_queues_msg - send disable queues virtchnl message
- * @vport: Virtual port private data structure
+ * @vport: virtual port private data structure
+ * @rsrc: pointer to queue and vector resources
* @chunks: queue ids received over mailbox
*
* Will send disable queues virtchnl message. Returns 0 on success, negative
* on failure.
*/
int idpf_send_disable_queues_msg(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
struct idpf_queue_id_reg_info *chunks)
{
int err;
@@ -3037,6 +3040,7 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter)
/**
* idpf_vport_alloc_vec_indexes - Get relative vector indexes
* @vport: virtual port data struct
+ * @rsrc: pointer to queue and vector resources
*
* This function requests the vector information required for the vport and
* stores the vector indexes received from the 'global vector distribution'
@@ -3044,13 +3048,14 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter)
*
* Return 0 on success, error on failure
*/
-int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport)
+int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct idpf_vector_info vec_info;
int num_alloc_vecs;
u32 req;
- vec_info.num_curr_vecs = vport->num_q_vectors;
+ vec_info.num_curr_vecs = rsrc->num_q_vectors;
if (vec_info.num_curr_vecs)
vec_info.num_curr_vecs += IDPF_RESERVED_VECS;
@@ -3063,7 +3068,7 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport)
vec_info.index = vport->idx;
num_alloc_vecs = idpf_req_rel_vector_indexes(vport->adapter,
- vport->q_vector_idxs,
+ rsrc->q_vector_idxs,
&vec_info);
if (num_alloc_vecs <= 0) {
dev_err(&vport->adapter->pdev->dev, "Vector distribution failed: %d\n",
@@ -3071,7 +3076,7 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport)
return -EINVAL;
}
- vport->num_q_vectors = num_alloc_vecs - IDPF_RESERVED_VECS;
+ rsrc->num_q_vectors = num_alloc_vecs - IDPF_RESERVED_VECS;
return 0;
}
@@ -3087,6 +3092,7 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport)
*/
int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q)
{
+ struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
struct idpf_adapter *adapter = vport->adapter;
struct virtchnl2_create_vport *vport_msg;
struct idpf_vport_config *vport_config;
@@ -3131,7 +3137,7 @@ int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q)
idpf_vport_init_num_qs(vport, vport_msg);
idpf_vport_calc_num_q_desc(vport);
idpf_vport_calc_num_q_groups(vport);
- idpf_vport_alloc_vec_indexes(vport);
+ idpf_vport_alloc_vec_indexes(vport, rsrc);
vport->crc_enable = adapter->crc_enable;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
index 3f7e5e6b235d..0b53c7e8a539 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
@@ -137,10 +137,12 @@ int idpf_send_delete_queues_msg(struct idpf_vport *vport,
int idpf_send_enable_queues_msg(struct idpf_vport *vport,
struct idpf_queue_id_reg_info *chunks);
int idpf_send_disable_queues_msg(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
struct idpf_queue_id_reg_info *chunks);
int idpf_send_config_queues_msg(struct idpf_vport *vport);
-int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport);
+int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
int idpf_get_vec_ids(struct idpf_adapter *adapter,
u16 *vecids, int num_vecids,
struct virtchnl2_vector_chunks *chunks);
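For reference, a minimal sketch of the bring-up sequence under the
refactored API. The wrapper below is hypothetical and not part of the
diff; a caller could pass a non-default resource group in place of
&vport->dflt_qv_rsrc:

/* Hypothetical caller: allocate, initialize and enable interrupt
 * resources through an idpf_q_vec_rsrc group rather than the vport.
 */
static int example_intr_bringup(struct idpf_vport *vport)
{
	struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
	int err;

	err = idpf_vport_intr_alloc(vport, rsrc);
	if (err)
		return err;

	err = idpf_vport_intr_init(vport, rsrc);
	if (err) {
		idpf_vport_intr_rel(rsrc);
		return err;
	}

	idpf_vport_intr_ena(vport, rsrc);

	return 0;
}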
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [Intel-wired-lan] [PATCH net-next v6 4/9] idpf: move queue resources to idpf_q_vec_rsrc structure
2025-07-08 0:58 [Intel-wired-lan] [PATCH iwl-next v6 0/9] refactor IDPF resource access Pavan Kumar Linga
` (2 preceding siblings ...)
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 3/9] idpf: introduce idpf_q_vec_rsrc struct and move vector resources to it Pavan Kumar Linga
@ 2025-07-08 0:58 ` Pavan Kumar Linga
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 5/9] idpf: reshuffle idpf_vport struct members to avoid holes Pavan Kumar Linga
` (4 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Pavan Kumar Linga @ 2025-07-08 0:58 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Pavan Kumar Linga, Anton Nadezhdin
Move both TX and RX queue resources to the newly introduced
idpf_q_vec_rsrc structure.
While at it, declare the loop iterators inside the loops and use the
correct types (see the sketch after the diffstat below).
Reviewed-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
---
drivers/net/ethernet/intel/idpf/idpf.h | 69 +--
.../net/ethernet/intel/idpf/idpf_ethtool.c | 85 ++--
drivers/net/ethernet/intel/idpf/idpf_lib.c | 71 +--
drivers/net/ethernet/intel/idpf/idpf_ptp.c | 17 +-
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 416 +++++++++---------
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 17 +-
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 227 +++++-----
.../net/ethernet/intel/idpf/idpf_virtchnl.h | 12 +-
drivers/net/ethernet/intel/idpf/xdp.c | 33 +-
drivers/net/ethernet/intel/idpf/xdp.h | 6 +-
10 files changed, 508 insertions(+), 445 deletions(-)
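Before the per-file changes, a rough illustration of the access pattern
this patch establishes (a hypothetical helper, not taken from the diff):
queue counts, queue models and the group arrays are now reached through
the resource group instead of the vport.

/* Hypothetical walk over all RX queue groups after this patch. */
static void example_walk_rxq_grps(struct idpf_vport *vport)
{
	struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;

	for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
		struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
		u16 num_rxq;

		if (idpf_is_queue_model_split(rsrc->rxq_model))
			num_rxq = rx_qgrp->splitq.num_rxq_sets;
		else
			num_rxq = rx_qgrp->singleq.num_rxq;

		for (u16 j = 0; j < num_rxq; j++) {
			/* per-RX-queue work would go here */
		}
	}
}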
diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index ccc872441908..4d5dfef30f49 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -265,6 +265,25 @@ struct idpf_fsteer_fltr {
* @noirq_v_idx: ID of the NOIRQ vector
* @noirq_dyn_ctl_ena: value to write to the above to enable it
* @noirq_dyn_ctl: register to enable/disable the vector for NOIRQ queues
+ * @txq_grps: array of TX queue groups
+ * @txq_desc_count: TX queue descriptor count
+ * @complq_desc_count: completion queue descriptor count
+ * @txq_model: split queue or single queue queuing model
+ * @num_txq: number of allocated TX queues
+ * @num_complq: number of allocated completion queues
+ * @num_txq_grp: number of TX queue groups
+ * @num_rxq_grp: number of RX queue groups
+ * @rxq_model: split queue or single queue queuing model
+ * @rxq_grps: array of RX queue groups. The number of groups times the
+ * number of RX queues per group yields the total number of RX queues.
+ * @num_rxq: number of allocated RX queues
+ * @num_bufq: number of allocated buffer queues
+ * @rxq_desc_count: RX queue descriptor count. *MUST* have enough descriptors
+ * to complete all buffer descriptors for all buffer queues in
+ * the worst case.
+ * @bufq_desc_count: buffer queue descriptor count
+ * @num_bufqs_per_qgrp: buffer queues per RX queue in a given grouping
+ * @base_rxd: true if the driver should use base descriptors instead of flex
*/
struct idpf_q_vec_rsrc {
struct idpf_q_vector *q_vectors;
@@ -274,36 +293,36 @@ struct idpf_q_vec_rsrc {
u32 noirq_dyn_ctl_ena;
void __iomem *noirq_dyn_ctl;
+ struct idpf_txq_group *txq_grps;
+ u32 txq_desc_count;
+ u32 complq_desc_count;
+ u32 txq_model;
+ u16 num_txq;
+ u16 num_complq;
+ u16 num_txq_grp;
+
+ u16 num_rxq_grp;
+ u32 rxq_model;
+ struct idpf_rxq_group *rxq_grps;
+ u16 num_rxq;
+ u16 num_bufq;
+ u32 rxq_desc_count;
+ u32 bufq_desc_count[IDPF_MAX_BUFQS_PER_RXQ_GRP];
+ u8 num_bufqs_per_qgrp;
+ bool base_rxd;
};
/**
* struct idpf_vport - Handle for netdevices and queue resources
* @dflt_qv_rsrc: contains default queue and vector resources
* @num_txq: Number of allocated TX queues
- * @num_complq: Number of allocated completion queues
- * @txq_desc_count: TX queue descriptor count
- * @complq_desc_count: Completion queue descriptor count
* @compln_clean_budget: Work budget for completion clean
- * @num_txq_grp: Number of TX queue groups
- * @txq_grps: Array of TX queue groups
- * @txq_model: Split queue or single queue queuing model
* @txqs: Used only in hotpath to get to the right queue very fast
* @crc_enable: Enable CRC insertion offload
* @xdpsq_share: whether XDPSQ sharing is enabled
* @num_xdp_txq: number of XDPSQs
* @xdp_txq_offset: index of the first XDPSQ (== number of regular SQs)
* @xdp_prog: installed XDP program
- * @num_rxq: Number of allocated RX queues
- * @num_bufq: Number of allocated buffer queues
- * @rxq_desc_count: RX queue descriptor count. *MUST* have enough descriptors
- * to complete all buffer descriptors for all buffer queues in
- * the worst case.
- * @num_bufqs_per_qgrp: Buffer queues per RX queue in a given grouping
- * @bufq_desc_count: Buffer queue descriptor count
- * @num_rxq_grp: Number of RX queues in a group
- * @rxq_grps: Total number of RX groups. Number of groups * number of RX per
- * group will yield total number of RX queues.
- * @rxq_model: Splitq queue or single queue queuing model
* @rx_ptype_lkup: Lookup table for ptypes on RX
* @adapter: back pointer to associated adapter
* @netdev: Associated net_device. Each vport should have one and only one
@@ -313,7 +332,6 @@ struct idpf_q_vec_rsrc {
* @vport_id: Device given vport identifier
* @idx: Software index in adapter vports struct
* @default_vport: Use this vport if one isn't specified
- * @base_rxd: True if the driver should use base descriptors instead of flex
* @max_mtu: device given max possible MTU
* @default_mac_addr: device will give a default MAC to use
* @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation
@@ -327,13 +345,7 @@ struct idpf_q_vec_rsrc {
struct idpf_vport {
struct idpf_q_vec_rsrc dflt_qv_rsrc;
u16 num_txq;
- u16 num_complq;
- u32 txq_desc_count;
- u32 complq_desc_count;
u32 compln_clean_budget;
- u16 num_txq_grp;
- struct idpf_txq_group *txq_grps;
- u32 txq_model;
struct idpf_tx_queue **txqs;
bool crc_enable;
@@ -342,14 +354,6 @@ struct idpf_vport {
u16 xdp_txq_offset;
struct bpf_prog *xdp_prog;
- u16 num_rxq;
- u16 num_bufq;
- u32 rxq_desc_count;
- u8 num_bufqs_per_qgrp;
- u32 bufq_desc_count[IDPF_MAX_BUFQS_PER_RXQ_GRP];
- u16 num_rxq_grp;
- struct idpf_rxq_group *rxq_grps;
- u32 rxq_model;
struct libeth_rx_pt *rx_ptype_lkup;
struct idpf_adapter *adapter;
@@ -359,7 +363,6 @@ struct idpf_vport {
u32 vport_id;
u16 idx;
bool default_vport;
- bool base_rxd;
u16 max_mtu;
u8 default_mac_addr[ETH_ALEN];
diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
index b205c2b4971b..0b6c5620bef1 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
@@ -29,7 +29,7 @@ static int idpf_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,
switch (cmd->cmd) {
case ETHTOOL_GRXRINGS:
- cmd->data = vport->num_rxq;
+ cmd->data = vport->dflt_qv_rsrc.num_rxq;
break;
case ETHTOOL_GRXCLSRLCNT:
cmd->rule_cnt = user_config->num_fsteer_fltrs;
@@ -596,8 +596,8 @@ static void idpf_get_ringparam(struct net_device *netdev,
ring->rx_max_pending = IDPF_MAX_RXQ_DESC;
ring->tx_max_pending = IDPF_MAX_TXQ_DESC;
- ring->rx_pending = vport->rxq_desc_count;
- ring->tx_pending = vport->txq_desc_count;
+ ring->rx_pending = vport->dflt_qv_rsrc.rxq_desc_count;
+ ring->tx_pending = vport->dflt_qv_rsrc.txq_desc_count;
kring->tcp_data_split = idpf_vport_get_hsplit(vport);
@@ -621,8 +621,9 @@ static int idpf_set_ringparam(struct net_device *netdev,
{
struct idpf_vport_user_config_data *config_data;
u32 new_rx_count, new_tx_count;
+ struct idpf_q_vec_rsrc *rsrc;
struct idpf_vport *vport;
- int i, err = 0;
+ int err = 0;
u16 idx;
idpf_vport_ctrl_lock(netdev);
@@ -656,8 +657,9 @@ static int idpf_set_ringparam(struct net_device *netdev,
netdev_info(netdev, "Requested Tx descriptor count rounded up to %u\n",
new_tx_count);
- if (new_tx_count == vport->txq_desc_count &&
- new_rx_count == vport->rxq_desc_count &&
+ rsrc = &vport->dflt_qv_rsrc;
+ if (new_tx_count == rsrc->txq_desc_count &&
+ new_rx_count == rsrc->rxq_desc_count &&
kring->tcp_data_split == idpf_vport_get_hsplit(vport))
goto unlock_mutex;
@@ -676,10 +678,10 @@ static int idpf_set_ringparam(struct net_device *netdev,
/* Since we adjusted the RX completion queue count, the RX buffer queue
* descriptor count needs to be adjusted as well
*/
- for (i = 0; i < vport->num_bufqs_per_qgrp; i++)
- vport->bufq_desc_count[i] =
+ for (u8 i = 0; i < rsrc->num_bufqs_per_qgrp; i++)
+ rsrc->bufq_desc_count[i] =
IDPF_RX_BUFQ_DESC_COUNT(new_rx_count,
- vport->num_bufqs_per_qgrp);
+ rsrc->num_bufqs_per_qgrp);
err = idpf_initiate_soft_reset(vport, IDPF_SR_Q_DESC_CHANGE);
@@ -1056,7 +1058,7 @@ static void idpf_add_port_stats(struct idpf_vport *vport, u64 **data)
static void idpf_collect_queue_stats(struct idpf_vport *vport)
{
struct idpf_port_stats *pstats = &vport->port_stats;
- int i, j;
+ struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
/* zero out port stats since they're actually tracked in per
* queue stats; this is only for reporting
@@ -1072,22 +1074,22 @@ static void idpf_collect_queue_stats(struct idpf_vport *vport)
u64_stats_set(&pstats->tx_dma_map_errs, 0);
u64_stats_update_end(&pstats->stats_sync);
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i];
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rxq_grp = &rsrc->rxq_grps[i];
u16 num_rxq;
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
num_rxq = rxq_grp->splitq.num_rxq_sets;
else
num_rxq = rxq_grp->singleq.num_rxq;
- for (j = 0; j < num_rxq; j++) {
+ for (u16 j = 0; j < num_rxq; j++) {
u64 hw_csum_err, hsplit, hsplit_hbo, bad_descs;
struct idpf_rx_queue_stats *stats;
struct idpf_rx_queue *rxq;
unsigned int start;
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
rxq = &rxq_grp->splitq.rxq_sets[j]->rxq;
else
rxq = rxq_grp->singleq.rxqs[j];
@@ -1114,10 +1116,10 @@ static void idpf_collect_queue_stats(struct idpf_vport *vport)
}
}
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+ for (u16 i = 0; i < rsrc->num_txq_grp; i++) {
+ struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i];
- for (j = 0; j < txq_grp->num_txq; j++) {
+ for (u16 j = 0; j < txq_grp->num_txq; j++) {
u64 linearize, qbusy, skb_drops, dma_map_errs;
struct idpf_tx_queue *txq = txq_grp->txqs[j];
struct idpf_tx_queue_stats *stats;
@@ -1160,9 +1162,9 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport_config *vport_config;
+ struct idpf_q_vec_rsrc *rsrc;
struct idpf_vport *vport;
unsigned int total = 0;
- unsigned int i, j;
bool is_splitq;
u16 qtype;
@@ -1180,12 +1182,13 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
idpf_collect_queue_stats(vport);
idpf_add_port_stats(vport, &data);
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+ rsrc = &vport->dflt_qv_rsrc;
+ for (u16 i = 0; i < rsrc->num_txq_grp; i++) {
+ struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i];
qtype = VIRTCHNL2_QUEUE_TYPE_TX;
- for (j = 0; j < txq_grp->num_txq; j++, total++) {
+ for (u16 j = 0; j < txq_grp->num_txq; j++, total++) {
struct idpf_tx_queue *txq = txq_grp->txqs[j];
if (!txq)
@@ -1205,10 +1208,10 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_TX);
total = 0;
- is_splitq = idpf_is_queue_model_split(vport->rxq_model);
+ is_splitq = idpf_is_queue_model_split(rsrc->rxq_model);
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i];
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rxq_grp = &rsrc->rxq_grps[i];
u16 num_rxq;
qtype = VIRTCHNL2_QUEUE_TYPE_RX;
@@ -1218,7 +1221,7 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
else
num_rxq = rxq_grp->singleq.num_rxq;
- for (j = 0; j < num_rxq; j++, total++) {
+ for (u16 j = 0; j < num_rxq; j++, total++) {
struct idpf_rx_queue *rxq;
if (is_splitq)
@@ -1250,15 +1253,16 @@ static void idpf_get_ethtool_stats(struct net_device *netdev,
static struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport,
int q_num)
{
+ const struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
int q_grp, q_idx;
- if (!idpf_is_queue_model_split(vport->rxq_model))
- return vport->rxq_grps->singleq.rxqs[q_num]->q_vector;
+ if (!idpf_is_queue_model_split(rsrc->rxq_model))
+ return rsrc->rxq_grps->singleq.rxqs[q_num]->q_vector;
q_grp = q_num / IDPF_DFLT_SPLITQ_RXQ_PER_GROUP;
q_idx = q_num % IDPF_DFLT_SPLITQ_RXQ_PER_GROUP;
- return vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq.q_vector;
+ return rsrc->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq.q_vector;
}
/**
@@ -1271,14 +1275,15 @@ static struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport,
static struct idpf_q_vector *idpf_find_txq_vec(const struct idpf_vport *vport,
int q_num)
{
+ const struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
int q_grp;
- if (!idpf_is_queue_model_split(vport->txq_model))
+ if (!idpf_is_queue_model_split(rsrc->txq_model))
return vport->txqs[q_num]->q_vector;
q_grp = q_num / IDPF_DFLT_SPLITQ_TXQ_PER_GROUP;
- return vport->txq_grps[q_grp].complq->q_vector;
+ return rsrc->txq_grps[q_grp].complq->q_vector;
}
/**
@@ -1315,7 +1320,8 @@ static int idpf_get_q_coalesce(struct net_device *netdev,
u32 q_num)
{
const struct idpf_netdev_priv *np = netdev_priv(netdev);
- const struct idpf_vport *vport;
+ struct idpf_q_vec_rsrc *rsrc;
+ struct idpf_vport *vport;
int err = 0;
idpf_vport_ctrl_lock(netdev);
@@ -1324,16 +1330,17 @@ static int idpf_get_q_coalesce(struct net_device *netdev,
if (np->state != __IDPF_VPORT_UP)
goto unlock_mutex;
- if (q_num >= vport->num_rxq && q_num >= vport->num_txq) {
+ rsrc = &vport->dflt_qv_rsrc;
+ if (q_num >= rsrc->num_rxq && q_num >= rsrc->num_txq) {
err = -EINVAL;
goto unlock_mutex;
}
- if (q_num < vport->num_rxq)
+ if (q_num < rsrc->num_rxq)
__idpf_get_q_coalesce(ec, idpf_find_rxq_vec(vport, q_num),
VIRTCHNL2_QUEUE_TYPE_RX);
- if (q_num < vport->num_txq)
+ if (q_num < rsrc->num_txq)
__idpf_get_q_coalesce(ec, idpf_find_txq_vec(vport, q_num),
VIRTCHNL2_QUEUE_TYPE_TX);
@@ -1501,8 +1508,9 @@ static int idpf_set_coalesce(struct net_device *netdev,
struct idpf_netdev_priv *np = netdev_priv(netdev);
struct idpf_vport_user_config_data *user_config;
struct idpf_q_coalesce *q_coal;
+ struct idpf_q_vec_rsrc *rsrc;
struct idpf_vport *vport;
- int i, err = 0;
+ int err = 0;
user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
@@ -1512,14 +1520,15 @@ static int idpf_set_coalesce(struct net_device *netdev,
if (np->state != __IDPF_VPORT_UP)
goto unlock_mutex;
- for (i = 0; i < vport->num_txq; i++) {
+ rsrc = &vport->dflt_qv_rsrc;
+ for (u16 i = 0; i < rsrc->num_txq; i++) {
q_coal = &user_config->q_coalesce[i];
err = idpf_set_q_coalesce(vport, q_coal, ec, i, false);
if (err)
goto unlock_mutex;
}
- for (i = 0; i < vport->num_rxq; i++) {
+ for (u16 i = 0; i < rsrc->num_rxq; i++) {
q_coal = &user_config->q_coalesce[i];
err = idpf_set_q_coalesce(vport, q_coal, ec, i, true);
if (err)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index c4ec081a9917..aee80600ea01 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -854,7 +854,7 @@ static void idpf_vport_stop(struct idpf_vport *vport)
idpf_send_disable_vport_msg(vport);
idpf_send_disable_queues_msg(vport, rsrc, chunks);
- idpf_send_map_unmap_queue_vector_msg(vport, false);
+ idpf_send_map_unmap_queue_vector_msg(vport, rsrc, false);
/* Normally we ask for queues in create_vport, but if the number of
* initially requested queues has changed, for example via ethtool
* set channels, we do delete queues and then add the queues back
@@ -867,8 +867,8 @@ static void idpf_vport_stop(struct idpf_vport *vport)
vport->link_up = false;
idpf_vport_intr_deinit(vport, rsrc);
- idpf_xdp_rxq_info_deinit_all(vport);
- idpf_vport_queues_rel(vport);
+ idpf_xdp_rxq_info_deinit_all(rsrc);
+ idpf_vport_queues_rel(vport, rsrc);
idpf_vport_intr_rel(rsrc);
np->state = __IDPF_VPORT_DOWN;
}
@@ -1014,7 +1014,7 @@ static void idpf_vport_dealloc(struct idpf_vport *vport)
*/
static bool idpf_is_hsplit_supported(const struct idpf_vport *vport)
{
- return idpf_is_queue_model_split(vport->rxq_model) &&
+ return idpf_is_queue_model_split(vport->dflt_qv_rsrc.rxq_model) &&
idpf_is_cap_ena_all(vport->adapter, IDPF_HSPLIT_CAPS,
IDPF_CAP_HSPLIT);
}
@@ -1274,9 +1274,10 @@ static void idpf_restore_features(struct idpf_vport *vport)
*/
static int idpf_set_real_num_queues(struct idpf_vport *vport)
{
- int err, txq = vport->num_txq - vport->num_xdp_txq;
+ int err, txq = vport->dflt_qv_rsrc.num_txq - vport->num_xdp_txq;
- err = netif_set_real_num_rx_queues(vport->netdev, vport->num_rxq);
+ err = netif_set_real_num_rx_queues(vport->netdev,
+ vport->dflt_qv_rsrc.num_rxq);
if (err)
return err;
@@ -1305,24 +1306,22 @@ static int idpf_up_complete(struct idpf_vport *vport)
/**
* idpf_rx_init_buf_tail - Write initial buffer ring tail value
- * @vport: virtual port struct
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_rx_init_buf_tail(struct idpf_vport *vport)
+static void idpf_rx_init_buf_tail(struct idpf_q_vec_rsrc *rsrc)
{
- int i, j;
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *grp = &rsrc->rxq_grps[i];
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *grp = &vport->rxq_grps[i];
-
- if (idpf_is_queue_model_split(vport->rxq_model)) {
- for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ if (idpf_is_queue_model_split(rsrc->rxq_model)) {
+ for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) {
const struct idpf_buf_queue *q =
&grp->splitq.bufq_sets[j].bufq;
writel(q->next_to_alloc, q->tail);
}
} else {
- for (j = 0; j < grp->singleq.num_rxq; j++) {
+ for (u16 j = 0; j < grp->singleq.num_rxq; j++) {
const struct idpf_rx_queue *q =
grp->singleq.rxqs[j];
@@ -1358,14 +1357,14 @@ static int idpf_vport_open(struct idpf_vport *vport)
return err;
}
- err = idpf_vport_queues_alloc(vport);
+ err = idpf_vport_queues_alloc(vport, rsrc);
if (err)
goto intr_rel;
vport_config = adapter->vport_config[vport->idx];
chunks = &vport_config->qid_reg_info;
- err = idpf_vport_queue_ids_init(vport, chunks);
+ err = idpf_vport_queue_ids_init(vport, rsrc, chunks);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize queue ids for vport %u: %d\n",
vport->vport_id, err);
@@ -1379,23 +1378,23 @@ static int idpf_vport_open(struct idpf_vport *vport)
goto queues_rel;
}
- err = idpf_rx_bufs_init_all(vport);
+ err = idpf_rx_bufs_init_all(vport, rsrc);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport %u: %d\n",
vport->vport_id, err);
goto queues_rel;
}
- err = idpf_queue_reg_init(vport, chunks);
+ err = idpf_queue_reg_init(vport, rsrc, chunks);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n",
vport->vport_id, err);
goto queues_rel;
}
- idpf_rx_init_buf_tail(vport);
+ idpf_rx_init_buf_tail(rsrc);
- err = idpf_xdp_rxq_info_init_all(vport);
+ err = idpf_xdp_rxq_info_init_all(rsrc);
if (err) {
netdev_err(vport->netdev,
"Failed to initialize XDP RxQ info for vport %u: %pe\n",
@@ -1405,14 +1404,14 @@ static int idpf_vport_open(struct idpf_vport *vport)
idpf_vport_intr_ena(vport, rsrc);
- err = idpf_send_config_queues_msg(vport);
+ err = idpf_send_config_queues_msg(vport, rsrc);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to configure queues for vport %u, %d\n",
vport->vport_id, err);
goto rxq_deinit;
}
- err = idpf_send_map_unmap_queue_vector_msg(vport, true);
+ err = idpf_send_map_unmap_queue_vector_msg(vport, rsrc, true);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to map queue vectors for vport %u: %d\n",
vport->vport_id, err);
@@ -1462,13 +1461,13 @@ static int idpf_vport_open(struct idpf_vport *vport)
disable_queues:
idpf_send_disable_queues_msg(vport, rsrc, chunks);
unmap_queue_vectors:
- idpf_send_map_unmap_queue_vector_msg(vport, false);
+ idpf_send_map_unmap_queue_vector_msg(vport, rsrc, false);
rxq_deinit:
- idpf_xdp_rxq_info_deinit_all(vport);
+ idpf_xdp_rxq_info_deinit_all(rsrc);
intr_deinit:
idpf_vport_intr_deinit(vport, rsrc);
queues_rel:
- idpf_vport_queues_rel(vport);
+ idpf_vport_queues_rel(vport, rsrc);
intr_rel:
idpf_vport_intr_rel(rsrc);
@@ -1879,9 +1878,11 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
enum idpf_vport_reset_cause reset_cause)
{
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+ struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
enum idpf_vport_state current_state = np->state;
struct idpf_adapter *adapter = vport->adapter;
struct idpf_vport_config *vport_config;
+ struct idpf_q_vec_rsrc *new_rsrc;
struct idpf_vport *new_vport;
int err;
@@ -1908,16 +1909,18 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
*/
memcpy(new_vport, vport, offsetof(struct idpf_vport, link_up));
+ new_rsrc = &new_vport->dflt_qv_rsrc;
+
/* Adjust resource parameters prior to reallocating resources */
switch (reset_cause) {
case IDPF_SR_Q_CHANGE:
- err = idpf_vport_adjust_qs(new_vport);
+ err = idpf_vport_adjust_qs(new_vport, new_rsrc);
if (err)
goto free_vport;
break;
case IDPF_SR_Q_DESC_CHANGE:
/* Update queue parameters before allocating resources */
- idpf_vport_calc_num_q_desc(new_vport);
+ idpf_vport_calc_num_q_desc(new_vport, new_rsrc);
break;
case IDPF_SR_MTU_CHANGE:
case IDPF_SR_RSC_CHANGE:
@@ -1944,10 +1947,10 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
* to add code to add_queues to change the vport config within
* vport itself as it will be wiped with a memcpy later.
*/
- err = idpf_send_add_queues_msg(vport, new_vport->num_txq,
- new_vport->num_complq,
- new_vport->num_rxq,
- new_vport->num_bufq);
+ err = idpf_send_add_queues_msg(vport, new_rsrc->num_txq,
+ new_rsrc->num_complq,
+ new_rsrc->num_rxq,
+ new_rsrc->num_bufq);
if (err)
goto err_reset;
@@ -1971,8 +1974,8 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
return err;
err_reset:
- idpf_send_add_queues_msg(vport, vport->num_txq, vport->num_complq,
- vport->num_rxq, vport->num_bufq);
+ idpf_send_add_queues_msg(vport, rsrc->num_txq, rsrc->num_complq,
+ rsrc->num_rxq, rsrc->num_bufq);
err_open:
if (current_state == __IDPF_VPORT_UP)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_ptp.c b/drivers/net/ethernet/intel/idpf/idpf_ptp.c
index ee21f2ff0cad..31da005f27fc 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_ptp.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_ptp.c
@@ -384,15 +384,17 @@ static int idpf_ptp_update_cached_phctime(struct idpf_adapter *adapter)
WRITE_ONCE(adapter->ptp->cached_phc_jiffies, jiffies);
idpf_for_each_vport(adapter, vport) {
+ struct idpf_q_vec_rsrc *rsrc;
bool split;
- if (!vport || !vport->rxq_grps)
+ if (!vport || !vport->dflt_qv_rsrc.rxq_grps)
continue;
- split = idpf_is_queue_model_split(vport->rxq_model);
+ rsrc = &vport->dflt_qv_rsrc;
+ split = idpf_is_queue_model_split(rsrc->rxq_model);
- for (u16 i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *grp = &vport->rxq_grps[i];
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *grp = &rsrc->rxq_grps[i];
idpf_ptp_update_phctime_rxq_grp(grp, split, systime);
}
@@ -676,9 +678,10 @@ int idpf_ptp_request_ts(struct idpf_tx_queue *tx_q, struct sk_buff *skb,
*/
static void idpf_ptp_set_rx_tstamp(struct idpf_vport *vport, int rx_filter)
{
+ struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
bool enable = true, splitq;
- splitq = idpf_is_queue_model_split(vport->rxq_model);
+ splitq = idpf_is_queue_model_split(rsrc->rxq_model);
if (rx_filter == HWTSTAMP_FILTER_NONE) {
enable = false;
@@ -687,8 +690,8 @@ static void idpf_ptp_set_rx_tstamp(struct idpf_vport *vport, int rx_filter)
vport->tstamp_config.rx_filter = HWTSTAMP_FILTER_ALL;
}
- for (u16 i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *grp = &vport->rxq_grps[i];
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *grp = &rsrc->rxq_grps[i];
struct idpf_rx_queue *rx_queue;
u16 j, num_rxq;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 7bd6a8e9ef42..228c79a2f0a0 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -176,24 +176,22 @@ static void idpf_compl_desc_rel(struct idpf_compl_queue *complq)
/**
* idpf_tx_desc_rel_all - Free Tx Resources for All Queues
- * @vport: virtual port structure
+ * @rsrc: pointer to queue and vector resources
*
* Free all transmit software resources
*/
-static void idpf_tx_desc_rel_all(struct idpf_vport *vport)
+static void idpf_tx_desc_rel_all(struct idpf_q_vec_rsrc *rsrc)
{
- int i, j;
-
- if (!vport->txq_grps)
+ if (!rsrc->txq_grps)
return;
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+ for (u16 i = 0; i < rsrc->num_txq_grp; i++) {
+ struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i];
- for (j = 0; j < txq_grp->num_txq; j++)
+ for (u16 j = 0; j < txq_grp->num_txq; j++)
idpf_tx_desc_rel(txq_grp->txqs[j]);
- if (idpf_is_queue_model_split(vport->txq_model))
+ if (idpf_is_queue_model_split(rsrc->txq_model))
idpf_compl_desc_rel(txq_grp->complq);
}
}
@@ -246,13 +244,11 @@ static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q)
/**
* idpf_tx_desc_alloc - Allocate the Tx descriptors
- * @vport: vport to allocate resources for
* @tx_q: the tx ring to set up
*
* Returns 0 on success, negative on failure
*/
-static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
- struct idpf_tx_queue *tx_q)
+static int idpf_tx_desc_alloc(struct idpf_tx_queue *tx_q)
{
struct device *dev = tx_q->dev;
int err;
@@ -288,13 +284,11 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
/**
* idpf_compl_desc_alloc - allocate completion descriptors
- * @vport: vport to allocate resources for
* @complq: completion queue to set up
*
* Return: 0 on success, -errno on failure.
*/
-static int idpf_compl_desc_alloc(const struct idpf_vport *vport,
- struct idpf_compl_queue *complq)
+static int idpf_compl_desc_alloc(struct idpf_compl_queue *complq)
{
u32 desc_size;
@@ -318,24 +312,25 @@ static int idpf_compl_desc_alloc(const struct idpf_vport *vport,
/**
* idpf_tx_desc_alloc_all - allocate all queues Tx resources
* @vport: virtual port private structure
+ * @rsrc: pointer to queue and vector resources
*
* Returns 0 on success, negative on failure
*/
-static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
+static int idpf_tx_desc_alloc_all(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
int err = 0;
- int i, j;
/* Set up buffer queues. In the single queue model, buffer queues and
* completion queues will be the same
*/
- for (i = 0; i < vport->num_txq_grp; i++) {
- for (j = 0; j < vport->txq_grps[i].num_txq; j++) {
- struct idpf_tx_queue *txq = vport->txq_grps[i].txqs[j];
+ for (u16 i = 0; i < rsrc->num_txq_grp; i++) {
+ for (u16 j = 0; j < rsrc->txq_grps[i].num_txq; j++) {
+ struct idpf_tx_queue *txq = rsrc->txq_grps[i].txqs[j];
u8 gen_bits = 0;
u16 bufidx_mask;
- err = idpf_tx_desc_alloc(vport, txq);
+ err = idpf_tx_desc_alloc(txq);
if (err) {
pci_err(vport->adapter->pdev,
"Allocation for Tx Queue %u failed\n",
@@ -343,7 +338,7 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
goto err_out;
}
- if (!idpf_is_queue_model_split(vport->txq_model) ||
+ if (!idpf_is_queue_model_split(rsrc->txq_model) ||
idpf_queue_has(XDP, txq))
continue;
@@ -373,11 +368,11 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
GETMAXVAL(txq->compl_tag_gen_s);
}
- if (!idpf_is_queue_model_split(vport->txq_model))
+ if (!idpf_is_queue_model_split(rsrc->txq_model))
continue;
/* Setup completion queues */
- err = idpf_compl_desc_alloc(vport, vport->txq_grps[i].complq);
+ err = idpf_compl_desc_alloc(rsrc->txq_grps[i].complq);
if (err) {
pci_err(vport->adapter->pdev,
"Allocation for Tx Completion Queue %u failed\n",
@@ -388,7 +383,7 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
err_out:
if (err)
- idpf_tx_desc_rel_all(vport);
+ idpf_tx_desc_rel_all(rsrc);
return err;
}
@@ -532,38 +527,39 @@ static void idpf_rx_desc_rel_bufq(struct idpf_buf_queue *bufq,
/**
* idpf_rx_desc_rel_all - Free Rx Resources for All Queues
* @vport: virtual port structure
+ * @rsrc: pointer to queue and vector resources
*
* Free all rx queues resources
*/
-static void idpf_rx_desc_rel_all(struct idpf_vport *vport)
+static void idpf_rx_desc_rel_all(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct device *dev = &vport->adapter->pdev->dev;
struct idpf_rxq_group *rx_qgrp;
u16 num_rxq;
- int i, j;
- if (!vport->rxq_grps)
+ if (!rsrc->rxq_grps)
return;
- for (i = 0; i < vport->num_rxq_grp; i++) {
- rx_qgrp = &vport->rxq_grps[i];
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ rx_qgrp = &rsrc->rxq_grps[i];
- if (!idpf_is_queue_model_split(vport->rxq_model)) {
- for (j = 0; j < rx_qgrp->singleq.num_rxq; j++)
+ if (!idpf_is_queue_model_split(rsrc->rxq_model)) {
+ for (u16 j = 0; j < rx_qgrp->singleq.num_rxq; j++)
idpf_rx_desc_rel(rx_qgrp->singleq.rxqs[j], dev,
VIRTCHNL2_QUEUE_MODEL_SINGLE);
continue;
}
num_rxq = rx_qgrp->splitq.num_rxq_sets;
- for (j = 0; j < num_rxq; j++)
+ for (u16 j = 0; j < num_rxq; j++)
idpf_rx_desc_rel(&rx_qgrp->splitq.rxq_sets[j]->rxq,
dev, VIRTCHNL2_QUEUE_MODEL_SPLIT);
if (!rx_qgrp->splitq.bufq_sets)
continue;
- for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) {
struct idpf_bufq_set *bufq_set =
&rx_qgrp->splitq.bufq_sets[j];
@@ -818,26 +814,28 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq,
/**
* idpf_rx_bufs_init_all - Initialize all RX bufs
- * @vport: virtual port struct
+ * @vport: pointer to vport struct
+ * @rsrc: pointer to queue and vector resources
*
* Returns 0 on success, negative on failure
*/
-int idpf_rx_bufs_init_all(struct idpf_vport *vport)
+int idpf_rx_bufs_init_all(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
- bool split = idpf_is_queue_model_split(vport->rxq_model);
- int i, j, err;
+ bool split = idpf_is_queue_model_split(rsrc->rxq_model);
+ int err;
- idpf_xdp_copy_prog_to_rqs(vport, vport->xdp_prog);
+ idpf_xdp_copy_prog_to_rqs(rsrc, vport->xdp_prog);
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
u32 truesize = 0;
/* Allocate bufs for the rxq itself in singleq */
if (!split) {
int num_rxq = rx_qgrp->singleq.num_rxq;
- for (j = 0; j < num_rxq; j++) {
+ for (u16 j = 0; j < num_rxq; j++) {
struct idpf_rx_queue *q;
q = rx_qgrp->singleq.rxqs[j];
@@ -850,7 +848,7 @@ int idpf_rx_bufs_init_all(struct idpf_vport *vport)
}
/* Otherwise, allocate bufs for the buffer queues */
- for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) {
enum libeth_fqe_type type;
struct idpf_buf_queue *q;
@@ -933,26 +931,28 @@ static int idpf_bufq_desc_alloc(const struct idpf_vport *vport,
/**
* idpf_rx_desc_alloc_all - allocate all RX queues resources
* @vport: virtual port structure
+ * @rsrc: pointer to queue and vector resources
*
* Returns 0 on success, negative on failure
*/
-static int idpf_rx_desc_alloc_all(struct idpf_vport *vport)
+static int idpf_rx_desc_alloc_all(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct idpf_rxq_group *rx_qgrp;
- int i, j, err;
u16 num_rxq;
+ int err;
- for (i = 0; i < vport->num_rxq_grp; i++) {
- rx_qgrp = &vport->rxq_grps[i];
- if (idpf_is_queue_model_split(vport->rxq_model))
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ rx_qgrp = &rsrc->rxq_grps[i];
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
num_rxq = rx_qgrp->splitq.num_rxq_sets;
else
num_rxq = rx_qgrp->singleq.num_rxq;
- for (j = 0; j < num_rxq; j++) {
+ for (u16 j = 0; j < num_rxq; j++) {
struct idpf_rx_queue *q;
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
else
q = rx_qgrp->singleq.rxqs[j];
@@ -966,10 +966,10 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport)
}
}
- if (!idpf_is_queue_model_split(vport->rxq_model))
+ if (!idpf_is_queue_model_split(rsrc->rxq_model))
continue;
- for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) {
struct idpf_buf_queue *q;
q = &rx_qgrp->splitq.bufq_sets[j].bufq;
@@ -987,7 +987,7 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport)
return 0;
err_out:
- idpf_rx_desc_rel_all(vport);
+ idpf_rx_desc_rel_all(vport, rsrc);
return err;
}
@@ -995,23 +995,24 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport)
/**
* idpf_txq_group_rel - Release all resources for txq groups
* @vport: vport to release txq groups on
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_txq_group_rel(struct idpf_vport *vport)
+static void idpf_txq_group_rel(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
bool split, flow_sch_en;
- int i, j;
- if (!vport->txq_grps)
+ if (!rsrc->txq_grps)
return;
- split = idpf_is_queue_model_split(vport->txq_model);
+ split = idpf_is_queue_model_split(rsrc->txq_model);
flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS,
VIRTCHNL2_CAP_SPLITQ_QSCHED);
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+ for (u16 i = 0; i < rsrc->num_txq_grp; i++) {
+ struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i];
- for (j = 0; j < txq_grp->num_txq; j++) {
+ for (u16 j = 0; j < txq_grp->num_txq; j++) {
kfree(txq_grp->txqs[j]);
txq_grp->txqs[j] = NULL;
}
@@ -1025,8 +1026,8 @@ static void idpf_txq_group_rel(struct idpf_vport *vport)
if (flow_sch_en)
kfree(txq_grp->stashes);
}
- kfree(vport->txq_grps);
- vport->txq_grps = NULL;
+ kfree(rsrc->txq_grps);
+ rsrc->txq_grps = NULL;
}
/**
@@ -1035,12 +1036,10 @@ static void idpf_txq_group_rel(struct idpf_vport *vport)
*/
static void idpf_rxq_sw_queue_rel(struct idpf_rxq_group *rx_qgrp)
{
- int i, j;
-
- for (i = 0; i < rx_qgrp->vport->num_bufqs_per_qgrp; i++) {
+ for (u8 i = 0; i < rx_qgrp->vport->dflt_qv_rsrc.num_bufqs_per_qgrp; i++) {
struct idpf_bufq_set *bufq_set = &rx_qgrp->splitq.bufq_sets[i];
- for (j = 0; j < bufq_set->num_refillqs; j++) {
+ for (u16 j = 0; j < bufq_set->num_refillqs; j++) {
kfree(bufq_set->refillqs[j].ring);
bufq_set->refillqs[j].ring = NULL;
}
@@ -1051,23 +1050,20 @@ static void idpf_rxq_sw_queue_rel(struct idpf_rxq_group *rx_qgrp)
/**
* idpf_rxq_group_rel - Release all resources for rxq groups
- * @vport: vport to release rxq groups on
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_rxq_group_rel(struct idpf_vport *vport)
+static void idpf_rxq_group_rel(struct idpf_q_vec_rsrc *rsrc)
{
- int i;
-
- if (!vport->rxq_grps)
+ if (!rsrc->rxq_grps)
return;
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
u16 num_rxq;
- int j;
- if (idpf_is_queue_model_split(vport->rxq_model)) {
+ if (idpf_is_queue_model_split(rsrc->rxq_model)) {
num_rxq = rx_qgrp->splitq.num_rxq_sets;
- for (j = 0; j < num_rxq; j++) {
+ for (u16 j = 0; j < num_rxq; j++) {
kfree(rx_qgrp->splitq.rxq_sets[j]);
rx_qgrp->splitq.rxq_sets[j] = NULL;
}
@@ -1077,41 +1073,45 @@ static void idpf_rxq_group_rel(struct idpf_vport *vport)
rx_qgrp->splitq.bufq_sets = NULL;
} else {
num_rxq = rx_qgrp->singleq.num_rxq;
- for (j = 0; j < num_rxq; j++) {
+ for (u16 j = 0; j < num_rxq; j++) {
kfree(rx_qgrp->singleq.rxqs[j]);
rx_qgrp->singleq.rxqs[j] = NULL;
}
}
}
- kfree(vport->rxq_grps);
- vport->rxq_grps = NULL;
+ kfree(rsrc->rxq_grps);
+ rsrc->rxq_grps = NULL;
}
/**
* idpf_vport_queue_grp_rel_all - Release all queue groups
* @vport: vport to release queue groups for
+ * @rsrc: pointer to queue and vector resources
*/
-static void idpf_vport_queue_grp_rel_all(struct idpf_vport *vport)
+static void idpf_vport_queue_grp_rel_all(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
- idpf_txq_group_rel(vport);
- idpf_rxq_group_rel(vport);
+ idpf_txq_group_rel(vport, rsrc);
+ idpf_rxq_group_rel(rsrc);
}
/**
* idpf_vport_queues_rel - Free memory for all queues
* @vport: virtual port
+ * @rsrc: pointer to queue and vector resources
*
* Free the memory allocated for queues associated to a vport
*/
-void idpf_vport_queues_rel(struct idpf_vport *vport)
+void idpf_vport_queues_rel(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
- idpf_xdp_copy_prog_to_rqs(vport, NULL);
+ idpf_xdp_copy_prog_to_rqs(rsrc, NULL);
- idpf_tx_desc_rel_all(vport);
- idpf_rx_desc_rel_all(vport);
+ idpf_tx_desc_rel_all(rsrc);
+ idpf_rx_desc_rel_all(vport, rsrc);
idpf_xdpsqs_put(vport);
- idpf_vport_queue_grp_rel_all(vport);
+ idpf_vport_queue_grp_rel_all(vport, rsrc);
kfree(vport->txqs);
vport->txqs = NULL;
@@ -1120,6 +1120,7 @@ void idpf_vport_queues_rel(struct idpf_vport *vport)
/**
* idpf_vport_init_fast_path_txqs - Initialize fast path txq array
* @vport: vport to init txqs on
+ * @rsrc: pointer to queue and vector resources
*
* We get a queue index from skb->queue_mapping and we need a fast way to
* dereference the queue from queue groups. This allows us to quickly pull a
@@ -1127,22 +1128,23 @@ void idpf_vport_queues_rel(struct idpf_vport *vport)
*
* Returns 0 on success, negative on failure
*/
-static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport)
+static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct idpf_ptp_vport_tx_tstamp_caps *caps = vport->tx_tstamp_caps;
struct work_struct *tstamp_task = &vport->tstamp_task;
- int i, j, k = 0;
+ int k = 0;
- vport->txqs = kcalloc(vport->num_txq, sizeof(*vport->txqs),
+ vport->txqs = kcalloc(rsrc->num_txq, sizeof(*vport->txqs),
GFP_KERNEL);
-
if (!vport->txqs)
return -ENOMEM;
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *tx_grp = &vport->txq_grps[i];
+ vport->num_txq = rsrc->num_txq;
+ for (u16 i = 0; i < rsrc->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_grp = &rsrc->txq_grps[i];
- for (j = 0; j < tx_grp->num_txq; j++, k++) {
+ for (u16 j = 0; j < tx_grp->num_txq; j++, k++) {
vport->txqs[k] = tx_grp->txqs[j];
vport->txqs[k]->idx = k;
@@ -1161,16 +1163,18 @@ static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport)
* idpf_vport_init_num_qs - Initialize number of queues
* @vport: vport to initialize queues
* @vport_msg: data to be filled into vport
+ * @rsrc: pointer to queue and vector resources
*/
void idpf_vport_init_num_qs(struct idpf_vport *vport,
- struct virtchnl2_create_vport *vport_msg)
+ struct virtchnl2_create_vport *vport_msg,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct idpf_vport_user_config_data *config_data;
u16 idx = vport->idx;
config_data = &vport->adapter->vport_config[idx]->user_config;
- vport->num_txq = le16_to_cpu(vport_msg->num_tx_q);
- vport->num_rxq = le16_to_cpu(vport_msg->num_rx_q);
+ rsrc->num_txq = le16_to_cpu(vport_msg->num_tx_q);
+ rsrc->num_rxq = le16_to_cpu(vport_msg->num_rx_q);
/* number of txqs and rxqs in config data will be zeros only in the
* driver load path and we dont update them there after
*/
@@ -1179,10 +1183,10 @@ void idpf_vport_init_num_qs(struct idpf_vport *vport,
config_data->num_req_rx_qs = le16_to_cpu(vport_msg->num_rx_q);
}
- if (idpf_is_queue_model_split(vport->txq_model))
- vport->num_complq = le16_to_cpu(vport_msg->num_tx_complq);
- if (idpf_is_queue_model_split(vport->rxq_model))
- vport->num_bufq = le16_to_cpu(vport_msg->num_rx_bufq);
+ if (idpf_is_queue_model_split(rsrc->txq_model))
+ rsrc->num_complq = le16_to_cpu(vport_msg->num_tx_complq);
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
+ rsrc->num_bufq = le16_to_cpu(vport_msg->num_rx_bufq);
vport->xdp_prog = config_data->xdp_prog;
if (idpf_xdp_enabled(vport)) {
@@ -1197,56 +1201,57 @@ void idpf_vport_init_num_qs(struct idpf_vport *vport,
}
/* Adjust number of buffer queues per Rx queue group. */
- if (!idpf_is_queue_model_split(vport->rxq_model)) {
- vport->num_bufqs_per_qgrp = 0;
+ if (!idpf_is_queue_model_split(rsrc->rxq_model)) {
+ rsrc->num_bufqs_per_qgrp = 0;
return;
}
- vport->num_bufqs_per_qgrp = IDPF_MAX_BUFQS_PER_RXQ_GRP;
+ rsrc->num_bufqs_per_qgrp = IDPF_MAX_BUFQS_PER_RXQ_GRP;
}
/**
* idpf_vport_calc_num_q_desc - Calculate number of queue descriptors
* @vport: vport to calculate queue descriptors for
+ * @rsrc: pointer to queue and vector resources
*/
-void idpf_vport_calc_num_q_desc(struct idpf_vport *vport)
+void idpf_vport_calc_num_q_desc(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct idpf_vport_user_config_data *config_data;
- int num_bufqs = vport->num_bufqs_per_qgrp;
+ u8 num_bufqs = rsrc->num_bufqs_per_qgrp;
u32 num_req_txq_desc, num_req_rxq_desc;
u16 idx = vport->idx;
- int i;
config_data = &vport->adapter->vport_config[idx]->user_config;
num_req_txq_desc = config_data->num_req_txq_desc;
num_req_rxq_desc = config_data->num_req_rxq_desc;
- vport->complq_desc_count = 0;
+ rsrc->complq_desc_count = 0;
if (num_req_txq_desc) {
- vport->txq_desc_count = num_req_txq_desc;
- if (idpf_is_queue_model_split(vport->txq_model)) {
- vport->complq_desc_count = num_req_txq_desc;
- if (vport->complq_desc_count < IDPF_MIN_TXQ_COMPLQ_DESC)
- vport->complq_desc_count =
+ rsrc->txq_desc_count = num_req_txq_desc;
+ if (idpf_is_queue_model_split(rsrc->txq_model)) {
+ rsrc->complq_desc_count = num_req_txq_desc;
+ if (rsrc->complq_desc_count < IDPF_MIN_TXQ_COMPLQ_DESC)
+ rsrc->complq_desc_count =
IDPF_MIN_TXQ_COMPLQ_DESC;
}
} else {
- vport->txq_desc_count = IDPF_DFLT_TX_Q_DESC_COUNT;
- if (idpf_is_queue_model_split(vport->txq_model))
- vport->complq_desc_count =
+ rsrc->txq_desc_count = IDPF_DFLT_TX_Q_DESC_COUNT;
+ if (idpf_is_queue_model_split(rsrc->txq_model))
+ rsrc->complq_desc_count =
IDPF_DFLT_TX_COMPLQ_DESC_COUNT;
}
if (num_req_rxq_desc)
- vport->rxq_desc_count = num_req_rxq_desc;
+ rsrc->rxq_desc_count = num_req_rxq_desc;
else
- vport->rxq_desc_count = IDPF_DFLT_RX_Q_DESC_COUNT;
+ rsrc->rxq_desc_count = IDPF_DFLT_RX_Q_DESC_COUNT;
- for (i = 0; i < num_bufqs; i++) {
- if (!vport->bufq_desc_count[i])
- vport->bufq_desc_count[i] =
- IDPF_RX_BUFQ_DESC_COUNT(vport->rxq_desc_count,
+ for (u8 i = 0; i < num_bufqs; i++) {
+ if (!rsrc->bufq_desc_count[i])
+ rsrc->bufq_desc_count[i] =
+ IDPF_RX_BUFQ_DESC_COUNT(rsrc->rxq_desc_count,
num_bufqs);
}
}
@@ -1335,54 +1340,54 @@ int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_idx,
/**
* idpf_vport_calc_num_q_groups - Calculate number of queue groups
- * @vport: vport to calculate q groups for
+ * @rsrc: pointer to queue and vector resources
*/
-void idpf_vport_calc_num_q_groups(struct idpf_vport *vport)
+void idpf_vport_calc_num_q_groups(struct idpf_q_vec_rsrc *rsrc)
{
- if (idpf_is_queue_model_split(vport->txq_model))
- vport->num_txq_grp = vport->num_txq;
+ if (idpf_is_queue_model_split(rsrc->txq_model))
+ rsrc->num_txq_grp = rsrc->num_txq;
else
- vport->num_txq_grp = IDPF_DFLT_SINGLEQ_TX_Q_GROUPS;
+ rsrc->num_txq_grp = IDPF_DFLT_SINGLEQ_TX_Q_GROUPS;
- if (idpf_is_queue_model_split(vport->rxq_model))
- vport->num_rxq_grp = vport->num_rxq;
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
+ rsrc->num_rxq_grp = rsrc->num_rxq;
else
- vport->num_rxq_grp = IDPF_DFLT_SINGLEQ_RX_Q_GROUPS;
+ rsrc->num_rxq_grp = IDPF_DFLT_SINGLEQ_RX_Q_GROUPS;
}
/**
* idpf_vport_calc_numq_per_grp - Calculate number of queues per group
- * @vport: vport to calculate queues for
+ * @rsrc: pointer to queue and vector resources
* @num_txq: return parameter for number of TX queues
* @num_rxq: return parameter for number of RX queues
*/
-static void idpf_vport_calc_numq_per_grp(struct idpf_vport *vport,
+static void idpf_vport_calc_numq_per_grp(struct idpf_q_vec_rsrc *rsrc,
u16 *num_txq, u16 *num_rxq)
{
- if (idpf_is_queue_model_split(vport->txq_model))
+ if (idpf_is_queue_model_split(rsrc->txq_model))
*num_txq = IDPF_DFLT_SPLITQ_TXQ_PER_GROUP;
else
- *num_txq = vport->num_txq;
+ *num_txq = rsrc->num_txq;
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
*num_rxq = IDPF_DFLT_SPLITQ_RXQ_PER_GROUP;
else
- *num_rxq = vport->num_rxq;
+ *num_rxq = rsrc->num_rxq;
}
/**
* idpf_rxq_set_descids - set the descids supported by this queue
- * @vport: virtual port data structure
+ * @rsrc: pointer to queue and vector resources
* @q: rx queue for which descids are set
*
*/
-static void idpf_rxq_set_descids(const struct idpf_vport *vport,
+static void idpf_rxq_set_descids(struct idpf_q_vec_rsrc *rsrc,
struct idpf_rx_queue *q)
{
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
return;
- if (vport->base_rxd)
+ if (rsrc->base_rxd)
q->rxdids = VIRTCHNL2_RXDID_1_32B_BASE_M;
else
q->rxdids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
@@ -1391,34 +1396,35 @@ static void idpf_rxq_set_descids(const struct idpf_vport *vport,
/**
* idpf_txq_group_alloc - Allocate all txq group resources
* @vport: vport to allocate txq groups for
+ * @rsrc: pointer to queue and vector resources
* @num_txq: number of txqs to allocate for each group
*
* Returns 0 on success, negative on failure
*/
-static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
+static int idpf_txq_group_alloc(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
+ u16 num_txq)
{
bool split, flow_sch_en;
- int i;
- vport->txq_grps = kcalloc(vport->num_txq_grp,
- sizeof(*vport->txq_grps), GFP_KERNEL);
- if (!vport->txq_grps)
+ rsrc->txq_grps = kcalloc(rsrc->num_txq_grp,
+ sizeof(*rsrc->txq_grps), GFP_KERNEL);
+ if (!rsrc->txq_grps)
return -ENOMEM;
- split = idpf_is_queue_model_split(vport->txq_model);
+ split = idpf_is_queue_model_split(rsrc->txq_model);
flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS,
VIRTCHNL2_CAP_SPLITQ_QSCHED);
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ for (u16 i = 0; i < rsrc->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i];
struct idpf_adapter *adapter = vport->adapter;
struct idpf_txq_stash *stashes;
- int j;
tx_qgrp->vport = vport;
tx_qgrp->num_txq = num_txq;
- for (j = 0; j < tx_qgrp->num_txq; j++) {
+ for (u16 j = 0; j < tx_qgrp->num_txq; j++) {
tx_qgrp->txqs[j] = kzalloc(sizeof(*tx_qgrp->txqs[j]),
GFP_KERNEL);
if (!tx_qgrp->txqs[j])
@@ -1434,11 +1440,11 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
tx_qgrp->stashes = stashes;
}
- for (j = 0; j < tx_qgrp->num_txq; j++) {
+ for (u16 j = 0; j < tx_qgrp->num_txq; j++) {
struct idpf_tx_queue *q = tx_qgrp->txqs[j];
q->dev = &adapter->pdev->dev;
- q->desc_count = vport->txq_desc_count;
+ q->desc_count = rsrc->txq_desc_count;
q->tx_max_bufs = idpf_get_max_tx_bufs(adapter);
q->tx_min_pkt_len = idpf_get_min_tx_pkt_len(adapter);
q->netdev = vport->netdev;
@@ -1470,7 +1476,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
if (!tx_qgrp->complq)
goto err_alloc;
- tx_qgrp->complq->desc_count = vport->complq_desc_count;
+ tx_qgrp->complq->desc_count = rsrc->complq_desc_count;
tx_qgrp->complq->txq_grp = tx_qgrp;
tx_qgrp->complq->netdev = vport->netdev;
tx_qgrp->complq->clean_budget = vport->compln_clean_budget;
@@ -1482,7 +1488,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
return 0;
err_alloc:
- idpf_txq_group_rel(vport);
+ idpf_txq_group_rel(vport, rsrc);
return -ENOMEM;
}
@@ -1490,30 +1496,32 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
/**
* idpf_rxq_group_alloc - Allocate all rxq group resources
* @vport: vport to allocate rxq groups for
+ * @rsrc: pointer to queue and vector resources
* @num_rxq: number of rxqs to allocate for each group
*
* Returns 0 on success, negative on failure
*/
-static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
+static int idpf_rxq_group_alloc(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
+ u16 num_rxq)
{
- int i, k, err = 0;
+ int k, err = 0;
bool hs;
- vport->rxq_grps = kcalloc(vport->num_rxq_grp,
- sizeof(struct idpf_rxq_group), GFP_KERNEL);
- if (!vport->rxq_grps)
+ rsrc->rxq_grps = kcalloc(rsrc->num_rxq_grp,
+ sizeof(struct idpf_rxq_group), GFP_KERNEL);
+ if (!rsrc->rxq_grps)
return -ENOMEM;
hs = idpf_vport_get_hsplit(vport) == ETHTOOL_TCP_DATA_SPLIT_ENABLED;
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
- int j;
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
rx_qgrp->vport = vport;
- if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ if (!idpf_is_queue_model_split(rsrc->rxq_model)) {
rx_qgrp->singleq.num_rxq = num_rxq;
- for (j = 0; j < num_rxq; j++) {
+ for (u16 j = 0; j < num_rxq; j++) {
rx_qgrp->singleq.rxqs[j] =
kzalloc(sizeof(*rx_qgrp->singleq.rxqs[j]),
GFP_KERNEL);
@@ -1526,7 +1534,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
}
rx_qgrp->splitq.num_rxq_sets = num_rxq;
- for (j = 0; j < num_rxq; j++) {
+ for (u16 j = 0; j < num_rxq; j++) {
rx_qgrp->splitq.rxq_sets[j] =
kzalloc(sizeof(struct idpf_rxq_set),
GFP_KERNEL);
@@ -1536,7 +1544,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
}
}
- rx_qgrp->splitq.bufq_sets = kcalloc(vport->num_bufqs_per_qgrp,
+ rx_qgrp->splitq.bufq_sets = kcalloc(rsrc->num_bufqs_per_qgrp,
sizeof(struct idpf_bufq_set),
GFP_KERNEL);
if (!rx_qgrp->splitq.bufq_sets) {
@@ -1544,14 +1552,14 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
goto err_alloc;
}
- for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) {
struct idpf_bufq_set *bufq_set =
&rx_qgrp->splitq.bufq_sets[j];
int swq_size = sizeof(struct idpf_sw_queue);
struct idpf_buf_queue *q;
q = &rx_qgrp->splitq.bufq_sets[j].bufq;
- q->desc_count = vport->bufq_desc_count[j];
+ q->desc_count = rsrc->bufq_desc_count[j];
q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK;
idpf_queue_assign(HSPLIT_EN, q, hs);
@@ -1568,7 +1576,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
&bufq_set->refillqs[k];
refillq->desc_count =
- vport->bufq_desc_count[j];
+ rsrc->bufq_desc_count[j];
idpf_queue_set(GEN_CHK, refillq);
idpf_queue_set(RFL_GEN_CHK, refillq);
refillq->ring = kcalloc(refillq->desc_count,
@@ -1582,37 +1590,37 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
}
skip_splitq_rx_init:
- for (j = 0; j < num_rxq; j++) {
+ for (u16 j = 0; j < num_rxq; j++) {
struct idpf_rx_queue *q;
- if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ if (!idpf_is_queue_model_split(rsrc->rxq_model)) {
q = rx_qgrp->singleq.rxqs[j];
goto setup_rxq;
}
q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
rx_qgrp->splitq.rxq_sets[j]->refillq[0] =
&rx_qgrp->splitq.bufq_sets[0].refillqs[j];
- if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP)
+ if (rsrc->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP)
rx_qgrp->splitq.rxq_sets[j]->refillq[1] =
&rx_qgrp->splitq.bufq_sets[1].refillqs[j];
idpf_queue_assign(HSPLIT_EN, q, hs);
setup_rxq:
- q->desc_count = vport->rxq_desc_count;
+ q->desc_count = rsrc->rxq_desc_count;
q->rx_ptype_lkup = vport->rx_ptype_lkup;
q->bufq_sets = rx_qgrp->splitq.bufq_sets;
q->idx = (i * num_rxq) + j;
q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK;
q->rx_max_pkt_size = vport->netdev->mtu +
LIBETH_RX_LL_LEN;
- idpf_rxq_set_descids(vport, q);
+ idpf_rxq_set_descids(rsrc, q);
}
}
err_alloc:
if (err)
- idpf_rxq_group_rel(vport);
+ idpf_rxq_group_rel(rsrc);
return err;
}
@@ -1620,28 +1628,30 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
/**
* idpf_vport_queue_grp_alloc_all - Allocate all queue groups/resources
* @vport: vport with qgrps to allocate
+ * @rsrc: pointer to queue and vector resources
*
* Returns 0 on success, negative on failure
*/
-static int idpf_vport_queue_grp_alloc_all(struct idpf_vport *vport)
+static int idpf_vport_queue_grp_alloc_all(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
u16 num_txq, num_rxq;
int err;
- idpf_vport_calc_numq_per_grp(vport, &num_txq, &num_rxq);
+ idpf_vport_calc_numq_per_grp(rsrc, &num_txq, &num_rxq);
- err = idpf_txq_group_alloc(vport, num_txq);
+ err = idpf_txq_group_alloc(vport, rsrc, num_txq);
if (err)
goto err_out;
- err = idpf_rxq_group_alloc(vport, num_rxq);
+ err = idpf_rxq_group_alloc(vport, rsrc, num_rxq);
if (err)
goto err_out;
return 0;
err_out:
- idpf_vport_queue_grp_rel_all(vport);
+ idpf_vport_queue_grp_rel_all(vport, rsrc);
return err;
}
@@ -1649,19 +1659,21 @@ static int idpf_vport_queue_grp_alloc_all(struct idpf_vport *vport)
/**
* idpf_vport_queues_alloc - Allocate memory for all queues
* @vport: virtual port
+ * @rsrc: pointer to queue and vector resources
*
* Allocate memory for queues associated with a vport. Returns 0 on success,
* negative on failure.
*/
-int idpf_vport_queues_alloc(struct idpf_vport *vport)
+int idpf_vport_queues_alloc(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
int err;
- err = idpf_vport_queue_grp_alloc_all(vport);
+ err = idpf_vport_queue_grp_alloc_all(vport, rsrc);
if (err)
goto err_out;
- err = idpf_vport_init_fast_path_txqs(vport);
+ err = idpf_vport_init_fast_path_txqs(vport, rsrc);
if (err)
goto err_out;
@@ -1669,18 +1681,18 @@ int idpf_vport_queues_alloc(struct idpf_vport *vport)
if (err)
goto err_out;
- err = idpf_tx_desc_alloc_all(vport);
+ err = idpf_tx_desc_alloc_all(vport, rsrc);
if (err)
goto err_out;
- err = idpf_rx_desc_alloc_all(vport);
+ err = idpf_rx_desc_alloc_all(vport, rsrc);
if (err)
goto err_out;
return 0;
err_out:
- idpf_vport_queues_rel(vport);
+ idpf_vport_queues_rel(vport, rsrc);
return err;
}
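To illustrate the resulting call pattern, here is a minimal caller
sketch (hypothetical helper, not part of this patch; only the function
signatures are taken from this series): config-path helpers now take an
explicit resource group, so today's callers pass the vport's default
group, while a future user such as the PTP secondary mailbox could pass
a separately allocated one.

/* Hypothetical sketch of a caller after this refactor; the helper
 * itself is illustrative, the called APIs are from this series.
 */
static int example_cfg_default_queues(struct idpf_vport *vport)
{
        struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
        int err;

        err = idpf_vport_queues_alloc(vport, rsrc);
        if (err)
                return err;

        /* send the queue context over the mailbox */
        err = idpf_send_config_queues_msg(vport, rsrc);
        if (err)
                idpf_vport_queues_rel(vport, rsrc);

        return err;
}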
@@ -3052,7 +3064,7 @@ netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev)
return NETDEV_TX_OK;
}
- if (idpf_is_queue_model_split(vport->txq_model))
+ if (idpf_is_queue_model_split(vport->dflt_qv_rsrc.txq_model))
return idpf_tx_splitq_frame(skb, tx_q);
else
return idpf_tx_singleq_frame(skb, tx_q);
@@ -4229,19 +4241,19 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc)
{
- u16 num_txq_grp = vport->num_txq_grp - vport->num_xdp_txq;
- bool split = idpf_is_queue_model_split(vport->rxq_model);
+ u16 num_txq_grp = rsrc->num_txq_grp - vport->num_xdp_txq;
+ bool split = idpf_is_queue_model_split(rsrc->rxq_model);
struct idpf_rxq_group *rx_qgrp;
struct idpf_txq_group *tx_qgrp;
u32 i, qv_idx, q_index;
- for (i = 0, qv_idx = 0; i < vport->num_rxq_grp; i++) {
+ for (i = 0, qv_idx = 0; i < rsrc->num_rxq_grp; i++) {
u16 num_rxq;
if (qv_idx >= rsrc->num_q_vectors)
qv_idx = 0;
- rx_qgrp = &vport->rxq_grps[i];
+ rx_qgrp = &rsrc->rxq_grps[i];
if (split)
num_rxq = rx_qgrp->splitq.num_rxq_sets;
else
@@ -4264,7 +4276,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport,
}
if (split) {
- for (u32 j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ for (u32 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) {
struct idpf_buf_queue *bufq;
bufq = &rx_qgrp->splitq.bufq_sets[j].bufq;
@@ -4278,7 +4290,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport,
qv_idx++;
}
- split = idpf_is_queue_model_split(vport->txq_model);
+ split = idpf_is_queue_model_split(rsrc->txq_model);
for (i = 0, qv_idx = 0; i < num_txq_grp; i++) {
u16 num_txq;
@@ -4286,7 +4298,7 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport,
if (qv_idx >= rsrc->num_q_vectors)
qv_idx = 0;
- tx_qgrp = &vport->txq_grps[i];
+ tx_qgrp = &rsrc->txq_grps[i];
num_txq = tx_qgrp->num_txq;
for (u32 j = 0; j < num_txq; j++) {
@@ -4362,7 +4374,7 @@ static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport,
int irq_num;
u16 qv_idx;
- if (idpf_is_queue_model_split(vport->txq_model))
+ if (idpf_is_queue_model_split(rsrc->txq_model))
napi_poll = idpf_vport_splitq_napi_poll;
else
napi_poll = idpf_vport_singleq_napi_poll;
@@ -4404,14 +4416,14 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport,
if (!rsrc->q_vectors)
return -ENOMEM;
- txqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp,
+ txqs_per_vector = DIV_ROUND_UP(rsrc->num_txq_grp,
rsrc->num_q_vectors);
- rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq_grp,
+ rxqs_per_vector = DIV_ROUND_UP(rsrc->num_rxq_grp,
rsrc->num_q_vectors);
- bufqs_per_vector = vport->num_bufqs_per_qgrp *
- DIV_ROUND_UP(vport->num_rxq_grp,
+ bufqs_per_vector = rsrc->num_bufqs_per_qgrp *
+ DIV_ROUND_UP(rsrc->num_rxq_grp,
rsrc->num_q_vectors);
- complqs_per_vector = DIV_ROUND_UP(vport->num_txq_grp,
+ complqs_per_vector = DIV_ROUND_UP(rsrc->num_txq_grp,
rsrc->num_q_vectors);
for (u16 v_idx = 0; v_idx < rsrc->num_q_vectors; v_idx++) {
@@ -4437,7 +4449,7 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport,
if (!q_vector->rx)
goto error;
- if (!idpf_is_queue_model_split(vport->rxq_model))
+ if (!idpf_is_queue_model_split(rsrc->rxq_model))
continue;
q_vector->bufq = kcalloc(bufqs_per_vector,
@@ -4524,8 +4536,8 @@ int idpf_config_rss(struct idpf_vport *vport)
*/
static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport)
{
+ u16 num_active_rxq = vport->dflt_qv_rsrc.num_rxq;
struct idpf_adapter *adapter = vport->adapter;
- u16 num_active_rxq = vport->num_rxq;
struct idpf_rss_data *rss_data;
int i;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 675d9e1ed561..8407939cc625 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -1068,14 +1068,18 @@ static inline void idpf_vport_intr_set_wb_on_itr(struct idpf_q_vector *q_vector)
int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget);
void idpf_vport_init_num_qs(struct idpf_vport *vport,
- struct virtchnl2_create_vport *vport_msg);
-void idpf_vport_calc_num_q_desc(struct idpf_vport *vport);
+ struct virtchnl2_create_vport *vport_msg,
+ struct idpf_q_vec_rsrc *rsrc);
+void idpf_vport_calc_num_q_desc(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_index,
struct virtchnl2_create_vport *vport_msg,
struct idpf_vport_max_q *max_q);
-void idpf_vport_calc_num_q_groups(struct idpf_vport *vport);
-int idpf_vport_queues_alloc(struct idpf_vport *vport);
-void idpf_vport_queues_rel(struct idpf_vport *vport);
+void idpf_vport_calc_num_q_groups(struct idpf_q_vec_rsrc *rsrc);
+int idpf_vport_queues_alloc(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
+void idpf_vport_queues_rel(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
void idpf_vport_intr_rel(struct idpf_q_vec_rsrc *rsrc);
int idpf_vport_intr_alloc(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc);
@@ -1089,7 +1093,8 @@ void idpf_vport_intr_ena(struct idpf_vport *vport,
int idpf_config_rss(struct idpf_vport *vport);
int idpf_init_rss(struct idpf_vport *vport);
void idpf_deinit_rss(struct idpf_vport *vport);
-int idpf_rx_bufs_init_all(struct idpf_vport *vport);
+int idpf_rx_bufs_init_all(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
bool xmit_more);
unsigned int idpf_size_to_txd_count(unsigned int size);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 65516fb77b20..47b00ecd7324 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -1145,13 +1145,15 @@ static int idpf_vport_get_q_reg(u32 *reg_vals, int num_regs, u32 q_type,
/**
* __idpf_queue_reg_init - initialize queue registers
* @vport: virtual port structure
+ * @rsrc: pointer to queue and vector resources
* @reg_vals: registers we are initializing
* @num_regs: how many registers there are in total
* @q_type: queue model
*
* Return number of queues that are initialized
*/
-static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals,
+static int __idpf_queue_reg_init(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc, u32 *reg_vals,
int num_regs, u32 q_type)
{
struct idpf_adapter *adapter = vport->adapter;
@@ -1159,8 +1161,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals,
switch (q_type) {
case VIRTCHNL2_QUEUE_TYPE_TX:
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ for (i = 0; i < rsrc->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i];
for (j = 0; j < tx_qgrp->num_txq && k < num_regs; j++, k++)
tx_qgrp->txqs[j]->tail =
@@ -1168,8 +1170,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals,
}
break;
case VIRTCHNL2_QUEUE_TYPE_RX:
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ for (i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
u16 num_rxq = rx_qgrp->singleq.num_rxq;
for (j = 0; j < num_rxq && k < num_regs; j++, k++) {
@@ -1182,9 +1184,9 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals,
}
break;
case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
- u8 num_bufqs = vport->num_bufqs_per_qgrp;
+ for (i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
+ u8 num_bufqs = rsrc->num_bufqs_per_qgrp;
for (j = 0; j < num_bufqs && k < num_regs; j++, k++) {
struct idpf_buf_queue *q;
@@ -1205,11 +1207,13 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals,
/**
* idpf_queue_reg_init - initialize queue registers
* @vport: virtual port structure
+ * @rsrc: pointer to queue and vector resources
* @chunks: queue registers received over mailbox
*
* Return 0 on success, negative on failure
*/
int idpf_queue_reg_init(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
struct idpf_queue_id_reg_info *chunks)
{
int num_regs, ret = 0;
@@ -1224,14 +1228,14 @@ int idpf_queue_reg_init(struct idpf_vport *vport,
num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q,
VIRTCHNL2_QUEUE_TYPE_TX,
chunks);
- if (num_regs < vport->num_txq) {
+ if (num_regs < rsrc->num_txq) {
ret = -EINVAL;
goto free_reg_vals;
}
- num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs,
+ num_regs = __idpf_queue_reg_init(vport, rsrc, reg_vals, num_regs,
VIRTCHNL2_QUEUE_TYPE_TX);
- if (num_regs < vport->num_txq) {
+ if (num_regs < rsrc->num_txq) {
ret = -EINVAL;
goto free_reg_vals;
}
@@ -1239,18 +1243,18 @@ int idpf_queue_reg_init(struct idpf_vport *vport,
/* Initialize Rx/buffer queue tail register address based on Rx queue
* model
*/
- if (idpf_is_queue_model_split(vport->rxq_model)) {
+ if (idpf_is_queue_model_split(rsrc->rxq_model)) {
num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q,
VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
chunks);
- if (num_regs < vport->num_bufq) {
+ if (num_regs < rsrc->num_bufq) {
ret = -EINVAL;
goto free_reg_vals;
}
- num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs,
+ num_regs = __idpf_queue_reg_init(vport, rsrc, reg_vals, num_regs,
VIRTCHNL2_QUEUE_TYPE_RX_BUFFER);
- if (num_regs < vport->num_bufq) {
+ if (num_regs < rsrc->num_bufq) {
ret = -EINVAL;
goto free_reg_vals;
}
@@ -1258,14 +1262,14 @@ int idpf_queue_reg_init(struct idpf_vport *vport,
num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q,
VIRTCHNL2_QUEUE_TYPE_RX,
chunks);
- if (num_regs < vport->num_rxq) {
+ if (num_regs < rsrc->num_rxq) {
ret = -EINVAL;
goto free_reg_vals;
}
- num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs,
+ num_regs = __idpf_queue_reg_init(vport, rsrc, reg_vals, num_regs,
VIRTCHNL2_QUEUE_TYPE_RX);
- if (num_regs < vport->num_rxq) {
+ if (num_regs < rsrc->num_rxq) {
ret = -EINVAL;
goto free_reg_vals;
}
@@ -1364,6 +1368,7 @@ int idpf_send_create_vport_msg(struct idpf_adapter *adapter,
*/
int idpf_check_supported_desc_ids(struct idpf_vport *vport)
{
+ struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
struct idpf_adapter *adapter = vport->adapter;
struct virtchnl2_create_vport *vport_msg;
u64 rx_desc_ids, tx_desc_ids;
@@ -1380,17 +1385,17 @@ int idpf_check_supported_desc_ids(struct idpf_vport *vport)
rx_desc_ids = le64_to_cpu(vport_msg->rx_desc_ids);
tx_desc_ids = le64_to_cpu(vport_msg->tx_desc_ids);
- if (idpf_is_queue_model_split(vport->rxq_model)) {
+ if (idpf_is_queue_model_split(rsrc->rxq_model)) {
if (!(rx_desc_ids & VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M)) {
dev_info(&adapter->pdev->dev, "Minimum RX descriptor support not provided, using the default\n");
vport_msg->rx_desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M);
}
} else {
if (!(rx_desc_ids & VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M))
- vport->base_rxd = true;
+ rsrc->base_rxd = true;
}
- if (!idpf_is_queue_model_split(vport->txq_model))
+ if (!idpf_is_queue_model_split(rsrc->txq_model))
return 0;
if ((tx_desc_ids & MIN_SUPPORT_TXDID) != MIN_SUPPORT_TXDID) {
@@ -1476,11 +1481,13 @@ int idpf_send_disable_vport_msg(struct idpf_vport *vport)
/**
* idpf_send_config_tx_queues_msg - Send virtchnl config tx queues message
* @vport: virtual port data structure
+ * @rsrc: pointer to queue and vector resources
*
* Send config tx queues virtchnl message. Returns 0 on success, negative on
* failure.
*/
-static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
+static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct virtchnl2_config_tx_queues *ctq __free(kfree) = NULL;
struct virtchnl2_txq_info *qi __free(kfree) = NULL;
@@ -1488,30 +1495,30 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
u32 config_sz, chunk_sz, buf_sz;
int totqs, num_msgs, num_chunks;
ssize_t reply_sz;
- int i, k = 0;
+ int k = 0;
- totqs = vport->num_txq + vport->num_complq;
+ totqs = rsrc->num_txq + rsrc->num_complq;
qi = kcalloc(totqs, sizeof(struct virtchnl2_txq_info), GFP_KERNEL);
if (!qi)
return -ENOMEM;
/* Populate the queue info buffer with all queue context info */
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
- int j, sched_mode;
+ for (u16 i = 0; i < rsrc->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i];
+ int sched_mode;
- for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
+ for (u16 j = 0; j < tx_qgrp->num_txq; j++, k++) {
qi[k].queue_id =
cpu_to_le32(tx_qgrp->txqs[j]->q_id);
qi[k].model =
- cpu_to_le16(vport->txq_model);
+ cpu_to_le16(rsrc->txq_model);
qi[k].type =
cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX);
qi[k].ring_len =
cpu_to_le16(tx_qgrp->txqs[j]->desc_count);
qi[k].dma_ring_addr =
cpu_to_le64(tx_qgrp->txqs[j]->dma);
- if (idpf_is_queue_model_split(vport->txq_model)) {
+ if (idpf_is_queue_model_split(rsrc->txq_model)) {
struct idpf_tx_queue *q = tx_qgrp->txqs[j];
qi[k].tx_compl_queue_id =
@@ -1530,11 +1537,11 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
}
}
- if (!idpf_is_queue_model_split(vport->txq_model))
+ if (!idpf_is_queue_model_split(rsrc->txq_model))
continue;
qi[k].queue_id = cpu_to_le32(tx_qgrp->complq->q_id);
- qi[k].model = cpu_to_le16(vport->txq_model);
+ qi[k].model = cpu_to_le16(rsrc->txq_model);
qi[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION);
qi[k].ring_len = cpu_to_le16(tx_qgrp->complq->desc_count);
qi[k].dma_ring_addr = cpu_to_le64(tx_qgrp->complq->dma);
@@ -1570,7 +1577,7 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
xn_params.vc_op = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
- for (i = 0, k = 0; i < num_msgs; i++) {
+ for (u16 i = 0, k = 0; i < num_msgs; i++) {
memset(ctq, 0, buf_sz);
ctq->vport_id = cpu_to_le32(vport->vport_id);
ctq->num_qinfo = cpu_to_le16(num_chunks);
@@ -1595,11 +1602,13 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
/**
* idpf_send_config_rx_queues_msg - Send virtchnl config rx queues message
* @vport: virtual port data structure
+ * @rsrc: pointer to queue and vector resources
*
* Send config rx queues virtchnl message. Returns 0 on success, negative on
* failure.
*/
-static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
+static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
struct virtchnl2_config_rx_queues *crq __free(kfree) = NULL;
struct virtchnl2_rxq_info *qi __free(kfree) = NULL;
@@ -1607,28 +1616,27 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
u32 config_sz, chunk_sz, buf_sz;
int totqs, num_msgs, num_chunks;
ssize_t reply_sz;
- int i, k = 0;
+ int k = 0;
- totqs = vport->num_rxq + vport->num_bufq;
+ totqs = rsrc->num_rxq + rsrc->num_bufq;
qi = kcalloc(totqs, sizeof(struct virtchnl2_rxq_info), GFP_KERNEL);
if (!qi)
return -ENOMEM;
/* Populate the queue info buffer with all queue context info */
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
u16 num_rxq;
- int j;
- if (!idpf_is_queue_model_split(vport->rxq_model))
+ if (!idpf_is_queue_model_split(rsrc->rxq_model))
goto setup_rxqs;
- for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) {
+ for (u8 j = 0; j < rsrc->num_bufqs_per_qgrp; j++, k++) {
struct idpf_buf_queue *bufq =
&rx_qgrp->splitq.bufq_sets[j].bufq;
qi[k].queue_id = cpu_to_le32(bufq->q_id);
- qi[k].model = cpu_to_le16(vport->rxq_model);
+ qi[k].model = cpu_to_le16(rsrc->rxq_model);
qi[k].type =
cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX_BUFFER);
qi[k].desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M);
@@ -1643,17 +1651,17 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
}
setup_rxqs:
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
num_rxq = rx_qgrp->splitq.num_rxq_sets;
else
num_rxq = rx_qgrp->singleq.num_rxq;
- for (j = 0; j < num_rxq; j++, k++) {
+ for (u16 j = 0; j < num_rxq; j++, k++) {
const struct idpf_bufq_set *sets;
struct idpf_rx_queue *rxq;
u32 rxdids;
- if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ if (!idpf_is_queue_model_split(rsrc->rxq_model)) {
rxq = rx_qgrp->singleq.rxqs[j];
rxdids = rxq->rxdids;
@@ -1670,7 +1678,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
rxq->rx_buf_size = sets[0].bufq.rx_buf_size;
qi[k].rx_bufq1_id = cpu_to_le16(sets[0].bufq.q_id);
- if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) {
+ if (rsrc->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) {
qi[k].bufq2_ena = IDPF_BUFQ2_ENA;
qi[k].rx_bufq2_id =
cpu_to_le16(sets[1].bufq.q_id);
@@ -1693,7 +1701,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
common_qi_fields:
qi[k].queue_id = cpu_to_le32(rxq->q_id);
- qi[k].model = cpu_to_le16(vport->rxq_model);
+ qi[k].model = cpu_to_le16(rsrc->rxq_model);
qi[k].type = cpu_to_le32(VIRTCHNL2_QUEUE_TYPE_RX);
qi[k].ring_len = cpu_to_le16(rxq->desc_count);
qi[k].dma_ring_addr = cpu_to_le64(rxq->dma);
@@ -1727,7 +1735,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
xn_params.vc_op = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
- for (i = 0, k = 0; i < num_msgs; i++) {
+ for (u16 i = 0, k = 0; i < num_msgs; i++) {
memset(crq, 0, buf_sz);
crq->vport_id = cpu_to_le32(vport->vport_id);
crq->num_qinfo = cpu_to_le16(num_chunks);
@@ -1799,33 +1807,35 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport,
* idpf_send_map_unmap_queue_vector_msg - Send virtchnl map or unmap queue
* vector message
* @vport: virtual port data structure
+ * @rsrc: pointer to queue and vector resources
* @map: true for map and false for unmap
*
* Send map or unmap queue vector virtchnl message. Returns 0 on success,
* negative on failure.
*/
-int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
+int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
+ bool map)
{
struct virtchnl2_queue_vector_maps *vqvm __free(kfree) = NULL;
struct virtchnl2_queue_vector *vqv __free(kfree) = NULL;
- struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
struct idpf_vc_xn_params xn_params = {};
u32 config_sz, chunk_sz, buf_sz;
u32 num_msgs, num_chunks, num_q;
ssize_t reply_sz;
- int i, j, k = 0;
+ int k = 0;
- num_q = vport->num_txq + vport->num_rxq;
+ num_q = rsrc->num_txq + rsrc->num_rxq;
buf_sz = sizeof(struct virtchnl2_queue_vector) * num_q;
vqv = kzalloc(buf_sz, GFP_KERNEL);
if (!vqv)
return -ENOMEM;
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ for (u16 i = 0; i < rsrc->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i];
- for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
+ for (u16 j = 0; j < tx_qgrp->num_txq; j++, k++) {
const struct idpf_tx_queue *txq = tx_qgrp->txqs[j];
const struct idpf_q_vector *vec;
u32 v_idx, tx_itr_idx;
@@ -1838,7 +1848,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
vec = NULL;
else if (idpf_queue_has(XDP, txq))
vec = txq->complq->q_vector;
- else if (idpf_is_queue_model_split(vport->txq_model))
+ else if (idpf_is_queue_model_split(rsrc->txq_model))
vec = txq->txq_grp->complq->q_vector;
else
vec = txq->q_vector;
@@ -1856,20 +1866,20 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
}
}
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ for (u16 i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
u16 num_rxq;
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
num_rxq = rx_qgrp->splitq.num_rxq_sets;
else
num_rxq = rx_qgrp->singleq.num_rxq;
- for (j = 0; j < num_rxq; j++, k++) {
+ for (u16 j = 0; j < num_rxq; j++, k++) {
struct idpf_rx_queue *rxq;
u32 v_idx, rx_itr_idx;
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq;
else
rxq = rx_qgrp->singleq.rxqs[j];
@@ -1915,7 +1925,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
xn_params.timeout_ms = IDPF_VC_XN_MIN_TIMEOUT_MSEC;
}
- for (i = 0, k = 0; i < num_msgs; i++) {
+ for (u16 i = 0, k = 0; i < num_msgs; i++) {
memset(vqvm, 0, buf_sz);
xn_params.send_buf.iov_base = vqvm;
xn_params.send_buf.iov_len = buf_sz;
@@ -2015,19 +2025,21 @@ int idpf_send_delete_queues_msg(struct idpf_vport *vport,
/**
* idpf_send_config_queues_msg - Send config queues virtchnl message
* @vport: Virtual port private data structure
+ * @rsrc: pointer to queue and vector resources
*
* Will send config queues virtchnl message. Returns 0 on success, negative on
* failure.
*/
-int idpf_send_config_queues_msg(struct idpf_vport *vport)
+int idpf_send_config_queues_msg(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc)
{
int err;
- err = idpf_send_config_tx_queues_msg(vport);
+ err = idpf_send_config_tx_queues_msg(vport, rsrc);
if (err)
return err;
- return idpf_send_config_rx_queues_msg(vport);
+ return idpf_send_config_rx_queues_msg(vport, rsrc);
}
/**
@@ -2482,12 +2494,14 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
struct idpf_vc_xn_params xn_params = {};
u16 next_ptype_id = 0;
ssize_t reply_sz;
+ bool is_splitq;
int i, j, k;
if (vport->rx_ptype_lkup)
return 0;
- if (idpf_is_queue_model_split(vport->rxq_model))
+ is_splitq = idpf_is_queue_model_split(vport->dflt_qv_rsrc.rxq_model);
+ if (is_splitq)
max_ptype = IDPF_RX_MAX_PTYPE;
else
max_ptype = IDPF_RX_MAX_BASE_PTYPE;
@@ -2551,7 +2565,7 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
IDPF_INVALID_PTYPE_ID)
goto out;
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (is_splitq)
k = le16_to_cpu(ptype->ptype_id_10);
else
k = ptype->ptype_id_8;
@@ -3060,7 +3074,7 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport,
vec_info.num_curr_vecs += IDPF_RESERVED_VECS;
/* XDPSQs are all bound to the NOIRQ vector from IDPF_RESERVED_VECS */
- req = max(vport->num_txq - vport->num_xdp_txq, vport->num_rxq) +
+ req = max(rsrc->num_txq - vport->num_xdp_txq, rsrc->num_rxq) +
IDPF_RESERVED_VECS;
vec_info.num_req_vecs = req;
@@ -3116,8 +3130,8 @@ int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q)
vport_config->max_q.max_complq = max_q->max_complq;
vport_config->max_q.max_bufq = max_q->max_bufq;
- vport->txq_model = le16_to_cpu(vport_msg->txq_model);
- vport->rxq_model = le16_to_cpu(vport_msg->rxq_model);
+ rsrc->txq_model = le16_to_cpu(vport_msg->txq_model);
+ rsrc->rxq_model = le16_to_cpu(vport_msg->rxq_model);
vport->vport_type = le16_to_cpu(vport_msg->vport_type);
vport->vport_id = le32_to_cpu(vport_msg->vport_id);
@@ -3134,9 +3148,9 @@ int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q)
idpf_vport_set_hsplit(vport, ETHTOOL_TCP_DATA_SPLIT_ENABLED);
- idpf_vport_init_num_qs(vport, vport_msg);
- idpf_vport_calc_num_q_desc(vport);
- idpf_vport_calc_num_q_groups(vport);
+ idpf_vport_init_num_qs(vport, vport_msg, rsrc);
+ idpf_vport_calc_num_q_desc(vport, rsrc);
+ idpf_vport_calc_num_q_groups(rsrc);
idpf_vport_alloc_vec_indexes(vport, rsrc);
vport->crc_enable = adapter->crc_enable;
@@ -3245,6 +3259,7 @@ static int idpf_vport_get_queue_ids(u32 *qids, int num_qids, u16 q_type,
/**
* __idpf_vport_queue_ids_init - Initialize queue ids from Mailbox parameters
* @vport: virtual port for which the queue ids are initialized
+ * @rsrc: pointer to queue and vector resources
* @qids: queue ids
* @num_qids: number of queue ids
* @q_type: type of queue
@@ -3253,6 +3268,7 @@ static int idpf_vport_get_queue_ids(u32 *qids, int num_qids, u16 q_type,
* parameters. Returns number of queue ids initialized.
*/
static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
const u32 *qids,
int num_qids,
u32 q_type)
@@ -3261,19 +3277,19 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
switch (q_type) {
case VIRTCHNL2_QUEUE_TYPE_TX:
- for (i = 0; i < vport->num_txq_grp; i++) {
- struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ for (i = 0; i < rsrc->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i];
for (j = 0; j < tx_qgrp->num_txq && k < num_qids; j++, k++)
tx_qgrp->txqs[j]->q_id = qids[k];
}
break;
case VIRTCHNL2_QUEUE_TYPE_RX:
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ for (i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
u16 num_rxq;
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
num_rxq = rx_qgrp->splitq.num_rxq_sets;
else
num_rxq = rx_qgrp->singleq.num_rxq;
@@ -3281,7 +3297,7 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
for (j = 0; j < num_rxq && k < num_qids; j++, k++) {
struct idpf_rx_queue *q;
- if (idpf_is_queue_model_split(vport->rxq_model))
+ if (idpf_is_queue_model_split(rsrc->rxq_model))
q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
else
q = rx_qgrp->singleq.rxqs[j];
@@ -3290,16 +3306,16 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
}
break;
case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
- for (i = 0; i < vport->num_txq_grp && k < num_qids; i++, k++) {
- struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ for (i = 0; i < rsrc->num_txq_grp && k < num_qids; i++, k++) {
+ struct idpf_txq_group *tx_qgrp = &rsrc->txq_grps[i];
tx_qgrp->complq->q_id = qids[k];
}
break;
case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
- for (i = 0; i < vport->num_rxq_grp; i++) {
- struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
- u8 num_bufqs = vport->num_bufqs_per_qgrp;
+ for (i = 0; i < rsrc->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
+ u8 num_bufqs = rsrc->num_bufqs_per_qgrp;
for (j = 0; j < num_bufqs && k < num_qids; j++, k++) {
struct idpf_buf_queue *q;
@@ -3319,12 +3335,14 @@ static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
/**
* idpf_vport_queue_ids_init - Initialize queue ids from Mailbox parameters
* @vport: virtual port for which the queue ids are initialized
+ * @rsrc: pointer to queue and vector resources
* @chunks: queue ids received over mailbox
*
* Will initialize all queue ids with ids received as mailbox parameters.
* Returns 0 on success, negative if all the queues are not initialized.
*/
int idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
struct idpf_queue_id_reg_info *chunks)
{
int num_ids, err = 0;
@@ -3338,13 +3356,13 @@ int idpf_vport_queue_ids_init(struct idpf_vport *vport,
num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS,
VIRTCHNL2_QUEUE_TYPE_TX,
chunks);
- if (num_ids < vport->num_txq) {
+ if (num_ids < rsrc->num_txq) {
err = -EINVAL;
goto mem_rel;
}
- num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids,
+ num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids, num_ids,
VIRTCHNL2_QUEUE_TYPE_TX);
- if (num_ids < vport->num_txq) {
+ if (num_ids < rsrc->num_txq) {
err = -EINVAL;
goto mem_rel;
}
@@ -3352,44 +3370,46 @@ int idpf_vport_queue_ids_init(struct idpf_vport *vport,
num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS,
VIRTCHNL2_QUEUE_TYPE_RX,
chunks);
- if (num_ids < vport->num_rxq) {
+ if (num_ids < rsrc->num_rxq) {
err = -EINVAL;
goto mem_rel;
}
- num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids,
+ num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids, num_ids,
VIRTCHNL2_QUEUE_TYPE_RX);
- if (num_ids < vport->num_rxq) {
+ if (num_ids < rsrc->num_rxq) {
err = -EINVAL;
goto mem_rel;
}
- if (!idpf_is_queue_model_split(vport->txq_model))
+ if (!idpf_is_queue_model_split(rsrc->txq_model))
goto check_rxq;
q_type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, q_type, chunks);
- if (num_ids < vport->num_complq) {
+ if (num_ids < rsrc->num_complq) {
err = -EINVAL;
goto mem_rel;
}
- num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, q_type);
- if (num_ids < vport->num_complq) {
+ num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids,
+ num_ids, q_type);
+ if (num_ids < rsrc->num_complq) {
err = -EINVAL;
goto mem_rel;
}
check_rxq:
- if (!idpf_is_queue_model_split(vport->rxq_model))
+ if (!idpf_is_queue_model_split(rsrc->rxq_model))
goto mem_rel;
q_type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, q_type, chunks);
- if (num_ids < vport->num_bufq) {
+ if (num_ids < rsrc->num_bufq) {
err = -EINVAL;
goto mem_rel;
}
- num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, q_type);
- if (num_ids < vport->num_bufq)
+ num_ids = __idpf_vport_queue_ids_init(vport, rsrc, qids,
+ num_ids, q_type);
+ if (num_ids < rsrc->num_bufq)
err = -EINVAL;
mem_rel:
@@ -3401,23 +3421,24 @@ int idpf_vport_queue_ids_init(struct idpf_vport *vport,
/**
* idpf_vport_adjust_qs - Adjust to new requested queues
* @vport: virtual port data struct
+ * @rsrc: pointer to queue and vector resources
*
* Renegotiate queues. Returns 0 on success, negative on failure.
*/
-int idpf_vport_adjust_qs(struct idpf_vport *vport)
+int idpf_vport_adjust_qs(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc)
{
struct virtchnl2_create_vport vport_msg;
int err;
- vport_msg.txq_model = cpu_to_le16(vport->txq_model);
- vport_msg.rxq_model = cpu_to_le16(vport->rxq_model);
+ vport_msg.txq_model = cpu_to_le16(rsrc->txq_model);
+ vport_msg.rxq_model = cpu_to_le16(rsrc->rxq_model);
err = idpf_vport_calc_total_qs(vport->adapter, vport->idx, &vport_msg,
NULL);
if (err)
return err;
- idpf_vport_init_num_qs(vport, &vport_msg);
- idpf_vport_calc_num_q_groups(vport);
+ idpf_vport_init_num_qs(vport, &vport_msg, rsrc);
+ idpf_vport_calc_num_q_groups(rsrc);
return 0;
}
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
index 0b53c7e8a539..82d35aeeabc2 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
@@ -103,8 +103,10 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter);
int idpf_get_reg_intr_vecs(struct idpf_vport *vport,
struct idpf_vec_regs *reg_vals);
int idpf_queue_reg_init(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
struct idpf_queue_id_reg_info *chunks);
int idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
struct idpf_queue_id_reg_info *chunks);
bool idpf_vport_is_cap_ena(struct idpf_vport *vport, u16 flag);
@@ -125,7 +127,8 @@ int idpf_send_destroy_vport_msg(struct idpf_vport *vport);
int idpf_send_enable_vport_msg(struct idpf_vport *vport);
int idpf_send_disable_vport_msg(struct idpf_vport *vport);
-int idpf_vport_adjust_qs(struct idpf_vport *vport);
+int idpf_vport_adjust_qs(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
int idpf_vport_alloc_max_qs(struct idpf_adapter *adapter,
struct idpf_vport_max_q *max_q);
void idpf_vport_dealloc_max_qs(struct idpf_adapter *adapter,
@@ -139,7 +142,8 @@ int idpf_send_enable_queues_msg(struct idpf_vport *vport,
int idpf_send_disable_queues_msg(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc,
struct idpf_queue_id_reg_info *chunks);
-int idpf_send_config_queues_msg(struct idpf_vport *vport);
+int idpf_send_config_queues_msg(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc);
int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc);
@@ -148,7 +152,9 @@ int idpf_get_vec_ids(struct idpf_adapter *adapter,
struct virtchnl2_vector_chunks *chunks);
int idpf_send_alloc_vectors_msg(struct idpf_adapter *adapter, u16 num_vectors);
int idpf_send_dealloc_vectors_msg(struct idpf_adapter *adapter);
-int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map);
+int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport,
+ struct idpf_q_vec_rsrc *rsrc,
+ bool map);
int idpf_add_del_mac_filters(struct idpf_vport *vport,
struct idpf_netdev_priv *np,
diff --git a/drivers/net/ethernet/intel/idpf/xdp.c b/drivers/net/ethernet/intel/idpf/xdp.c
index c143b5dc9e2b..539f8c11408b 100644
--- a/drivers/net/ethernet/intel/idpf/xdp.c
+++ b/drivers/net/ethernet/intel/idpf/xdp.c
@@ -5,17 +5,17 @@
#include "idpf_virtchnl.h"
#include "xdp.h"
-static int idpf_rxq_for_each(const struct idpf_vport *vport,
+static int idpf_rxq_for_each(const struct idpf_q_vec_rsrc *rsrc,
int (*fn)(struct idpf_rx_queue *rxq, void *arg),
void *arg)
{
- bool splitq = idpf_is_queue_model_split(vport->rxq_model);
+ bool splitq = idpf_is_queue_model_split(rsrc->rxq_model);
- if (!vport->rxq_grps)
+ if (!rsrc->rxq_grps)
return -ENETDOWN;
- for (u32 i = 0; i < vport->num_rxq_grp; i++) {
- const struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ for (u32 i = 0; i < rsrc->num_rxq_grp; i++) {
+ const struct idpf_rxq_group *rx_qgrp = &rsrc->rxq_grps[i];
u32 num_rxq;
if (splitq)
@@ -44,8 +44,8 @@ static int idpf_rxq_for_each(const struct idpf_vport *vport,
static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq, void *arg)
{
const struct idpf_vport *vport = rxq->q_vector->vport;
- bool split = idpf_is_queue_model_split(vport->rxq_model);
const struct page_pool *pp;
+ bool split;
int err;
err = __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx,
@@ -54,6 +54,7 @@ static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq, void *arg)
if (err)
return err;
+ split = idpf_is_queue_model_split(vport->dflt_qv_rsrc.rxq_model);
pp = split ? rxq->bufq_sets[0].bufq.pp : rxq->pp;
xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, pp);
@@ -66,9 +67,9 @@ static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq, void *arg)
return 0;
}
-int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport)
+int idpf_xdp_rxq_info_init_all(const struct idpf_q_vec_rsrc *rsrc)
{
- return idpf_rxq_for_each(vport, __idpf_xdp_rxq_info_init, NULL);
+ return idpf_rxq_for_each(rsrc, __idpf_xdp_rxq_info_init, NULL);
}
static int __idpf_xdp_rxq_info_deinit(struct idpf_rx_queue *rxq, void *arg)
@@ -84,10 +85,10 @@ static int __idpf_xdp_rxq_info_deinit(struct idpf_rx_queue *rxq, void *arg)
return 0;
}
-void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport)
+void idpf_xdp_rxq_info_deinit_all(const struct idpf_q_vec_rsrc *rsrc)
{
- idpf_rxq_for_each(vport, __idpf_xdp_rxq_info_deinit,
- (void *)(size_t)vport->rxq_model);
+ idpf_rxq_for_each(rsrc, __idpf_xdp_rxq_info_deinit,
+ (void *)(size_t)rsrc->rxq_model);
}
static int idpf_xdp_rxq_assign_prog(struct idpf_rx_queue *rxq, void *arg)
@@ -106,10 +107,10 @@ static int idpf_xdp_rxq_assign_prog(struct idpf_rx_queue *rxq, void *arg)
return 0;
}
-void idpf_xdp_copy_prog_to_rqs(const struct idpf_vport *vport,
+void idpf_xdp_copy_prog_to_rqs(const struct idpf_q_vec_rsrc *rsrc,
struct bpf_prog *xdp_prog)
{
- idpf_rxq_for_each(vport, idpf_xdp_rxq_assign_prog, xdp_prog);
+ idpf_rxq_for_each(rsrc, idpf_xdp_rxq_assign_prog, xdp_prog);
}
static void idpf_xdp_tx_timer(struct work_struct *work);
@@ -368,7 +369,7 @@ static const struct xdp_metadata_ops idpf_xdpmo = {
void idpf_xdp_set_features(const struct idpf_vport *vport)
{
- if (!idpf_is_queue_model_split(vport->rxq_model))
+ if (!idpf_is_queue_model_split(vport->dflt_qv_rsrc.rxq_model))
return;
libeth_xdp_set_features_noredir(vport->netdev, &idpf_xdpmo);
@@ -388,7 +389,7 @@ static int idpf_xdp_setup_prog(struct idpf_vport *vport,
!test_bit(IDPF_VPORT_REG_NETDEV, cfg->flags) ||
!!vport->xdp_prog == !!prog) {
if (np->state == __IDPF_VPORT_UP)
- idpf_xdp_copy_prog_to_rqs(vport, prog);
+ idpf_xdp_copy_prog_to_rqs(&vport->dflt_qv_rsrc, prog);
old = xchg(&vport->xdp_prog, prog);
if (old)
@@ -433,7 +434,7 @@ int idpf_xdp(struct net_device *dev, struct netdev_bpf *xdp)
idpf_vport_ctrl_lock(dev);
vport = idpf_netdev_to_vport(dev);
- if (!idpf_is_queue_model_split(vport->txq_model))
+ if (!idpf_is_queue_model_split(vport->dflt_qv_rsrc.txq_model))
goto notsupp;
switch (xdp->command) {
diff --git a/drivers/net/ethernet/intel/idpf/xdp.h b/drivers/net/ethernet/intel/idpf/xdp.h
index 66ad83a0e85e..7cbc17b68e71 100644
--- a/drivers/net/ethernet/intel/idpf/xdp.h
+++ b/drivers/net/ethernet/intel/idpf/xdp.h
@@ -8,9 +8,9 @@
#include "idpf_txrx.h"
-int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport);
-void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport);
-void idpf_xdp_copy_prog_to_rqs(const struct idpf_vport *vport,
+int idpf_xdp_rxq_info_init_all(const struct idpf_q_vec_rsrc *rsrc);
+void idpf_xdp_rxq_info_deinit_all(const struct idpf_q_vec_rsrc *rsrc);
+void idpf_xdp_copy_prog_to_rqs(const struct idpf_q_vec_rsrc *rsrc,
struct bpf_prog *xdp_prog);
int idpf_xdpsqs_get(const struct idpf_vport *vport);
--
2.43.0
* [Intel-wired-lan] [PATCH net-next v6 5/9] idpf: reshuffle idpf_vport struct members to avoid holes
2025-07-08 0:58 [Intel-wired-lan] [PATCH iwl-next v6 0/9] refactor IDPF resource access Pavan Kumar Linga
` (3 preceding siblings ...)
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 4/9] idpf: move queue resources to idpf_q_vec_rsrc structure Pavan Kumar Linga
@ 2025-07-08 0:58 ` Pavan Kumar Linga
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 6/9] idpf: add rss_data field to RSS function parameters Pavan Kumar Linga
` (3 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Pavan Kumar Linga @ 2025-07-08 0:58 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Pavan Kumar Linga, Anton Nadezhdin
The previous refactor, which moved queue and vector resources out of
the idpf_vport structure, created a few holes. Reshuffle the existing
members to avoid holes as much as possible.
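As a minimal illustration (generic C, not taken from the driver),
padding holes come from member alignment: a small member wedged between
larger ones forces a hole, while grouping small members lets them share
one tail pad. The resulting layout is typically verified with pahole.

/* Illustrative only; assumes kernel u32/bool types and common ABIs. */
struct with_holes {
        u32 a;
        bool b;         /* 3-byte hole follows to align 'c' */
        u32 c;
        bool d;         /* 3 bytes of tail padding */
};                      /* 16 bytes on common ABIs */

struct reshuffled {
        u32 a;
        u32 c;
        bool b;         /* the two bools now sit together... */
        bool d;         /* ...sharing a single 2-byte tail pad */
};                      /* 12 bytes on common ABIs */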
Reviewed-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
---
---
drivers/net/ethernet/intel/idpf/idpf.h | 32 ++++++++++++--------------
1 file changed, 15 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index 4d5dfef30f49..510897dc1d40 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -315,28 +315,28 @@ struct idpf_q_vec_rsrc {
/**
* struct idpf_vport - Handle for netdevices and queue resources
* @dflt_qv_rsrc: contains default queue and vector resources
- * @num_txq: Number of allocated TX queues
- * @compln_clean_budget: Work budget for completion clean
* @txqs: Used only in hotpath to get to the right queue very fast
- * @crc_enable: Enable CRC insertion offload
- * @xdpsq_share: whether XDPSQ sharing is enabled
+ * @num_txq: Number of allocated TX queues
* @num_xdp_txq: number of XDPSQs
* @xdp_txq_offset: index of the first XDPSQ (== number of regular SQs)
+ * @xdpsq_share: whether XDPSQ sharing is enabled
* @xdp_prog: installed XDP program
- * @rx_ptype_lkup: Lookup table for ptypes on RX
* @adapter: back pointer to associated adapter
* @netdev: Associated net_device. Each vport should have one and only one
* associated netdev.
* @flags: See enum idpf_vport_flags
- * @vport_type: Default SRIOV, SIOV, etc.
+ * @compln_clean_budget: Work budget for completion clean
* @vport_id: Device given vport identifier
+ * @vport_type: Default SRIOV, SIOV, etc.
* @idx: Software index in adapter vports struct
- * @default_vport: Use this vport if one isn't specified
* @max_mtu: device given max possible MTU
* @default_mac_addr: device will give a default MAC to use
* @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation
* @tx_itr_profile: TX profiles for Dynamic Interrupt Moderation
+ * @rx_ptype_lkup: Lookup table for ptypes on RX
* @port_stats: per port csum, header split, and other offload stats
+ * @default_vport: Use this vport if one isn't specified
+ * @crc_enable: Enable CRC insertion offload
* @link_up: True if link is up
* @tx_tstamp_caps: Capabilities negotiated for Tx timestamping
* @tstamp_config: The Tx tstamp config
@@ -344,32 +344,30 @@ struct idpf_q_vec_rsrc {
*/
struct idpf_vport {
struct idpf_q_vec_rsrc dflt_qv_rsrc;
- u16 num_txq;
- u32 compln_clean_budget;
struct idpf_tx_queue **txqs;
- bool crc_enable;
-
- bool xdpsq_share;
+ u16 num_txq;
u16 num_xdp_txq;
u16 xdp_txq_offset;
+ bool xdpsq_share;
struct bpf_prog *xdp_prog;
- struct libeth_rx_pt *rx_ptype_lkup;
-
struct idpf_adapter *adapter;
struct net_device *netdev;
DECLARE_BITMAP(flags, IDPF_VPORT_FLAGS_NBITS);
- u16 vport_type;
+ u32 compln_clean_budget;
u32 vport_id;
+ u16 vport_type;
u16 idx;
- bool default_vport;
u16 max_mtu;
u8 default_mac_addr[ETH_ALEN];
u16 rx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
u16 tx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
- struct idpf_port_stats port_stats;
+ struct libeth_rx_pt *rx_ptype_lkup;
+ struct idpf_port_stats port_stats;
+ bool default_vport;
+ bool crc_enable;
bool link_up;
struct idpf_ptp_vport_tx_tstamp_caps *tx_tstamp_caps;
--
2.43.0
* [Intel-wired-lan] [PATCH net-next v6 6/9] idpf: add rss_data field to RSS function parameters
2025-07-08 0:58 [Intel-wired-lan] [PATCH iwl-next v6 0/9] refactor IDPF resource access Pavan Kumar Linga
` (4 preceding siblings ...)
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 5/9] idpf: reshuffle idpf_vport struct members to avoid holes Pavan Kumar Linga
@ 2025-07-08 0:58 ` Pavan Kumar Linga
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 7/9] idpf: generalize send virtchnl message API Pavan Kumar Linga
` (2 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Pavan Kumar Linga @ 2025-07-08 0:58 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Pavan Kumar Linga, Anton Nadezhdin
Retrieve the rss_data field of the vport just once and pass it to the
RSS-related functions instead of retrieving it in each function.
While at it, update s/rss/RSS in the RSS function doc comments.
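A simplified before/after of the pattern, condensed from the hunks
below (variable declarations omitted):

/* before: each helper re-derived rss_data from the adapter */
rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
err = idpf_config_rss(vport);

/* after: the caller looks it up once and threads it through */
rss_data = &vport_config->user_config.rss_data;
err = idpf_config_rss(vport, rss_data);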
Reviewed-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
---
drivers/net/ethernet/intel/idpf/idpf.h | 1 +
.../net/ethernet/intel/idpf/idpf_ethtool.c | 2 +-
drivers/net/ethernet/intel/idpf/idpf_lib.c | 16 +++++----
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 34 +++++++------------
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 6 ++--
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 24 ++++++-------
.../net/ethernet/intel/idpf/idpf_virtchnl.h | 8 +++--
7 files changed, 45 insertions(+), 46 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index 510897dc1d40..624a61e4a15f 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -9,6 +9,7 @@ struct idpf_adapter;
struct idpf_vport;
struct idpf_vport_max_q;
struct idpf_q_vec_rsrc;
+struct idpf_rss_data;
#include <net/pkt_sched.h>
#include <linux/aer.h>
diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
index 0b6c5620bef1..eea05f27b6f6 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
@@ -453,7 +453,7 @@ static int idpf_set_rxfh(struct net_device *netdev,
rss_data->rss_lut[lut] = rxfh->indir[lut];
}
- err = idpf_config_rss(vport);
+ err = idpf_config_rss(vport, rss_data);
unlock_mutex:
idpf_vport_ctrl_unlock(netdev);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index aee80600ea01..3898441b7c70 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -938,8 +938,8 @@ static void idpf_vport_rel(struct idpf_vport *vport)
u16 idx = vport->idx;
vport_config = adapter->vport_config[vport->idx];
- idpf_deinit_rss(vport);
rss_data = &vport_config->user_config.rss_data;
+ idpf_deinit_rss(rss_data);
kfree(rss_data->rss_key);
rss_data->rss_key = NULL;
@@ -1342,6 +1342,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
struct idpf_adapter *adapter = vport->adapter;
struct idpf_vport_config *vport_config;
struct idpf_queue_id_reg_info *chunks;
+ struct idpf_rss_data *rss_data;
int err;
if (np->state != __IDPF_VPORT_DOWN)
@@ -1435,10 +1436,11 @@ static int idpf_vport_open(struct idpf_vport *vport)
idpf_restore_features(vport);
- if (vport_config->user_config.rss_data.rss_lut)
- err = idpf_config_rss(vport);
+ rss_data = &vport_config->user_config.rss_data;
+ if (rss_data->rss_lut)
+ err = idpf_config_rss(vport, rss_data);
else
- err = idpf_init_rss(vport);
+ err = idpf_init_rss(vport, rss_data);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to initialize RSS for vport %u: %d\n",
vport->vport_id, err);
@@ -1455,7 +1457,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
return 0;
deinit_rss:
- idpf_deinit_rss(vport);
+ idpf_deinit_rss(rss_data);
disable_vport:
idpf_send_disable_vport_msg(vport);
disable_queues:
@@ -1940,7 +1942,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
idpf_vport_stop(vport);
}
- idpf_deinit_rss(vport);
+ idpf_deinit_rss(&vport_config->user_config.rss_data);
/* We're passing in vport here because we need its wait_queue
* to send a message and it should be getting all the vport
* config data out of the adapter but we need to be careful not
@@ -2137,7 +2139,7 @@ static int idpf_vport_manage_rss_lut(struct idpf_vport *vport)
memset(rss_data->rss_lut, 0, lut_size);
}
- return idpf_config_rss(vport);
+ return idpf_config_rss(vport, rss_data);
}
/**
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 228c79a2f0a0..f8bc90561754 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -4516,33 +4516,32 @@ void idpf_vport_intr_ena(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc)
/**
* idpf_config_rss - Send virtchnl messages to configure RSS
* @vport: virtual port
+ * @rss_data: pointer to RSS key and lut info
*
* Return 0 on success, negative on failure
*/
-int idpf_config_rss(struct idpf_vport *vport)
+int idpf_config_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data)
{
int err;
- err = idpf_send_get_set_rss_key_msg(vport, false);
+ err = idpf_send_get_set_rss_key_msg(vport, rss_data, false);
if (err)
return err;
- return idpf_send_get_set_rss_lut_msg(vport, false);
+ return idpf_send_get_set_rss_lut_msg(vport, rss_data, false);
}
/**
* idpf_fill_dflt_rss_lut - Fill the indirection table with the default values
* @vport: virtual port structure
+ * @rss_data: pointer to RSS key and lut info
*/
-static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport)
+static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport,
+ struct idpf_rss_data *rss_data)
{
u16 num_active_rxq = vport->dflt_qv_rsrc.num_rxq;
- struct idpf_adapter *adapter = vport->adapter;
- struct idpf_rss_data *rss_data;
int i;
- rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
-
for (i = 0; i < rss_data->rss_lut_size; i++) {
rss_data->rss_lut[i] = i % num_active_rxq;
rss_data->cached_lut[i] = rss_data->rss_lut[i];
@@ -4552,17 +4551,14 @@ static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport)
/**
* idpf_init_rss - Allocate and initialize RSS resources
* @vport: virtual port
+ * @rss_data: pointer to RSS key and lut info
*
* Return 0 on success, negative on failure
*/
-int idpf_init_rss(struct idpf_vport *vport)
+int idpf_init_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data)
{
- struct idpf_adapter *adapter = vport->adapter;
- struct idpf_rss_data *rss_data;
u32 lut_size;
- rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
-
lut_size = rss_data->rss_lut_size * sizeof(u32);
rss_data->rss_lut = kzalloc(lut_size, GFP_KERNEL);
if (!rss_data->rss_lut)
@@ -4577,21 +4573,17 @@ int idpf_init_rss(struct idpf_vport *vport)
}
/* Fill the default RSS lut values */
- idpf_fill_dflt_rss_lut(vport);
+ idpf_fill_dflt_rss_lut(vport, rss_data);
- return idpf_config_rss(vport);
+ return idpf_config_rss(vport, rss_data);
}
/**
* idpf_deinit_rss - Release RSS resources
- * @vport: virtual port
+ * @rss_data: pointer to RSS key and lut info
*/
-void idpf_deinit_rss(struct idpf_vport *vport)
+void idpf_deinit_rss(struct idpf_rss_data *rss_data)
{
- struct idpf_adapter *adapter = vport->adapter;
- struct idpf_rss_data *rss_data;
-
- rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
kfree(rss_data->cached_lut);
rss_data->cached_lut = NULL;
kfree(rss_data->rss_lut);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 8407939cc625..9bc378e119d5 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -1090,9 +1090,9 @@ int idpf_vport_intr_init(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc);
void idpf_vport_intr_ena(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc);
-int idpf_config_rss(struct idpf_vport *vport);
-int idpf_init_rss(struct idpf_vport *vport);
-void idpf_deinit_rss(struct idpf_vport *vport);
+int idpf_config_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data);
+int idpf_init_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data);
+void idpf_deinit_rss(struct idpf_rss_data *rss_data);
int idpf_rx_bufs_init_all(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc);
void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 47b00ecd7324..d35f0b5a8ce9 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -2275,24 +2275,24 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport)
}
/**
- * idpf_send_get_set_rss_lut_msg - Send virtchnl get or set rss lut message
+ * idpf_send_get_set_rss_lut_msg - Send virtchnl get or set RSS lut message
* @vport: virtual port data structure
- * @get: flag to set or get rss look up table
+ * @rss_data: pointer to RSS key and lut info
+ * @get: flag to set or get RSS look up table
*
* Returns 0 on success, negative on failure.
*/
-int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get)
+int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport,
+ struct idpf_rss_data *rss_data,
+ bool get)
{
struct virtchnl2_rss_lut *recv_rl __free(kfree) = NULL;
struct virtchnl2_rss_lut *rl __free(kfree) = NULL;
struct idpf_vc_xn_params xn_params = {};
- struct idpf_rss_data *rss_data;
int buf_size, lut_buf_size;
ssize_t reply_sz;
int i;
- rss_data =
- &vport->adapter->vport_config[vport->idx]->user_config.rss_data;
buf_size = struct_size(rl, lut, rss_data->rss_lut_size);
rl = kzalloc(buf_size, GFP_KERNEL);
if (!rl)
@@ -2350,24 +2350,24 @@ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get)
}
/**
- * idpf_send_get_set_rss_key_msg - Send virtchnl get or set rss key message
+ * idpf_send_get_set_rss_key_msg - Send virtchnl get or set RSS key message
* @vport: virtual port data structure
- * @get: flag to set or get rss look up table
+ * @rss_data: pointer to RSS key and lut info
+ * @get: flag to set or get RSS look up table
*
* Returns 0 on success, negative on failure
*/
-int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get)
+int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport,
+ struct idpf_rss_data *rss_data,
+ bool get)
{
struct virtchnl2_rss_key *recv_rk __free(kfree) = NULL;
struct virtchnl2_rss_key *rk __free(kfree) = NULL;
struct idpf_vc_xn_params xn_params = {};
- struct idpf_rss_data *rss_data;
ssize_t reply_sz;
int i, buf_size;
u16 key_size;
- rss_data =
- &vport->adapter->vport_config[vport->idx]->user_config.rss_data;
buf_size = struct_size(rk, key_flex, rss_data->rss_key_size);
rk = kzalloc(buf_size, GFP_KERNEL);
if (!rk)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
index 82d35aeeabc2..5f3a278a8fa8 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
@@ -167,8 +167,12 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport);
int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport);
int idpf_send_get_stats_msg(struct idpf_vport *vport);
int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs);
-int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get);
-int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get);
+int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport,
+ struct idpf_rss_data *rss_data,
+ bool get);
+int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport,
+ struct idpf_rss_data *rss_data,
+ bool get);
void idpf_vc_xn_shutdown(struct idpf_vc_xn_manager *vcxn_mngr);
#endif /* _IDPF_VIRTCHNL_H_ */
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [Intel-wired-lan] [PATCH net-next v6 7/9] idpf: generalize send virtchnl message API
2025-07-08 0:58 [Intel-wired-lan] [PATCH iwl-next v6 0/9] refactor IDPF resource access Pavan Kumar Linga
` (5 preceding siblings ...)
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 6/9] idpf: add rss_data field to RSS function parameters Pavan Kumar Linga
@ 2025-07-08 0:58 ` Pavan Kumar Linga
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 8/9] idpf: avoid calling get_rx_ptypes for each vport Pavan Kumar Linga
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 9/9] idpf: generalize mailbox API Pavan Kumar Linga
8 siblings, 0 replies; 10+ messages in thread
From: Pavan Kumar Linga @ 2025-07-08 0:58 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Pavan Kumar Linga, Anton Nadezhdin, Madhu Chittim
With the previous refactor passing the idpf resource pointer, the
virtchnl send message functions no longer require the full vport
structure. Generalize those functions so they can also be used to
configure vport-independent queues.
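As a minimal sketch of the new calling convention (names taken from
the idpf_vport_stop() hunk below), callers now pass the adapter and a
bare vport identifier instead of the vport itself:

	struct idpf_adapter *adapter = vport->adapter;
	u32 vport_id = vport->vport_id;

	/* no full vport required, so the same helpers can later serve
	 * queue resources that are not tied to any vport
	 */
	idpf_send_disable_vport_msg(adapter, vport_id);
	idpf_send_map_unmap_queue_vector_msg(adapter, rsrc, vport_id, false);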
Signed-off-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
---
drivers/net/ethernet/intel/idpf/idpf_dev.c | 2 +-
drivers/net/ethernet/intel/idpf/idpf_lib.c | 93 ++++----
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 6 +-
drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 2 +-
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 217 ++++++++++--------
.../net/ethernet/intel/idpf/idpf_virtchnl.h | 47 ++--
6 files changed, 199 insertions(+), 168 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c
index 457fb314926c..301d51f2d965 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_dev.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c
@@ -85,7 +85,7 @@ static int idpf_intr_reg_init(struct idpf_vport *vport,
if (!reg_vals)
return -ENOMEM;
- num_regs = idpf_get_reg_intr_vecs(vport, reg_vals);
+ num_regs = idpf_get_reg_intr_vecs(adapter, reg_vals);
if (num_regs < num_vecs) {
err = -EINVAL;
goto free_reg_vals;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 3898441b7c70..f56ac8f5db18 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -441,7 +441,6 @@ static int __idpf_del_mac_filter(struct idpf_vport_config *vport_config,
/**
* idpf_del_mac_filter - Delete a MAC filter from the filter list
- * @vport: Main vport structure
* @np: Netdev private structure
* @macaddr: The MAC address
* @async: Don't wait for return message
@@ -449,8 +448,7 @@ static int __idpf_del_mac_filter(struct idpf_vport_config *vport_config,
* Removes filter from list and if interface is up, tells hardware about the
* removed filter.
**/
-static int idpf_del_mac_filter(struct idpf_vport *vport,
- struct idpf_netdev_priv *np,
+static int idpf_del_mac_filter(struct idpf_netdev_priv *np,
const u8 *macaddr, bool async)
{
struct idpf_vport_config *vport_config;
@@ -472,7 +470,8 @@ static int idpf_del_mac_filter(struct idpf_vport *vport,
if (np->state == __IDPF_VPORT_UP) {
int err;
- err = idpf_add_del_mac_filters(vport, np, false, async);
+ err = idpf_add_del_mac_filters(np->adapter, vport_config,
+ np->vport_id, false, async);
if (err)
return err;
}
@@ -520,7 +519,6 @@ static int __idpf_add_mac_filter(struct idpf_vport_config *vport_config,
/**
* idpf_add_mac_filter - Add a mac filter to the filter list
- * @vport: Main vport structure
* @np: Netdev private structure
* @macaddr: The MAC address
* @async: Don't wait for return message
@@ -528,8 +526,7 @@ static int __idpf_add_mac_filter(struct idpf_vport_config *vport_config,
* Returns 0 on success or error on failure. If interface is up, we'll also
* send the virtchnl message to tell hardware about the filter.
**/
-static int idpf_add_mac_filter(struct idpf_vport *vport,
- struct idpf_netdev_priv *np,
+static int idpf_add_mac_filter(struct idpf_netdev_priv *np,
const u8 *macaddr, bool async)
{
struct idpf_vport_config *vport_config;
@@ -541,7 +538,8 @@ static int idpf_add_mac_filter(struct idpf_vport *vport,
return err;
if (np->state == __IDPF_VPORT_UP)
- err = idpf_add_del_mac_filters(vport, np, true, async);
+ err = idpf_add_del_mac_filters(np->adapter, vport_config,
+ np->vport_id, true, async);
return err;
}
@@ -589,7 +587,7 @@ static void idpf_restore_mac_filters(struct idpf_vport *vport)
spin_unlock_bh(&vport_config->mac_filter_list_lock);
- idpf_add_del_mac_filters(vport, netdev_priv(vport->netdev),
+ idpf_add_del_mac_filters(vport->adapter, vport_config, vport->vport_id,
true, false);
}
@@ -613,7 +611,7 @@ static void idpf_remove_mac_filters(struct idpf_vport *vport)
spin_unlock_bh(&vport_config->mac_filter_list_lock);
- idpf_add_del_mac_filters(vport, netdev_priv(vport->netdev),
+ idpf_add_del_mac_filters(vport->adapter, vport_config, vport->vport_id,
false, false);
}
@@ -655,8 +653,7 @@ static int idpf_init_mac_addr(struct idpf_vport *vport,
eth_hw_addr_set(netdev, vport->default_mac_addr);
ether_addr_copy(netdev->perm_addr, vport->default_mac_addr);
- return idpf_add_mac_filter(vport, np, vport->default_mac_addr,
- false);
+ return idpf_add_mac_filter(np, vport->default_mac_addr, false);
}
if (!idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS,
@@ -668,7 +665,7 @@ static int idpf_init_mac_addr(struct idpf_vport *vport,
}
eth_hw_addr_random(netdev);
- err = idpf_add_mac_filter(vport, np, netdev->dev_addr, false);
+ err = idpf_add_mac_filter(np, netdev->dev_addr, false);
if (err)
return err;
@@ -842,7 +839,9 @@ static void idpf_vport_stop(struct idpf_vport *vport)
{
struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
struct idpf_q_vec_rsrc *rsrc = &vport->dflt_qv_rsrc;
+ struct idpf_adapter *adapter = vport->adapter;
struct idpf_queue_id_reg_info *chunks;
+ u32 vport_id = vport->vport_id;
if (np->state <= __IDPF_VPORT_DOWN)
return;
@@ -850,18 +849,18 @@ static void idpf_vport_stop(struct idpf_vport *vport)
netif_carrier_off(vport->netdev);
netif_tx_disable(vport->netdev);
- chunks = &vport->adapter->vport_config[vport->idx]->qid_reg_info;
+ chunks = &adapter->vport_config[vport->idx]->qid_reg_info;
- idpf_send_disable_vport_msg(vport);
+ idpf_send_disable_vport_msg(adapter, vport_id);
idpf_send_disable_queues_msg(vport, rsrc, chunks);
- idpf_send_map_unmap_queue_vector_msg(vport, rsrc, false);
+ idpf_send_map_unmap_queue_vector_msg(adapter, rsrc, vport_id, false);
/* Normally we ask for queues in create_vport, but if the number of
* initially requested queues have changed, for example via ethtool
* set channels, we do delete queues and then add the queues back
* instead of deleting and reallocating the vport.
*/
if (test_and_clear_bit(IDPF_VPORT_DEL_QUEUES, vport->flags))
- idpf_send_delete_queues_msg(vport, chunks);
+ idpf_send_delete_queues_msg(adapter, chunks, vport_id);
idpf_remove_features(vport);
@@ -943,7 +942,7 @@ static void idpf_vport_rel(struct idpf_vport *vport)
kfree(rss_data->rss_key);
rss_data->rss_key = NULL;
- idpf_send_destroy_vport_msg(vport);
+ idpf_send_destroy_vport_msg(adapter, vport->vport_id);
/* Release all max queues allocated to the adapter's pool */
max_q.max_rxq = vport_config->max_q.max_rxq;
@@ -1203,7 +1202,8 @@ void idpf_statistics_task(struct work_struct *work)
struct idpf_vport *vport = adapter->vports[i];
if (vport && !test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags))
- idpf_send_get_stats_msg(vport);
+ idpf_send_get_stats_msg(netdev_priv(vport->netdev),
+ &vport->port_stats);
}
queue_delayed_work(adapter->stats_wq, &adapter->stats_task,
@@ -1343,6 +1343,8 @@ static int idpf_vport_open(struct idpf_vport *vport)
struct idpf_vport_config *vport_config;
struct idpf_queue_id_reg_info *chunks;
struct idpf_rss_data *rss_data;
+ u32 vport_id = vport->vport_id;
+ bool rsc_ena;
int err;
if (np->state != __IDPF_VPORT_DOWN)
@@ -1405,14 +1407,16 @@ static int idpf_vport_open(struct idpf_vport *vport)
idpf_vport_intr_ena(vport, rsrc);
- err = idpf_send_config_queues_msg(vport, rsrc);
+ rsc_ena = idpf_is_feature_ena(vport, NETIF_F_GRO_HW);
+ err = idpf_send_config_queues_msg(adapter, rsrc, vport_id, rsc_ena);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to configure queues for vport %u, %d\n",
vport->vport_id, err);
goto rxq_deinit;
}
- err = idpf_send_map_unmap_queue_vector_msg(vport, rsrc, true);
+ err = idpf_send_map_unmap_queue_vector_msg(adapter, rsrc, vport_id,
+ true);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to map queue vectors for vport %u: %d\n",
vport->vport_id, err);
@@ -1426,7 +1430,7 @@ static int idpf_vport_open(struct idpf_vport *vport)
goto unmap_queue_vectors;
}
- err = idpf_send_enable_vport_msg(vport);
+ err = idpf_send_enable_vport_msg(adapter, vport_id);
if (err) {
dev_err(&adapter->pdev->dev, "Failed to enable vport %u: %d\n",
vport->vport_id, err);
@@ -1459,11 +1463,11 @@ static int idpf_vport_open(struct idpf_vport *vport)
deinit_rss:
idpf_deinit_rss(rss_data);
disable_vport:
- idpf_send_disable_vport_msg(vport);
+ idpf_send_disable_vport_msg(adapter, vport_id);
disable_queues:
idpf_send_disable_queues_msg(vport, rsrc, chunks);
unmap_queue_vectors:
- idpf_send_map_unmap_queue_vector_msg(vport, rsrc, false);
+ idpf_send_map_unmap_queue_vector_msg(adapter, rsrc, vport_id, false);
rxq_deinit:
idpf_xdp_rxq_info_deinit_all(rsrc);
intr_deinit:
@@ -1885,6 +1889,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
struct idpf_adapter *adapter = vport->adapter;
struct idpf_vport_config *vport_config;
struct idpf_q_vec_rsrc *new_rsrc;
+ u32 vport_id = vport->vport_id;
struct idpf_vport *new_vport;
int err;
@@ -1936,28 +1941,21 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
vport_config = adapter->vport_config[vport->idx];
if (current_state <= __IDPF_VPORT_DOWN) {
- idpf_send_delete_queues_msg(vport, &vport_config->qid_reg_info);
+ idpf_send_delete_queues_msg(adapter, &vport_config->qid_reg_info,
+ vport_id);
} else {
set_bit(IDPF_VPORT_DEL_QUEUES, vport->flags);
idpf_vport_stop(vport);
}
idpf_deinit_rss(&vport_config->user_config.rss_data);
- /* We're passing in vport here because we need its wait_queue
- * to send a message and it should be getting all the vport
- * config data out of the adapter but we need to be careful not
- * to add code to add_queues to change the vport config within
- * vport itself as it will be wiped with a memcpy later.
- */
- err = idpf_send_add_queues_msg(vport, new_rsrc->num_txq,
- new_rsrc->num_complq,
- new_rsrc->num_rxq,
- new_rsrc->num_bufq);
+ err = idpf_send_add_queues_msg(adapter, vport_config, new_rsrc,
+ vport_id);
if (err)
goto err_reset;
- /* Same comment as above regarding avoiding copying the wait_queues and
- * mutexes applies here. We do not want to mess with those if possible.
+ /* Avoid copying the wait_queues and mutexes. We do not want to mess
+ * with those if possible.
*/
memcpy(vport, new_vport, offsetof(struct idpf_vport, link_up));
@@ -1976,8 +1974,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport,
return err;
err_reset:
- idpf_send_add_queues_msg(vport, rsrc->num_txq, rsrc->num_complq,
- rsrc->num_rxq, rsrc->num_bufq);
+ idpf_send_add_queues_msg(adapter, vport_config, rsrc, vport_id);
err_open:
if (current_state == __IDPF_VPORT_UP)
@@ -2006,7 +2003,7 @@ static int idpf_addr_sync(struct net_device *netdev, const u8 *addr)
{
struct idpf_netdev_priv *np = netdev_priv(netdev);
- return idpf_add_mac_filter(np->vport, np, addr, true);
+ return idpf_add_mac_filter(np, addr, true);
}
/**
@@ -2034,7 +2031,7 @@ static int idpf_addr_unsync(struct net_device *netdev, const u8 *addr)
if (ether_addr_equal(addr, netdev->dev_addr))
return 0;
- idpf_del_mac_filter(np->vport, np, addr, true);
+ idpf_del_mac_filter(np, addr, true);
return 0;
}
@@ -2118,14 +2115,15 @@ static void idpf_set_rx_mode(struct net_device *netdev)
*/
static int idpf_vport_manage_rss_lut(struct idpf_vport *vport)
{
- bool ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH);
struct idpf_rss_data *rss_data;
u16 idx = vport->idx;
int lut_size;
+ bool ena;
rss_data = &vport->adapter->vport_config[idx]->user_config.rss_data;
lut_size = rss_data->rss_lut_size * sizeof(u32);
+ ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH);
if (ena) {
/* This will contain the default or user configured LUT */
memcpy(rss_data->rss_lut, rss_data->cached_lut, lut_size);
@@ -2181,8 +2179,13 @@ static int idpf_set_features(struct net_device *netdev,
}
if (changed & NETIF_F_LOOPBACK) {
+ bool loopback_ena;
+
netdev->features ^= NETIF_F_LOOPBACK;
- err = idpf_send_ena_dis_loopback_msg(vport);
+ loopback_ena = idpf_is_feature_ena(vport, NETIF_F_LOOPBACK);
+
+ err = idpf_send_ena_dis_loopback_msg(adapter, vport->vport_id,
+ loopback_ena);
}
unlock_mutex:
@@ -2344,14 +2347,14 @@ static int idpf_set_mac(struct net_device *netdev, void *p)
goto unlock_mutex;
vport_config = vport->adapter->vport_config[vport->idx];
- err = idpf_add_mac_filter(vport, np, addr->sa_data, false);
+ err = idpf_add_mac_filter(np, addr->sa_data, false);
if (err) {
__idpf_del_mac_filter(vport_config, addr->sa_data);
goto unlock_mutex;
}
if (is_valid_ether_addr(vport->default_mac_addr))
- idpf_del_mac_filter(vport, np, vport->default_mac_addr, false);
+ idpf_del_mac_filter(np, vport->default_mac_addr, false);
ether_addr_copy(vport->default_mac_addr, addr->sa_data);
eth_hw_addr_set(netdev, addr->sa_data);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index f8bc90561754..d538dff78bd9 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -4522,13 +4522,15 @@ void idpf_vport_intr_ena(struct idpf_vport *vport, struct idpf_q_vec_rsrc *rsrc)
*/
int idpf_config_rss(struct idpf_vport *vport, struct idpf_rss_data *rss_data)
{
+ struct idpf_adapter *adapter = vport->adapter;
+ u32 vport_id = vport->vport_id;
int err;
- err = idpf_send_get_set_rss_key_msg(vport, rss_data, false);
+ err = idpf_send_get_set_rss_key_msg(adapter, rss_data, vport_id, false);
if (err)
return err;
- return idpf_send_get_set_rss_lut_msg(vport, rss_data, false);
+ return idpf_send_get_set_rss_lut_msg(adapter, rss_data, vport_id, false);
}
/**
diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
index 574dadc6e190..5b904c232584 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
@@ -84,7 +84,7 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport,
if (!reg_vals)
return -ENOMEM;
- num_regs = idpf_get_reg_intr_vecs(vport, reg_vals);
+ num_regs = idpf_get_reg_intr_vecs(adapter, reg_vals);
if (num_regs < num_vecs) {
err = -EINVAL;
goto free_reg_vals;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index d35f0b5a8ce9..8a43ad873f25 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -1059,12 +1059,12 @@ idpf_vport_init_queue_reg_chunks(struct idpf_vport_config *vport_config,
/**
* idpf_get_reg_intr_vecs - Get vector queue register offset
- * @vport: virtual port structure
+ * @adapter: adapter structure to get the vector chunks
* @reg_vals: Register offsets to store in
*
* Returns number of registers that got populated
*/
-int idpf_get_reg_intr_vecs(struct idpf_vport *vport,
+int idpf_get_reg_intr_vecs(struct idpf_adapter *adapter,
struct idpf_vec_regs *reg_vals)
{
struct virtchnl2_vector_chunks *chunks;
@@ -1072,7 +1072,7 @@ int idpf_get_reg_intr_vecs(struct idpf_vport *vport,
u16 num_vchunks, num_vec;
int num_regs = 0, i, j;
- chunks = &vport->adapter->req_vec_chunks->vchunks;
+ chunks = &adapter->req_vec_chunks->vchunks;
num_vchunks = le16_to_cpu(chunks->num_vchunks);
for (j = 0; j < num_vchunks; j++) {
@@ -1408,86 +1408,91 @@ int idpf_check_supported_desc_ids(struct idpf_vport *vport)
/**
* idpf_send_destroy_vport_msg - Send virtchnl destroy vport message
- * @vport: virtual port data structure
+ * @adapter: adapter pointer used to send virtchnl message
+ * @vport_id: vport identifier used while preparing the virtchnl message
*
* Send virtchnl destroy vport message. Returns 0 on success, negative on
* failure.
*/
-int idpf_send_destroy_vport_msg(struct idpf_vport *vport)
+int idpf_send_destroy_vport_msg(struct idpf_adapter *adapter, u32 vport_id)
{
struct idpf_vc_xn_params xn_params = {};
struct virtchnl2_vport v_id;
ssize_t reply_sz;
- v_id.vport_id = cpu_to_le32(vport->vport_id);
+ v_id.vport_id = cpu_to_le32(vport_id);
xn_params.vc_op = VIRTCHNL2_OP_DESTROY_VPORT;
xn_params.send_buf.iov_base = &v_id;
xn_params.send_buf.iov_len = sizeof(v_id);
xn_params.timeout_ms = IDPF_VC_XN_MIN_TIMEOUT_MSEC;
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
return reply_sz < 0 ? reply_sz : 0;
}
/**
* idpf_send_enable_vport_msg - Send virtchnl enable vport message
- * @vport: virtual port data structure
+ * @adapter: adapter pointer used to send virtchnl message
+ * @vport_id: vport identifier used while preparing the virtchnl message
*
* Send enable vport virtchnl message. Returns 0 on success, negative on
* failure.
*/
-int idpf_send_enable_vport_msg(struct idpf_vport *vport)
+int idpf_send_enable_vport_msg(struct idpf_adapter *adapter, u32 vport_id)
{
struct idpf_vc_xn_params xn_params = {};
struct virtchnl2_vport v_id;
ssize_t reply_sz;
- v_id.vport_id = cpu_to_le32(vport->vport_id);
+ v_id.vport_id = cpu_to_le32(vport_id);
xn_params.vc_op = VIRTCHNL2_OP_ENABLE_VPORT;
xn_params.send_buf.iov_base = &v_id;
xn_params.send_buf.iov_len = sizeof(v_id);
xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
return reply_sz < 0 ? reply_sz : 0;
}
/**
* idpf_send_disable_vport_msg - Send virtchnl disable vport message
- * @vport: virtual port data structure
+ * @adapter: adapter pointer used to send virtchnl message
+ * @vport_id: vport identifier used while preparing the virtchnl message
*
* Send disable vport virtchnl message. Returns 0 on success, negative on
* failure.
*/
-int idpf_send_disable_vport_msg(struct idpf_vport *vport)
+int idpf_send_disable_vport_msg(struct idpf_adapter *adapter, u32 vport_id)
{
struct idpf_vc_xn_params xn_params = {};
struct virtchnl2_vport v_id;
ssize_t reply_sz;
- v_id.vport_id = cpu_to_le32(vport->vport_id);
+ v_id.vport_id = cpu_to_le32(vport_id);
xn_params.vc_op = VIRTCHNL2_OP_DISABLE_VPORT;
xn_params.send_buf.iov_base = &v_id;
xn_params.send_buf.iov_len = sizeof(v_id);
xn_params.timeout_ms = IDPF_VC_XN_MIN_TIMEOUT_MSEC;
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
return reply_sz < 0 ? reply_sz : 0;
}
/**
* idpf_send_config_tx_queues_msg - Send virtchnl config tx queues message
- * @vport: virtual port data structure
+ * @adapter: adapter pointer used to send virtchnl message
* @rsrc: pointer to queue and vector resources
+ * @vport_id: vport identifier used while preparing the virtchnl message
*
* Send config tx queues virtchnl message. Returns 0 on success, negative on
* failure.
*/
-static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport,
- struct idpf_q_vec_rsrc *rsrc)
+static int idpf_send_config_tx_queues_msg(struct idpf_adapter *adapter,
+ struct idpf_q_vec_rsrc *rsrc,
+ u32 vport_id)
{
struct virtchnl2_config_tx_queues *ctq __free(kfree) = NULL;
struct virtchnl2_txq_info *qi __free(kfree) = NULL;
@@ -1579,13 +1584,13 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport,
for (u16 i = 0, k = 0; i < num_msgs; i++) {
memset(ctq, 0, buf_sz);
- ctq->vport_id = cpu_to_le32(vport->vport_id);
+ ctq->vport_id = cpu_to_le32(vport_id);
ctq->num_qinfo = cpu_to_le16(num_chunks);
memcpy(ctq->qinfo, &qi[k], chunk_sz * num_chunks);
xn_params.send_buf.iov_base = ctq;
xn_params.send_buf.iov_len = buf_sz;
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
if (reply_sz < 0)
return reply_sz;
@@ -1601,14 +1606,17 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport,
/**
* idpf_send_config_rx_queues_msg - Send virtchnl config rx queues message
- * @vport: virtual port data structure
+ * @adapter: adapter pointer used to send virtchnl message
* @rsrc: pointer to queue and vector resources
+ * @vport_id: vport identifier used while preparing the virtchnl message
+ * @rsc_ena: flag to check if RSC feature is enabled
*
* Send config rx queues virtchnl message. Returns 0 on success, negative on
* failure.
*/
-static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport,
- struct idpf_q_vec_rsrc *rsrc)
+static int idpf_send_config_rx_queues_msg(struct idpf_adapter *adapter,
+ struct idpf_q_vec_rsrc *rsrc,
+ u32 vport_id, bool rsc_ena)
{
struct virtchnl2_config_rx_queues *crq __free(kfree) = NULL;
struct virtchnl2_rxq_info *qi __free(kfree) = NULL;
@@ -1646,7 +1654,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport,
qi[k].buffer_notif_stride = IDPF_RX_BUF_STRIDE;
qi[k].rx_buffer_low_watermark =
cpu_to_le16(bufq->rx_buffer_low_watermark);
- if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW))
+ if (rsc_ena)
qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC);
}
@@ -1685,7 +1693,7 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport,
}
qi[k].rx_buffer_low_watermark =
cpu_to_le16(rxq->rx_buffer_low_watermark);
- if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW))
+ if (rsc_ena)
qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC);
rxq->rx_hbuf_size = sets[0].bufq.rx_hbuf_size;
@@ -1737,13 +1745,13 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport,
for (u16 i = 0, k = 0; i < num_msgs; i++) {
memset(crq, 0, buf_sz);
- crq->vport_id = cpu_to_le32(vport->vport_id);
+ crq->vport_id = cpu_to_le32(vport_id);
crq->num_qinfo = cpu_to_le16(num_chunks);
memcpy(crq->qinfo, &qi[k], chunk_sz * num_chunks);
xn_params.send_buf.iov_base = crq;
xn_params.send_buf.iov_len = buf_sz;
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
if (reply_sz < 0)
return reply_sz;
@@ -1760,15 +1768,17 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport,
/**
* idpf_send_ena_dis_queues_msg - Send virtchnl enable or disable
* queues message
- * @vport: virtual port data structure
+ * @adapter: adapter pointer used to send virtchnl message
* @chunks: queue register info
+ * @vport_id: vport identifier used while preparing the virtchnl message
* @ena: if true enable, false disable
*
* Send enable or disable queues virtchnl message. Returns 0 on success,
* negative on failure.
*/
-static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport,
+static int idpf_send_ena_dis_queues_msg(struct idpf_adapter *adapter,
struct idpf_queue_id_reg_info *chunks,
+ u32 vport_id,
bool ena)
{
struct virtchnl2_del_ena_dis_queues *eq __free(kfree) = NULL;
@@ -1790,7 +1800,7 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport,
if (!eq)
return -ENOMEM;
- eq->vport_id = cpu_to_le32(vport->vport_id);
+ eq->vport_id = cpu_to_le32(vport_id);
eq->chunks.num_chunks = cpu_to_le16(num_chunks);
idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->queue_chunks,
@@ -1798,7 +1808,7 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport,
xn_params.send_buf.iov_base = eq;
xn_params.send_buf.iov_len = buf_sz;
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
return reply_sz < 0 ? reply_sz : 0;
}
@@ -1806,15 +1816,17 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport,
/**
* idpf_send_map_unmap_queue_vector_msg - Send virtchnl map or unmap queue
* vector message
- * @vport: virtual port data structure
+ * @adapter: adapter pointer used to send virtchnl message
* @rsrc: pointer to queue and vector resources
+ * @vport_id: vport identifier used while preparing the virtchnl message
* @map: true for map and false for unmap
*
* Send map or unmap queue vector virtchnl message. Returns 0 on success,
* negative on failure.
*/
-int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport,
+int idpf_send_map_unmap_queue_vector_msg(struct idpf_adapter *adapter,
struct idpf_q_vec_rsrc *rsrc,
+ u32 vport_id,
bool map)
{
struct virtchnl2_queue_vector_maps *vqvm __free(kfree) = NULL;
@@ -1929,11 +1941,11 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport,
memset(vqvm, 0, buf_sz);
xn_params.send_buf.iov_base = vqvm;
xn_params.send_buf.iov_len = buf_sz;
- vqvm->vport_id = cpu_to_le32(vport->vport_id);
+ vqvm->vport_id = cpu_to_le32(vport_id);
vqvm->num_qv_maps = cpu_to_le16(num_chunks);
memcpy(vqvm->qv_maps, &vqv[k], chunk_sz * num_chunks);
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
if (reply_sz < 0)
return reply_sz;
@@ -1958,7 +1970,8 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport,
int idpf_send_enable_queues_msg(struct idpf_vport *vport,
struct idpf_queue_id_reg_info *chunks)
{
- return idpf_send_ena_dis_queues_msg(vport, chunks, true);
+ return idpf_send_ena_dis_queues_msg(vport->adapter, chunks,
+ vport->vport_id, true);
}
/**
@@ -1976,7 +1989,8 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport,
{
int err;
- err = idpf_send_ena_dis_queues_msg(vport, chunks, false);
+ err = idpf_send_ena_dis_queues_msg(vport->adapter, chunks,
+ vport->vport_id, false);
if (err)
return err;
@@ -1985,14 +1999,16 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport,
/**
* idpf_send_delete_queues_msg - send delete queues virtchnl message
- * @vport: virtual port private data structure
+ * @adapter: adapter pointer used to send virtchnl message
* @chunks: queue ids received over mailbox
+ * @vport_id: vport identifier used while preparing the virtchnl message
*
* Will send delete queues virtchnl message. Return 0 on success, negative on
* failure.
*/
-int idpf_send_delete_queues_msg(struct idpf_vport *vport,
- struct idpf_queue_id_reg_info *chunks)
+int idpf_send_delete_queues_msg(struct idpf_adapter *adapter,
+ struct idpf_queue_id_reg_info *chunks,
+ u32 vport_id)
{
struct virtchnl2_del_ena_dis_queues *eq __free(kfree) = NULL;
struct idpf_vc_xn_params xn_params = {};
@@ -2007,7 +2023,7 @@ int idpf_send_delete_queues_msg(struct idpf_vport *vport,
if (!eq)
return -ENOMEM;
- eq->vport_id = cpu_to_le32(vport->vport_id);
+ eq->vport_id = cpu_to_le32(vport_id);
eq->chunks.num_chunks = cpu_to_le16(num_chunks);
idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->queue_chunks,
@@ -2017,50 +2033,52 @@ int idpf_send_delete_queues_msg(struct idpf_vport *vport,
xn_params.timeout_ms = IDPF_VC_XN_MIN_TIMEOUT_MSEC;
xn_params.send_buf.iov_base = eq;
xn_params.send_buf.iov_len = buf_size;
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
return reply_sz < 0 ? reply_sz : 0;
}
/**
* idpf_send_config_queues_msg - Send config queues virtchnl message
- * @vport: Virtual port private data structure
+ * @adapter: adapter pointer used to send virtchnl message
* @rsrc: pointer to queue and vector resources
+ * @vport_id: vport identifier used while preparing the virtchnl message
+ * @rsc_ena: flag to check if RSC feature is enabled
*
* Will send config queues virtchnl message. Returns 0 on success, negative on
* failure.
*/
-int idpf_send_config_queues_msg(struct idpf_vport *vport,
- struct idpf_q_vec_rsrc *rsrc)
+int idpf_send_config_queues_msg(struct idpf_adapter *adapter,
+ struct idpf_q_vec_rsrc *rsrc,
+ u32 vport_id, bool rsc_ena)
{
int err;
- err = idpf_send_config_tx_queues_msg(vport, rsrc);
+ err = idpf_send_config_tx_queues_msg(adapter, rsrc, vport_id);
if (err)
return err;
- return idpf_send_config_rx_queues_msg(vport, rsrc);
+ return idpf_send_config_rx_queues_msg(adapter, rsrc, vport_id, rsc_ena);
}
/**
* idpf_send_add_queues_msg - Send virtchnl add queues message
- * @vport: Virtual port private data structure
- * @num_tx_q: number of transmit queues
- * @num_complq: number of transmit completion queues
- * @num_rx_q: number of receive queues
- * @num_rx_bufq: number of receive buffer queues
+ * @adapter: adapter pointer used to send virtchnl message
+ * @vport_config: vport persistent structure to store the queue chunk info
+ * @rsrc: pointer to queue and vector resources
+ * @vport_id: vport identifier used while preparing the virtchnl message
*
* Returns 0 on success, negative on failure.
*/
-int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
- u16 num_complq, u16 num_rx_q, u16 num_rx_bufq)
+int idpf_send_add_queues_msg(struct idpf_adapter *adapter,
+ struct idpf_vport_config *vport_config,
+ struct idpf_q_vec_rsrc *rsrc,
+ u32 vport_id)
{
struct virtchnl2_add_queues *vc_msg __free(kfree) = NULL;
struct idpf_vc_xn_params xn_params = {};
- struct idpf_vport_config *vport_config;
struct virtchnl2_add_queues aq = {};
- u16 vport_idx = vport->idx;
ssize_t reply_sz;
int size;
@@ -2068,13 +2086,11 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
if (!vc_msg)
return -ENOMEM;
- vport_config = vport->adapter->vport_config[vport_idx];
-
- aq.vport_id = cpu_to_le32(vport->vport_id);
- aq.num_tx_q = cpu_to_le16(num_tx_q);
- aq.num_tx_complq = cpu_to_le16(num_complq);
- aq.num_rx_q = cpu_to_le16(num_rx_q);
- aq.num_rx_bufq = cpu_to_le16(num_rx_bufq);
+ aq.vport_id = cpu_to_le32(vport_id);
+ aq.num_tx_q = cpu_to_le16(rsrc->num_txq);
+ aq.num_tx_complq = cpu_to_le16(rsrc->num_complq);
+ aq.num_rx_q = cpu_to_le16(rsrc->num_rxq);
+ aq.num_rx_bufq = cpu_to_le16(rsrc->num_bufq);
xn_params.vc_op = VIRTCHNL2_OP_ADD_QUEUES;
xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
@@ -2082,15 +2098,15 @@ int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
xn_params.send_buf.iov_len = sizeof(aq);
xn_params.recv_buf.iov_base = vc_msg;
xn_params.recv_buf.iov_len = IDPF_CTLQ_MAX_BUF_LEN;
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
if (reply_sz < 0)
return reply_sz;
/* compare vc_msg num queues with vport num queues */
- if (le16_to_cpu(vc_msg->num_tx_q) != num_tx_q ||
- le16_to_cpu(vc_msg->num_rx_q) != num_rx_q ||
- le16_to_cpu(vc_msg->num_tx_complq) != num_complq ||
- le16_to_cpu(vc_msg->num_rx_bufq) != num_rx_bufq)
+ if (le16_to_cpu(vc_msg->num_tx_q) != rsrc->num_txq ||
+ le16_to_cpu(vc_msg->num_rx_q) != rsrc->num_rxq ||
+ le16_to_cpu(vc_msg->num_tx_complq) != rsrc->num_complq ||
+ le16_to_cpu(vc_msg->num_rx_bufq) != rsrc->num_bufq)
return -EINVAL;
size = struct_size(vc_msg, chunks.chunks,
@@ -2221,24 +2237,24 @@ int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs)
/**
* idpf_send_get_stats_msg - Send virtchnl get statistics message
- * @vport: vport to get stats for
+ * @np: netdev private structure
+ * @port_stats: structure to store the vport statistics
*
* Returns 0 on success, negative on failure.
*/
-int idpf_send_get_stats_msg(struct idpf_vport *vport)
+int idpf_send_get_stats_msg(struct idpf_netdev_priv *np,
+ struct idpf_port_stats *port_stats)
{
- struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
struct rtnl_link_stats64 *netstats = &np->netstats;
struct virtchnl2_vport_stats stats_msg = {};
struct idpf_vc_xn_params xn_params = {};
ssize_t reply_sz;
-
/* Don't send get_stats message if the link is down */
if (np->state <= __IDPF_VPORT_DOWN)
return 0;
- stats_msg.vport_id = cpu_to_le32(vport->vport_id);
+ stats_msg.vport_id = cpu_to_le32(np->vport_id);
xn_params.vc_op = VIRTCHNL2_OP_GET_STATS;
xn_params.send_buf.iov_base = &stats_msg;
@@ -2246,7 +2262,7 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport)
xn_params.recv_buf = xn_params.send_buf;
xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(np->adapter, &xn_params);
if (reply_sz < 0)
return reply_sz;
if (reply_sz < sizeof(stats_msg))
@@ -2267,7 +2283,7 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport)
netstats->rx_dropped = le64_to_cpu(stats_msg.rx_discards);
netstats->tx_dropped = le64_to_cpu(stats_msg.tx_discards);
- vport->port_stats.vport_stats = stats_msg;
+ port_stats->vport_stats = stats_msg;
spin_unlock_bh(&np->stats_lock);
@@ -2276,15 +2292,16 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport)
/**
* idpf_send_get_set_rss_lut_msg - Send virtchnl get or set RSS lut message
- * @vport: virtual port data structure
+ * @adapter: adapter pointer used to send virtchnl message
* @rss_data: pointer to RSS key and lut info
+ * @vport_id: vport identifier used while preparing the virtchnl message
* @get: flag to set or get RSS look up table
*
* Returns 0 on success, negative on failure.
*/
-int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport,
+int idpf_send_get_set_rss_lut_msg(struct idpf_adapter *adapter,
struct idpf_rss_data *rss_data,
- bool get)
+ u32 vport_id, bool get)
{
struct virtchnl2_rss_lut *recv_rl __free(kfree) = NULL;
struct virtchnl2_rss_lut *rl __free(kfree) = NULL;
@@ -2298,7 +2315,7 @@ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport,
if (!rl)
return -ENOMEM;
- rl->vport_id = cpu_to_le32(vport->vport_id);
+ rl->vport_id = cpu_to_le32(vport_id);
xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
xn_params.send_buf.iov_base = rl;
@@ -2318,7 +2335,7 @@ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport,
xn_params.vc_op = VIRTCHNL2_OP_SET_RSS_LUT;
}
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
if (reply_sz < 0)
return reply_sz;
if (!get)
@@ -2351,15 +2368,16 @@ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport,
/**
* idpf_send_get_set_rss_key_msg - Send virtchnl get or set RSS key message
- * @vport: virtual port data structure
+ * @adapter: adapter pointer used to send virtchnl message
* @rss_data: pointer to RSS key and lut info
+ * @vport_id: vport identifier used while preparing the virtchnl message
* @get: flag to set or get RSS look up table
*
* Returns 0 on success, negative on failure
*/
-int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport,
+int idpf_send_get_set_rss_key_msg(struct idpf_adapter *adapter,
struct idpf_rss_data *rss_data,
- bool get)
+ u32 vport_id, bool get)
{
struct virtchnl2_rss_key *recv_rk __free(kfree) = NULL;
struct virtchnl2_rss_key *rk __free(kfree) = NULL;
@@ -2373,7 +2391,7 @@ int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport,
if (!rk)
return -ENOMEM;
- rk->vport_id = cpu_to_le32(vport->vport_id);
+ rk->vport_id = cpu_to_le32(vport_id);
xn_params.send_buf.iov_base = rk;
xn_params.send_buf.iov_len = buf_size;
xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
@@ -2393,7 +2411,7 @@ int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport,
xn_params.vc_op = VIRTCHNL2_OP_SET_RSS_KEY;
}
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
if (reply_sz < 0)
return reply_sz;
if (!get)
@@ -2699,24 +2717,27 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
/**
* idpf_send_ena_dis_loopback_msg - Send virtchnl enable/disable loopback
* message
- * @vport: virtual port data structure
+ * @adapter: adapter pointer used to send virtchnl message
+ * @vport_id: vport identifier used while preparing the virtchnl message
+ * @loopback_ena: flag to enable or disable loopback
*
* Returns 0 on success, negative on failure.
*/
-int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport)
+int idpf_send_ena_dis_loopback_msg(struct idpf_adapter *adapter, u32 vport_id,
+ bool loopback_ena)
{
struct idpf_vc_xn_params xn_params = {};
struct virtchnl2_loopback loopback;
ssize_t reply_sz;
- loopback.vport_id = cpu_to_le32(vport->vport_id);
- loopback.enable = idpf_is_feature_ena(vport, NETIF_F_LOOPBACK);
+ loopback.vport_id = cpu_to_le32(vport_id);
+ loopback.enable = loopback_ena;
xn_params.vc_op = VIRTCHNL2_OP_LOOPBACK;
xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
xn_params.send_buf.iov_base = &loopback;
xn_params.send_buf.iov_len = sizeof(loopback);
- reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
+ reply_sz = idpf_vc_xn_exec(adapter, &xn_params);
return reply_sz < 0 ? reply_sz : 0;
}
@@ -3631,22 +3652,21 @@ static int idpf_mac_filter_async_handler(struct idpf_adapter *adapter,
/**
* idpf_add_del_mac_filters - Add/del mac filters
- * @vport: Virtual port data structure
- * @np: Netdev private structure
+ * @adapter: adapter pointer used to send virtchnl message
+ * @vport_config: persistent vport structure to get the MAC filter list
+ * @vport_id: vport identifier used while preparing the virtchnl message
* @add: Add or delete flag
* @async: Don't wait for return message
*
* Returns 0 on success, error on failure.
**/
-int idpf_add_del_mac_filters(struct idpf_vport *vport,
- struct idpf_netdev_priv *np,
- bool add, bool async)
+int idpf_add_del_mac_filters(struct idpf_adapter *adapter,
+ struct idpf_vport_config *vport_config,
+ u32 vport_id, bool add, bool async)
{
struct virtchnl2_mac_addr_list *ma_list __free(kfree) = NULL;
struct virtchnl2_mac_addr *mac_addr __free(kfree) = NULL;
- struct idpf_adapter *adapter = np->adapter;
struct idpf_vc_xn_params xn_params = {};
- struct idpf_vport_config *vport_config;
u32 num_msgs, total_filters = 0;
struct idpf_mac_filter *f;
ssize_t reply_sz;
@@ -3658,7 +3678,6 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport,
xn_params.async = async;
xn_params.async_handler = idpf_mac_filter_async_handler;
- vport_config = adapter->vport_config[np->vport_idx];
spin_lock_bh(&vport_config->mac_filter_list_lock);
/* Find the number of newly added filters */
@@ -3727,7 +3746,7 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport,
memset(ma_list, 0, buf_size);
}
- ma_list->vport_id = cpu_to_le32(np->vport_id);
+ ma_list->vport_id = cpu_to_le32(vport_id);
ma_list->num_mac_addr = cpu_to_le16(num_entries);
memcpy(ma_list->mac_addr_list, &mac_addr[k], entries_size);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
index 5f3a278a8fa8..5e3ac5aff635 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
@@ -100,7 +100,7 @@ void idpf_deinit_dflt_mbx(struct idpf_adapter *adapter);
int idpf_vc_core_init(struct idpf_adapter *adapter);
void idpf_vc_core_deinit(struct idpf_adapter *adapter);
-int idpf_get_reg_intr_vecs(struct idpf_vport *vport,
+int idpf_get_reg_intr_vecs(struct idpf_adapter *adapter,
struct idpf_vec_regs *reg_vals);
int idpf_queue_reg_init(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc,
@@ -123,9 +123,9 @@ int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q);
u32 idpf_get_vport_id(struct idpf_vport *vport);
int idpf_send_create_vport_msg(struct idpf_adapter *adapter,
struct idpf_vport_max_q *max_q);
-int idpf_send_destroy_vport_msg(struct idpf_vport *vport);
-int idpf_send_enable_vport_msg(struct idpf_vport *vport);
-int idpf_send_disable_vport_msg(struct idpf_vport *vport);
+int idpf_send_destroy_vport_msg(struct idpf_adapter *adapter, u32 vport_id);
+int idpf_send_enable_vport_msg(struct idpf_adapter *adapter, u32 vport_id);
+int idpf_send_disable_vport_msg(struct idpf_adapter *adapter, u32 vport_id);
int idpf_vport_adjust_qs(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc);
@@ -133,17 +133,21 @@ int idpf_vport_alloc_max_qs(struct idpf_adapter *adapter,
struct idpf_vport_max_q *max_q);
void idpf_vport_dealloc_max_qs(struct idpf_adapter *adapter,
struct idpf_vport_max_q *max_q);
-int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
- u16 num_complq, u16 num_rx_q, u16 num_rx_bufq);
-int idpf_send_delete_queues_msg(struct idpf_vport *vport,
- struct idpf_queue_id_reg_info *chunks);
+int idpf_send_add_queues_msg(struct idpf_adapter *adapter,
+ struct idpf_vport_config *vport_config,
+ struct idpf_q_vec_rsrc *rsrc,
+ u32 vport_id);
+int idpf_send_delete_queues_msg(struct idpf_adapter *adapter,
+ struct idpf_queue_id_reg_info *chunks,
+ u32 vport_id);
int idpf_send_enable_queues_msg(struct idpf_vport *vport,
struct idpf_queue_id_reg_info *chunks);
int idpf_send_disable_queues_msg(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc,
struct idpf_queue_id_reg_info *chunks);
-int idpf_send_config_queues_msg(struct idpf_vport *vport,
- struct idpf_q_vec_rsrc *rsrc);
+int idpf_send_config_queues_msg(struct idpf_adapter *adapter,
+ struct idpf_q_vec_rsrc *rsrc,
+ u32 vport_id, bool rsc_ena);
int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc);
@@ -152,27 +156,30 @@ int idpf_get_vec_ids(struct idpf_adapter *adapter,
struct virtchnl2_vector_chunks *chunks);
int idpf_send_alloc_vectors_msg(struct idpf_adapter *adapter, u16 num_vectors);
int idpf_send_dealloc_vectors_msg(struct idpf_adapter *adapter);
-int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport,
+int idpf_send_map_unmap_queue_vector_msg(struct idpf_adapter *adapter,
struct idpf_q_vec_rsrc *rsrc,
+ u32 vport_id,
bool map);
-int idpf_add_del_mac_filters(struct idpf_vport *vport,
- struct idpf_netdev_priv *np,
- bool add, bool async);
+int idpf_add_del_mac_filters(struct idpf_adapter *adapter,
+ struct idpf_vport_config *vport_config,
+ u32 vport_id, bool add, bool async);
int idpf_set_promiscuous(struct idpf_adapter *adapter,
struct idpf_vport_user_config_data *config_data,
u32 vport_id);
int idpf_check_supported_desc_ids(struct idpf_vport *vport);
int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport);
-int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport);
-int idpf_send_get_stats_msg(struct idpf_vport *vport);
+int idpf_send_ena_dis_loopback_msg(struct idpf_adapter *adapter, u32 vport_id,
+ bool loopback_ena);
+int idpf_send_get_stats_msg(struct idpf_netdev_priv *np,
+ struct idpf_port_stats *port_stats);
int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs);
-int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport,
+int idpf_send_get_set_rss_key_msg(struct idpf_adapter *adapter,
struct idpf_rss_data *rss_data,
- bool get);
-int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport,
+ u32 vport_id, bool get);
+int idpf_send_get_set_rss_lut_msg(struct idpf_adapter *adapter,
struct idpf_rss_data *rss_data,
- bool get);
+ u32 vport_id, bool get);
void idpf_vc_xn_shutdown(struct idpf_vc_xn_manager *vcxn_mngr);
#endif /* _IDPF_VIRTCHNL_H_ */
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [Intel-wired-lan] [PATCH net-next v6 8/9] idpf: avoid calling get_rx_ptypes for each vport
2025-07-08 0:58 [Intel-wired-lan] [PATCH iwl-next v6 0/9] refactor IDPF resource access Pavan Kumar Linga
` (6 preceding siblings ...)
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 7/9] idpf: generalize send virtchnl message API Pavan Kumar Linga
@ 2025-07-08 0:58 ` Pavan Kumar Linga
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 9/9] idpf: generalize mailbox API Pavan Kumar Linga
8 siblings, 0 replies; 10+ messages in thread
From: Pavan Kumar Linga @ 2025-07-08 0:58 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Pavan Kumar Linga, Madhu Chittim
RX ptypes received from the device control plane don't depend on vport
info, but might vary based on the queue model. When the driver requests
ptypes, the control plane fills both the ptype_id_10 (used for splitq)
and ptype_id_8 (used for singleq) fields of the virtchnl2_ptype
response structure. This allows calling get_rx_ptypes once at the
adapter level instead of once per vport.
Parse and store the received splitq and singleq ptypes in separate
lookup tables; the respective table is then used based on the queue
model info. As part of the changes, pull the ptype protocol parsing
code into a separate function.
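Sketched from the idpf_rxq_group_alloc() hunk below, RX queue setup
now selects the adapter-level table that matches the queue model:

	/* both tables are filled once per adapter from the same
	 * get_rx_ptypes response (ptype_id_8 vs ptype_id_10)
	 */
	if (idpf_is_queue_model_split(rsrc->rxq_model))
		q->rx_ptype_lkup = adapter->splitq_pt_lkup;
	else
		q->rx_ptype_lkup = adapter->singleq_pt_lkup;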
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
---
drivers/net/ethernet/intel/idpf/idpf.h | 7 +-
drivers/net/ethernet/intel/idpf/idpf_lib.c | 9 -
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 4 +-
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 310 ++++++++++--------
.../net/ethernet/intel/idpf/idpf_virtchnl.h | 1 -
5 files changed, 174 insertions(+), 157 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index 624a61e4a15f..7bca1c177ed7 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -334,7 +334,6 @@ struct idpf_q_vec_rsrc {
* @default_mac_addr: device will give a default MAC to use
* @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation
* @tx_itr_profile: TX profiles for Dynamic Interrupt Moderation
- * @rx_ptype_lkup: Lookup table for ptypes on RX
* @port_stats: per port csum, header split, and other offload stats
* @default_vport: Use this vport if one isn't specified
* @crc_enable: Enable CRC insertion offload
@@ -365,7 +364,6 @@ struct idpf_vport {
u16 rx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
u16 tx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
- struct libeth_rx_pt *rx_ptype_lkup;
struct idpf_port_stats port_stats;
bool default_vport;
bool crc_enable;
@@ -603,6 +601,8 @@ struct idpf_vc_xn_manager;
* @vport_params_reqd: Vport params requested
* @vport_params_recvd: Vport params received
* @vport_ids: Array of device given vport identifiers
+ * @singleq_pt_lkup: Lookup table for singleq RX ptypes
+ * @splitq_pt_lkup: Lookup table for splitq RX ptypes
* @vport_config: Vport config parameters
* @max_vports: Maximum vports that can be allocated
* @num_alloc_vports: Current number of vports allocated
@@ -659,6 +659,9 @@ struct idpf_adapter {
struct virtchnl2_create_vport **vport_params_recvd;
u32 *vport_ids;
+ struct libeth_rx_pt *singleq_pt_lkup;
+ struct libeth_rx_pt *splitq_pt_lkup;
+
struct idpf_vport_config **vport_config;
u16 max_vports;
u16 num_alloc_vports;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index f56ac8f5db18..23650a1cda29 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -909,9 +909,6 @@ static void idpf_decfg_netdev(struct idpf_vport *vport)
struct idpf_adapter *adapter = vport->adapter;
u16 idx = vport->idx;
- kfree(vport->rx_ptype_lkup);
- vport->rx_ptype_lkup = NULL;
-
if (test_and_clear_bit(IDPF_VPORT_REG_NETDEV,
adapter->vport_config[idx]->flags)) {
unregister_netdev(vport->netdev);
@@ -1547,10 +1544,6 @@ void idpf_init_task(struct work_struct *work)
if (idpf_cfg_netdev(vport))
goto cfg_netdev_err;
- err = idpf_send_get_rx_ptype_msg(vport);
- if (err)
- goto handle_err;
-
/* Once state is put into DOWN, driver is ready for dev_open */
np = netdev_priv(vport->netdev);
np->state = __IDPF_VPORT_DOWN;
@@ -1596,8 +1589,6 @@ void idpf_init_task(struct work_struct *work)
return;
-handle_err:
- idpf_decfg_netdev(vport);
cfg_netdev_err:
idpf_vport_rel(vport);
adapter->vports[index] = NULL;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index d538dff78bd9..bf23967674d5 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -1505,6 +1505,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport,
struct idpf_q_vec_rsrc *rsrc,
u16 num_rxq)
{
+ struct idpf_adapter *adapter = vport->adapter;
int k, err = 0;
bool hs;
@@ -1595,6 +1596,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport,
if (!idpf_is_queue_model_split(rsrc->rxq_model)) {
q = rx_qgrp->singleq.rxqs[j];
+ q->rx_ptype_lkup = adapter->singleq_pt_lkup;
goto setup_rxq;
}
q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
@@ -1605,10 +1607,10 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport,
&rx_qgrp->splitq.bufq_sets[1].refillqs[j];
idpf_queue_assign(HSPLIT_EN, q, hs);
+ q->rx_ptype_lkup = adapter->splitq_pt_lkup;
setup_rxq:
q->desc_count = rsrc->rxq_desc_count;
- q->rx_ptype_lkup = vport->rx_ptype_lkup;
q->bufq_sets = rx_qgrp->splitq.bufq_sets;
q->idx = (i * num_rxq) + j;
q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 8a43ad873f25..7eee3a275e8b 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -2496,36 +2496,143 @@ static void idpf_finalize_ptype_lookup(struct libeth_rx_pt *ptype)
libeth_rx_pt_gen_hash_type(ptype);
}
+/**
+ * idpf_parse_protocol_ids - parse protocol IDs for a given packet type
+ * @ptype: packet type to parse
+ * @rx_pt: structure to store the parsed packet type info into
+ */
+static void idpf_parse_protocol_ids(struct virtchnl2_ptype *ptype,
+ struct libeth_rx_pt *rx_pt)
+{
+ struct idpf_ptype_state pstate = {};
+
+ for (u32 j = 0; j < ptype->proto_id_count; j++) {
+ u16 id = le16_to_cpu(ptype->proto_id[j]);
+
+ switch (id) {
+ case VIRTCHNL2_PROTO_HDR_GRE:
+ if (pstate.tunnel_state == IDPF_PTYPE_TUNNEL_IP) {
+ rx_pt->tunnel_type =
+ LIBETH_RX_PT_TUNNEL_IP_GRENAT;
+ pstate.tunnel_state |=
+ IDPF_PTYPE_TUNNEL_IP_GRENAT;
+ }
+ break;
+ case VIRTCHNL2_PROTO_HDR_MAC:
+ rx_pt->outer_ip = LIBETH_RX_PT_OUTER_L2;
+ if (pstate.tunnel_state == IDPF_TUN_IP_GRE) {
+ rx_pt->tunnel_type =
+ LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC;
+ pstate.tunnel_state |=
+ IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC;
+ }
+ break;
+ case VIRTCHNL2_PROTO_HDR_IPV4:
+ idpf_fill_ptype_lookup(rx_pt, &pstate, true, false);
+ break;
+ case VIRTCHNL2_PROTO_HDR_IPV6:
+ idpf_fill_ptype_lookup(rx_pt, &pstate, false, false);
+ break;
+ case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
+ idpf_fill_ptype_lookup(rx_pt, &pstate, true, true);
+ break;
+ case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
+ idpf_fill_ptype_lookup(rx_pt, &pstate, false, true);
+ break;
+ case VIRTCHNL2_PROTO_HDR_UDP:
+ rx_pt->inner_prot = LIBETH_RX_PT_INNER_UDP;
+ break;
+ case VIRTCHNL2_PROTO_HDR_TCP:
+ rx_pt->inner_prot = LIBETH_RX_PT_INNER_TCP;
+ break;
+ case VIRTCHNL2_PROTO_HDR_SCTP:
+ rx_pt->inner_prot = LIBETH_RX_PT_INNER_SCTP;
+ break;
+ case VIRTCHNL2_PROTO_HDR_ICMP:
+ rx_pt->inner_prot = LIBETH_RX_PT_INNER_ICMP;
+ break;
+ case VIRTCHNL2_PROTO_HDR_PAY:
+ rx_pt->payload_layer = LIBETH_RX_PT_PAYLOAD_L2;
+ break;
+ case VIRTCHNL2_PROTO_HDR_ICMPV6:
+ case VIRTCHNL2_PROTO_HDR_IPV6_EH:
+ case VIRTCHNL2_PROTO_HDR_PRE_MAC:
+ case VIRTCHNL2_PROTO_HDR_POST_MAC:
+ case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
+ case VIRTCHNL2_PROTO_HDR_SVLAN:
+ case VIRTCHNL2_PROTO_HDR_CVLAN:
+ case VIRTCHNL2_PROTO_HDR_MPLS:
+ case VIRTCHNL2_PROTO_HDR_MMPLS:
+ case VIRTCHNL2_PROTO_HDR_PTP:
+ case VIRTCHNL2_PROTO_HDR_CTRL:
+ case VIRTCHNL2_PROTO_HDR_LLDP:
+ case VIRTCHNL2_PROTO_HDR_ARP:
+ case VIRTCHNL2_PROTO_HDR_ECP:
+ case VIRTCHNL2_PROTO_HDR_EAPOL:
+ case VIRTCHNL2_PROTO_HDR_PPPOD:
+ case VIRTCHNL2_PROTO_HDR_PPPOE:
+ case VIRTCHNL2_PROTO_HDR_IGMP:
+ case VIRTCHNL2_PROTO_HDR_AH:
+ case VIRTCHNL2_PROTO_HDR_ESP:
+ case VIRTCHNL2_PROTO_HDR_IKE:
+ case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
+ case VIRTCHNL2_PROTO_HDR_L2TPV2:
+ case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
+ case VIRTCHNL2_PROTO_HDR_L2TPV3:
+ case VIRTCHNL2_PROTO_HDR_GTP:
+ case VIRTCHNL2_PROTO_HDR_GTP_EH:
+ case VIRTCHNL2_PROTO_HDR_GTPCV2:
+ case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
+ case VIRTCHNL2_PROTO_HDR_GTPU:
+ case VIRTCHNL2_PROTO_HDR_GTPU_UL:
+ case VIRTCHNL2_PROTO_HDR_GTPU_DL:
+ case VIRTCHNL2_PROTO_HDR_ECPRI:
+ case VIRTCHNL2_PROTO_HDR_VRRP:
+ case VIRTCHNL2_PROTO_HDR_OSPF:
+ case VIRTCHNL2_PROTO_HDR_TUN:
+ case VIRTCHNL2_PROTO_HDR_NVGRE:
+ case VIRTCHNL2_PROTO_HDR_VXLAN:
+ case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
+ case VIRTCHNL2_PROTO_HDR_GENEVE:
+ case VIRTCHNL2_PROTO_HDR_NSH:
+ case VIRTCHNL2_PROTO_HDR_QUIC:
+ case VIRTCHNL2_PROTO_HDR_PFCP:
+ case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
+ case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
+ case VIRTCHNL2_PROTO_HDR_RTP:
+ case VIRTCHNL2_PROTO_HDR_NO_PROTO:
+ break;
+ default:
+ break;
+ }
+ }
+}
+
/**
* idpf_send_get_rx_ptype_msg - Send virtchnl for ptype info
- * @vport: virtual port data structure
+ * @adapter: driver specific private structure
*
* Returns 0 on success, negative on failure.
*/
-int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
+static int idpf_send_get_rx_ptype_msg(struct idpf_adapter *adapter)
{
struct virtchnl2_get_ptype_info *get_ptype_info __free(kfree) = NULL;
struct virtchnl2_get_ptype_info *ptype_info __free(kfree) = NULL;
- struct libeth_rx_pt *ptype_lkup __free(kfree) = NULL;
- int max_ptype, ptypes_recvd = 0, ptype_offset;
- struct idpf_adapter *adapter = vport->adapter;
+ struct libeth_rx_pt *singleq_pt_lkup __free(kfree) = NULL;
+ struct libeth_rx_pt *splitq_pt_lkup __free(kfree) = NULL;
struct idpf_vc_xn_params xn_params = {};
+ int ptypes_recvd = 0, ptype_offset;
+ u32 max_ptype = IDPF_RX_MAX_PTYPE;
u16 next_ptype_id = 0;
ssize_t reply_sz;
- bool is_splitq;
- int i, j, k;
-
- if (vport->rx_ptype_lkup)
- return 0;
- is_splitq = idpf_is_queue_model_split(vport->dflt_qv_rsrc.rxq_model);
- if (is_splitq)
- max_ptype = IDPF_RX_MAX_PTYPE;
- else
- max_ptype = IDPF_RX_MAX_BASE_PTYPE;
+ singleq_pt_lkup = kcalloc(IDPF_RX_MAX_BASE_PTYPE,
+ sizeof(*singleq_pt_lkup), GFP_KERNEL);
+ if (!singleq_pt_lkup)
+ return -ENOMEM;
- ptype_lkup = kcalloc(max_ptype, sizeof(*ptype_lkup), GFP_KERNEL);
- if (!ptype_lkup)
+ splitq_pt_lkup = kcalloc(max_ptype, sizeof(*splitq_pt_lkup), GFP_KERNEL);
+ if (!splitq_pt_lkup)
return -ENOMEM;
get_ptype_info = kzalloc(sizeof(*get_ptype_info), GFP_KERNEL);
@@ -2566,154 +2673,59 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
ptype_offset = IDPF_RX_PTYPE_HDR_SZ;
- for (i = 0; i < le16_to_cpu(ptype_info->num_ptypes); i++) {
- struct idpf_ptype_state pstate = { };
+ for (u16 i = 0; i < le16_to_cpu(ptype_info->num_ptypes); i++) {
+ struct libeth_rx_pt rx_pt = {};
struct virtchnl2_ptype *ptype;
- u16 id;
+ u16 pt_10, pt_8;
ptype = (struct virtchnl2_ptype *)
((u8 *)ptype_info + ptype_offset);
+ pt_10 = le16_to_cpu(ptype->ptype_id_10);
+ pt_8 = ptype->ptype_id_8;
+
ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
if (ptype_offset > IDPF_CTLQ_MAX_BUF_LEN)
return -EINVAL;
/* 0xFFFF indicates end of ptypes */
- if (le16_to_cpu(ptype->ptype_id_10) ==
- IDPF_INVALID_PTYPE_ID)
+ if (pt_10 == IDPF_INVALID_PTYPE_ID)
goto out;
- if (is_splitq)
- k = le16_to_cpu(ptype->ptype_id_10);
- else
- k = ptype->ptype_id_8;
-
- for (j = 0; j < ptype->proto_id_count; j++) {
- id = le16_to_cpu(ptype->proto_id[j]);
- switch (id) {
- case VIRTCHNL2_PROTO_HDR_GRE:
- if (pstate.tunnel_state ==
- IDPF_PTYPE_TUNNEL_IP) {
- ptype_lkup[k].tunnel_type =
- LIBETH_RX_PT_TUNNEL_IP_GRENAT;
- pstate.tunnel_state |=
- IDPF_PTYPE_TUNNEL_IP_GRENAT;
- }
- break;
- case VIRTCHNL2_PROTO_HDR_MAC:
- ptype_lkup[k].outer_ip =
- LIBETH_RX_PT_OUTER_L2;
- if (pstate.tunnel_state ==
- IDPF_TUN_IP_GRE) {
- ptype_lkup[k].tunnel_type =
- LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC;
- pstate.tunnel_state |=
- IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC;
- }
- break;
- case VIRTCHNL2_PROTO_HDR_IPV4:
- idpf_fill_ptype_lookup(&ptype_lkup[k],
- &pstate, true,
- false);
- break;
- case VIRTCHNL2_PROTO_HDR_IPV6:
- idpf_fill_ptype_lookup(&ptype_lkup[k],
- &pstate, false,
- false);
- break;
- case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
- idpf_fill_ptype_lookup(&ptype_lkup[k],
- &pstate, true,
- true);
- break;
- case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
- idpf_fill_ptype_lookup(&ptype_lkup[k],
- &pstate, false,
- true);
- break;
- case VIRTCHNL2_PROTO_HDR_UDP:
- ptype_lkup[k].inner_prot =
- LIBETH_RX_PT_INNER_UDP;
- break;
- case VIRTCHNL2_PROTO_HDR_TCP:
- ptype_lkup[k].inner_prot =
- LIBETH_RX_PT_INNER_TCP;
- break;
- case VIRTCHNL2_PROTO_HDR_SCTP:
- ptype_lkup[k].inner_prot =
- LIBETH_RX_PT_INNER_SCTP;
- break;
- case VIRTCHNL2_PROTO_HDR_ICMP:
- ptype_lkup[k].inner_prot =
- LIBETH_RX_PT_INNER_ICMP;
- break;
- case VIRTCHNL2_PROTO_HDR_PAY:
- ptype_lkup[k].payload_layer =
- LIBETH_RX_PT_PAYLOAD_L2;
- break;
- case VIRTCHNL2_PROTO_HDR_ICMPV6:
- case VIRTCHNL2_PROTO_HDR_IPV6_EH:
- case VIRTCHNL2_PROTO_HDR_PRE_MAC:
- case VIRTCHNL2_PROTO_HDR_POST_MAC:
- case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
- case VIRTCHNL2_PROTO_HDR_SVLAN:
- case VIRTCHNL2_PROTO_HDR_CVLAN:
- case VIRTCHNL2_PROTO_HDR_MPLS:
- case VIRTCHNL2_PROTO_HDR_MMPLS:
- case VIRTCHNL2_PROTO_HDR_PTP:
- case VIRTCHNL2_PROTO_HDR_CTRL:
- case VIRTCHNL2_PROTO_HDR_LLDP:
- case VIRTCHNL2_PROTO_HDR_ARP:
- case VIRTCHNL2_PROTO_HDR_ECP:
- case VIRTCHNL2_PROTO_HDR_EAPOL:
- case VIRTCHNL2_PROTO_HDR_PPPOD:
- case VIRTCHNL2_PROTO_HDR_PPPOE:
- case VIRTCHNL2_PROTO_HDR_IGMP:
- case VIRTCHNL2_PROTO_HDR_AH:
- case VIRTCHNL2_PROTO_HDR_ESP:
- case VIRTCHNL2_PROTO_HDR_IKE:
- case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
- case VIRTCHNL2_PROTO_HDR_L2TPV2:
- case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
- case VIRTCHNL2_PROTO_HDR_L2TPV3:
- case VIRTCHNL2_PROTO_HDR_GTP:
- case VIRTCHNL2_PROTO_HDR_GTP_EH:
- case VIRTCHNL2_PROTO_HDR_GTPCV2:
- case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
- case VIRTCHNL2_PROTO_HDR_GTPU:
- case VIRTCHNL2_PROTO_HDR_GTPU_UL:
- case VIRTCHNL2_PROTO_HDR_GTPU_DL:
- case VIRTCHNL2_PROTO_HDR_ECPRI:
- case VIRTCHNL2_PROTO_HDR_VRRP:
- case VIRTCHNL2_PROTO_HDR_OSPF:
- case VIRTCHNL2_PROTO_HDR_TUN:
- case VIRTCHNL2_PROTO_HDR_NVGRE:
- case VIRTCHNL2_PROTO_HDR_VXLAN:
- case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
- case VIRTCHNL2_PROTO_HDR_GENEVE:
- case VIRTCHNL2_PROTO_HDR_NSH:
- case VIRTCHNL2_PROTO_HDR_QUIC:
- case VIRTCHNL2_PROTO_HDR_PFCP:
- case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
- case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
- case VIRTCHNL2_PROTO_HDR_RTP:
- case VIRTCHNL2_PROTO_HDR_NO_PROTO:
- break;
- default:
- break;
- }
- }
+ idpf_parse_protocol_ids(ptype, &rx_pt);
+ idpf_finalize_ptype_lookup(&rx_pt);
- idpf_finalize_ptype_lookup(&ptype_lkup[k]);
+ /* For a given protocol ID stack, the 10-bit and 8-bit ptype
+ * values may differ, so store them in separate splitq and
+ * singleq tables. Several 10-bit ptypes can map to the same
+ * 8-bit ptype, so repeated entries are skipped for singleq.
+ */
+ splitq_pt_lkup[pt_10] = rx_pt;
+ if (!singleq_pt_lkup[pt_8].outer_ip)
+ singleq_pt_lkup[pt_8] = rx_pt;
}
}
out:
- vport->rx_ptype_lkup = no_free_ptr(ptype_lkup);
+ adapter->splitq_pt_lkup = no_free_ptr(splitq_pt_lkup);
+ adapter->singleq_pt_lkup = no_free_ptr(singleq_pt_lkup);
return 0;
}
+/**
+ * idpf_rel_rx_pt_lkup - release RX ptype lookup tables
+ * @adapter: driver specific private structure
+ */
+static void idpf_rel_rx_pt_lkup(struct idpf_adapter *adapter)
+{
+ kfree(adapter->splitq_pt_lkup);
+ adapter->splitq_pt_lkup = NULL;
+
+ kfree(adapter->singleq_pt_lkup);
+ adapter->singleq_pt_lkup = NULL;
+}
+
/**
* idpf_send_ena_dis_loopback_msg - Send virtchnl enable/disable loopback
* message
@@ -2987,6 +2999,13 @@ int idpf_vc_core_init(struct idpf_adapter *adapter)
goto err_intr_req;
}
+ err = idpf_send_get_rx_ptype_msg(adapter);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "failed to get RX ptypes: %d\n",
+ err);
+ goto intr_rel;
+ }
+
err = idpf_ptp_init(adapter);
if (err)
pci_err(adapter->pdev, "PTP init failed, err=%pe\n",
@@ -3004,6 +3023,8 @@ int idpf_vc_core_init(struct idpf_adapter *adapter)
return 0;
+intr_rel:
+ idpf_intr_rel(adapter);
err_intr_req:
cancel_delayed_work_sync(&adapter->serv_task);
cancel_delayed_work_sync(&adapter->mbx_task);
@@ -3056,6 +3077,7 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter)
idpf_ptp_release(adapter);
idpf_deinit_task(adapter);
+ idpf_rel_rx_pt_lkup(adapter);
idpf_intr_rel(adapter);
if (remove_in_prog)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
index 5e3ac5aff635..7b3f422d251a 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
@@ -168,7 +168,6 @@ int idpf_set_promiscuous(struct idpf_adapter *adapter,
struct idpf_vport_user_config_data *config_data,
u32 vport_id);
int idpf_check_supported_desc_ids(struct idpf_vport *vport);
-int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport);
int idpf_send_ena_dis_loopback_msg(struct idpf_adapter *adapter, u32 vport_id,
bool loopback_ena);
int idpf_send_get_stats_msg(struct idpf_netdev_priv *np,
--
2.43.0
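For reference, a minimal sketch (not part of the series) of how the
per-adapter lookup tables above are consumed. idpf_rx_lookup_pt() is a
hypothetical helper used only to illustrate the indexing; the real
lookups live in the idpf RX hot paths. Each queue is assigned the table
matching its queue model in idpf_rxq_group_alloc(), so the hot path
stays a single array dereference:

static struct libeth_rx_pt idpf_rx_lookup_pt(const struct idpf_rx_queue *rxq,
					     u32 ptype)
{
	/* rx_ptype_lkup points at adapter->splitq_pt_lkup
	 * (IDPF_RX_MAX_PTYPE entries, indexed by the 10-bit ptype) or
	 * at adapter->singleq_pt_lkup (IDPF_RX_MAX_BASE_PTYPE entries,
	 * indexed by the 8-bit base ptype), set once at init time.
	 */
	return rxq->rx_ptype_lkup[ptype];
}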
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [Intel-wired-lan] [PATCH net-next v6 9/9] idpf: generalize mailbox API
2025-07-08 0:58 [Intel-wired-lan] [PATCH iwl-next v6 0/9] refactor IDPF resource access Pavan Kumar Linga
` (7 preceding siblings ...)
2025-07-08 0:58 ` [Intel-wired-lan] [PATCH net-next v6 8/9] idpf: avoid calling get_rx_ptypes for each vport Pavan Kumar Linga
@ 2025-07-08 0:58 ` Pavan Kumar Linga
8 siblings, 0 replies; 10+ messages in thread
From: Pavan Kumar Linga @ 2025-07-08 0:58 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Pavan Kumar Linga, Anton Nadezhdin, Madhu Chittim
Add a control queue parameter to all mailbox APIs so that the same
APIs can also be used with a non-default mailbox.
Signed-off-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
---
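Not part of the patch: a minimal sketch of how the reworked calls are
meant to be used once a non-default mailbox exists. sec_asq/sec_arq
below stand in for a future secondary (e.g. PTP) control queue pair;
only the default adapter->hw.asq/arq are created in this series.

static int idpf_example_secondary_mb(struct idpf_adapter *adapter,
				     struct idpf_ctlq_info *sec_asq,
				     struct idpf_ctlq_info *sec_arq,
				     u32 op, u16 len, u8 *buf)
{
	int err;

	/* same API as the default mailbox, different send queue */
	err = idpf_send_mb_msg(adapter, sec_asq, op, len, buf, 0);
	if (err)
		return err;

	/* poll the matching receive queue for the reply */
	return idpf_recv_mb_msg(adapter, sec_arq);
}

Since the queue is now an explicit argument, the same transaction code
can drive both mailboxes without duplicating the send/receive plumbing.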
drivers/net/ethernet/intel/idpf/idpf_lib.c | 2 +-
drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 3 +-
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 33 ++++++++++---------
.../net/ethernet/intel/idpf/idpf_virtchnl.h | 6 ++--
4 files changed, 24 insertions(+), 20 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 23650a1cda29..4c4e3ba5317c 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -1223,7 +1223,7 @@ void idpf_mbx_task(struct work_struct *work)
queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task,
msecs_to_jiffies(300));
- idpf_recv_mb_msg(adapter);
+ idpf_recv_mb_msg(adapter, adapter->hw.arq);
}
/**
diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
index 5b904c232584..c4c3c43767e4 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
@@ -155,7 +155,8 @@ static void idpf_vf_trigger_reset(struct idpf_adapter *adapter,
/* Do not send VIRTCHNL2_OP_RESET_VF message on driver unload */
if (trig_cause == IDPF_HR_FUNC_RESET &&
!test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
- idpf_send_mb_msg(adapter, VIRTCHNL2_OP_RESET_VF, 0, NULL, 0);
+ idpf_send_mb_msg(adapter, adapter->hw.asq,
+ VIRTCHNL2_OP_RESET_VF, 0, NULL, 0);
}
/**
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 7eee3a275e8b..4d5af6299c9b 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -116,13 +116,15 @@ static void idpf_recv_event_msg(struct idpf_adapter *adapter,
/**
* idpf_mb_clean - Reclaim the send mailbox queue entries
- * @adapter: Driver specific private structure
+ * @adapter: driver specific private structure
+ * @asq: send control queue info
*
* Reclaim the send mailbox queue entries to be used to send further messages
*
* Returns 0 on success, negative on failure
*/
-static int idpf_mb_clean(struct idpf_adapter *adapter)
+static int idpf_mb_clean(struct idpf_adapter *adapter,
+ struct idpf_ctlq_info *asq)
{
u16 i, num_q_msg = IDPF_DFLT_MBX_Q_LEN;
struct idpf_ctlq_msg **q_msg;
@@ -133,7 +135,7 @@ static int idpf_mb_clean(struct idpf_adapter *adapter)
if (!q_msg)
return -ENOMEM;
- err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
+ err = idpf_ctlq_clean_sq(asq, &num_q_msg, q_msg);
if (err)
goto err_kfree;
@@ -205,7 +207,8 @@ static void idpf_prepare_ptp_mb_msg(struct idpf_adapter *adapter, u32 op,
/**
* idpf_send_mb_msg - Send message over mailbox
- * @adapter: Driver specific private structure
+ * @adapter: driver specific private structure
+ * @asq: control queue to send message to
* @op: virtchnl opcode
* @msg_size: size of the payload
* @msg: pointer to buffer holding the payload
@@ -215,8 +218,8 @@ static void idpf_prepare_ptp_mb_msg(struct idpf_adapter *adapter, u32 op,
*
* Returns 0 on success, negative on failure
*/
-int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
- u16 msg_size, u8 *msg, u16 cookie)
+int idpf_send_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *asq,
+ u32 op, u16 msg_size, u8 *msg, u16 cookie)
{
struct idpf_ctlq_msg *ctlq_msg;
struct idpf_dma_mem *dma_mem;
@@ -230,7 +233,7 @@ int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
if (idpf_is_reset_detected(adapter))
return 0;
- err = idpf_mb_clean(adapter);
+ err = idpf_mb_clean(adapter, asq);
if (err)
return err;
@@ -266,7 +269,7 @@ int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
ctlq_msg->ctx.indirect.payload = dma_mem;
ctlq_msg->ctx.sw_cookie.data = cookie;
- err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
+ err = idpf_ctlq_send(&adapter->hw, asq, 1, ctlq_msg);
if (err)
goto send_error;
@@ -462,7 +465,7 @@ ssize_t idpf_vc_xn_exec(struct idpf_adapter *adapter,
cookie = FIELD_PREP(IDPF_VC_XN_SALT_M, xn->salt) |
FIELD_PREP(IDPF_VC_XN_IDX_M, xn->idx);
- retval = idpf_send_mb_msg(adapter, params->vc_op,
+ retval = idpf_send_mb_msg(adapter, adapter->hw.asq, params->vc_op,
send_buf->iov_len, send_buf->iov_base,
cookie);
if (retval) {
@@ -661,12 +664,13 @@ idpf_vc_xn_forward_reply(struct idpf_adapter *adapter,
/**
* idpf_recv_mb_msg - Receive message over mailbox
- * @adapter: Driver specific private structure
+ * @adapter: driver specific private structure
+ * @arq: control queue to receive message from
*
* Will receive control queue message and posts the receive buffer. Returns 0
* on success and negative on failure.
*/
-int idpf_recv_mb_msg(struct idpf_adapter *adapter)
+int idpf_recv_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *arq)
{
struct idpf_ctlq_msg ctlq_msg;
struct idpf_dma_mem *dma_mem;
@@ -678,7 +682,7 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter)
* actually received on num_recv.
*/
num_recv = 1;
- err = idpf_ctlq_recv(adapter->hw.arq, &num_recv, &ctlq_msg);
+ err = idpf_ctlq_recv(arq, &num_recv, &ctlq_msg);
if (err || !num_recv)
break;
@@ -694,8 +698,7 @@ int idpf_recv_mb_msg(struct idpf_adapter *adapter)
else
err = idpf_vc_xn_forward_reply(adapter, &ctlq_msg);
- post_err = idpf_ctlq_post_rx_buffs(&adapter->hw,
- adapter->hw.arq,
+ post_err = idpf_ctlq_post_rx_buffs(&adapter->hw, arq,
&num_recv, &dma_mem);
/* If post failed clear the only buffer we supplied */
@@ -2828,7 +2831,7 @@ int idpf_init_dflt_mbx(struct idpf_adapter *adapter)
void idpf_deinit_dflt_mbx(struct idpf_adapter *adapter)
{
if (adapter->hw.arq && adapter->hw.asq) {
- idpf_mb_clean(adapter);
+ idpf_mb_clean(adapter, adapter->hw.asq);
idpf_ctlq_deinit(&adapter->hw);
}
adapter->hw.arq = NULL;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
index 7b3f422d251a..cecb95f37645 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
@@ -115,9 +115,9 @@ bool idpf_sideband_action_ena(struct idpf_vport *vport,
struct ethtool_rx_flow_spec *fsp);
unsigned int idpf_fsteer_max_rules(struct idpf_vport *vport);
-int idpf_recv_mb_msg(struct idpf_adapter *adapter);
-int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
- u16 msg_size, u8 *msg, u16 cookie);
+int idpf_recv_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *arq);
+int idpf_send_mb_msg(struct idpf_adapter *adapter, struct idpf_ctlq_info *asq,
+ u32 op, u16 msg_size, u8 *msg, u16 cookie);
int idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q);
u32 idpf_get_vport_id(struct idpf_vport *vport);
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread