* [net-next PATCH v1 0/9] Introduce NAPI queues support
@ 2023-07-29  0:46 Amritha Nambiar
  2023-07-29  0:46 ` [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations Amritha Nambiar
                   ` (8 more replies)
  0 siblings, 9 replies; 26+ messages in thread
From: Amritha Nambiar @ 2023-07-29  0:46 UTC (permalink / raw)
  To: netdev, kuba, davem; +Cc: sridhar.samudrala, amritha.nambiar

Introduce support for associating NAPI instances with
their corresponding RX and TX queue sets, and add the
capability to export the NAPI information supported by
the device. Extend the netdev_genl generic netlink family
for netdev with NAPI data. The NAPI fields exposed are:
- NAPI id
- queue/queue-set (both RX and TX) associated with each
  NAPI instance
- Interrupt number associated with the NAPI instance
- PID for the NAPI thread

This series only adds the 'get' ability for retrieving
these NAPI attributes. The 'set' ability, for configuring
the queue[s] associated with a NAPI instance via
netdev-genl, will be submitted as a separate patch series.

Previous discussion at:
https://lore.kernel.org/netdev/c8476530638a5f4381d64db0e024ed49c2db3b02.camel@gmail.com/T/#m00999652a8b4731fbdb7bf698d2e3666c65a60e7

$ ./cli.py --spec netdev.yaml  --do napi-get --json='{"ifindex": 6}'

[{'ifindex': 6},
 {'napi-info': [{'irq': 296,
                 'napi-id': 390,
                 'pid': 3475,
                 'rx-queues': [5],
                 'tx-queues': [5]},
                {'irq': 295,
                 'napi-id': 389,
                 'pid': 3474,
                 'rx-queues': [4],
                 'tx-queues': [4]},
                {'irq': 294,
                 'napi-id': 388,
                 'pid': 3473,
                 'rx-queues': [3],
                 'tx-queues': [3]},
                {'irq': 293,
                 'napi-id': 387,
                 'pid': 3472,
                 'rx-queues': [2],
                 'tx-queues': [2]},
                {'irq': 292,
                 'napi-id': 386,
                 'pid': 3471,
                 'rx-queues': [1],
                 'tx-queues': [1]},
                {'irq': 291,
                 'napi-id': 385,
                 'pid': 3470,
                 'rx-queues': [0],
                 'tx-queues': [0]}]}]
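
The dump variant of the same command walks every netdev and reports
the per-device NAPI information (shown here only as a sketch; it
assumes the standard --dump mode of the ynl cli.py tool):

$ ./cli.py --spec netdev.yaml --dump napi-get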
 
RFC -> v1
* Changed to separate 'napi_get' command
* Added support to expose interrupt and PID for the NAPI
* Used list of netdev queue structs
* Split patches further and fixed code style and errors

---

Amritha Nambiar (9):
      net: Introduce new fields for napi and queue associations
      ice: Add support in the driver for associating napi with queue[s]
      netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI
      net: Move kernel helpers for queue index outside sysfs
      netdev-genl: Add netlink framework functions for napi
      netdev-genl: spec: Add irq in netdev netlink YAML spec
      net: Add NAPI IRQ support
      netdev-genl: spec: Add PID in netdev netlink YAML spec
      netdev-genl: Add PID for the NAPI thread


 Documentation/netlink/specs/netdev.yaml   |   54 ++++++
 drivers/net/ethernet/intel/ice/ice_lib.c  |   60 ++++++
 drivers/net/ethernet/intel/ice/ice_lib.h  |    4 
 drivers/net/ethernet/intel/ice/ice_main.c |    4 
 include/linux/netdevice.h                 |   41 ++++
 include/uapi/linux/netdev.h               |   20 ++
 net/core/dev.c                            |   53 ++++++
 net/core/net-sysfs.c                      |   11 -
 net/core/netdev-genl-gen.c                |   17 ++
 net/core/netdev-genl-gen.h                |    2 
 net/core/netdev-genl.c                    |  270 +++++++++++++++++++++++++++++
 tools/include/uapi/linux/netdev.h         |   20 ++
 tools/net/ynl/generated/netdev-user.c     |  232 +++++++++++++++++++++++++
 tools/net/ynl/generated/netdev-user.h     |   67 +++++++
 tools/net/ynl/ynl-gen-c.py                |    2 
 15 files changed, 841 insertions(+), 16 deletions(-)

--


* [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations
  2023-07-29  0:46 [net-next PATCH v1 0/9] Introduce NAPI queues support Amritha Nambiar
@ 2023-07-29  0:46 ` Amritha Nambiar
  2023-07-29  9:55   ` kernel test robot
  2023-07-30 17:10   ` Simon Horman
  2023-07-29  0:47 ` [net-next PATCH v1 2/9] ice: Add support in the driver for associating napi with queue[s] Amritha Nambiar
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 26+ messages in thread
From: Amritha Nambiar @ 2023-07-29  0:46 UTC (permalink / raw)
  To: netdev, kuba, davem; +Cc: sridhar.samudrala, amritha.nambiar

Add a napi pointer to the netdev queue structures to track the
napi instance for each queue. This establishes the queue<->napi
mapping.

Introduce the new napi fields 'napi_rxq_list' and 'napi_txq_list'
for the rx and tx queue sets associated with the napi, and add
functions to associate queues with a napi and to handle their
removal. This allows listing the queue/queue-set on the
corresponding irq line for each napi instance.
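
For illustration only (not part of the patch), a minimal sketch of how
a driver is expected to wire its queues to a napi with the new helper;
'struct my_vector', 'my_drv_poll', 'rx_qid' and 'tx_qid' are
hypothetical driver-private names:

/* Sketch: associate one rx and one tx queue with the vector's napi
 * after netif_napi_add() has initialized the napi instance.
 */
static int my_drv_add_napi_queues(struct net_device *netdev,
				  struct my_vector *vec)
{
	int err;

	netif_napi_add(netdev, &vec->napi, my_drv_poll);

	err = netif_napi_add_queue(&vec->napi, vec->rx_qid, NAPI_QUEUE_RX);
	if (err)
		return err;

	return netif_napi_add_queue(&vec->napi, vec->tx_qid, NAPI_QUEUE_TX);
}

The ice changes in patch 2/9 follow exactly this pattern for every
ring on each q_vector.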


Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 include/linux/netdevice.h |   19 ++++++++++++++++
 net/core/dev.c            |   52 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 71 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 84c36a7f873f..7299872bfdff 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -342,6 +342,14 @@ struct gro_list {
  */
 #define GRO_HASH_BUCKETS	8
 
+/*
+ * napi queue container type
+ */
+enum queue_type {
+	NAPI_QUEUE_RX,
+	NAPI_QUEUE_TX,
+};
+
 /*
  * Structure for NAPI scheduling similar to tasklet but with weighting
  */
@@ -376,6 +384,8 @@ struct napi_struct {
 	/* control-path-only fields follow */
 	struct list_head	dev_list;
 	struct hlist_node	napi_hash_node;
+	struct list_head	napi_rxq_list;
+	struct list_head	napi_txq_list;
 };
 
 enum {
@@ -651,6 +661,9 @@ struct netdev_queue {
 
 	unsigned long		state;
 
+	/* NAPI instance for the queue */
+	struct napi_struct      *napi;
+	struct list_head        q_list;
 #ifdef CONFIG_BQL
 	struct dql		dql;
 #endif
@@ -796,6 +809,9 @@ struct netdev_rx_queue {
 #ifdef CONFIG_XDP_SOCKETS
 	struct xsk_buff_pool            *pool;
 #endif
+	struct list_head		q_list;
+	/* NAPI instance for the queue */
+	struct napi_struct		*napi;
 } ____cacheline_aligned_in_smp;
 
 /*
@@ -2618,6 +2634,9 @@ static inline void *netdev_priv(const struct net_device *dev)
  */
 #define SET_NETDEV_DEVTYPE(net, devtype)	((net)->dev.type = (devtype))
 
+int netif_napi_add_queue(struct napi_struct *napi, unsigned int queue_index,
+			 enum queue_type type);
+
 /* Default NAPI poll() weight
  * Device drivers are strongly advised to not use bigger value
  */
diff --git a/net/core/dev.c b/net/core/dev.c
index b58674774a57..875023ab614c 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6389,6 +6389,42 @@ int dev_set_threaded(struct net_device *dev, bool threaded)
 }
 EXPORT_SYMBOL(dev_set_threaded);
 
+/**
+ * netif_napi_add_queue - Associate queue with the napi
+ * @napi: NAPI context
+ * @queue_index: Index of queue
+ * @type: queue type as RX or TX
+ *
+ * Add queue with its corresponding napi context
+ */
+int netif_napi_add_queue(struct napi_struct *napi, unsigned int queue_index,
+			 enum queue_type type)
+{
+	struct net_device *dev = napi->dev;
+	struct netdev_rx_queue *rxq;
+	struct netdev_queue *txq;
+
+	if (!dev)
+		return -EINVAL;
+
+	switch (type) {
+	case NAPI_QUEUE_RX:
+		rxq = __netif_get_rx_queue(dev, queue_index);
+		rxq->napi = napi;
+		list_add_rcu(&rxq->q_list, &napi->napi_rxq_list);
+		break;
+	case NAPI_QUEUE_TX:
+		txq = netdev_get_tx_queue(dev, queue_index);
+		txq->napi = napi;
+		list_add_rcu(&txq->q_list, &napi->napi_txq_list);
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+EXPORT_SYMBOL(netif_napi_add_queue);
+
 void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
 			   int (*poll)(struct napi_struct *, int), int weight)
 {
@@ -6424,6 +6460,9 @@ void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
 	 */
 	if (dev->threaded && napi_kthread_create(napi))
 		dev->threaded = 0;
+
+	INIT_LIST_HEAD(&napi->napi_rxq_list);
+	INIT_LIST_HEAD(&napi->napi_txq_list);
 }
 EXPORT_SYMBOL(netif_napi_add_weight);
 
@@ -6485,6 +6524,18 @@ static void flush_gro_hash(struct napi_struct *napi)
 	}
 }
 
+static void napi_del_queues(struct napi_struct *napi)
+{
+	struct netdev_rx_queue *rx_queue, *rxq;
+	struct netdev_queue *tx_queue, *txq;
+
+	list_for_each_entry_safe(rx_queue, rxq, &napi->napi_rxq_list, q_list)
+		list_del_rcu(&rx_queue->q_list);
+
+	list_for_each_entry_safe(tx_queue, txq, &napi->napi_txq_list, q_list)
+		list_del_rcu(&tx_queue->q_list);
+}
+
 /* Must be called in process context */
 void __netif_napi_del(struct napi_struct *napi)
 {
@@ -6502,6 +6553,7 @@ void __netif_napi_del(struct napi_struct *napi)
 		kthread_stop(napi->thread);
 		napi->thread = NULL;
 	}
+	napi_del_queues(napi);
 }
 EXPORT_SYMBOL(__netif_napi_del);
 



* [net-next PATCH v1 2/9] ice: Add support in the driver for associating napi with queue[s]
  2023-07-29  0:46 [net-next PATCH v1 0/9] Introduce NAPI queues support Amritha Nambiar
  2023-07-29  0:46 ` [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations Amritha Nambiar
@ 2023-07-29  0:47 ` Amritha Nambiar
  2023-07-29  0:47 ` [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI Amritha Nambiar
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: Amritha Nambiar @ 2023-07-29  0:47 UTC (permalink / raw)
  To: netdev, kuba, davem; +Cc: sridhar.samudrala, amritha.nambiar

After the napi context is initialized, map the napi instance
to the queue/queue-set on the corresponding irq line.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_lib.c  |   57 +++++++++++++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_lib.h  |    4 ++
 drivers/net/ethernet/intel/ice/ice_main.c |    4 ++
 3 files changed, 64 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 077f2e91ae1a..171177db8fb4 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2464,6 +2464,12 @@ ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vsi_cfg_params *params)
 			goto unroll_vector_base;
 
 		ice_vsi_map_rings_to_vectors(vsi);
+
+		/* Associate q_vector rings to napi */
+		ret = ice_vsi_add_napi_queues(vsi);
+		if (ret)
+			goto unroll_vector_base;
+
 		vsi->stat_offsets_loaded = false;
 
 		if (ice_is_xdp_ena_vsi(vsi)) {
@@ -2943,6 +2949,57 @@ void ice_vsi_dis_irq(struct ice_vsi *vsi)
 		synchronize_irq(vsi->q_vectors[i]->irq.virq);
 }
 
+/**
+ * ice_q_vector_add_napi_queues - Add queue[s] associated with the napi
+ * @q_vector: q_vector pointer
+ *
+ * Associate the q_vector napi with all the queue[s] on the vector
+ * Returns 0 on success or < 0 on error
+ */
+int ice_q_vector_add_napi_queues(struct ice_q_vector *q_vector)
+{
+	struct ice_rx_ring *rx_ring;
+	struct ice_tx_ring *tx_ring;
+	int ret = 0;
+
+	ice_for_each_rx_ring(rx_ring, q_vector->rx) {
+		ret = netif_napi_add_queue(&q_vector->napi, rx_ring->q_index,
+					   NAPI_QUEUE_RX);
+		if (ret)
+			return ret;
+	}
+	ice_for_each_tx_ring(tx_ring, q_vector->tx) {
+		ret = netif_napi_add_queue(&q_vector->napi, tx_ring->q_index,
+					   NAPI_QUEUE_TX);
+		if (ret)
+			return ret;
+	}
+
+	return ret;
+}
+
+/**
+ * ice_vsi_add_napi_queues
+ * @vsi: VSI pointer
+ *
+ * Associate queue[s] with napi for all vectors
+ * Returns 0 on success or < 0 on error
+ */
+int ice_vsi_add_napi_queues(struct ice_vsi *vsi)
+{
+	int i, ret = 0;
+
+	if (!vsi->netdev)
+		return ret;
+
+	ice_for_each_q_vector(vsi, i) {
+		ret = ice_q_vector_add_napi_queues(vsi->q_vectors[i]);
+		if (ret)
+			return ret;
+	}
+	return ret;
+}
+
 /**
  * ice_napi_del - Remove NAPI handler for the VSI
  * @vsi: VSI for which NAPI handler is to be removed
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index dd53fe968ad8..26c427cddf63 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -93,6 +93,10 @@ void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc);
 struct ice_vsi *
 ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params);
 
+int ice_q_vector_add_napi_queues(struct ice_q_vector *q_vector);
+
+int ice_vsi_add_napi_queues(struct ice_vsi *vsi);
+
 void ice_napi_del(struct ice_vsi *vsi);
 
 int ice_vsi_release(struct ice_vsi *vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 20d5ed572a8c..d3a2fc20157c 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3373,9 +3373,11 @@ static void ice_napi_add(struct ice_vsi *vsi)
 	if (!vsi->netdev)
 		return;
 
-	ice_for_each_q_vector(vsi, v_idx)
+	ice_for_each_q_vector(vsi, v_idx) {
 		netif_napi_add(vsi->netdev, &vsi->q_vectors[v_idx]->napi,
 			       ice_napi_poll);
+		ice_q_vector_add_napi_queues(vsi->q_vectors[v_idx]);
+	}
 }
 
 /**



* [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI
  2023-07-29  0:46 [net-next PATCH v1 0/9] Introduce NAPI queues support Amritha Nambiar
  2023-07-29  0:46 ` [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations Amritha Nambiar
  2023-07-29  0:47 ` [net-next PATCH v1 2/9] ice: Add support in the driver for associating napi with queue[s] Amritha Nambiar
@ 2023-07-29  0:47 ` Amritha Nambiar
  2023-07-31 19:36   ` Jakub Kicinski
  2023-07-29  0:47 ` [net-next PATCH v1 4/9] net: Move kernel helpers for queue index outside sysfs Amritha Nambiar
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Amritha Nambiar @ 2023-07-29  0:47 UTC (permalink / raw)
  To: netdev, kuba, davem; +Cc: sridhar.samudrala, amritha.nambiar

Add support in the netlink spec (netdev.yaml) for napi-related
information. Add the code generated from the spec.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 Documentation/netlink/specs/netdev.yaml |   46 ++++++
 include/uapi/linux/netdev.h             |   18 +++
 net/core/netdev-genl-gen.c              |   17 ++
 net/core/netdev-genl-gen.h              |    2 
 net/core/netdev-genl.c                  |   10 +
 tools/include/uapi/linux/netdev.h       |   18 +++
 tools/net/ynl/generated/netdev-user.c   |  220 +++++++++++++++++++++++++++++++
 tools/net/ynl/generated/netdev-user.h   |   63 +++++++++
 8 files changed, 394 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index 1c7284fd535b..507cea4f2319 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -68,6 +68,38 @@ attribute-sets:
         type: u32
         checks:
           min: 1
+  -
+    name: napi-info-entry
+    attributes:
+      -
+        name: napi-id
+        doc: napi id
+        type: u32
+      -
+        name: rx-queues
+        doc: list of rx queues associated with a napi
+        type: u32
+        multi-attr: true
+      -
+        name: tx-queues
+        doc: list of tx queues associated with a napi
+        type: u32
+        multi-attr: true
+  -
+    name: napi
+    attributes:
+      -
+        name: ifindex
+        doc: netdev ifindex
+        type: u32
+        checks:
+          min: 1
+      -
+        name: napi-info
+        doc: napi information such as napi-id, napi queues etc.
+        type: nest
+        multi-attr: true
+        nested-attributes: napi-info-entry
 
 operations:
   list:
@@ -101,6 +133,20 @@ operations:
       doc: Notification about device configuration being changed.
       notify: dev-get
       mcgrp: mgmt
+    -
+      name: napi-get
+      doc: napi information such as napi-id, napi queues etc.
+      attribute-set: napi
+      do:
+        request:
+          attributes:
+            - ifindex
+        reply: &napi-all
+          attributes:
+            - ifindex
+            - napi-info
+      dump:
+        reply: *napi-all
 
 mcast-groups:
   list:
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index c1634b95c223..bc06f692d9fd 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -48,11 +48,29 @@ enum {
 	NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)
 };
 
+enum {
+	NETDEV_A_NAPI_INFO_ENTRY_NAPI_ID = 1,
+	NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES,
+	NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES,
+
+	__NETDEV_A_NAPI_INFO_ENTRY_MAX,
+	NETDEV_A_NAPI_INFO_ENTRY_MAX = (__NETDEV_A_NAPI_INFO_ENTRY_MAX - 1)
+};
+
+enum {
+	NETDEV_A_NAPI_IFINDEX = 1,
+	NETDEV_A_NAPI_NAPI_INFO,
+
+	__NETDEV_A_NAPI_MAX,
+	NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
+};
+
 enum {
 	NETDEV_CMD_DEV_GET = 1,
 	NETDEV_CMD_DEV_ADD_NTF,
 	NETDEV_CMD_DEV_DEL_NTF,
 	NETDEV_CMD_DEV_CHANGE_NTF,
+	NETDEV_CMD_NAPI_GET,
 
 	__NETDEV_CMD_MAX,
 	NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index ea9231378aa6..d09ce5db8b79 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -15,6 +15,11 @@ static const struct nla_policy netdev_dev_get_nl_policy[NETDEV_A_DEV_IFINDEX + 1
 	[NETDEV_A_DEV_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
 };
 
+/* NETDEV_CMD_NAPI_GET - do */
+static const struct nla_policy netdev_napi_get_nl_policy[NETDEV_A_NAPI_IFINDEX + 1] = {
+	[NETDEV_A_NAPI_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
+};
+
 /* Ops table for netdev */
 static const struct genl_split_ops netdev_nl_ops[] = {
 	{
@@ -29,6 +34,18 @@ static const struct genl_split_ops netdev_nl_ops[] = {
 		.dumpit	= netdev_nl_dev_get_dumpit,
 		.flags	= GENL_CMD_CAP_DUMP,
 	},
+	{
+		.cmd		= NETDEV_CMD_NAPI_GET,
+		.doit		= netdev_nl_napi_get_doit,
+		.policy		= netdev_napi_get_nl_policy,
+		.maxattr	= NETDEV_A_NAPI_IFINDEX,
+		.flags		= GENL_CMD_CAP_DO,
+	},
+	{
+		.cmd	= NETDEV_CMD_NAPI_GET,
+		.dumpit	= netdev_nl_napi_get_dumpit,
+		.flags	= GENL_CMD_CAP_DUMP,
+	},
 };
 
 static const struct genl_multicast_group netdev_nl_mcgrps[] = {
diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h
index 7b370c073e7d..46dab8ccd568 100644
--- a/net/core/netdev-genl-gen.h
+++ b/net/core/netdev-genl-gen.h
@@ -13,6 +13,8 @@
 
 int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info);
 int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
+int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info);
+int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
 
 enum {
 	NETDEV_NLGRP_MGMT,
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 797c813c7c77..e35cfa3cd173 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -120,6 +120,16 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 	return skb->len;
 }
 
+int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+{
+	return -EOPNOTSUPP;
+}
+
+int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
+{
+	return -EOPNOTSUPP;
+}
+
 static int netdev_genl_netdevice_event(struct notifier_block *nb,
 				       unsigned long event, void *ptr)
 {
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index c1634b95c223..bc06f692d9fd 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -48,11 +48,29 @@ enum {
 	NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)
 };
 
+enum {
+	NETDEV_A_NAPI_INFO_ENTRY_NAPI_ID = 1,
+	NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES,
+	NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES,
+
+	__NETDEV_A_NAPI_INFO_ENTRY_MAX,
+	NETDEV_A_NAPI_INFO_ENTRY_MAX = (__NETDEV_A_NAPI_INFO_ENTRY_MAX - 1)
+};
+
+enum {
+	NETDEV_A_NAPI_IFINDEX = 1,
+	NETDEV_A_NAPI_NAPI_INFO,
+
+	__NETDEV_A_NAPI_MAX,
+	NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
+};
+
 enum {
 	NETDEV_CMD_DEV_GET = 1,
 	NETDEV_CMD_DEV_ADD_NTF,
 	NETDEV_CMD_DEV_DEL_NTF,
 	NETDEV_CMD_DEV_CHANGE_NTF,
+	NETDEV_CMD_NAPI_GET,
 
 	__NETDEV_CMD_MAX,
 	NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
diff --git a/tools/net/ynl/generated/netdev-user.c b/tools/net/ynl/generated/netdev-user.c
index 68b408ca0f7f..e9a6c8cb5c68 100644
--- a/tools/net/ynl/generated/netdev-user.c
+++ b/tools/net/ynl/generated/netdev-user.c
@@ -18,6 +18,7 @@ static const char * const netdev_op_strmap[] = {
 	[NETDEV_CMD_DEV_ADD_NTF] = "dev-add-ntf",
 	[NETDEV_CMD_DEV_DEL_NTF] = "dev-del-ntf",
 	[NETDEV_CMD_DEV_CHANGE_NTF] = "dev-change-ntf",
+	[NETDEV_CMD_NAPI_GET] = "napi-get",
 };
 
 const char *netdev_op_str(int op)
@@ -46,6 +47,17 @@ const char *netdev_xdp_act_str(enum netdev_xdp_act value)
 }
 
 /* Policies */
+struct ynl_policy_attr netdev_napi_info_entry_policy[NETDEV_A_NAPI_INFO_ENTRY_MAX + 1] = {
+	[NETDEV_A_NAPI_INFO_ENTRY_NAPI_ID] = { .name = "napi-id", .type = YNL_PT_U32, },
+	[NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES] = { .name = "rx-queues", .type = YNL_PT_U32, },
+	[NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES] = { .name = "tx-queues", .type = YNL_PT_U32, },
+};
+
+struct ynl_policy_nest netdev_napi_info_entry_nest = {
+	.max_attr = NETDEV_A_NAPI_INFO_ENTRY_MAX,
+	.table = netdev_napi_info_entry_policy,
+};
+
 struct ynl_policy_attr netdev_dev_policy[NETDEV_A_DEV_MAX + 1] = {
 	[NETDEV_A_DEV_IFINDEX] = { .name = "ifindex", .type = YNL_PT_U32, },
 	[NETDEV_A_DEV_PAD] = { .name = "pad", .type = YNL_PT_IGNORE, },
@@ -58,7 +70,78 @@ struct ynl_policy_nest netdev_dev_nest = {
 	.table = netdev_dev_policy,
 };
 
+struct ynl_policy_attr netdev_napi_policy[NETDEV_A_NAPI_MAX + 1] = {
+	[NETDEV_A_NAPI_IFINDEX] = { .name = "ifindex", .type = YNL_PT_U32, },
+	[NETDEV_A_NAPI_NAPI_INFO] = { .name = "napi-info", .type = YNL_PT_NEST, .nest = &netdev_napi_info_entry_nest, },
+};
+
+struct ynl_policy_nest netdev_napi_nest = {
+	.max_attr = NETDEV_A_NAPI_MAX,
+	.table = netdev_napi_policy,
+};
+
 /* Common nested types */
+void netdev_napi_info_entry_free(struct netdev_napi_info_entry *obj)
+{
+	free(obj->rx_queues);
+	free(obj->tx_queues);
+}
+
+int netdev_napi_info_entry_parse(struct ynl_parse_arg *yarg,
+				 const struct nlattr *nested)
+{
+	struct netdev_napi_info_entry *dst = yarg->data;
+	unsigned int n_rx_queues = 0;
+	unsigned int n_tx_queues = 0;
+	const struct nlattr *attr;
+	int i;
+
+	if (dst->rx_queues)
+		return ynl_error_parse(yarg, "attribute already present (napi-info-entry.rx-queues)");
+	if (dst->tx_queues)
+		return ynl_error_parse(yarg, "attribute already present (napi-info-entry.tx-queues)");
+
+	mnl_attr_for_each_nested(attr, nested) {
+		unsigned int type = mnl_attr_get_type(attr);
+
+		if (type == NETDEV_A_NAPI_INFO_ENTRY_NAPI_ID) {
+			if (ynl_attr_validate(yarg, attr))
+				return MNL_CB_ERROR;
+			dst->_present.napi_id = 1;
+			dst->napi_id = mnl_attr_get_u32(attr);
+		} else if (type == NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES) {
+			n_rx_queues++;
+		} else if (type == NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES) {
+			n_tx_queues++;
+		}
+	}
+
+	if (n_rx_queues) {
+		dst->rx_queues = calloc(n_rx_queues, sizeof(*dst->rx_queues));
+		dst->n_rx_queues = n_rx_queues;
+		i = 0;
+		mnl_attr_for_each_nested(attr, nested) {
+			if (mnl_attr_get_type(attr) == NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES) {
+				dst->rx_queues[i] = mnl_attr_get_u32(attr);
+				i++;
+			}
+		}
+	}
+	if (n_tx_queues) {
+		dst->tx_queues = calloc(n_tx_queues, sizeof(*dst->tx_queues));
+		dst->n_tx_queues = n_tx_queues;
+		i = 0;
+		mnl_attr_for_each_nested(attr, nested) {
+			if (mnl_attr_get_type(attr) == NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES) {
+				dst->tx_queues[i] = mnl_attr_get_u32(attr);
+				i++;
+			}
+		}
+	}
+
+	return 0;
+}
+
 /* ============== NETDEV_CMD_DEV_GET ============== */
 /* NETDEV_CMD_DEV_GET - do */
 void netdev_dev_get_req_free(struct netdev_dev_get_req *req)
@@ -178,6 +261,143 @@ void netdev_dev_get_ntf_free(struct netdev_dev_get_ntf *rsp)
 	free(rsp);
 }
 
+/* ============== NETDEV_CMD_NAPI_GET ============== */
+/* NETDEV_CMD_NAPI_GET - do */
+void netdev_napi_get_req_free(struct netdev_napi_get_req *req)
+{
+	free(req);
+}
+
+void netdev_napi_get_rsp_free(struct netdev_napi_get_rsp *rsp)
+{
+	unsigned int i;
+
+	for (i = 0; i < rsp->n_napi_info; i++)
+		netdev_napi_info_entry_free(&rsp->napi_info[i]);
+	free(rsp->napi_info);
+	free(rsp);
+}
+
+int netdev_napi_get_rsp_parse(const struct nlmsghdr *nlh, void *data)
+{
+	struct ynl_parse_arg *yarg = data;
+	struct netdev_napi_get_rsp *dst;
+	unsigned int n_napi_info = 0;
+	const struct nlattr *attr;
+	struct ynl_parse_arg parg;
+	int i;
+
+	dst = yarg->data;
+	parg.ys = yarg->ys;
+
+	if (dst->napi_info)
+		return ynl_error_parse(yarg, "attribute already present (napi.napi-info)");
+
+	mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+		unsigned int type = mnl_attr_get_type(attr);
+
+		if (type == NETDEV_A_NAPI_IFINDEX) {
+			if (ynl_attr_validate(yarg, attr))
+				return MNL_CB_ERROR;
+			dst->_present.ifindex = 1;
+			dst->ifindex = mnl_attr_get_u32(attr);
+		} else if (type == NETDEV_A_NAPI_NAPI_INFO) {
+			n_napi_info++;
+		}
+	}
+
+	if (n_napi_info) {
+		dst->napi_info = calloc(n_napi_info, sizeof(*dst->napi_info));
+		dst->n_napi_info = n_napi_info;
+		i = 0;
+		parg.rsp_policy = &netdev_napi_info_entry_nest;
+		mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+			if (mnl_attr_get_type(attr) == NETDEV_A_NAPI_NAPI_INFO) {
+				parg.data = &dst->napi_info[i];
+				if (netdev_napi_info_entry_parse(&parg, attr))
+					return MNL_CB_ERROR;
+				i++;
+			}
+		}
+	}
+
+	return MNL_CB_OK;
+}
+
+struct netdev_napi_get_rsp *
+netdev_napi_get(struct ynl_sock *ys, struct netdev_napi_get_req *req)
+{
+	struct ynl_req_state yrs = { .yarg = { .ys = ys, }, };
+	struct netdev_napi_get_rsp *rsp;
+	struct nlmsghdr *nlh;
+	int err;
+
+	nlh = ynl_gemsg_start_req(ys, ys->family_id, NETDEV_CMD_NAPI_GET, 1);
+	ys->req_policy = &netdev_napi_nest;
+	yrs.yarg.rsp_policy = &netdev_napi_nest;
+
+	if (req->_present.ifindex)
+		mnl_attr_put_u32(nlh, NETDEV_A_NAPI_IFINDEX, req->ifindex);
+
+	rsp = calloc(1, sizeof(*rsp));
+	yrs.yarg.data = rsp;
+	yrs.cb = netdev_napi_get_rsp_parse;
+	yrs.rsp_cmd = NETDEV_CMD_NAPI_GET;
+
+	err = ynl_exec(ys, nlh, &yrs);
+	if (err < 0)
+		goto err_free;
+
+	return rsp;
+
+err_free:
+	netdev_napi_get_rsp_free(rsp);
+	return NULL;
+}
+
+/* NETDEV_CMD_NAPI_GET - dump */
+void netdev_napi_get_list_free(struct netdev_napi_get_list *rsp)
+{
+	struct netdev_napi_get_list *next = rsp;
+
+	while ((void *)next != YNL_LIST_END) {
+		unsigned int i;
+
+		rsp = next;
+		next = rsp->next;
+
+		for (i = 0; i < rsp->obj.n_napi_info; i++)
+			netdev_napi_info_entry_free(&rsp->obj.napi_info[i]);
+		free(rsp->obj.napi_info);
+		free(rsp);
+	}
+}
+
+struct netdev_napi_get_list *netdev_napi_get_dump(struct ynl_sock *ys)
+{
+	struct ynl_dump_state yds = {};
+	struct nlmsghdr *nlh;
+	int err;
+
+	yds.ys = ys;
+	yds.alloc_sz = sizeof(struct netdev_napi_get_list);
+	yds.cb = netdev_napi_get_rsp_parse;
+	yds.rsp_cmd = NETDEV_CMD_NAPI_GET;
+	yds.rsp_policy = &netdev_napi_nest;
+
+	nlh = ynl_gemsg_start_dump(ys, ys->family_id, NETDEV_CMD_NAPI_GET, 1);
+
+	err = ynl_exec_dump(ys, nlh, &yds);
+	if (err < 0)
+		goto free_list;
+
+	return yds.first;
+
+free_list:
+	netdev_napi_get_list_free(yds.first);
+	return NULL;
+}
+
 static const struct ynl_ntf_info netdev_ntf_info[] =  {
 	[NETDEV_CMD_DEV_ADD_NTF] =  {
 		.alloc_sz	= sizeof(struct netdev_dev_get_ntf),
diff --git a/tools/net/ynl/generated/netdev-user.h b/tools/net/ynl/generated/netdev-user.h
index 0952d3261f4d..9274711bd862 100644
--- a/tools/net/ynl/generated/netdev-user.h
+++ b/tools/net/ynl/generated/netdev-user.h
@@ -20,6 +20,18 @@ const char *netdev_op_str(int op);
 const char *netdev_xdp_act_str(enum netdev_xdp_act value);
 
 /* Common nested types */
+struct netdev_napi_info_entry {
+	struct {
+		__u32 napi_id:1;
+	} _present;
+
+	__u32 napi_id;
+	unsigned int n_rx_queues;
+	__u32 *rx_queues;
+	unsigned int n_tx_queues;
+	__u32 *tx_queues;
+};
+
 /* ============== NETDEV_CMD_DEV_GET ============== */
 /* NETDEV_CMD_DEV_GET - do */
 struct netdev_dev_get_req {
@@ -84,4 +96,55 @@ struct netdev_dev_get_ntf {
 
 void netdev_dev_get_ntf_free(struct netdev_dev_get_ntf *rsp);
 
+/* ============== NETDEV_CMD_NAPI_GET ============== */
+/* NETDEV_CMD_NAPI_GET - do */
+struct netdev_napi_get_req {
+	struct {
+		__u32 ifindex:1;
+	} _present;
+
+	__u32 ifindex;
+};
+
+static inline struct netdev_napi_get_req *netdev_napi_get_req_alloc(void)
+{
+	return calloc(1, sizeof(struct netdev_napi_get_req));
+}
+void netdev_napi_get_req_free(struct netdev_napi_get_req *req);
+
+static inline void
+netdev_napi_get_req_set_ifindex(struct netdev_napi_get_req *req, __u32 ifindex)
+{
+	req->_present.ifindex = 1;
+	req->ifindex = ifindex;
+}
+
+struct netdev_napi_get_rsp {
+	struct {
+		__u32 ifindex:1;
+	} _present;
+
+	__u32 ifindex;
+	unsigned int n_napi_info;
+	struct netdev_napi_info_entry *napi_info;
+};
+
+void netdev_napi_get_rsp_free(struct netdev_napi_get_rsp *rsp);
+
+/*
+ * napi information such as napi-id, napi queues etc.
+ */
+struct netdev_napi_get_rsp *
+netdev_napi_get(struct ynl_sock *ys, struct netdev_napi_get_req *req);
+
+/* NETDEV_CMD_NAPI_GET - dump */
+struct netdev_napi_get_list {
+	struct netdev_napi_get_list *next;
+	struct netdev_napi_get_rsp obj __attribute__ ((aligned (8)));
+};
+
+void netdev_napi_get_list_free(struct netdev_napi_get_list *rsp);
+
+struct netdev_napi_get_list *netdev_napi_get_dump(struct ynl_sock *ys);
+
 #endif /* _LINUX_NETDEV_GEN_H */
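
For illustration only (not part of the patch), a sketch of a userspace
consumer of the generated API above; it assumes the ynl C library
helpers (ynl_sock_create()/ynl_sock_destroy()) and the generated
ynl_netdev_family symbol, with error reporting trimmed:

#include <stdio.h>
#include <ynl.h>
#include "netdev-user.h"

/* Sketch: query NAPI info for ifindex 6 and print each napi id with
 * its rx/tx queue counts.
 */
int main(void)
{
	struct netdev_napi_get_req *req;
	struct netdev_napi_get_rsp *rsp;
	struct ynl_error yerr;
	struct ynl_sock *ys;
	unsigned int i;

	ys = ynl_sock_create(&ynl_netdev_family, &yerr);
	if (!ys)
		return 1;

	req = netdev_napi_get_req_alloc();
	netdev_napi_get_req_set_ifindex(req, 6);

	rsp = netdev_napi_get(ys, req);
	netdev_napi_get_req_free(req);
	if (rsp) {
		for (i = 0; i < rsp->n_napi_info; i++)
			printf("napi-id %u: %u rx queue(s), %u tx queue(s)\n",
			       rsp->napi_info[i].napi_id,
			       rsp->napi_info[i].n_rx_queues,
			       rsp->napi_info[i].n_tx_queues);
		netdev_napi_get_rsp_free(rsp);
	}

	ynl_sock_destroy(ys);
	return 0;
}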



* [net-next PATCH v1 4/9] net: Move kernel helpers for queue index outside sysfs
  2023-07-29  0:46 [net-next PATCH v1 0/9] Introduce NAPI queues support Amritha Nambiar
                   ` (2 preceding siblings ...)
  2023-07-29  0:47 ` [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI Amritha Nambiar
@ 2023-07-29  0:47 ` Amritha Nambiar
  2023-07-29  0:47 ` [net-next PATCH v1 5/9] netdev-genl: Add netlink framework functions for napi Amritha Nambiar
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: Amritha Nambiar @ 2023-07-29  0:47 UTC (permalink / raw)
  To: netdev, kuba, davem; +Cc: sridhar.samudrala, amritha.nambiar

The kernel helpers for retrieving the tx/rx queue index
(get_netdev_queue_index and get_netdev_rx_queue_index)
are restricted to sysfs; move them out of sysfs so they
can be used more widely.
Also, replace BUG_ON with DEBUG_NET_WARN_ON_ONCE.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 include/linux/netdevice.h |   16 +++++++++++++---
 net/core/net-sysfs.c      |   11 -----------
 2 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 7299872bfdff..7afbf346dfd1 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2515,6 +2515,18 @@ struct netdev_queue *netdev_get_tx_queue(const struct net_device *dev,
 	return &dev->_tx[index];
 }
 
+static inline
+unsigned int get_netdev_queue_index(struct netdev_queue *queue)
+{
+	struct net_device *dev = queue->dev;
+	unsigned int i;
+
+	i = queue - dev->_tx;
+	DEBUG_NET_WARN_ON_ONCE(i >= dev->num_tx_queues);
+
+	return i;
+}
+
 static inline struct netdev_queue *skb_get_tx_queue(const struct net_device *dev,
 						    const struct sk_buff *skb)
 {
@@ -3856,17 +3868,15 @@ __netif_get_rx_queue(struct net_device *dev, unsigned int rxq)
 	return dev->_rx + rxq;
 }
 
-#ifdef CONFIG_SYSFS
 static inline unsigned int get_netdev_rx_queue_index(
 		struct netdev_rx_queue *queue)
 {
 	struct net_device *dev = queue->dev;
 	int index = queue - dev->_rx;
 
-	BUG_ON(index >= dev->num_rx_queues);
+	DEBUG_NET_WARN_ON_ONCE(index >= dev->num_rx_queues);
 	return index;
 }
-#endif
 
 int netif_get_num_default_rss_queues(void);
 
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index 15e3f4606b5f..9b900d0b6513 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -1239,17 +1239,6 @@ static ssize_t tx_timeout_show(struct netdev_queue *queue, char *buf)
 	return sysfs_emit(buf, fmt_ulong, trans_timeout);
 }
 
-static unsigned int get_netdev_queue_index(struct netdev_queue *queue)
-{
-	struct net_device *dev = queue->dev;
-	unsigned int i;
-
-	i = queue - dev->_tx;
-	BUG_ON(i >= dev->num_tx_queues);
-
-	return i;
-}
-
 static ssize_t traffic_class_show(struct netdev_queue *queue,
 				  char *buf)
 {



* [net-next PATCH v1 5/9] netdev-genl: Add netlink framework functions for napi
  2023-07-29  0:46 [net-next PATCH v1 0/9] Introduce NAPI queues support Amritha Nambiar
                   ` (3 preceding siblings ...)
  2023-07-29  0:47 ` [net-next PATCH v1 4/9] net: Move kernel helpers for queue index outside sysfs Amritha Nambiar
@ 2023-07-29  0:47 ` Amritha Nambiar
  2023-07-30 17:15   ` Simon Horman
  2023-07-31 19:37   ` Jakub Kicinski
  2023-07-29  0:47 ` [net-next PATCH v1 6/9] netdev-genl: spec: Add irq in netdev netlink YAML spec Amritha Nambiar
                   ` (3 subsequent siblings)
  8 siblings, 2 replies; 26+ messages in thread
From: Amritha Nambiar @ 2023-07-29  0:47 UTC (permalink / raw)
  To: netdev, kuba, davem; +Cc: sridhar.samudrala, amritha.nambiar

Implement the netdev netlink framework functions for
napi support. The netdev structure tracks all the napi
instances and their fields, so the napi instances and the
associated queue[s] can be retrieved this way.
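
For reference, the attribute hierarchy carried in the replies built by
the fill functions in this patch is:

  NETDEV_CMD_NAPI_GET reply (per device)
    NETDEV_A_NAPI_IFINDEX                 (u32)
    NETDEV_A_NAPI_NAPI_INFO               (nest, one per napi instance)
      NETDEV_A_NAPI_INFO_ENTRY_NAPI_ID    (u32)
      NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES  (u32, one per rx queue)
      NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES  (u32, one per tx queue)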

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 net/core/netdev-genl.c |  253 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 251 insertions(+), 2 deletions(-)

diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index e35cfa3cd173..ca3ed6eb457b 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -8,6 +8,20 @@
 
 #include "netdev-genl-gen.h"
 
+struct netdev_nl_dump_ctx {
+	int dev_entry_hash;
+	int dev_entry_idx;
+	int napi_idx;
+};
+
+static inline struct netdev_nl_dump_ctx *
+netdev_dump_ctx(struct netlink_callback *cb)
+{
+	NL_ASSERT_DUMP_CTX_FITS(struct netdev_nl_dump_ctx);
+
+	return (struct netdev_nl_dump_ctx *)cb->ctx;
+}
+
 static int
 netdev_nl_dev_fill(struct net_device *netdev, struct sk_buff *rsp,
 		   u32 portid, u32 seq, int flags, u32 cmd)
@@ -120,14 +134,249 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 	return skb->len;
 }
 
+static int
+netdev_nl_napi_fill_one(struct sk_buff *msg, struct napi_struct *napi)
+{
+	struct netdev_rx_queue *rx_queue, *rxq;
+	struct netdev_queue *tx_queue, *txq;
+	unsigned int rx_qid, tx_qid;
+	struct nlattr *napi_info;
+
+	napi_info = nla_nest_start(msg, NETDEV_A_NAPI_NAPI_INFO);
+	if (!napi_info)
+		return -EMSGSIZE;
+
+	if (nla_put_u32(msg, NETDEV_A_NAPI_INFO_ENTRY_NAPI_ID, napi->napi_id))
+		goto nla_put_failure;
+
+	list_for_each_entry_safe(rx_queue, rxq, &napi->napi_rxq_list, q_list) {
+		rx_qid = get_netdev_rx_queue_index(rx_queue);
+		if (nla_put_u32(msg, NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES, rx_qid))
+			goto nla_put_failure;
+	}
+
+	list_for_each_entry_safe(tx_queue, txq, &napi->napi_txq_list, q_list) {
+		tx_qid = get_netdev_queue_index(tx_queue);
+		if (nla_put_u32(msg, NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES, tx_qid))
+			goto nla_put_failure;
+	}
+
+	nla_nest_end(msg, napi_info);
+	return 0;
+nla_put_failure:
+	nla_nest_cancel(msg, napi_info);
+	return -EMSGSIZE;
+}
+
+static int
+netdev_nl_napi_fill(struct net_device *netdev, struct sk_buff *msg, int *start)
+{
+	struct napi_struct *napi, *n;
+	int i = 0;
+
+	list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list) {
+		if (i < *start) {
+			i++;
+			continue;
+		}
+		if (netdev_nl_napi_fill_one(msg, napi))
+			return -EMSGSIZE;
+		*start = ++i;
+	}
+	return 0;
+}
+
+static int
+netdev_nl_napi_prepare_fill(struct net_device *netdev, u32 portid, u32 seq,
+			    int flags, u32 cmd)
+{
+	struct nlmsghdr *nlh;
+	struct sk_buff *skb;
+	bool last = false;
+	int index = 0;
+	void *hdr;
+	int err;
+
+	while (!last) {
+		int tmp_index = index;
+
+		skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+		if (!skb)
+			return -ENOMEM;
+
+		hdr = genlmsg_put(skb, portid, seq, &netdev_nl_family,
+				  flags | NLM_F_MULTI, cmd);
+		if (!hdr) {
+			err = -EMSGSIZE;
+			goto nla_put_failure;
+		}
+		err = netdev_nl_napi_fill(netdev, skb, &index);
+		if (!err)
+			last = true;
+		else if (err != -EMSGSIZE || tmp_index == index)
+			goto nla_put_failure;
+
+		genlmsg_end(skb, hdr);
+		err = genlmsg_unicast(dev_net(netdev), skb, portid);
+		if (err)
+			return err;
+	}
+
+	skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!skb)
+		return -ENOMEM;
+	nlh = nlmsg_put(skb, portid, seq, NLMSG_DONE, 0, flags | NLM_F_MULTI);
+	if (!nlh) {
+		err = -EMSGSIZE;
+		goto nla_put_failure;
+	}
+
+	return genlmsg_unicast(dev_net(netdev), skb, portid);
+
+nla_put_failure:
+	nlmsg_free(skb);
+	return err;
+}
+
+static int
+netdev_nl_napi_info_fill(struct net_device *netdev, u32 portid, u32 seq,
+			 int flags, u32 cmd)
+{
+	struct sk_buff *skb;
+	void *hdr;
+	int err;
+
+	skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!skb)
+		return -ENOMEM;
+
+	hdr = genlmsg_put(skb, portid, seq, &netdev_nl_family, flags, cmd);
+	if (!hdr) {
+		err = -EMSGSIZE;
+		goto err_free_msg;
+	}
+	if (nla_put_u32(skb, NETDEV_A_NAPI_IFINDEX, netdev->ifindex)) {
+		genlmsg_cancel(skb, hdr);
+		err = -EINVAL;
+		goto err_free_msg;
+	}
+
+	genlmsg_end(skb, hdr);
+
+	err = genlmsg_unicast(dev_net(netdev), skb, portid);
+	if (err)
+		return err;
+
+	return netdev_nl_napi_prepare_fill(netdev, portid, seq, flags, cmd);
+
+err_free_msg:
+	nlmsg_free(skb);
+	return err;
+}
+
 int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
 {
-	return -EOPNOTSUPP;
+	struct net_device *netdev;
+	u32 ifindex;
+	int err;
+
+	if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_NAPI_IFINDEX))
+		return -EINVAL;
+
+	ifindex = nla_get_u32(info->attrs[NETDEV_A_NAPI_IFINDEX]);
+
+	rtnl_lock();
+
+	netdev = __dev_get_by_index(genl_info_net(info), ifindex);
+	if (netdev)
+		err = netdev_nl_napi_info_fill(netdev, info->snd_portid,
+					       info->snd_seq, 0, info->genlhdr->cmd);
+	else
+		err = -ENODEV;
+
+	rtnl_unlock();
+
+	return err;
+}
+
+static int
+netdev_nl_napi_dump_entry(struct net_device *netdev, struct sk_buff *rsp,
+			  struct netlink_callback *cb, int *start)
+{
+	int index = *start;
+	int tmp_index = index;
+	void *hdr;
+	int err;
+
+	hdr = genlmsg_put(rsp, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+			  &netdev_nl_family, NLM_F_MULTI, NETDEV_CMD_NAPI_GET);
+	if (!hdr)
+		return -EMSGSIZE;
+
+	if (nla_put_u32(rsp, NETDEV_A_NAPI_IFINDEX, netdev->ifindex))
+		goto nla_put_failure;
+
+	err =  netdev_nl_napi_fill(netdev, rsp, &index);
+	if (err && (err != -EMSGSIZE || tmp_index == index))
+		goto nla_put_failure;
+
+	*start = index;
+	genlmsg_end(rsp, hdr);
+
+	return err;
+
+nla_put_failure:
+	genlmsg_cancel(rsp, hdr);
+	return -EINVAL;
 }
 
 int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 {
-	return -EOPNOTSUPP;
+	struct netdev_nl_dump_ctx *ctx = netdev_dump_ctx(cb);
+	struct net *net = sock_net(skb->sk);
+	struct net_device *netdev;
+	int idx = 0, s_idx, n_idx;
+	int h, s_h;
+	int err;
+
+	s_h = ctx->dev_entry_hash;
+	s_idx = ctx->dev_entry_idx;
+	n_idx = ctx->napi_idx;
+
+	rtnl_lock();
+
+	for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) {
+		struct hlist_head *head;
+
+		idx = 0;
+		head = &net->dev_index_head[h];
+		hlist_for_each_entry(netdev, head, index_hlist) {
+			if (idx < s_idx)
+				goto cont;
+			err = netdev_nl_napi_dump_entry(netdev, skb, cb, &n_idx);
+			if (err == -EMSGSIZE)
+				goto out;
+			n_idx = 0;
+			if (err < 0)
+				break;
+cont:
+			idx++;
+		}
+	}
+
+	rtnl_unlock();
+
+	return err;
+
+out:
+	rtnl_unlock();
+
+	ctx->dev_entry_idx = idx;
+	ctx->dev_entry_hash = h;
+	ctx->napi_idx = n_idx;
+	cb->seq = net->dev_base_seq;
+
+	return skb->len;
 }
 
 static int netdev_genl_netdevice_event(struct notifier_block *nb,



* [net-next PATCH v1 6/9] netdev-genl: spec: Add irq in netdev netlink YAML spec
  2023-07-29  0:46 [net-next PATCH v1 0/9] Introduce NAPI queues support Amritha Nambiar
                   ` (4 preceding siblings ...)
  2023-07-29  0:47 ` [net-next PATCH v1 5/9] netdev-genl: Add netlink framework functions for napi Amritha Nambiar
@ 2023-07-29  0:47 ` Amritha Nambiar
  2023-07-29  0:47 ` [net-next PATCH v1 7/9] net: Add NAPI IRQ support Amritha Nambiar
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: Amritha Nambiar @ 2023-07-29  0:47 UTC (permalink / raw)
  To: netdev, kuba, davem; +Cc: sridhar.samudrala, amritha.nambiar

Add support in the netlink spec (netdev.yaml) for the interrupt
number among the NAPI attributes. Add the code generated from
the spec.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 Documentation/netlink/specs/netdev.yaml |    4 ++++
 include/uapi/linux/netdev.h             |    1 +
 tools/include/uapi/linux/netdev.h       |    1 +
 tools/net/ynl/generated/netdev-user.c   |    6 ++++++
 tools/net/ynl/generated/netdev-user.h   |    2 ++
 5 files changed, 14 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index 507cea4f2319..c7f72038184d 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -85,6 +85,10 @@ attribute-sets:
         doc: list of tx queues associated with a napi
         type: u32
         multi-attr: true
+      -
+        name: irq
+        doc: The associated interrupt vector number for the napi
+        type: u32
   -
     name: napi
     attributes:
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index bc06f692d9fd..17782585be72 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -52,6 +52,7 @@ enum {
 	NETDEV_A_NAPI_INFO_ENTRY_NAPI_ID = 1,
 	NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES,
 	NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES,
+	NETDEV_A_NAPI_INFO_ENTRY_IRQ,
 
 	__NETDEV_A_NAPI_INFO_ENTRY_MAX,
 	NETDEV_A_NAPI_INFO_ENTRY_MAX = (__NETDEV_A_NAPI_INFO_ENTRY_MAX - 1)
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index bc06f692d9fd..17782585be72 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -52,6 +52,7 @@ enum {
 	NETDEV_A_NAPI_INFO_ENTRY_NAPI_ID = 1,
 	NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES,
 	NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES,
+	NETDEV_A_NAPI_INFO_ENTRY_IRQ,
 
 	__NETDEV_A_NAPI_INFO_ENTRY_MAX,
 	NETDEV_A_NAPI_INFO_ENTRY_MAX = (__NETDEV_A_NAPI_INFO_ENTRY_MAX - 1)
diff --git a/tools/net/ynl/generated/netdev-user.c b/tools/net/ynl/generated/netdev-user.c
index e9a6c8cb5c68..74c24be5641c 100644
--- a/tools/net/ynl/generated/netdev-user.c
+++ b/tools/net/ynl/generated/netdev-user.c
@@ -51,6 +51,7 @@ struct ynl_policy_attr netdev_napi_info_entry_policy[NETDEV_A_NAPI_INFO_ENTRY_MA
 	[NETDEV_A_NAPI_INFO_ENTRY_NAPI_ID] = { .name = "napi-id", .type = YNL_PT_U32, },
 	[NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES] = { .name = "rx-queues", .type = YNL_PT_U32, },
 	[NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES] = { .name = "tx-queues", .type = YNL_PT_U32, },
+	[NETDEV_A_NAPI_INFO_ENTRY_IRQ] = { .name = "irq", .type = YNL_PT_U32, },
 };
 
 struct ynl_policy_nest netdev_napi_info_entry_nest = {
@@ -113,6 +114,11 @@ int netdev_napi_info_entry_parse(struct ynl_parse_arg *yarg,
 			n_rx_queues++;
 		} else if (type == NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES) {
 			n_tx_queues++;
+		} else if (type == NETDEV_A_NAPI_INFO_ENTRY_IRQ) {
+			if (ynl_attr_validate(yarg, attr))
+				return MNL_CB_ERROR;
+			dst->_present.irq = 1;
+			dst->irq = mnl_attr_get_u32(attr);
 		}
 	}
 
diff --git a/tools/net/ynl/generated/netdev-user.h b/tools/net/ynl/generated/netdev-user.h
index 9274711bd862..a0833eb9a52f 100644
--- a/tools/net/ynl/generated/netdev-user.h
+++ b/tools/net/ynl/generated/netdev-user.h
@@ -23,6 +23,7 @@ const char *netdev_xdp_act_str(enum netdev_xdp_act value);
 struct netdev_napi_info_entry {
 	struct {
 		__u32 napi_id:1;
+		__u32 irq:1;
 	} _present;
 
 	__u32 napi_id;
@@ -30,6 +31,7 @@ struct netdev_napi_info_entry {
 	__u32 *rx_queues;
 	unsigned int n_tx_queues;
 	__u32 *tx_queues;
+	__u32 irq;
 };
 
 /* ============== NETDEV_CMD_DEV_GET ============== */



* [net-next PATCH v1 7/9] net: Add NAPI IRQ support
  2023-07-29  0:46 [net-next PATCH v1 0/9] Introduce NAPI queues support Amritha Nambiar
                   ` (5 preceding siblings ...)
  2023-07-29  0:47 ` [net-next PATCH v1 6/9] netdev-genl: spec: Add irq in netdev netlink YAML spec Amritha Nambiar
@ 2023-07-29  0:47 ` Amritha Nambiar
  2023-07-29  4:05   ` Stephen Hemminger
  2023-07-29  0:47 ` [net-next PATCH v1 8/9] netdev-genl: spec: Add PID in netdev netlink YAML spec Amritha Nambiar
  2023-07-29  0:47 ` [net-next PATCH v1 9/9] netdev-genl: Add PID for the NAPI thread Amritha Nambiar
  8 siblings, 1 reply; 26+ messages in thread
From: Amritha Nambiar @ 2023-07-29  0:47 UTC (permalink / raw)
  To: netdev, kuba, davem; +Cc: sridhar.samudrala, amritha.nambiar

Add support to associate the interrupt vector number with a
NAPI instance.
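
For illustration only (not part of the patch), a minimal sketch of the
expected driver-side usage; 'my_vector_isr' and 'vec' are hypothetical
driver-private names, and error handling is abbreviated:

	/* Record the Linux IRQ number on the napi once the vector's
	 * interrupt has been requested.
	 */
	err = request_irq(irq, my_vector_isr, 0, "my-queue-vector", vec);
	if (err)
		return err;
	napi_set_irq(&vec->napi, irq);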

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_lib.c |    3 +++
 include/linux/netdevice.h                |    6 ++++++
 net/core/dev.c                           |    1 +
 net/core/netdev-genl.c                   |    4 ++++
 4 files changed, 14 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 171177db8fb4..1ebd293ca7de 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2975,6 +2975,9 @@ int ice_q_vector_add_napi_queues(struct ice_q_vector *q_vector)
 			return ret;
 	}
 
+	/* Also set the interrupt number for the NAPI */
+	napi_set_irq(&q_vector->napi, q_vector->irq.virq);
+
 	return ret;
 }
 
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 7afbf346dfd1..a0ae6de1a4aa 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -386,6 +386,7 @@ struct napi_struct {
 	struct hlist_node	napi_hash_node;
 	struct list_head	napi_rxq_list;
 	struct list_head	napi_txq_list;
+	int			irq;
 };
 
 enum {
@@ -2646,6 +2647,11 @@ static inline void *netdev_priv(const struct net_device *dev)
  */
 #define SET_NETDEV_DEVTYPE(net, devtype)	((net)->dev.type = (devtype))
 
+static inline void napi_set_irq(struct napi_struct *napi, int irq)
+{
+	napi->irq = irq;
+}
+
 int netif_napi_add_queue(struct napi_struct *napi, unsigned int queue_index,
 			 enum queue_type type);
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 875023ab614c..118f0b957b6e 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6463,6 +6463,7 @@ void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
 
 	INIT_LIST_HEAD(&napi->napi_rxq_list);
 	INIT_LIST_HEAD(&napi->napi_txq_list);
+	napi->irq = -1;
 }
 EXPORT_SYMBOL(netif_napi_add_weight);
 
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index ca3ed6eb457b..8401f646a10b 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -161,6 +161,10 @@ netdev_nl_napi_fill_one(struct sk_buff *msg, struct napi_struct *napi)
 			goto nla_put_failure;
 	}
 
+	if (napi->irq >= 0)
+		if (nla_put_u32(msg, NETDEV_A_NAPI_INFO_ENTRY_IRQ, napi->irq))
+			goto nla_put_failure;
+
 	nla_nest_end(msg, napi_info);
 	return 0;
 nla_put_failure:



* [net-next PATCH v1 8/9] netdev-genl: spec: Add PID in netdev netlink YAML spec
  2023-07-29  0:46 [net-next PATCH v1 0/9] Introduce NAPI queues support Amritha Nambiar
                   ` (6 preceding siblings ...)
  2023-07-29  0:47 ` [net-next PATCH v1 7/9] net: Add NAPI IRQ support Amritha Nambiar
@ 2023-07-29  0:47 ` Amritha Nambiar
  2023-07-29  0:47 ` [net-next PATCH v1 9/9] netdev-genl: Add PID for the NAPI thread Amritha Nambiar
  8 siblings, 0 replies; 26+ messages in thread
From: Amritha Nambiar @ 2023-07-29  0:47 UTC (permalink / raw)
  To: netdev, kuba, davem; +Cc: sridhar.samudrala, amritha.nambiar

Add support in the netlink spec (netdev.yaml) for the PID of the
NAPI thread. Add the code generated from the spec.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 Documentation/netlink/specs/netdev.yaml |    4 ++++
 include/uapi/linux/netdev.h             |    1 +
 tools/include/uapi/linux/netdev.h       |    1 +
 tools/net/ynl/generated/netdev-user.c   |    6 ++++++
 tools/net/ynl/generated/netdev-user.h   |    2 ++
 5 files changed, 14 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index c7f72038184d..8cbdf1f72527 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -89,6 +89,10 @@ attribute-sets:
         name: irq
         doc: The associated interrupt vector number for the napi
         type: u32
+      -
+        name: pid
+        doc: PID of the napi thread
+        type: s32
   -
     name: napi
     attributes:
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index 17782585be72..d01db79615e4 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -53,6 +53,7 @@ enum {
 	NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES,
 	NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES,
 	NETDEV_A_NAPI_INFO_ENTRY_IRQ,
+	NETDEV_A_NAPI_INFO_ENTRY_PID,
 
 	__NETDEV_A_NAPI_INFO_ENTRY_MAX,
 	NETDEV_A_NAPI_INFO_ENTRY_MAX = (__NETDEV_A_NAPI_INFO_ENTRY_MAX - 1)
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index 17782585be72..d01db79615e4 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -53,6 +53,7 @@ enum {
 	NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES,
 	NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES,
 	NETDEV_A_NAPI_INFO_ENTRY_IRQ,
+	NETDEV_A_NAPI_INFO_ENTRY_PID,
 
 	__NETDEV_A_NAPI_INFO_ENTRY_MAX,
 	NETDEV_A_NAPI_INFO_ENTRY_MAX = (__NETDEV_A_NAPI_INFO_ENTRY_MAX - 1)
diff --git a/tools/net/ynl/generated/netdev-user.c b/tools/net/ynl/generated/netdev-user.c
index 74c24be5641c..51f69a4ea59b 100644
--- a/tools/net/ynl/generated/netdev-user.c
+++ b/tools/net/ynl/generated/netdev-user.c
@@ -52,6 +52,7 @@ struct ynl_policy_attr netdev_napi_info_entry_policy[NETDEV_A_NAPI_INFO_ENTRY_MA
 	[NETDEV_A_NAPI_INFO_ENTRY_RX_QUEUES] = { .name = "rx-queues", .type = YNL_PT_U32, },
 	[NETDEV_A_NAPI_INFO_ENTRY_TX_QUEUES] = { .name = "tx-queues", .type = YNL_PT_U32, },
 	[NETDEV_A_NAPI_INFO_ENTRY_IRQ] = { .name = "irq", .type = YNL_PT_U32, },
+	[NETDEV_A_NAPI_INFO_ENTRY_PID] = { .name = "pid", .type = YNL_PT_U32, },
 };
 
 struct ynl_policy_nest netdev_napi_info_entry_nest = {
@@ -119,6 +120,11 @@ int netdev_napi_info_entry_parse(struct ynl_parse_arg *yarg,
 				return MNL_CB_ERROR;
 			dst->_present.irq = 1;
 			dst->irq = mnl_attr_get_u32(attr);
+		} else if (type == NETDEV_A_NAPI_INFO_ENTRY_PID) {
+			if (ynl_attr_validate(yarg, attr))
+				return MNL_CB_ERROR;
+			dst->_present.pid = 1;
+			dst->pid = mnl_attr_get_u32(attr);
 		}
 	}
 
diff --git a/tools/net/ynl/generated/netdev-user.h b/tools/net/ynl/generated/netdev-user.h
index a0833eb9a52f..942f377876b0 100644
--- a/tools/net/ynl/generated/netdev-user.h
+++ b/tools/net/ynl/generated/netdev-user.h
@@ -24,6 +24,7 @@ struct netdev_napi_info_entry {
 	struct {
 		__u32 napi_id:1;
 		__u32 irq:1;
+		__u32 pid:1;
 	} _present;
 
 	__u32 napi_id;
@@ -32,6 +33,7 @@ struct netdev_napi_info_entry {
 	unsigned int n_tx_queues;
 	__u32 *tx_queues;
 	__u32 irq;
+	__s32 pid;
 };
 
 /* ============== NETDEV_CMD_DEV_GET ============== */



* [net-next PATCH v1 9/9] netdev-genl: Add PID for the NAPI thread
  2023-07-29  0:46 [net-next PATCH v1 0/9] Introduce NAPI queues support Amritha Nambiar
                   ` (7 preceding siblings ...)
  2023-07-29  0:47 ` [net-next PATCH v1 8/9] netdev-genl: spec: Add PID in netdev netlink YAML spec Amritha Nambiar
@ 2023-07-29  0:47 ` Amritha Nambiar
  8 siblings, 0 replies; 26+ messages in thread
From: Amritha Nambiar @ 2023-07-29  0:47 UTC (permalink / raw)
  To: netdev, kuba, davem; +Cc: sridhar.samudrala, amritha.nambiar

In threaded NAPI mode, expose the PID of the NAPI thread.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 net/core/netdev-genl.c |    7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 8401f646a10b..60af99ffb9ec 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -141,6 +141,7 @@ netdev_nl_napi_fill_one(struct sk_buff *msg, struct napi_struct *napi)
 	struct netdev_queue *tx_queue, *txq;
 	unsigned int rx_qid, tx_qid;
 	struct nlattr *napi_info;
+	pid_t pid;
 
 	napi_info = nla_nest_start(msg, NETDEV_A_NAPI_NAPI_INFO);
 	if (!napi_info)
@@ -165,6 +166,12 @@ netdev_nl_napi_fill_one(struct sk_buff *msg, struct napi_struct *napi)
 		if (nla_put_u32(msg, NETDEV_A_NAPI_INFO_ENTRY_IRQ, napi->irq))
 			goto nla_put_failure;
 
+	if (napi->thread) {
+		pid = task_pid_nr(napi->thread);
+		if (nla_put_s32(msg, NETDEV_A_NAPI_INFO_ENTRY_PID, pid))
+			goto nla_put_failure;
+	}
+
 	nla_nest_end(msg, napi_info);
 	return 0;
 nla_put_failure:



* Re: [net-next PATCH v1 7/9] net: Add NAPI IRQ support
  2023-07-29  0:47 ` [net-next PATCH v1 7/9] net: Add NAPI IRQ support Amritha Nambiar
@ 2023-07-29  4:05   ` Stephen Hemminger
  2023-07-31 23:22     ` Nambiar, Amritha
  0 siblings, 1 reply; 26+ messages in thread
From: Stephen Hemminger @ 2023-07-29  4:05 UTC (permalink / raw)
  To: Amritha Nambiar; +Cc: netdev, kuba, davem, sridhar.samudrala

On Fri, 28 Jul 2023 17:47:28 -0700
Amritha Nambiar <amritha.nambiar@intel.com> wrote:

> Add support to associate the interrupt vector number with a
> NAPI instance.
> 
> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c |    3 +++
>  include/linux/netdevice.h                |    6 ++++++
>  net/core/dev.c                           |    1 +
>  net/core/netdev-genl.c                   |    4 ++++
>  4 files changed, 14 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
> index 171177db8fb4..1ebd293ca7de 100644
> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> @@ -2975,6 +2975,9 @@ int ice_q_vector_add_napi_queues(struct ice_q_vector *q_vector)
>  			return ret;
>  	}
>  
> +	/* Also set the interrupt number for the NAPI */
> +	napi_set_irq(&q_vector->napi, q_vector->irq.virq);
> +
>  	return ret;
>  }

Doing this for only one device seems like a potential problem.
Also, there are some weird devices where there may not be a 1:1:1 mapping
between IRQ, NAPI, and netdev.


* Re: [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations
  2023-07-29  0:46 ` [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations Amritha Nambiar
@ 2023-07-29  9:55   ` kernel test robot
  2023-07-30 17:10   ` Simon Horman
  1 sibling, 0 replies; 26+ messages in thread
From: kernel test robot @ 2023-07-29  9:55 UTC (permalink / raw)
  To: Amritha Nambiar, netdev, kuba, davem
  Cc: oe-kbuild-all, sridhar.samudrala, amritha.nambiar

Hi Amritha,

kernel test robot noticed the following build errors:

[auto build test ERROR on net-next/main]

url:    https://github.com/intel-lab-lkp/linux/commits/Amritha-Nambiar/net-Introduce-new-fields-for-napi-and-queue-associations/20230729-083646
base:   net-next/main
patch link:    https://lore.kernel.org/r/169059161688.3736.18170697577939556255.stgit%40anambiarhost.jf.intel.com
patch subject: [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20230729/202307291714.SUP7uQyV-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce: (https://download.01.org/0day-ci/archive/20230729/202307291714.SUP7uQyV-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202307291714.SUP7uQyV-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from drivers/infiniband/sw/rxe/rxe_comp.c:11:
>> drivers/infiniband/sw/rxe/rxe_queue.h:53:6: error: redeclaration of 'enum queue_type'
      53 | enum queue_type {
         |      ^~~~~~~~~~
   In file included from include/net/sock.h:46,
                    from include/linux/tcp.h:19,
                    from include/linux/ipv6.h:94,
                    from include/net/ipv6.h:12,
                    from include/rdma/ib_verbs.h:25,
                    from drivers/infiniband/sw/rxe/rxe.h:17,
                    from drivers/infiniband/sw/rxe/rxe_comp.c:9:
   include/linux/netdevice.h:348:6: note: originally defined here
     348 | enum queue_type {
         |      ^~~~~~~~~~


vim +53 drivers/infiniband/sw/rxe/rxe_queue.h

8700e3e7c4857d Moni Shoua  2016-06-16   9  
ae6e843fe08d0e Bob Pearson 2021-09-14  10  /* Implements a simple circular buffer that is shared between user
ae6e843fe08d0e Bob Pearson 2021-09-14  11   * and the driver and can be resized. The requested element size is
ae6e843fe08d0e Bob Pearson 2021-09-14  12   * rounded up to a power of 2 and the number of elements in the buffer
ae6e843fe08d0e Bob Pearson 2021-09-14  13   * is also rounded up to a power of 2. Since the queue is empty when
ae6e843fe08d0e Bob Pearson 2021-09-14  14   * the producer and consumer indices match the maximum capacity of the
ae6e843fe08d0e Bob Pearson 2021-09-14  15   * queue is one less than the number of element slots.
5bcf5a59c41e19 Bob Pearson 2021-05-27  16   *
5bcf5a59c41e19 Bob Pearson 2021-05-27  17   * Notes:
ae6e843fe08d0e Bob Pearson 2021-09-14  18   *   - The driver indices are always masked off to q->index_mask
5bcf5a59c41e19 Bob Pearson 2021-05-27  19   *     before storing so do not need to be checked on reads.
ae6e843fe08d0e Bob Pearson 2021-09-14  20   *   - The user whether user space or kernel is generally
ae6e843fe08d0e Bob Pearson 2021-09-14  21   *     not trusted so its parameters are masked to make sure
ae6e843fe08d0e Bob Pearson 2021-09-14  22   *     they do not access the queue out of bounds on reads.
ae6e843fe08d0e Bob Pearson 2021-09-14  23   *   - The driver indices for queues must not be written
ae6e843fe08d0e Bob Pearson 2021-09-14  24   *     by user so a local copy is used and a shared copy is
ae6e843fe08d0e Bob Pearson 2021-09-14  25   *     stored when the local copy is changed.
5bcf5a59c41e19 Bob Pearson 2021-05-27  26   *   - By passing the type in the parameter list separate from q
5bcf5a59c41e19 Bob Pearson 2021-05-27  27   *     the compiler can eliminate the switch statement when the
ae6e843fe08d0e Bob Pearson 2021-09-14  28   *     actual queue type is known when the function is called at
ae6e843fe08d0e Bob Pearson 2021-09-14  29   *     compile time.
ae6e843fe08d0e Bob Pearson 2021-09-14  30   *   - These queues are lock free. The user and driver must protect
ae6e843fe08d0e Bob Pearson 2021-09-14  31   *     changes to their end of the queues with locks if more than one
ae6e843fe08d0e Bob Pearson 2021-09-14  32   *     CPU can be accessing it at the same time.
8700e3e7c4857d Moni Shoua  2016-06-16  33   */
8700e3e7c4857d Moni Shoua  2016-06-16  34  
ae6e843fe08d0e Bob Pearson 2021-09-14  35  /**
ae6e843fe08d0e Bob Pearson 2021-09-14  36   * enum queue_type - type of queue
ae6e843fe08d0e Bob Pearson 2021-09-14  37   * @QUEUE_TYPE_TO_CLIENT:	Queue is written by rxe driver and
a77a52385e9a76 Bob Pearson 2023-02-14  38   *				read by client which may be a user space
a77a52385e9a76 Bob Pearson 2023-02-14  39   *				application or a kernel ulp.
a77a52385e9a76 Bob Pearson 2023-02-14  40   *				Used by rxe internals only.
ae6e843fe08d0e Bob Pearson 2021-09-14  41   * @QUEUE_TYPE_FROM_CLIENT:	Queue is written by client and
a77a52385e9a76 Bob Pearson 2023-02-14  42   *				read by rxe driver.
a77a52385e9a76 Bob Pearson 2023-02-14  43   *				Used by rxe internals only.
a77a52385e9a76 Bob Pearson 2023-02-14  44   * @QUEUE_TYPE_FROM_ULP:	Queue is written by kernel ulp and
a77a52385e9a76 Bob Pearson 2023-02-14  45   *				read by rxe driver.
a77a52385e9a76 Bob Pearson 2023-02-14  46   *				Used by kernel verbs APIs only on
a77a52385e9a76 Bob Pearson 2023-02-14  47   *				behalf of ulps.
a77a52385e9a76 Bob Pearson 2023-02-14  48   * @QUEUE_TYPE_TO_ULP:		Queue is written by rxe driver and
a77a52385e9a76 Bob Pearson 2023-02-14  49   *				read by kernel ulp.
a77a52385e9a76 Bob Pearson 2023-02-14  50   *				Used by kernel verbs APIs only on
a77a52385e9a76 Bob Pearson 2023-02-14  51   *				behalf of ulps.
ae6e843fe08d0e Bob Pearson 2021-09-14  52   */
59daff49f25fbb Bob Pearson 2021-05-27 @53  enum queue_type {
ae6e843fe08d0e Bob Pearson 2021-09-14  54  	QUEUE_TYPE_TO_CLIENT,
ae6e843fe08d0e Bob Pearson 2021-09-14  55  	QUEUE_TYPE_FROM_CLIENT,
a77a52385e9a76 Bob Pearson 2023-02-14  56  	QUEUE_TYPE_FROM_ULP,
a77a52385e9a76 Bob Pearson 2023-02-14  57  	QUEUE_TYPE_TO_ULP,
59daff49f25fbb Bob Pearson 2021-05-27  58  };
59daff49f25fbb Bob Pearson 2021-05-27  59  
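
The error above is a plain name collision: the series adds a generic
'enum queue_type' to include/linux/netdevice.h while rxe already defines
an 'enum queue_type' of its own. One way out, offered here only as a
hedged suggestion rather than anything proposed in this thread, is to
namespace the core enum:

	/* include/linux/netdevice.h, hypothetical rename to avoid the clash */
	enum netdev_queue_type {
		NETDEV_QUEUE_TYPE_RX,
		NETDEV_QUEUE_TYPE_TX,
	};

	int netif_napi_add_queue(struct napi_struct *napi, unsigned int queue_index,
				 enum netdev_queue_type type);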

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations
  2023-07-29  0:46 ` [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations Amritha Nambiar
  2023-07-29  9:55   ` kernel test robot
@ 2023-07-30 17:10   ` Simon Horman
  2023-07-31 22:57     ` Nambiar, Amritha
  1 sibling, 1 reply; 26+ messages in thread
From: Simon Horman @ 2023-07-30 17:10 UTC (permalink / raw)
  To: Amritha Nambiar; +Cc: netdev, kuba, davem, sridhar.samudrala

On Fri, Jul 28, 2023 at 05:46:56PM -0700, Amritha Nambiar wrote:

...

> diff --git a/net/core/dev.c b/net/core/dev.c
> index b58674774a57..875023ab614c 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -6389,6 +6389,42 @@ int dev_set_threaded(struct net_device *dev, bool threaded)
>  }
>  EXPORT_SYMBOL(dev_set_threaded);
>  
> +/**
> + * netif_napi_add_queue - Associate queue with the napi
> + * @napi: NAPI context
> + * @queue_index: Index of queue
> + * @queue_type: queue type as RX or TX

Hi Amritha,

a minor nit from my side: @queue_type -> @type

> + *
> + * Add queue with its corresponding napi context
> + */
> +int netif_napi_add_queue(struct napi_struct *napi, unsigned int queue_index,
> +			 enum queue_type type)
> +{

...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 5/9] netdev-genl: Add netlink framework functions for napi
  2023-07-29  0:47 ` [net-next PATCH v1 5/9] netdev-genl: Add netlink framework functions for napi Amritha Nambiar
@ 2023-07-30 17:15   ` Simon Horman
  2023-07-31 23:00     ` Nambiar, Amritha
  2023-07-31 19:37   ` Jakub Kicinski
  1 sibling, 1 reply; 26+ messages in thread
From: Simon Horman @ 2023-07-30 17:15 UTC (permalink / raw)
  To: Amritha Nambiar; +Cc: netdev, kuba, davem, sridhar.samudrala

On Fri, Jul 28, 2023 at 05:47:17PM -0700, Amritha Nambiar wrote:
> Implement the netdev netlink framework functions for
> napi support. The netdev structure tracks all the napi
> instances and napi fields. The napi instances and associated
> queue[s] can be retrieved this way.
> 
> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
> ---
>  net/core/netdev-genl.c |  253 ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 251 insertions(+), 2 deletions(-)
> 
> diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c

...

>  int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
>  {
> -	return -EOPNOTSUPP;
> +	struct netdev_nl_dump_ctx *ctx = netdev_dump_ctx(cb);
> +	struct net *net = sock_net(skb->sk);
> +	struct net_device *netdev;
> +	int idx = 0, s_idx, n_idx;
> +	int h, s_h;
> +	int err;
> +
> +	s_h = ctx->dev_entry_hash;
> +	s_idx = ctx->dev_entry_idx;
> +	n_idx = ctx->napi_idx;
> +
> +	rtnl_lock();
> +
> +	for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) {
> +		struct hlist_head *head;
> +
> +		idx = 0;
> +		head = &net->dev_index_head[h];
> +		hlist_for_each_entry(netdev, head, index_hlist) {
> +			if (idx < s_idx)
> +				goto cont;
> +			err = netdev_nl_napi_dump_entry(netdev, skb, cb, &n_idx);
> +			if (err == -EMSGSIZE)
> +				goto out;
> +			n_idx = 0;
> +			if (err < 0)
> +				break;
> +cont:
> +			idx++;
> +		}
> +	}
> +
> +	rtnl_unlock();
> +
> +	return err;

Hi Amritha,

I'm unsure if this can happen, but if loop iteration occurs zero times
above in such a way that netdev_nl_napi_dump_entry() isn't called, then err
will be uninitialised here.

This is also the case in netdev_nl_dev_get_dumpit
(both before and after this patch).

As flagged by Smatch.

> +
> +out:
> +	rtnl_unlock();
> +
> +	ctx->dev_entry_idx = idx;
> +	ctx->dev_entry_hash = h;
> +	ctx->napi_idx = n_idx;
> +	cb->seq = net->dev_base_seq;
> +
> +	return skb->len;
>  }

...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI
  2023-07-29  0:47 ` [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI Amritha Nambiar
@ 2023-07-31 19:36   ` Jakub Kicinski
  2023-07-31 23:12     ` Nambiar, Amritha
  0 siblings, 1 reply; 26+ messages in thread
From: Jakub Kicinski @ 2023-07-31 19:36 UTC (permalink / raw)
  To: Amritha Nambiar; +Cc: netdev, davem, sridhar.samudrala

On Fri, 28 Jul 2023 17:47:07 -0700 Amritha Nambiar wrote:
> +  -
> +    name: napi
> +    attributes:
> +      -
> +        name: ifindex
> +        doc: netdev ifindex
> +        type: u32
> +        checks:
> +          min: 1
> +      -
> +        name: napi-info
> +        doc: napi information such as napi-id, napi queues etc.
> +        type: nest
> +        multi-attr: true
> +        nested-attributes: napi-info-entry

Every NAPI instance should be dumped as a separate object. We can
implement a filtered dump to get NAPIs of a single netdev.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 5/9] netdev-genl: Add netlink framework functions for napi
  2023-07-29  0:47 ` [net-next PATCH v1 5/9] netdev-genl: Add netlink framework functions for napi Amritha Nambiar
  2023-07-30 17:15   ` Simon Horman
@ 2023-07-31 19:37   ` Jakub Kicinski
  2023-07-31 23:01     ` Nambiar, Amritha
  1 sibling, 1 reply; 26+ messages in thread
From: Jakub Kicinski @ 2023-07-31 19:37 UTC (permalink / raw)
  To: Amritha Nambiar; +Cc: netdev, davem, sridhar.samudrala

On Fri, 28 Jul 2023 17:47:17 -0700 Amritha Nambiar wrote:
>  int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
>  {
> -	return -EOPNOTSUPP;
> +	struct netdev_nl_dump_ctx *ctx = netdev_dump_ctx(cb);
> +	struct net *net = sock_net(skb->sk);
> +	struct net_device *netdev;
> +	int idx = 0, s_idx, n_idx;
> +	int h, s_h;
> +	int err;
> +
> +	s_h = ctx->dev_entry_hash;
> +	s_idx = ctx->dev_entry_idx;
> +	n_idx = ctx->napi_idx;
> +
> +	rtnl_lock();
> +
> +	for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) {
> +		struct hlist_head *head;
> +
> +		idx = 0;
> +		head = &net->dev_index_head[h];
> +		hlist_for_each_entry(netdev, head, index_hlist) {

Please rebase on the latest net-next; you can ditch all this iteration
stuff and use the new xarray.
-- 
pw-bot: cr

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations
  2023-07-30 17:10   ` Simon Horman
@ 2023-07-31 22:57     ` Nambiar, Amritha
  0 siblings, 0 replies; 26+ messages in thread
From: Nambiar, Amritha @ 2023-07-31 22:57 UTC (permalink / raw)
  To: Simon Horman; +Cc: netdev, kuba, davem, sridhar.samudrala

On 7/30/2023 10:10 AM, Simon Horman wrote:
> On Fri, Jul 28, 2023 at 05:46:56PM -0700, Amritha Nambiar wrote:
> 
> ...
> 
>> diff --git a/net/core/dev.c b/net/core/dev.c
>> index b58674774a57..875023ab614c 100644
>> --- a/net/core/dev.c
>> +++ b/net/core/dev.c
>> @@ -6389,6 +6389,42 @@ int dev_set_threaded(struct net_device *dev, bool threaded)
>>   }
>>   EXPORT_SYMBOL(dev_set_threaded);
>>   
>> +/**
>> + * netif_napi_add_queue - Associate queue with the napi
>> + * @napi: NAPI context
>> + * @queue_index: Index of queue
>> + * @queue_type: queue type as RX or TX
> 
> Hi Amritha,
> 
> a minor nit from my side: @queue_type -> @type

Will fix in the next version. Thanks.

> 
>> + *
>> + * Add queue with its corresponding napi context
>> + */
>> +int netif_napi_add_queue(struct napi_struct *napi, unsigned int queue_index,
>> +			 enum queue_type type)
>> +{
> 
> ...
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 5/9] netdev-genl: Add netlink framework functions for napi
  2023-07-30 17:15   ` Simon Horman
@ 2023-07-31 23:00     ` Nambiar, Amritha
  0 siblings, 0 replies; 26+ messages in thread
From: Nambiar, Amritha @ 2023-07-31 23:00 UTC (permalink / raw)
  To: Simon Horman; +Cc: netdev, kuba, davem, sridhar.samudrala

On 7/30/2023 10:15 AM, Simon Horman wrote:
> On Fri, Jul 28, 2023 at 05:47:17PM -0700, Amritha Nambiar wrote:
>> Implement the netdev netlink framework functions for
>> napi support. The netdev structure tracks all the napi
>> instances and napi fields. The napi instances and associated
>> queue[s] can be retrieved this way.
>>
>> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
>> ---
>>   net/core/netdev-genl.c |  253 ++++++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 251 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
> 
> ...
> 
>>   int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
>>   {
>> -	return -EOPNOTSUPP;
>> +	struct netdev_nl_dump_ctx *ctx = netdev_dump_ctx(cb);
>> +	struct net *net = sock_net(skb->sk);
>> +	struct net_device *netdev;
>> +	int idx = 0, s_idx, n_idx;
>> +	int h, s_h;
>> +	int err;
>> +
>> +	s_h = ctx->dev_entry_hash;
>> +	s_idx = ctx->dev_entry_idx;
>> +	n_idx = ctx->napi_idx;
>> +
>> +	rtnl_lock();
>> +
>> +	for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) {
>> +		struct hlist_head *head;
>> +
>> +		idx = 0;
>> +		head = &net->dev_index_head[h];
>> +		hlist_for_each_entry(netdev, head, index_hlist) {
>> +			if (idx < s_idx)
>> +				goto cont;
>> +			err = netdev_nl_napi_dump_entry(netdev, skb, cb, &n_idx);
>> +			if (err == -EMSGSIZE)
>> +				goto out;
>> +			n_idx = 0;
>> +			if (err < 0)
>> +				break;
>> +cont:
>> +			idx++;
>> +		}
>> +	}
>> +
>> +	rtnl_unlock();
>> +
>> +	return err;
> 
> Hi Amritha,
> 
> I'm unsure if this can happen, but if loop iteration occurs zero times
> above in such a way that netdev_nl_napi_dump_entry() isn't called, then err
> will be uninitialised here.
> 
> This is also the case in netdev_nl_dev_get_dumpit
> (both before and after this patch).
> 
> As flagged by Smatch.
> 

Will fix the initialization in the next version.
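
One minimal way to do that, as a sketch only, is to give err a defined
starting value so a dump that visits no netdev entries terminates cleanly:

	int idx = 0, s_idx, n_idx;
	int h, s_h;
	int err = 0;	/* no entries dumped is not an error */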

>> +
>> +out:
>> +	rtnl_unlock();
>> +
>> +	ctx->dev_entry_idx = idx;
>> +	ctx->dev_entry_hash = h;
>> +	ctx->napi_idx = n_idx;
>> +	cb->seq = net->dev_base_seq;
>> +
>> +	return skb->len;
>>   }
> 
> ...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 5/9] netdev-genl: Add netlink framework functions for napi
  2023-07-31 19:37   ` Jakub Kicinski
@ 2023-07-31 23:01     ` Nambiar, Amritha
  0 siblings, 0 replies; 26+ messages in thread
From: Nambiar, Amritha @ 2023-07-31 23:01 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: netdev, davem, sridhar.samudrala

On 7/31/2023 12:37 PM, Jakub Kicinski wrote:
> On Fri, 28 Jul 2023 17:47:17 -0700 Amritha Nambiar wrote:
>>   int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
>>   {
>> -	return -EOPNOTSUPP;
>> +	struct netdev_nl_dump_ctx *ctx = netdev_dump_ctx(cb);
>> +	struct net *net = sock_net(skb->sk);
>> +	struct net_device *netdev;
>> +	int idx = 0, s_idx, n_idx;
>> +	int h, s_h;
>> +	int err;
>> +
>> +	s_h = ctx->dev_entry_hash;
>> +	s_idx = ctx->dev_entry_idx;
>> +	n_idx = ctx->napi_idx;
>> +
>> +	rtnl_lock();
>> +
>> +	for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) {
>> +		struct hlist_head *head;
>> +
>> +		idx = 0;
>> +		head = &net->dev_index_head[h];
>> +		hlist_for_each_entry(netdev, head, index_hlist) {
> 
> Please rebase on the latest net-next; you can ditch all this iteration
> stuff and use the new xarray.

Sure, will fix in the next version.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI
  2023-07-31 19:36   ` Jakub Kicinski
@ 2023-07-31 23:12     ` Nambiar, Amritha
  2023-08-01  0:13       ` Jakub Kicinski
  0 siblings, 1 reply; 26+ messages in thread
From: Nambiar, Amritha @ 2023-07-31 23:12 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: netdev, davem, sridhar.samudrala

On 7/31/2023 12:36 PM, Jakub Kicinski wrote:
> On Fri, 28 Jul 2023 17:47:07 -0700 Amritha Nambiar wrote:
>> +  -
>> +    name: napi
>> +    attributes:
>> +      -
>> +        name: ifindex
>> +        doc: netdev ifindex
>> +        type: u32
>> +        checks:
>> +          min: 1
>> +      -
>> +        name: napi-info
>> +        doc: napi information such as napi-id, napi queues etc.
>> +        type: nest
>> +        multi-attr: true
>> +        nested-attributes: napi-info-entry
> 
> Every NAPI instance should be dumped as a separate object. We can
> implement a filtered dump to get NAPIs of a single netdev.
> 

Today, the 'do napi-get <ifindex>' will show all the NAPIs for a single 
netdev:
Example: --do napi-get --json='{"ifindex": 6}'

and the 'dump napi-get' will dump all the NAPIs for all the netdevs.
Example: netdev.yaml  --dump napi-get

Are you suggesting that we also dump each NAPI instance individually,
'do napi-get <ifindex> <NAPI_ID>'

Example:
netdev.yaml  --do napi-get --json='{"ifindex": 6, "napi-id": 390}'

[{'ifindex': 6},
  {'napi-info': [{'irq': 296,
                  'napi-id': 390,
                  'pid': 3475,
                  'rx-queues': [5],
                  'tx-queues': [5]}]}]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 7/9] net: Add NAPI IRQ support
  2023-07-29  4:05   ` Stephen Hemminger
@ 2023-07-31 23:22     ` Nambiar, Amritha
  0 siblings, 0 replies; 26+ messages in thread
From: Nambiar, Amritha @ 2023-07-31 23:22 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: netdev, kuba, davem, sridhar.samudrala

On 7/28/2023 9:05 PM, Stephen Hemminger wrote:
> On Fri, 28 Jul 2023 17:47:28 -0700
> Amritha Nambiar <amritha.nambiar@intel.com> wrote:
> 
>> Add support to associate the interrupt vector number for a
>> NAPI instance.
>>
>> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
>> ---
>>   drivers/net/ethernet/intel/ice/ice_lib.c |    3 +++
>>   include/linux/netdevice.h                |    6 ++++++
>>   net/core/dev.c                           |    1 +
>>   net/core/netdev-genl.c                   |    4 ++++
>>   4 files changed, 14 insertions(+)
>>
>> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
>> index 171177db8fb4..1ebd293ca7de 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
>> @@ -2975,6 +2975,9 @@ int ice_q_vector_add_napi_queues(struct ice_q_vector *q_vector)
>>   			return ret;
>>   	}
>>   
>> +	/* Also set the interrupt number for the NAPI */
>> +	napi_set_irq(&q_vector->napi, q_vector->irq.virq);
>> +
>>   	return ret;
>>   }
> 
> Doing this for only one device seems like a potential problem.

For devices that do not call napi_set_irq(), irq will be initialized 
to -1 as part of netif_napi_add_weight().

> Also, there are some weird devices where there may not be a 1:1:1 mapping
> between IRQ, NAPI, and netdev.
> 

IIUC, there's a 1:1 mapping between IRQ and NAPI, but they need not be mapped 
1:1 with the netdev.
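
For reference, the helper this hunk relies on amounts to a setter plus a
default; a minimal sketch based on the description in this thread, not the
exact patch:

	/* include/linux/netdevice.h (sketch) */
	static inline void napi_set_irq(struct napi_struct *napi, int irq)
	{
		napi->irq = irq;
	}

	/* in netif_napi_add_weight(), so drivers that never call
	 * napi_set_irq() report "no IRQ" rather than a stale value
	 */
	napi->irq = -1;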


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI
  2023-07-31 23:12     ` Nambiar, Amritha
@ 2023-08-01  0:13       ` Jakub Kicinski
  2023-08-01  0:24         ` Nambiar, Amritha
  0 siblings, 1 reply; 26+ messages in thread
From: Jakub Kicinski @ 2023-08-01  0:13 UTC (permalink / raw)
  To: Nambiar, Amritha; +Cc: netdev, davem, sridhar.samudrala

On Mon, 31 Jul 2023 16:12:23 -0700 Nambiar, Amritha wrote:
> > Every NAPI instance should be dumped as a separate object. We can
> > implement a filtered dump to get NAPIs of a single netdev.
> 
> Today, the 'do napi-get <ifindex>' will show all the NAPIs for a single 
> netdev:
> Example: --do napi-get --json='{"ifindex": 6}'
> 
> and the 'dump napi-get' will dump all the NAPIs for all the netdevs.
> Example: netdev.yaml  --dump napi-get
> 
> Are you suggesting that we also dump each NAPI instance individually,
> 'do napi-get <ifindex> <NAPI_ID>'
> 
> Example:
> netdev.yaml  --do napi-get --json='{"ifindex": 6, "napi-id": 390}'
> 
> [{'ifindex': 6},
>   {'napi-info': [{'irq': 296,
>                   'napi-id': 390,
>                   'pid': 3475,
>                   'rx-queues': [5],
>                   'tx-queues': [5]}]}]

Dumps can be filtered, I'm saying:

$ netdev.yaml --dump napi-get --json='{"ifindex": 6}'
                ^^^^

[{'napi-id': 390, 'ifindex': 6, 'irq': 296, ...},
 {'napi-id': 391, 'ifindex': 6, 'irq': 297, ...}]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI
  2023-08-01  0:13       ` Jakub Kicinski
@ 2023-08-01  0:24         ` Nambiar, Amritha
  2023-08-01  0:35           ` Jakub Kicinski
  0 siblings, 1 reply; 26+ messages in thread
From: Nambiar, Amritha @ 2023-08-01  0:24 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: netdev, davem, sridhar.samudrala

On 7/31/2023 5:13 PM, Jakub Kicinski wrote:
> On Mon, 31 Jul 2023 16:12:23 -0700 Nambiar, Amritha wrote:
>>> Every NAPI instance should be dumped as a separate object. We can
>>> implement a filtered dump to get NAPIs of a single netdev.
>>
>> Today, the 'do napi-get <ifindex>' will show all the NAPIs for a single
>> netdev:
>> Example: --do napi-get --json='{"ifindex": 6}'
>>
>> and the 'dump napi-get' will dump all the NAPIs for all the netdevs.
>> Example: netdev.yaml  --dump napi-get
>>
>> Are you suggesting that we also dump each NAPI instance individually,
>> 'do napi-get <ifindex> <NAPI_ID>'
>>
>> Example:
>> netdev.yaml  --do napi-get --json='{"ifindex": 6, "napi-id": 390}'
>>
>> [{'ifindex': 6},
>>    {'napi-info': [{'irq': 296,
>>                    'napi-id': 390,
>>                    'pid': 3475,
>>                    'rx-queues': [5],
>>                    'tx-queues': [5]}]}]
> 
> Dumps can be filtered, I'm saying:
> 
> $ netdev.yaml --dump napi-get --json='{"ifindex": 6}'
>                  ^^^^
> 
> [{'napi-id': 390, 'ifindex': 6, 'irq': 296, ...},
>   {'napi-id': 391, 'ifindex': 6, 'irq': 297, ...}]

I see. Okay. Looks like this needs to be supported for "dump dev-get 
ifindex" as well.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI
  2023-08-01  0:24         ` Nambiar, Amritha
@ 2023-08-01  0:35           ` Jakub Kicinski
  2023-08-09  0:17             ` Nambiar, Amritha
  0 siblings, 1 reply; 26+ messages in thread
From: Jakub Kicinski @ 2023-08-01  0:35 UTC (permalink / raw)
  To: Nambiar, Amritha; +Cc: netdev, davem, sridhar.samudrala

On Mon, 31 Jul 2023 17:24:51 -0700 Nambiar, Amritha wrote:
> >> [{'ifindex': 6},
> >>    {'napi-info': [{'irq': 296,
> >>                    'napi-id': 390,
> >>                    'pid': 3475,
> >>                    'rx-queues': [5],
> >>                    'tx-queues': [5]}]}]  
> > 
> > Dumps can be filtered, I'm saying:
> > 
> > $ netdev.yaml --dump napi-get --json='{"ifindex": 6}'
> >                  ^^^^
> > 
> > [{'napi-id': 390, 'ifindex': 6, 'irq': 296, ...},
> >   {'napi-id': 391, 'ifindex': 6, 'irq': 297, ...}]  
> 
> I see. Okay. Looks like this needs to be supported for "dump dev-get 
> ifindex" as well.

The main thing to focus on for next version is to make the NAPI objects
"flat" and individual, rather than entries in multi-attr nest within
per-netdev object.

I'm 100% sure implementing the filtering by ifindex will be doable as
a follow up so we can defer it.
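
"Flat" here means each NAPI instance becomes its own netlink message in the
dump, with ifindex carried as just another attribute. A rough sketch of such
a fill function; the command and attribute constant names below are
assumptions mirroring the YAML names in this thread, not the series' actual
defines:

	static int
	netdev_nl_napi_fill(struct sk_buff *msg, struct napi_struct *napi,
			    u32 portid, u32 seq)
	{
		void *hdr;

		hdr = genlmsg_put(msg, portid, seq, &netdev_nl_family,
				  NLM_F_MULTI, NETDEV_CMD_NAPI_GET);
		if (!hdr)
			return -EMSGSIZE;

		if (nla_put_u32(msg, NETDEV_A_NAPI_NAPI_ID, napi->napi_id) ||
		    nla_put_u32(msg, NETDEV_A_NAPI_IFINDEX, napi->dev->ifindex))
			goto nla_put_failure;

		/* only report an IRQ if the driver registered one */
		if (napi->irq >= 0 &&
		    nla_put_u32(msg, NETDEV_A_NAPI_IRQ, napi->irq))
			goto nla_put_failure;

		genlmsg_end(msg, hdr);
		return 0;

	nla_put_failure:
		genlmsg_cancel(msg, hdr);
		return -EMSGSIZE;
	}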

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI
  2023-08-01  0:35           ` Jakub Kicinski
@ 2023-08-09  0:17             ` Nambiar, Amritha
  2023-08-09  2:45               ` Jakub Kicinski
  0 siblings, 1 reply; 26+ messages in thread
From: Nambiar, Amritha @ 2023-08-09  0:17 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: netdev, davem, sridhar.samudrala

On 7/31/2023 5:35 PM, Jakub Kicinski wrote:
> On Mon, 31 Jul 2023 17:24:51 -0700 Nambiar, Amritha wrote:
>>>> [{'ifindex': 6},
>>>>     {'napi-info': [{'irq': 296,
>>>>                     'napi-id': 390,
>>>>                     'pid': 3475,
>>>>                     'rx-queues': [5],
>>>>                     'tx-queues': [5]}]}]
>>>
>>> Dumps can be filtered, I'm saying:
>>>
>>> $ netdev.yaml --dump napi-get --json='{"ifindex": 6}'
>>>                   ^^^^
>>>
>>> [{'napi-id': 390, 'ifindex': 6, 'irq': 296, ...},
>>>    {'napi-id': 391, 'ifindex': 6, 'irq': 297, ...}]
>>
>> I see. Okay. Looks like this needs to be supported for "dump dev-get
>> ifindex" as well.
> 
> The main thing to focus on for next version is to make the NAPI objects
> "flat" and individual, rather than entries in multi-attr nest within
> per-netdev object.
> 

Would this be acceptable:
$ netdev.yaml  --do napi-get --json='{"ifindex": 12}'

{'napi-info': [{'ifindex': 12, 'irq': 293, 'napi-id': 595, ...},
                {'ifindex': 12, 'irq': 292, 'napi-id': 594, ...},
                {'ifindex': 12, 'irq': 291, 'napi-id': 593, ...}]}

Here, "napi-info" represents a list of NAPI objects. A NAPI object is an 
individual element in the list, and are not within per-netdev object. 
The ifindex is just another attribute that is part of the NAPI object. 
The result for non-NAPI devices will just be empty.

I am not sure how to totally avoid multi-attr nest in this case. For the 
'do napi-get' command, the response is a list of elements with multiple 
attributes and not an individual entry. This transforms to a struct 
array of NAPI objects in the 'do' response, and is achieved with the 
multi-attr nest in the YAML. Subsequently, the 'dump napi-get' response 
is a list of the 'NAPI objects struct list' for all netdevs. Am I 
missing any special type in the YAML that can also give out a 
struct-array in the 'do' response besides using multi-attr nest ?


> I'm 100% sure implementing the filtering by ifindex will be doable as
> a follow up so we can defer it.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI
  2023-08-09  0:17             ` Nambiar, Amritha
@ 2023-08-09  2:45               ` Jakub Kicinski
  0 siblings, 0 replies; 26+ messages in thread
From: Jakub Kicinski @ 2023-08-09  2:45 UTC (permalink / raw)
  To: Nambiar, Amritha; +Cc: netdev, davem, sridhar.samudrala

On Tue, 8 Aug 2023 17:17:34 -0700 Nambiar, Amritha wrote:
> > The main thing to focus on for next version is to make the NAPI objects
> > "flat" and individual, rather than entries in multi-attr nest within
> > per-netdev object.
> 
> Would this be acceptable:
> $ netdev.yaml  --do napi-get --json='{"ifindex": 12}'
> 
> {'napi-info': [{'ifindex': 12, 'irq': 293, 'napi-id': 595, ...},
>                 {'ifindex': 12, 'irq': 292, 'napi-id': 594, ...},
>                 {'ifindex': 12, 'irq': 291, 'napi-id': 593, ...}]}
> 
> Here, "napi-info" represents a list of NAPI objects. A NAPI object is an 
> individual element in the list and is not nested within a per-netdev object. 
> The ifindex is just another attribute that is part of the NAPI object. 
> The result for non-NAPI devices will just be empty.
> 
> I am not sure how to totally avoid multi-attr nest in this case. For the 
> 'do napi-get' command, the response is a list of elements with multiple 
> attributes and not an individual entry. This transforms to a struct 
> array of NAPI objects in the 'do' response, and is achieved with the 
> multi-attr nest in the YAML. Subsequently, the 'dump napi-get' response 
> is a list of the 'NAPI objects struct list' for all netdevs. Am I 
> missing any special type in the YAML that can also give out a 
> struct-array in the 'do' response besides using multi-attr nest ?

napi-get needs to take napi-id as an argument; if you want to filter
by ifindex, that means a dump. Dumps work much like do: you still get
the attributes for the request (although in a somewhat convoluted
way):

static int your_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
{
	const struct genl_dumpit_info *info = genl_dumpit_info(cb);

	/* info->attrs[...] is available here, as in a normal doit */

So if info->attrs[IFINDEX] is provided by the user, limit the dump to
only the NAPIs from that netdev.
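
Putting the two together, a simplified sketch of such a filtered dumpit.
netdev_nl_napi_dump_entry() is the helper from patch 5/9,
NETDEV_A_NAPI_IFINDEX is a placeholder mirroring the YAML 'ifindex'
attribute, and dump resumption state is omitted for brevity:

	static int netdev_nl_napi_get_dumpit(struct sk_buff *skb,
					     struct netlink_callback *cb)
	{
		const struct genl_dumpit_info *info = genl_dumpit_info(cb);
		struct net *net = sock_net(skb->sk);
		struct net_device *netdev;
		int n_idx = 0, err = 0;

		rtnl_lock();
		if (info->attrs[NETDEV_A_NAPI_IFINDEX]) {
			u32 ifindex = nla_get_u32(info->attrs[NETDEV_A_NAPI_IFINDEX]);

			netdev = __dev_get_by_index(net, ifindex);
			err = netdev ? netdev_nl_napi_dump_entry(netdev, skb, cb, &n_idx)
				     : -ENODEV;
		} else {
			for_each_netdev(net, netdev) {
				err = netdev_nl_napi_dump_entry(netdev, skb, cb, &n_idx);
				if (err < 0)
					break;
				n_idx = 0;
			}
		}
		rtnl_unlock();

		/* a full skb is not an error for a dump; user space calls back in */
		return err == -EMSGSIZE ? skb->len : err;
	}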

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2023-08-09  2:45 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2023-07-29  0:46 [net-next PATCH v1 0/9] Introduce NAPI queues support Amritha Nambiar
2023-07-29  0:46 ` [net-next PATCH v1 1/9] net: Introduce new fields for napi and queue associations Amritha Nambiar
2023-07-29  9:55   ` kernel test robot
2023-07-30 17:10   ` Simon Horman
2023-07-31 22:57     ` Nambiar, Amritha
2023-07-29  0:47 ` [net-next PATCH v1 2/9] ice: Add support in the driver for associating napi with queue[s] Amritha Nambiar
2023-07-29  0:47 ` [net-next PATCH v1 3/9] netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI Amritha Nambiar
2023-07-31 19:36   ` Jakub Kicinski
2023-07-31 23:12     ` Nambiar, Amritha
2023-08-01  0:13       ` Jakub Kicinski
2023-08-01  0:24         ` Nambiar, Amritha
2023-08-01  0:35           ` Jakub Kicinski
2023-08-09  0:17             ` Nambiar, Amritha
2023-08-09  2:45               ` Jakub Kicinski
2023-07-29  0:47 ` [net-next PATCH v1 4/9] net: Move kernel helpers for queue index outside sysfs Amritha Nambiar
2023-07-29  0:47 ` [net-next PATCH v1 5/9] netdev-genl: Add netlink framework functions for napi Amritha Nambiar
2023-07-30 17:15   ` Simon Horman
2023-07-31 23:00     ` Nambiar, Amritha
2023-07-31 19:37   ` Jakub Kicinski
2023-07-31 23:01     ` Nambiar, Amritha
2023-07-29  0:47 ` [net-next PATCH v1 6/9] netdev-genl: spec: Add irq in netdev netlink YAML spec Amritha Nambiar
2023-07-29  0:47 ` [net-next PATCH v1 7/9] net: Add NAPI IRQ support Amritha Nambiar
2023-07-29  4:05   ` Stephen Hemminger
2023-07-31 23:22     ` Nambiar, Amritha
2023-07-29  0:47 ` [net-next PATCH v1 8/9] netdev-genl: spec: Add PID in netdev netlink YAML spec Amritha Nambiar
2023-07-29  0:47 ` [net-next PATCH v1 9/9] netdev-genl: Add PID for the NAPI thread Amritha Nambiar
