Intel-Wired-Lan Archive on lore.kernel.org
* [Intel-wired-lan] [RFC v2 net-next 0/2] e1000/e1000e: Link IRQs, NAPIs, and queues
@ 2024-09-25 16:29 Joe Damato
  2024-09-25 16:29 ` [Intel-wired-lan] [RFC v2 net-next 1/2] e1000e: link NAPI instances to queues and IRQs Joe Damato
  2024-09-25 16:29 ` [Intel-wired-lan] [RFC v2 net-next 2/2] e1000: Link IRQs and queues to NAPIs Joe Damato
  0 siblings, 2 replies; 3+ messages in thread
From: Joe Damato @ 2024-09-25 16:29 UTC (permalink / raw)
  To: netdev
  Cc: Przemek Kitszel, Joe Damato, open list, Eric Dumazet, Tony Nguyen,
	moderated list:INTEL ETHERNET DRIVERS, Jakub Kicinski,
	Paolo Abeni, David S. Miller

Greetings:

This RFC v2 follows from an RFC submission I sent [1] for e1000e. The
original RFC added netdev-genl support for e1000e, but this new RFC
includes a patch to add support for e1000, as well.

Supporting this API in these drivers is very useful because commonly used
virtualization software, like VMware Fusion and VirtualBox, exposes e1000e
and e1000 NICs to VMs.

Developers who work on user apps in VMs may find themselves needing
access to this API to build, test, or run CI for their apps. This is
especially true for apps which use epoll-based busy poll and rely on
userland mapping NAPI IDs to queues.
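As an aside, for readers unfamiliar with that mapping: a user app can read
the NAPI ID a socket's traffic arrives on via the SO_INCOMING_NAPI_ID
sockopt and match it against the napi-id values netdev-genl reports. A
minimal sketch (not part of this series; the numeric fallback for the
sockopt constant is for Python builds that don't export it):

```python
import socket

# SO_INCOMING_NAPI_ID is a Linux sockopt; fall back to its numeric
# value (56) in case this Python build does not export it.
SO_INCOMING_NAPI_ID = getattr(socket, "SO_INCOMING_NAPI_ID", 56)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Reads 0 until the socket has actually received packets through a
# NAPI context; once traffic arrives, the value corresponds to an
# 'id' in the netdev-genl napi-get dump.
napi_id = s.getsockopt(socket.SOL_SOCKET, SO_INCOMING_NAPI_ID)
print("napi id:", napi_id)
s.close()
```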

I plan to send this series as an official patch series next week when
net-next reopens, but wanted to give the Intel folks a heads up in case
they had any comments or feedback I could address before then.

I've tested both patches; please see the commit messages for more details.

Thanks,
Joe

[1]: https://lore.kernel.org/lkml/20240918135726.1330-1-jdamato@fastly.com/

rfcv2:
  - Added patch 2 which includes netdev-genl support for e1000


Joe Damato (2):
  e1000e: link NAPI instances to queues and IRQs
  e1000: Link IRQs and queues to NAPIs

 drivers/net/ethernet/intel/e1000/e1000_main.c |  5 +++++
 drivers/net/ethernet/intel/e1000e/netdev.c    | 11 +++++++++++
 2 files changed, 16 insertions(+)

-- 
2.34.1



* [Intel-wired-lan] [RFC v2 net-next 1/2] e1000e: link NAPI instances to queues and IRQs
  2024-09-25 16:29 [Intel-wired-lan] [RFC v2 net-next 0/2] e1000/e1000e: Link IRQs, NAPIs, and queues Joe Damato
@ 2024-09-25 16:29 ` Joe Damato
  2024-09-25 16:29 ` [Intel-wired-lan] [RFC v2 net-next 2/2] e1000: Link IRQs and queues to NAPIs Joe Damato
  1 sibling, 0 replies; 3+ messages in thread
From: Joe Damato @ 2024-09-25 16:29 UTC (permalink / raw)
  To: netdev
  Cc: Przemek Kitszel, Joe Damato, open list, Eric Dumazet, Tony Nguyen,
	moderated list:INTEL ETHERNET DRIVERS, Jakub Kicinski,
	Paolo Abeni, David S. Miller

Make e1000e compatible with the newly added netdev-genl APIs.

$ cat /proc/interrupts | grep ens | cut -f1 --delimiter=':'
 50
 51
 52

While e1000e allocates three IRQs (RX, TX, and other), it appears to have
only a single NAPI instance, so I've associated that NAPI with the RX IRQ
(50 on my system, seen above):

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                       --dump napi-get --json='{"ifindex": 2}'
[{'id': 145, 'ifindex': 2, 'irq': 50}]

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                       --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 145, 'type': 'rx'},
 {'id': 0, 'ifindex': 2, 'napi-id': 145, 'type': 'tx'}]

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/intel/e1000e/netdev.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
index f103249b12fa..b527642c3a82 100644
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -4613,6 +4613,7 @@ int e1000e_open(struct net_device *netdev)
 	struct e1000_hw *hw = &adapter->hw;
 	struct pci_dev *pdev = adapter->pdev;
 	int err;
+	int irq;
 
 	/* disallow open during test */
 	if (test_bit(__E1000_TESTING, &adapter->state))
@@ -4676,7 +4677,15 @@ int e1000e_open(struct net_device *netdev)
 	/* From here on the code is the same as e1000e_up() */
 	clear_bit(__E1000_DOWN, &adapter->state);
 
+	if (adapter->int_mode == E1000E_INT_MODE_MSIX)
+		irq = adapter->msix_entries[0].vector;
+	else
+		irq = adapter->pdev->irq;
+
+	netif_napi_set_irq(&adapter->napi, irq);
 	napi_enable(&adapter->napi);
+	netif_queue_set_napi(netdev, 0, NETDEV_QUEUE_TYPE_RX, &adapter->napi);
+	netif_queue_set_napi(netdev, 0, NETDEV_QUEUE_TYPE_TX, &adapter->napi);
 
 	e1000_irq_enable(adapter);
 
@@ -4735,6 +4744,8 @@ int e1000e_close(struct net_device *netdev)
 		netdev_info(netdev, "NIC Link is Down\n");
 	}
 
+	netif_queue_set_napi(netdev, 0, NETDEV_QUEUE_TYPE_RX, NULL);
+	netif_queue_set_napi(netdev, 0, NETDEV_QUEUE_TYPE_TX, NULL);
 	napi_disable(&adapter->napi);
 
 	e1000e_free_tx_resources(adapter->tx_ring);
-- 
2.34.1



* [Intel-wired-lan] [RFC v2 net-next 2/2] e1000: Link IRQs and queues to NAPIs
  2024-09-25 16:29 [Intel-wired-lan] [RFC v2 net-next 0/2] e1000/e1000e: Link IRQs, NAPIs, and queues Joe Damato
  2024-09-25 16:29 ` [Intel-wired-lan] [RFC v2 net-next 1/2] e1000e: link NAPI instances to queues and IRQs Joe Damato
@ 2024-09-25 16:29 ` Joe Damato
  1 sibling, 0 replies; 3+ messages in thread
From: Joe Damato @ 2024-09-25 16:29 UTC (permalink / raw)
  To: netdev
  Cc: Przemek Kitszel, Joe Damato, open list, Eric Dumazet, Tony Nguyen,
	moderated list:INTEL ETHERNET DRIVERS, Jakub Kicinski,
	Paolo Abeni, David S. Miller

Add support for netdev-genl, allowing users to query IRQ, NAPI, and
queue information.

After this patch is applied, note the IRQ assigned to my NIC:

$ cat /proc/interrupts | grep enp0s8 | cut -f1 --delimiter=':'
 18

Note the output from the CLI:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 2}'
[{'id': 513, 'ifindex': 2, 'irq': 18}]

This device supports only one RX and one TX queue, so querying those:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 513, 'type': 'rx'},
 {'id': 0, 'ifindex': 2, 'napi-id': 513, 'type': 'tx'}]

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/intel/e1000/e1000_main.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
index ab7ae418d294..4de9b156b2be 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -513,6 +513,8 @@ void e1000_down(struct e1000_adapter *adapter)
 	 */
 	netif_carrier_off(netdev);
 
+	netif_queue_set_napi(netdev, 0, NETDEV_QUEUE_TYPE_RX, NULL);
+	netif_queue_set_napi(netdev, 0, NETDEV_QUEUE_TYPE_TX, NULL);
 	napi_disable(&adapter->napi);
 
 	e1000_irq_disable(adapter);
@@ -1392,7 +1394,10 @@ int e1000_open(struct net_device *netdev)
 	/* From here on the code is the same as e1000_up() */
 	clear_bit(__E1000_DOWN, &adapter->flags);
 
+	netif_napi_set_irq(&adapter->napi, adapter->pdev->irq);
 	napi_enable(&adapter->napi);
+	netif_queue_set_napi(netdev, 0, NETDEV_QUEUE_TYPE_RX, &adapter->napi);
+	netif_queue_set_napi(netdev, 0, NETDEV_QUEUE_TYPE_TX, &adapter->napi);
 
 	e1000_irq_enable(adapter);
 
-- 
2.34.1


