linux-doc.vger.kernel.org archive mirror
* [PATCH 0/4] Add octeon_ep driver
@ 2022-02-10 21:33 Veerasenareddy Burru
  2022-02-10 21:33 ` [PATCH 2/4] octeon_ep: add support for ndo ops Veerasenareddy Burru
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Veerasenareddy Burru @ 2022-02-10 21:33 UTC (permalink / raw)
  To: vburru, davem, kuba, corbet, netdev, linux-doc, linux-kernel

This driver implements the networking functionality of Marvell's Octeon
PCI Endpoint NIC.

This driver supports the following devices:
 * Network controller: Cavium, Inc. Device b200

Veerasenareddy Burru (4):
  octeon_ep: Add driver framework and device initialization.
  octeon_ep: add support for ndo ops.
  octeon_ep: add Tx/Rx and interrupt support.
  octeon_ep: add ethtool support for Octeon PCI Endpoint NIC.

 .../device_drivers/ethernet/index.rst         |    1 +
 .../ethernet/marvell/octeon_ep.rst            |   35 +
 MAINTAINERS                                   |    7 +
 drivers/net/ethernet/marvell/Kconfig          |    1 +
 drivers/net/ethernet/marvell/Makefile         |    1 +
 .../net/ethernet/marvell/octeon_ep/Kconfig    |   20 +
 .../net/ethernet/marvell/octeon_ep/Makefile   |    9 +
 .../marvell/octeon_ep/octep_cn9k_pf.c         |  737 +++++++++++
 .../ethernet/marvell/octeon_ep/octep_config.h |  204 +++
 .../marvell/octeon_ep/octep_ctrl_mbox.c       |  254 ++++
 .../marvell/octeon_ep/octep_ctrl_mbox.h       |  170 +++
 .../marvell/octeon_ep/octep_ctrl_net.c        |  194 +++
 .../marvell/octeon_ep/octep_ctrl_net.h        |  299 +++++
 .../marvell/octeon_ep/octep_ethtool.c         |  509 +++++++
 .../ethernet/marvell/octeon_ep/octep_main.c   | 1177 +++++++++++++++++
 .../ethernet/marvell/octeon_ep/octep_main.h   |  379 ++++++
 .../marvell/octeon_ep/octep_regs_cn9k_pf.h    |  367 +++++
 .../net/ethernet/marvell/octeon_ep/octep_rx.c |  512 +++++++
 .../net/ethernet/marvell/octeon_ep/octep_rx.h |  199 +++
 .../net/ethernet/marvell/octeon_ep/octep_tx.c |  334 +++++
 .../net/ethernet/marvell/octeon_ep/octep_tx.h |  284 ++++
 21 files changed, 5693 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/marvell/octeon_ep.rst
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/Kconfig
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/Makefile
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_cn9k_pf.c
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_config.h
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_mbox.c
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_mbox.h
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.h
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_main.c
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_main.h
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_rx.h
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_tx.h

-- 
2.17.1



* [PATCH 2/4] octeon_ep: add support for ndo ops.
  2022-02-10 21:33 [PATCH 0/4] Add octeon_ep driver Veerasenareddy Burru
@ 2022-02-10 21:33 ` Veerasenareddy Burru
  2022-02-13 15:16   ` Leon Romanovsky
  2022-02-10 21:33 ` [PATCH 3/4] octeon_ep: add Tx/Rx and interrupt support Veerasenareddy Burru
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Veerasenareddy Burru @ 2022-02-10 21:33 UTC (permalink / raw)
  To: vburru, davem, kuba, corbet, netdev, linux-doc, linux-kernel
  Cc: Abhijit Ayarekar, Satananda Burla

Add ndo ops to set the MAC address, change the MTU and get stats.
Add control path support to set the MAC address, change the MTU,
get stats, set the speed, and get/set the link mode.
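The control-path helpers in this patch all follow the same shape: fill a request structure, wrap it in a mailbox message carrying flags and a payload size in words, and hand it to the mailbox send routine. The following is a minimal userspace sketch of that pattern; all structure and constant names here are illustrative stand-ins, not the driver's real ABI.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins for the driver's mailbox types. */
#define MBOX_FLAG_REQ  0x1
#define CMD_MTU        0x2
#define CMD_SET        0x1

struct mbox_req {
	uint16_t cmd;     /* which control operation (e.g. MTU) */
	uint16_t subcmd;  /* get or set */
	uint32_t val;     /* operation argument, e.g. the new MTU */
};

struct mbox_msg {
	uint16_t flags;   /* request/response flags */
	uint16_t sizew;   /* payload size in 8-byte words */
	void *payload;    /* points at the request structure */
};

/* Stand-in for the real send routine; a real implementation would copy
 * the payload into shared BAR memory and ring a doorbell.
 */
static int mbox_send(struct mbox_msg *msg)
{
	return (msg->flags & MBOX_FLAG_REQ) ? 0 : -1;
}

/* Mirrors the shape of octep_set_mtu(): build request, wrap, send. */
static int set_mtu(int mtu)
{
	struct mbox_req req = { 0 };
	struct mbox_msg msg = { 0 };

	req.cmd = CMD_MTU;
	req.subcmd = CMD_SET;
	req.val = (uint32_t)mtu;

	msg.flags = MBOX_FLAG_REQ;
	msg.sizew = (sizeof(req) + 7) / 8;  /* round up to 8-byte words */
	msg.payload = &req;
	return mbox_send(&msg);
}
```

Every setter in octep_ctrl_net.c is a variation on this one sequence, differing only in which union member of the request it fills.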

Signed-off-by: Veerasenareddy Burru <vburru@marvell.com>
Signed-off-by: Abhijit Ayarekar <aayarekar@marvell.com>
Signed-off-by: Satananda Burla <sburla@marvell.com>
---
 .../marvell/octeon_ep/octep_ctrl_net.c        | 105 ++++++++++++++++++
 .../ethernet/marvell/octeon_ep/octep_main.c   |  67 +++++++++++
 2 files changed, 172 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c b/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c
index 1f0d8ba3c8ee..be9b0f31c754 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c
@@ -87,3 +87,108 @@ int octep_get_mac_addr(struct octep_device *oct, u8 *addr)
 
 	return 0;
 }
+
+int octep_set_mac_addr(struct octep_device *oct, u8 *addr)
+{
+	struct octep_ctrl_mbox_msg msg = { 0 };
+	struct octep_ctrl_net_h2f_req req = { 0 };
+
+	req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_MAC;
+	req.mac.cmd = OCTEP_CTRL_NET_CMD_SET;
+	memcpy(&req.mac.addr, addr, ETH_ALEN);
+
+	msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
+	msg.hdr.sizew = OCTEP_CTRL_NET_H2F_MAC_REQ_SZW;
+	msg.msg = &req;
+	return octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
+}
+
+int octep_set_mtu(struct octep_device *oct, int mtu)
+{
+	struct octep_ctrl_mbox_msg msg = { 0 };
+	struct octep_ctrl_net_h2f_req req = { 0 };
+
+	req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_MTU;
+	req.mtu.cmd = OCTEP_CTRL_NET_CMD_SET;
+	req.mtu.val = mtu;
+
+	msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
+	msg.hdr.sizew = OCTEP_CTRL_NET_H2F_MTU_REQ_SZW;
+	msg.msg = &req;
+	return octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
+}
+
+int octep_get_if_stats(struct octep_device *oct)
+{
+	struct octep_ctrl_mbox_msg msg = { 0 };
+	struct octep_ctrl_net_h2f_req req = { 0 };
+	struct octep_iface_rx_stats *iface_rx_stats;
+	struct octep_iface_tx_stats *iface_tx_stats;
+	int err;
+
+	req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_GET_IF_STATS;
+	req.mac.cmd = OCTEP_CTRL_NET_CMD_GET;
+	req.get_stats.offset = oct->ctrl_mbox_ifstats_offset;
+
+	msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
+	msg.hdr.sizew = OCTEP_CTRL_NET_H2F_GET_STATS_REQ_SZW;
+	msg.msg = &req;
+	err = octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
+	if (!err) {
+		iface_rx_stats = (struct octep_iface_rx_stats *)(oct->ctrl_mbox.barmem +
+								 oct->ctrl_mbox_ifstats_offset);
+		iface_tx_stats = (struct octep_iface_tx_stats *)(oct->ctrl_mbox.barmem +
+								 oct->ctrl_mbox_ifstats_offset +
+								 sizeof(struct octep_iface_rx_stats)
+								);
+		memcpy(&oct->iface_rx_stats, iface_rx_stats, sizeof(struct octep_iface_rx_stats));
+		memcpy(&oct->iface_tx_stats, iface_tx_stats, sizeof(struct octep_iface_tx_stats));
+	}
+
+	return err;
+}
+
+int octep_get_link_info(struct octep_device *oct)
+{
+	struct octep_ctrl_mbox_msg msg = { 0 };
+	struct octep_ctrl_net_h2f_req req = { 0 };
+	struct octep_ctrl_net_h2f_resp *resp;
+	int err;
+
+	req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_LINK_INFO;
+	req.mac.cmd = OCTEP_CTRL_NET_CMD_GET;
+
+	msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
+	msg.hdr.sizew = OCTEP_CTRL_NET_H2F_LINK_INFO_REQ_SZW;
+	msg.msg = &req;
+	err = octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
+	if (err)
+		return err;
+
+	resp = (struct octep_ctrl_net_h2f_resp *)&req;
+	oct->link_info.supported_modes = resp->link_info.supported_modes;
+	oct->link_info.advertised_modes = resp->link_info.advertised_modes;
+	oct->link_info.autoneg = resp->link_info.autoneg;
+	oct->link_info.pause = resp->link_info.pause;
+	oct->link_info.speed = resp->link_info.speed;
+
+	return 0;
+}
+
+int octep_set_link_info(struct octep_device *oct, struct octep_iface_link_info *link_info)
+{
+	struct octep_ctrl_mbox_msg msg = { 0 };
+	struct octep_ctrl_net_h2f_req req = { 0 };
+
+	req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_LINK_INFO;
+	req.link_info.cmd = OCTEP_CTRL_NET_CMD_SET;
+	req.link_info.info.advertised_modes = link_info->advertised_modes;
+	req.link_info.info.autoneg = link_info->autoneg;
+	req.link_info.info.pause = link_info->pause;
+	req.link_info.info.speed = link_info->speed;
+
+	msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
+	msg.hdr.sizew = OCTEP_CTRL_NET_H2F_LINK_INFO_REQ_SZW;
+	msg.msg = &req;
+	return octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
+}
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
index ec8e8ad37789..307a9ce2b67e 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
@@ -306,6 +306,32 @@ static netdev_tx_t octep_start_xmit(struct sk_buff *skb,
 static void octep_get_stats64(struct net_device *netdev,
 			      struct rtnl_link_stats64 *stats)
 {
+	u64 tx_packets, tx_bytes, rx_packets, rx_bytes;
+	struct octep_device *oct = netdev_priv(netdev);
+	int q;
+
+	octep_get_if_stats(oct);
+	tx_packets = 0;
+	tx_bytes = 0;
+	rx_packets = 0;
+	rx_bytes = 0;
+	for (q = 0; q < oct->num_oqs; q++) {
+		struct octep_iq *iq = oct->iq[q];
+		struct octep_oq *oq = oct->oq[q];
+
+		tx_packets += iq->stats.instr_completed;
+		tx_bytes += iq->stats.bytes_sent;
+		rx_packets += oq->stats.packets;
+		rx_bytes += oq->stats.bytes;
+	}
+	stats->tx_packets = tx_packets;
+	stats->tx_bytes = tx_bytes;
+	stats->rx_packets = rx_packets;
+	stats->rx_bytes = rx_bytes;
+	stats->multicast = oct->iface_rx_stats.mcast_pkts;
+	stats->rx_errors = oct->iface_rx_stats.err_pkts;
+	stats->collisions = oct->iface_tx_stats.xscol;
+	stats->tx_fifo_errors = oct->iface_tx_stats.undflw;
 }
 
 /**
@@ -346,11 +372,52 @@ static void octep_tx_timeout(struct net_device *netdev, unsigned int txqueue)
 	queue_work(octep_wq, &oct->tx_timeout_task);
 }
 
+static int octep_set_mac(struct net_device *netdev, void *p)
+{
+	struct octep_device *oct = netdev_priv(netdev);
+	struct sockaddr *addr = (struct sockaddr *)p;
+	int err;
+
+	if (!is_valid_ether_addr(addr->sa_data))
+		return -EADDRNOTAVAIL;
+
+	err = octep_set_mac_addr(oct, addr->sa_data);
+	if (err)
+		return err;
+
+	memcpy(oct->mac_addr, addr->sa_data, ETH_ALEN);
+	memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
+
+	return 0;
+}
+
+static int octep_change_mtu(struct net_device *netdev, int new_mtu)
+{
+	struct octep_device *oct = netdev_priv(netdev);
+	struct octep_iface_link_info *link_info;
+	int err = 0;
+
+	link_info = &oct->link_info;
+	if (link_info->mtu == new_mtu)
+		return 0;
+
+	err = octep_set_mtu(oct, new_mtu);
+	if (!err) {
+		oct->link_info.mtu = new_mtu;
+		netdev->mtu = new_mtu;
+	}
+
+	return err;
+}
+
 static const struct net_device_ops octep_netdev_ops = {
 	.ndo_open                = octep_open,
 	.ndo_stop                = octep_stop,
 	.ndo_start_xmit          = octep_start_xmit,
+	.ndo_get_stats64         = octep_get_stats64,
 	.ndo_tx_timeout          = octep_tx_timeout,
+	.ndo_set_mac_address     = octep_set_mac,
+	.ndo_change_mtu          = octep_change_mtu,
 };
 
 /**
-- 
2.17.1



* [PATCH 3/4] octeon_ep: add Tx/Rx and interrupt support.
  2022-02-10 21:33 [PATCH 0/4] Add octeon_ep driver Veerasenareddy Burru
  2022-02-10 21:33 ` [PATCH 2/4] octeon_ep: add support for ndo ops Veerasenareddy Burru
@ 2022-02-10 21:33 ` Veerasenareddy Burru
  2022-02-10 21:33 ` [PATCH 4/4] octeon_ep: add ethtool support for Octeon PCI Endpoint NIC Veerasenareddy Burru
       [not found] ` <20220210213306.3599-2-vburru@marvell.com>
  3 siblings, 0 replies; 7+ messages in thread
From: Veerasenareddy Burru @ 2022-02-10 21:33 UTC (permalink / raw)
  To: vburru, davem, kuba, corbet, netdev, linux-doc, linux-kernel
  Cc: Abhijit Ayarekar, Satananda Burla

Add support to enable MSI-X and register interrupts.
Add support to process Tx and Rx traffic, including Tx completion
processing and Rx buffer refill.
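The Rx side of this patch is driven by the NAPI budget: octep_oq_process_rx() keeps draining pending packets until the budget is spent or the queue runs dry, re-reading the hardware count only when the cached pending count is exhausted. A minimal model of that loop, with illustrative names in place of the driver's structures:

```c
#include <assert.h>

/* Simplified model of an Rx queue; the real octep_oq carries ring
 * indices, DMA state and statistics as well.
 */
struct rxq {
	int pkts_pending;   /* packets the hardware has posted */
};

/* Stand-in for __octep_oq_process_rx(): consume up to n packets. */
static int process(struct rxq *q, int n)
{
	q->pkts_pending -= n;
	return n;
}

/* Mirrors the budget loop in octep_oq_process_rx(). */
static int poll_rx(struct rxq *q, int budget)
{
	int total = 0;

	while (total < budget) {
		/* process at most the remaining budget */
		int avail = budget - total;

		if (q->pkts_pending < avail)
			avail = q->pkts_pending;
		if (!avail)
			break;
		total += process(q, avail);
	}
	return total;
}
```

Returning exactly `budget` when more work remains is what tells the NAPI core to keep polling instead of re-enabling the queue interrupt.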

Signed-off-by: Veerasenareddy Burru <vburru@marvell.com>
Signed-off-by: Abhijit Ayarekar <aayarekar@marvell.com>
Signed-off-by: Satananda Burla <sburla@marvell.com>
---
 .../ethernet/marvell/octeon_ep/octep_main.c   | 441 ++++++++++++++++++
 .../net/ethernet/marvell/octeon_ep/octep_rx.c | 252 +++++++++-
 .../net/ethernet/marvell/octeon_ep/octep_tx.c |  73 +++
 3 files changed, 764 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
index 307a9ce2b67e..700852fd4c3a 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
@@ -31,6 +31,271 @@ MODULE_DESCRIPTION(OCTEP_DRV_STRING);
 MODULE_LICENSE("GPL");
 MODULE_VERSION(OCTEP_DRV_VERSION_STR);
 
+/**
+ * octep_alloc_ioq_vectors() - Allocate Tx/Rx Queue interrupt info.
+ *
+ * @oct: Octeon device private data structure.
+ *
+ * Allocate resources to hold per Tx/Rx queue interrupt info.
+ * This information is passed to the interrupt handler, from which the NAPI
+ * poll is scheduled; it includes quick access to the private data of the
+ * Tx/Rx queue corresponding to the interrupt being handled.
+ *
+ * Return: 0, on successful allocation of resources for all queue interrupts.
+ *         -1, if failed to allocate any resource.
+ */
+static int octep_alloc_ioq_vectors(struct octep_device *oct)
+{
+	int i;
+	struct octep_ioq_vector *ioq_vector;
+
+	for (i = 0; i < oct->num_oqs; i++) {
+		oct->ioq_vector[i] = vzalloc(sizeof(*oct->ioq_vector[i]));
+		if (!oct->ioq_vector[i])
+			goto free_ioq_vector;
+
+		ioq_vector = oct->ioq_vector[i];
+		ioq_vector->iq = oct->iq[i];
+		ioq_vector->oq = oct->oq[i];
+		ioq_vector->octep_dev = oct;
+	}
+
+	dev_info(&oct->pdev->dev, "Allocated %d IOQ vectors\n", oct->num_oqs);
+	return 0;
+
+free_ioq_vector:
+	while (i) {
+		i--;
+		vfree(oct->ioq_vector[i]);
+		oct->ioq_vector[i] = NULL;
+	}
+	return -1;
+}
+
+/**
+ * octep_free_ioq_vectors() - Free Tx/Rx Queue interrupt vector info.
+ *
+ * @oct: Octeon device private data structure.
+ */
+static void octep_free_ioq_vectors(struct octep_device *oct)
+{
+	int i;
+
+	for (i = 0; i < oct->num_oqs; i++) {
+		if (oct->ioq_vector[i]) {
+			vfree(oct->ioq_vector[i]);
+			oct->ioq_vector[i] = NULL;
+		}
+	}
+	netdev_info(oct->netdev, "Freed IOQ Vectors\n");
+}
+
+/**
+ * octep_enable_msix_range() - enable MSI-X interrupts.
+ *
+ * @oct: Octeon device private data structure.
+ *
+ * Allocate and enable all MSI-X interrupts (queue and non-queue interrupts)
+ * for the Octeon device.
+ *
+ * Return: 0, on successfully enabling all MSI-X interrupts.
+ *         -1, if failed to enable any MSI-X interrupt.
+ */
+static int octep_enable_msix_range(struct octep_device *oct)
+{
+	int num_msix, msix_allocated;
+	int i;
+
+	/* Queue interrupts plus generic (non-queue) interrupts */
+	num_msix = oct->num_oqs + CFG_GET_NON_IOQ_MSIX(oct->conf);
+	oct->msix_entries = kcalloc(num_msix,
+				    sizeof(struct msix_entry), GFP_KERNEL);
+	if (!oct->msix_entries)
+		goto msix_alloc_err;
+
+	for (i = 0; i < num_msix; i++)
+		oct->msix_entries[i].entry = i;
+
+	msix_allocated = pci_enable_msix_range(oct->pdev, oct->msix_entries,
+					       num_msix, num_msix);
+	if (msix_allocated != num_msix) {
+		dev_err(&oct->pdev->dev,
+			"Failed to enable %d msix irqs; got only %d\n",
+			num_msix, msix_allocated);
+		goto enable_msix_err;
+	}
+	oct->num_irqs = msix_allocated;
+	dev_info(&oct->pdev->dev, "MSI-X enabled successfully\n");
+
+	return 0;
+
+enable_msix_err:
+	if (msix_allocated > 0)
+		pci_disable_msix(oct->pdev);
+	kfree(oct->msix_entries);
+	oct->msix_entries = NULL;
+msix_alloc_err:
+	return -1;
+}
+
+/**
+ * octep_disable_msix() - disable MSI-X interrupts.
+ *
+ * @oct: Octeon device private data structure.
+ *
+ * Disable MSI-X on the Octeon device.
+ */
+static void octep_disable_msix(struct octep_device *oct)
+{
+	pci_disable_msix(oct->pdev);
+	kfree(oct->msix_entries);
+	oct->msix_entries = NULL;
+	dev_info(&oct->pdev->dev, "Disabled MSI-X\n");
+}
+
+/**
+ * octep_non_ioq_intr_handler() - common handler for all generic interrupts.
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data.
+ *
+ * This is the common handler for all non-queue (generic) interrupts.
+ */
+static irqreturn_t octep_non_ioq_intr_handler(int irq, void *data)
+{
+	struct octep_device *oct = data;
+
+	return oct->hw_ops.non_ioq_intr_handler(oct);
+}
+
+/**
+ * octep_ioq_intr_handler() - handler for all Tx/Rx queue interrupts.
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data; contains pointers to the Tx/Rx queue private data
+ *        and the corresponding NAPI context.
+ *
+ * This is the common handler for all Tx/Rx queue interrupts.
+ */
+static irqreturn_t octep_ioq_intr_handler(int irq, void *data)
+{
+	struct octep_ioq_vector *ioq_vector = data;
+	struct octep_device *oct = ioq_vector->octep_dev;
+
+	return oct->hw_ops.ioq_intr_handler(ioq_vector);
+}
+
+/**
+ * octep_request_irqs() - Register interrupt handlers.
+ *
+ * @oct: Octeon device private data structure.
+ *
+ * Register handlers for all queue and non-queue interrupts.
+ *
+ * Return: 0, on successful registration of all interrupt handlers.
+ *         -1, on any error.
+ */
+static int octep_request_irqs(struct octep_device *oct)
+{
+	struct net_device *netdev = oct->netdev;
+	struct octep_ioq_vector *ioq_vector;
+	struct msix_entry *msix_entry;
+	char **non_ioq_msix_names;
+	int num_non_ioq_msix;
+	int ret, i;
+
+	num_non_ioq_msix = CFG_GET_NON_IOQ_MSIX(oct->conf);
+	non_ioq_msix_names = CFG_GET_NON_IOQ_MSIX_NAMES(oct->conf);
+
+	oct->non_ioq_irq_names = kcalloc(num_non_ioq_msix,
+					 OCTEP_MSIX_NAME_SIZE, GFP_KERNEL);
+	if (!oct->non_ioq_irq_names)
+		goto alloc_err;
+
+	/* First few MSI-X interrupts are non-queue interrupts */
+	for (i = 0; i < num_non_ioq_msix; i++) {
+		char *irq_name;
+
+		irq_name = &oct->non_ioq_irq_names[i * OCTEP_MSIX_NAME_SIZE];
+		msix_entry = &oct->msix_entries[i];
+
+		snprintf(irq_name, OCTEP_MSIX_NAME_SIZE,
+			 "%s-%s", netdev->name, non_ioq_msix_names[i]);
+		ret = request_irq(msix_entry->vector,
+				  octep_non_ioq_intr_handler, 0,
+				  irq_name, oct);
+		if (ret) {
+			netdev_err(netdev,
+				   "request_irq failed for %s; err=%d",
+				   irq_name, ret);
+			goto non_ioq_irq_err;
+		}
+	}
+
+	/* Request IRQs for Tx/Rx queues */
+	for (i = 0; i < oct->num_oqs; i++) {
+		ioq_vector = oct->ioq_vector[i];
+		msix_entry = &oct->msix_entries[i + num_non_ioq_msix];
+
+		snprintf(ioq_vector->name, sizeof(ioq_vector->name),
+			 "%s-q%d", netdev->name, i);
+		ret = request_irq(msix_entry->vector,
+				  octep_ioq_intr_handler, 0,
+				  ioq_vector->name, ioq_vector);
+		if (ret) {
+			netdev_err(netdev,
+				   "request_irq failed for Q-%d; err=%d",
+				   i, ret);
+			goto ioq_irq_err;
+		}
+
+		cpumask_set_cpu(i % num_online_cpus(),
+				&ioq_vector->affinity_mask);
+		irq_set_affinity_hint(msix_entry->vector,
+				      &ioq_vector->affinity_mask);
+	}
+
+	return 0;
+ioq_irq_err:
+	while (i > num_non_ioq_msix) {
+		--i;
+		irq_set_affinity_hint(oct->msix_entries[i].vector, NULL);
+		free_irq(oct->msix_entries[i].vector, oct->ioq_vector[i]);
+	}
+non_ioq_irq_err:
+	while (i) {
+		--i;
+		free_irq(oct->msix_entries[i].vector, oct);
+	}
+alloc_err:
+	return -1;
+}
+
+/**
+ * octep_free_irqs() - free all registered interrupts.
+ *
+ * @oct: Octeon device private data structure.
+ *
+ * Free all queue and non-queue interrupts of the Octeon device.
+ */
+static void octep_free_irqs(struct octep_device *oct)
+{
+	int i;
+
+	/* First few MSI-X interrupts are non queue interrupts; free them */
+	for (i = 0; i < CFG_GET_NON_IOQ_MSIX(oct->conf); i++)
+		free_irq(oct->msix_entries[i].vector, oct);
+	kfree(oct->non_ioq_irq_names);
+
+	/* Free IRQs for Input/Output (Tx/Rx) queues */
+	for (i = CFG_GET_NON_IOQ_MSIX(oct->conf); i < oct->num_irqs; i++) {
+		irq_set_affinity_hint(oct->msix_entries[i].vector, NULL);
+		free_irq(oct->msix_entries[i].vector,
+			 oct->ioq_vector[i - CFG_GET_NON_IOQ_MSIX(oct->conf)]);
+	}
+	netdev_info(oct->netdev, "IRQs freed\n");
+}
+
 /**
  * octep_setup_irqs() - setup interrupts for the Octeon device.
  *
@@ -102,6 +367,32 @@ static void octep_enable_ioq_irq(struct octep_iq *iq, struct octep_oq *oq)
 	writeq(1UL << OCTEP_IQ_INTR_RESEND_BIT, iq->inst_cnt_reg);
 }
 
+/**
+ * octep_napi_poll() - NAPI poll function for Tx/Rx.
+ *
+ * @napi: pointer to napi context.
+ * @budget: max number of packets to be processed in a single invocation.
+ */
+int octep_napi_poll(struct napi_struct *napi, int budget)
+{
+	struct octep_ioq_vector *ioq_vector =
+		container_of(napi, struct octep_ioq_vector, napi);
+	u32 tx_pending, rx_done;
+
+	tx_pending = octep_iq_process_completions(ioq_vector->iq, budget);
+	rx_done = octep_oq_process_rx(ioq_vector->oq, budget);
+
+	/* need more polling if Tx completion processing is still pending or
+	 * at least 'budget' number of Rx packets were processed.
+	 */
+	if (tx_pending || rx_done >= budget)
+		return budget;
+
+	napi_complete(napi);
+	octep_enable_ioq_irq(ioq_vector->iq, ioq_vector->oq);
+	return rx_done;
+}
+
 /**
  * octep_napi_add() - Add NAPI poll for all Tx/Rx queues.
  *
@@ -282,6 +573,36 @@ static int octep_stop(struct net_device *netdev)
 	return 0;
 }
 
+/**
+ * octep_iq_full_check() - check if a Tx queue is full.
+ *
+ * @iq: Octeon Tx queue data structure.
+ *
+ * Return: 0, if the Tx queue is not full.
+ *         1, if the Tx queue is full.
+ */
+static inline int octep_iq_full_check(struct octep_iq *iq)
+{
+	if (likely((iq->max_count - atomic_read(&iq->instr_pending)) >=
+		   OCTEP_WAKE_QUEUE_THRESHOLD))
+		return 0;
+
+	/* Stop the queue if unable to send */
+	netif_stop_subqueue(iq->netdev, iq->q_no);
+
+	/* check again and restart the queue, in case NAPI has just freed
+	 * enough Tx ring entries.
+	 */
+	if (unlikely((iq->max_count - atomic_read(&iq->instr_pending)) >=
+		     OCTEP_WAKE_QUEUE_THRESHOLD)) {
+		netif_start_subqueue(iq->netdev, iq->q_no);
+		iq->stats.restart_cnt++;
+		return 0;
+	}
+
+	return 1;
+}
+
 /**
 * octep_start_xmit() - Enqueue packet to Octeon hardware Tx Queue.
  *
@@ -294,6 +615,126 @@ static int octep_stop(struct net_device *netdev)
 static netdev_tx_t octep_start_xmit(struct sk_buff *skb,
 				    struct net_device *netdev)
 {
+	struct octep_device *oct = netdev_priv(netdev);
+	struct octep_tx_sglist_desc *sglist;
+	struct octep_tx_buffer *tx_buffer;
+	struct octep_tx_desc_hw *hw_desc;
+	struct skb_shared_info *shinfo;
+	struct octep_instr_hdr *ih;
+	struct octep_iq *iq;
+	skb_frag_t *frag;
+	u16 nr_frags, si;
+	u16 q_no, wi;
+
+	q_no = skb_get_queue_mapping(skb);
+	if (q_no >= oct->num_iqs) {
+		netdev_err(netdev, "Invalid Tx skb->queue_mapping=%d\n", q_no);
+		q_no = q_no % oct->num_iqs;
+	}
+
+	iq = oct->iq[q_no];
+	if (octep_iq_full_check(iq)) {
+		iq->stats.tx_busy++;
+		return NETDEV_TX_BUSY;
+	}
+
+	shinfo = skb_shinfo(skb);
+	nr_frags = shinfo->nr_frags;
+
+	wi = iq->host_write_index;
+	hw_desc = &iq->desc_ring[wi];
+	hw_desc->ih64 = 0;
+
+	tx_buffer = iq->buff_info + wi;
+	tx_buffer->skb = skb;
+
+	ih = &hw_desc->ih;
+	ih->tlen = skb->len;
+	ih->pkind = oct->pkind;
+
+	if (!nr_frags) {
+		tx_buffer->gather = 0;
+		tx_buffer->dma = dma_map_single(iq->dev, skb->data,
+						skb->len, DMA_TO_DEVICE);
+		if (dma_mapping_error(iq->dev, tx_buffer->dma))
+			goto dma_map_err;
+		hw_desc->dptr = tx_buffer->dma;
+	} else {
+		/* Scatter/Gather */
+		dma_addr_t dma;
+		u16 len;
+
+		sglist = tx_buffer->sglist;
+
+		ih->gsz = nr_frags + 1;
+		ih->gather = 1;
+		tx_buffer->gather = 1;
+
+		len = skb_headlen(skb);
+		dma = dma_map_single(iq->dev, skb->data, len, DMA_TO_DEVICE);
+		if (dma_mapping_error(iq->dev, dma))
+			goto dma_map_err;
+
+		dma_sync_single_for_cpu(iq->dev, tx_buffer->sglist_dma,
+					OCTEP_SGLIST_SIZE_PER_PKT,
+					DMA_TO_DEVICE);
+		memset(sglist, 0, OCTEP_SGLIST_SIZE_PER_PKT);
+		sglist[0].len[3] = len;
+		sglist[0].dma_ptr[0] = dma;
+
+		si = 1; /* entry 0 is main skb, mapped above */
+		frag = &shinfo->frags[0];
+		while (nr_frags--) {
+			len = skb_frag_size(frag);
+			dma = skb_frag_dma_map(iq->dev, frag, 0,
+					       len, DMA_TO_DEVICE);
+			if (dma_mapping_error(iq->dev, dma))
+				goto dma_map_sg_err;
+
+			sglist[si >> 2].len[3 - (si & 3)] = len;
+			sglist[si >> 2].dma_ptr[si & 3] = dma;
+
+			frag++;
+			si++;
+		}
+		dma_sync_single_for_device(iq->dev, tx_buffer->sglist_dma,
+					   OCTEP_SGLIST_SIZE_PER_PKT,
+					   DMA_TO_DEVICE);
+
+		hw_desc->dptr = tx_buffer->sglist_dma;
+	}
+
+	/* Flush the hw descriptor before writing to doorbell */
+	wmb();
+
+	/* Ring Doorbell to notify the NIC there is a new packet */
+	writel(1, iq->doorbell_reg);
+	atomic_inc(&iq->instr_pending);
+	wi++;
+	if (wi == iq->max_count)
+		wi = 0;
+	iq->host_write_index = wi;
+
+	netdev_tx_sent_queue(iq->netdev_q, skb->len);
+	iq->stats.instr_posted++;
+	skb_tx_timestamp(skb);
+	return NETDEV_TX_OK;
+
+dma_map_sg_err:
+	if (si > 0) {
+		dma_unmap_single(iq->dev, sglist[0].dma_ptr[0],
+				 sglist[0].len[0], DMA_TO_DEVICE);
+		sglist[0].len[0] = 0;
+	}
+	while (si > 1) {
+		dma_unmap_page(iq->dev, sglist[si >> 2].dma_ptr[si & 3],
+			       sglist[si >> 2].len[si & 3], DMA_TO_DEVICE);
+		sglist[si >> 2].len[si & 3] = 0;
+		si--;
+	}
+	tx_buffer->gather = 0;
+dma_map_err:
+	dev_kfree_skb_any(skb);
 	return NETDEV_TX_OK;
 }
 
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
index 97de3d1b6a00..009e245dca24 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
@@ -59,8 +59,7 @@ static int octep_oq_fill_ring_buffers(struct octep_oq *oq)
 rx_buf_alloc_err:
 	while (i) {
 		i--;
-		dma_unmap_page(oq->dev, desc_ring[i].buffer_ptr,
-			       PAGE_SIZE, DMA_FROM_DEVICE);
+		dma_unmap_page(oq->dev, desc_ring[i].buffer_ptr, PAGE_SIZE, DMA_FROM_DEVICE);
 		put_page(oq->buff_info[i].page);
 		oq->buff_info[i].page = NULL;
 	}
@@ -68,6 +67,50 @@ static int octep_oq_fill_ring_buffers(struct octep_oq *oq)
 	return -1;
 }
 
+/**
+ * octep_oq_refill() - refill buffers for used Rx ring descriptors.
+ *
+ * @oct: Octeon device private data structure.
+ * @oq: Octeon Rx queue data structure.
+ *
+ * Return: number of descriptors successfully refilled with receive buffers.
+ */
+static int octep_oq_refill(struct octep_device *oct, struct octep_oq *oq)
+{
+	struct octep_oq_desc_hw *desc_ring = oq->desc_ring;
+	struct page *page;
+	u32 refill_idx, i;
+
+	refill_idx = oq->host_refill_idx;
+	for (i = 0; i < oq->refill_count; i++) {
+		page = dev_alloc_page();
+		if (unlikely(!page)) {
+			dev_err(oq->dev, "refill: rx buffer alloc failed\n");
+			oq->stats.alloc_failures++;
+			break;
+		}
+
+		desc_ring[refill_idx].buffer_ptr = dma_map_page(oq->dev, page, 0,
+								PAGE_SIZE, DMA_FROM_DEVICE);
+		if (dma_mapping_error(oq->dev, desc_ring[refill_idx].buffer_ptr)) {
+			dev_err(oq->dev,
+				"OQ-%d buffer refill: DMA mapping error!\n",
+				oq->q_no);
+			put_page(page);
+			oq->stats.alloc_failures++;
+			break;
+		}
+		oq->buff_info[refill_idx].page = page;
+		refill_idx++;
+		if (refill_idx == oq->max_count)
+			refill_idx = 0;
+	}
+	oq->host_refill_idx = refill_idx;
+	oq->refill_count -= i;
+
+	return i;
+}
+
 /**
  * octep_setup_oq() - Setup a Rx queue.
  *
@@ -262,3 +305,208 @@ void octep_free_oqs(struct octep_device *oct)
 			"Successfully freed OQ(RxQ)-%d.\n", i);
 	}
 }
+
+/**
+ * octep_oq_check_hw_for_pkts() - Check for new Rx packets.
+ *
+ * @oct: Octeon device private data structure.
+ * @oq: Octeon Rx queue data structure.
+ *
+ * Return: packets received after previous check.
+ */
+static int octep_oq_check_hw_for_pkts(struct octep_device *oct,
+				      struct octep_oq *oq)
+{
+	u32 pkt_count, new_pkts;
+
+	pkt_count = readl(oq->pkts_sent_reg);
+	new_pkts = pkt_count - oq->last_pkt_count;
+
+	/* Clear the hardware packet counter register if the Rx queue is
+	 * being processed continuously within a single interrupt and the
+	 * counter is approaching its max value.
+	 * The counter is not cleared on every read, to save write cycles.
+	 */
+	if (unlikely(pkt_count > 0xF0000000U)) {
+		writel(pkt_count, oq->pkts_sent_reg);
+		pkt_count = readl(oq->pkts_sent_reg);
+		new_pkts += pkt_count;
+	}
+	oq->last_pkt_count = pkt_count;
+	oq->pkts_pending += new_pkts;
+	return new_pkts;
+}
+
+/**
+ * __octep_oq_process_rx() - Process hardware Rx queue and push to stack.
+ *
+ * @oct: Octeon device private data structure.
+ * @oq: Octeon Rx queue data structure.
+ * @pkts_to_process: number of packets to be processed.
+ *
+ * Process the new packets in the Rx queue.
+ * Packets larger than a single Rx buffer arrive in consecutive descriptors,
+ * but the count returned accounts only for full packets, not fragments.
+ *
+ * Return: number of packets processed and pushed to stack.
+ */
+static int __octep_oq_process_rx(struct octep_device *oct,
+				 struct octep_oq *oq, u16 pkts_to_process)
+{
+	struct octep_oq_resp_hw_ext *resp_hw_ext = NULL;
+	struct octep_rx_buffer *buff_info;
+	struct octep_oq_resp_hw *resp_hw;
+	u32 pkt, rx_bytes, desc_used;
+	struct sk_buff *skb;
+	u16 data_offset;
+	u32 read_idx;
+	void *data;
+
+	read_idx = oq->host_read_idx;
+	rx_bytes = 0;
+	desc_used = 0;
+	for (pkt = 0; pkt < pkts_to_process; pkt++) {
+		buff_info = (struct octep_rx_buffer *)&oq->buff_info[read_idx];
+		dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,
+			       PAGE_SIZE, DMA_FROM_DEVICE);
+		resp_hw = page_address(buff_info->page);
+		buff_info->page = NULL;
+
+		/* Swap the length field that is in Big-Endian to CPU */
+		buff_info->len = be64_to_cpu(resp_hw->length);
+		if (oct->caps_enabled & OCTEP_CAP_RX_CHECKSUM) {
+			/* Extended response header is immediately after
+			 * response header (resp_hw)
+			 */
+			resp_hw_ext = (struct octep_oq_resp_hw_ext *)
+				      (resp_hw + 1);
+			buff_info->len -= OCTEP_OQ_RESP_HW_EXT_SIZE;
+			/* Packet Data is immediately after
+			 * extended response header.
+			 */
+			data = resp_hw_ext + 1;
+			data_offset = OCTEP_OQ_RESP_HW_SIZE +
+				      OCTEP_OQ_RESP_HW_EXT_SIZE;
+		} else {
+			/* Data is immediately after
+			 * Hardware Rx response header.
+			 */
+			data = resp_hw + 1;
+			data_offset = OCTEP_OQ_RESP_HW_SIZE;
+		}
+		rx_bytes += buff_info->len;
+
+		if (buff_info->len <= oq->max_single_buffer_size) {
+			skb = build_skb((void *)resp_hw, PAGE_SIZE);
+			skb_reserve(skb, data_offset);
+			skb_put(skb, buff_info->len);
+			read_idx++;
+			desc_used++;
+			if (read_idx == oq->max_count)
+				read_idx = 0;
+		} else {
+			struct skb_shared_info *shinfo;
+			u16 data_len;
+
+			skb = build_skb((void *)resp_hw, PAGE_SIZE);
+			skb_reserve(skb, data_offset);
+			/* Head fragment includes response header(s);
+			 * subsequent fragments contains only data.
+			 */
+			skb_put(skb, oq->max_single_buffer_size);
+			read_idx++;
+			desc_used++;
+			if (read_idx == oq->max_count)
+				read_idx = 0;
+
+			shinfo = skb_shinfo(skb);
+			data_len = buff_info->len - oq->max_single_buffer_size;
+			while (data_len) {
+				dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,
+					       PAGE_SIZE, DMA_FROM_DEVICE);
+				buff_info = (struct octep_rx_buffer *)
+					    &oq->buff_info[read_idx];
+				if (data_len < oq->buffer_size) {
+					buff_info->len = data_len;
+					data_len = 0;
+				} else {
+					buff_info->len = oq->buffer_size;
+					data_len -= oq->buffer_size;
+				}
+
+				skb_add_rx_frag(skb, shinfo->nr_frags,
+						buff_info->page, 0,
+						buff_info->len,
+						buff_info->len);
+				buff_info->page = NULL;
+				read_idx++;
+				desc_used++;
+				if (read_idx == oq->max_count)
+					read_idx = 0;
+			}
+		}
+
+		skb->dev = oq->netdev;
+		skb->protocol = eth_type_trans(skb, skb->dev);
+		if (resp_hw_ext &&
+		    resp_hw_ext->csum_verified == OCTEP_CSUM_VERIFIED)
+			skb->ip_summed = CHECKSUM_UNNECESSARY;
+		else
+			skb->ip_summed = CHECKSUM_NONE;
+		napi_gro_receive(oq->napi, skb);
+	}
+
+	oq->host_read_idx = read_idx;
+	oq->refill_count += desc_used;
+	oq->stats.packets += pkt;
+	oq->stats.bytes += rx_bytes;
+
+	return pkt;
+}
+
+/**
+ * octep_oq_process_rx() - Process Rx queue.
+ *
+ * @oq: Octeon Rx queue data structure.
+ * @budget: maximum number of packets that can be processed in one invocation.
+ *
+ * Check for newly received packets and process them.
+ * Keeps checking for new packets until budget is used or no new packets seen.
+ *
+ * Return: number of packets processed.
+ */
+int octep_oq_process_rx(struct octep_oq *oq, int budget)
+{
+	u32 pkts_available, pkts_processed, total_pkts_processed;
+	struct octep_device *oct = oq->octep_dev;
+
+	pkts_available = 0;
+	pkts_processed = 0;
+	total_pkts_processed = 0;
+	while (total_pkts_processed < budget) {
+		/* update pending count only when current one exhausted */
+		if (oq->pkts_pending == 0)
+			octep_oq_check_hw_for_pkts(oct, oq);
+		pkts_available = min(budget - total_pkts_processed,
+				     oq->pkts_pending);
+		if (!pkts_available)
+			break;
+
+		pkts_processed = __octep_oq_process_rx(oct, oq,
+						       pkts_available);
+		oq->pkts_pending -= pkts_processed;
+		total_pkts_processed += pkts_processed;
+	}
+
+	if (oq->refill_count >= oq->refill_threshold) {
+		u32 desc_refilled = octep_oq_refill(oct, oq);
+
+		/* flush pending writes before updating credits */
+		wmb();
+		writel(desc_refilled, oq->pkts_credit_reg);
+	}
+
+	return total_pkts_processed;
+}
+
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
index 66c2172d68e2..4596c9516030 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
@@ -23,6 +23,79 @@ static void octep_iq_reset_indices(struct octep_iq *iq)
 	atomic_set(&iq->instr_pending, 0);
 }
 
+/**
+ * octep_iq_process_completions() - Process Tx queue completions.
+ *
+ * @iq: Octeon Tx queue data structure.
+ * @budget: maximum number of completions to be processed in one invocation.
+ *
+ * Return: 1 if the full budget was consumed (more completions may be pending),
+ * otherwise 0.
+ */
+int octep_iq_process_completions(struct octep_iq *iq, u16 budget)
+{
+	u32 compl_pkts, compl_bytes, compl_sg;
+	struct octep_device *oct = iq->octep_dev;
+	struct octep_tx_buffer *tx_buffer;
+	struct skb_shared_info *shinfo;
+	u32 fi = iq->flush_index;
+	struct sk_buff *skb;
+	u8 frags, i;
+
+	compl_pkts = 0;
+	compl_sg = 0;
+	compl_bytes = 0;
+	iq->octep_read_index = oct->hw_ops.update_iq_read_idx(iq);
+
+	while (likely(budget && (fi != iq->octep_read_index))) {
+		tx_buffer = iq->buff_info + fi;
+		skb = tx_buffer->skb;
+
+		fi++;
+		if (unlikely(fi == iq->max_count))
+			fi = 0;
+		compl_bytes += skb->len;
+		compl_pkts++;
+		budget--;
+
+		if (!tx_buffer->gather) {
+			dma_unmap_single(iq->dev, tx_buffer->dma,
+					 tx_buffer->skb->len, DMA_TO_DEVICE);
+			dev_kfree_skb_any(skb);
+			continue;
+		}
+
+		/* Scatter/Gather */
+		shinfo = skb_shinfo(skb);
+		frags = skb_shinfo(skb)->nr_frags;
+		compl_sg++;
+
+		dma_unmap_single(iq->dev, tx_buffer->sglist[0].dma_ptr[0],
+				 tx_buffer->sglist[0].len[0], DMA_TO_DEVICE);
+
+		i = 1; /* entry 0 is main skb, unmapped above */
+		while (frags--) {
+			dma_unmap_page(iq->dev, tx_buffer->sglist[i >> 2].dma_ptr[i & 3],
+				       tx_buffer->sglist[i >> 2].len[i & 3], DMA_TO_DEVICE);
+			i++;
+		}
+
+		dev_kfree_skb_any(skb);
+	}
+
+	iq->pkts_processed += compl_pkts;
+	atomic_sub(compl_pkts, &iq->instr_pending);
+	iq->stats.instr_completed += compl_pkts;
+	iq->stats.bytes_sent += compl_bytes;
+	iq->stats.sgentry_sent += compl_sg;
+	iq->flush_index = fi;
+
+	netdev_tx_completed_queue(iq->netdev_q, compl_pkts, compl_bytes);
+
+	if (unlikely(__netif_subqueue_stopped(iq->netdev, iq->q_no)) &&
+	    ((iq->max_count - atomic_read(&iq->instr_pending)) >
+	     OCTEP_WAKE_QUEUE_THRESHOLD))
+		netif_wake_subqueue(iq->netdev, iq->q_no);
+	return !budget;
+}
+
 /**
  * octep_iq_free_pending() - Free Tx buffers for pending completions.
  *
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH 4/4] octeon_ep: add ethtool support for Octeon PCI Endpoint NIC.
  2022-02-10 21:33 [PATCH 0/4] Add octeon_ep driver Veerasenareddy Burru
  2022-02-10 21:33 ` [PATCH 2/4] octeon_ep: add support for ndo ops Veerasenareddy Burru
  2022-02-10 21:33 ` [PATCH 3/4] octeon_ep: add Tx/Rx and interrupt support Veerasenareddy Burru
@ 2022-02-10 21:33 ` Veerasenareddy Burru
  2022-02-11 21:40   ` Andrew Lunn
       [not found] ` <20220210213306.3599-2-vburru@marvell.com>
  3 siblings, 1 reply; 7+ messages in thread
From: Veerasenareddy Burru @ 2022-02-10 21:33 UTC (permalink / raw)
  To: vburru, davem, kuba, corbet, netdev, linux-doc, linux-kernel
  Cc: Abhijit Ayarekar, Satananda Burla

Add support for the following ethtool commands:

ethtool -i|--driver devname
ethtool devname
ethtool -s devname [speed N] [autoneg on|off] [advertise N]
ethtool -S|--statistics devname
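For example, once the driver is loaded these map to invocations like the
following (the interface name and values are placeholders, not output from
this driver):

```shell
ethtool -i eth0                          # driver name, version, bus info
ethtool eth0                             # link settings and supported modes
ethtool -s eth0 speed 25000 autoneg off  # force speed, disable autoneg
ethtool -S eth0                          # global and per-queue statistics
```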

Signed-off-by: Veerasenareddy Burru <vburru@marvell.com>
Signed-off-by: Abhijit Ayarekar <aayarekar@marvell.com>
Signed-off-by: Satananda Burla <sburla@marvell.com>
---
 .../net/ethernet/marvell/octeon_ep/Makefile   |   2 +-
 .../marvell/octeon_ep/octep_ethtool.c         | 509 ++++++++++++++++++
 .../ethernet/marvell/octeon_ep/octep_main.c   |   5 +-
 3 files changed, 513 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c

diff --git a/drivers/net/ethernet/marvell/octeon_ep/Makefile b/drivers/net/ethernet/marvell/octeon_ep/Makefile
index 6e2db8e80b4a..2026c8118158 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/Makefile
+++ b/drivers/net/ethernet/marvell/octeon_ep/Makefile
@@ -6,4 +6,4 @@
 obj-$(CONFIG_OCTEON_EP) += octeon_ep.o
 
 octeon_ep-y := octep_main.o octep_cn9k_pf.o octep_tx.o octep_rx.o \
-	       octep_ctrl_mbox.o octep_ctrl_net.o
+	       octep_ethtool.o octep_ctrl_mbox.o octep_ctrl_net.o
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c b/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c
new file mode 100644
index 000000000000..0263cfbb2dfb
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c
@@ -0,0 +1,509 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell Octeon EP (EndPoint) Ethernet Driver
+ *
+ * Copyright (C) 2020 Marvell.
+ *
+ */
+
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/ethtool.h>
+
+#include "octep_config.h"
+#include "octep_main.h"
+#include "octep_ctrl_net.h"
+
+static const char octep_gstrings_global_stats[][ETH_GSTRING_LEN] = {
+	"rx_packets",
+	"tx_packets",
+	"rx_bytes",
+	"tx_bytes",
+	"rx_alloc_errors",
+	"tx_busy_errors",
+	"rx_dropped",
+	"tx_dropped",
+	"tx_hw_pkts",
+	"tx_hw_octs",
+	"tx_hw_bcast",
+	"tx_hw_mcast",
+	"tx_hw_underflow",
+	"tx_hw_control",
+	"tx_less_than_64",
+	"tx_equal_64",
+	"tx_equal_65_to_127",
+	"tx_equal_128_to_255",
+	"tx_equal_256_to_511",
+	"tx_equal_512_to_1023",
+	"tx_equal_1024_to_1518",
+	"tx_greater_than_1518",
+	"rx_hw_pkts",
+	"rx_hw_bytes",
+	"rx_hw_bcast",
+	"rx_hw_mcast",
+	"rx_pause_pkts",
+	"rx_pause_bytes",
+	"rx_dropped_pkts_fifo_full",
+	"rx_dropped_bytes_fifo_full",
+	"rx_err_pkts",
+};
+
+#define OCTEP_GLOBAL_STATS_CNT (sizeof(octep_gstrings_global_stats) / ETH_GSTRING_LEN)
+
+static const char octep_gstrings_tx_q_stats[][ETH_GSTRING_LEN] = {
+	"tx_packets_posted[Q-%u]",
+	"tx_packets_completed[Q-%u]",
+	"tx_bytes[Q-%u]",
+	"tx_busy[Q-%u]",
+};
+
+#define OCTEP_TX_Q_STATS_CNT (sizeof(octep_gstrings_tx_q_stats) / ETH_GSTRING_LEN)
+
+static const char octep_gstrings_rx_q_stats[][ETH_GSTRING_LEN] = {
+	"rx_packets[Q-%u]",
+	"rx_bytes[Q-%u]",
+	"rx_alloc_errors[Q-%u]",
+};
+
+#define OCTEP_RX_Q_STATS_CNT (sizeof(octep_gstrings_rx_q_stats) / ETH_GSTRING_LEN)
+
+static void octep_get_drvinfo(struct net_device *netdev,
+			      struct ethtool_drvinfo *info)
+{
+	struct octep_device *oct = netdev_priv(netdev);
+
+	strscpy(info->driver, OCTEP_DRV_NAME, sizeof(info->driver));
+	strscpy(info->version, OCTEP_DRV_VERSION_STR, sizeof(info->version));
+	strscpy(info->bus_info, pci_name(oct->pdev), sizeof(info->bus_info));
+}
+
+static void octep_get_strings(struct net_device *netdev,
+			      u32 stringset, u8 *data)
+{
+	struct octep_device *oct = netdev_priv(netdev);
+	u16 num_queues = CFG_GET_PORTS_ACTIVE_IO_RINGS(oct->conf);
+	char *strings = (char *)data;
+	int i, j;
+
+	switch (stringset) {
+	case ETH_SS_STATS:
+		for (i = 0; i < OCTEP_GLOBAL_STATS_CNT; i++) {
+			snprintf(strings, ETH_GSTRING_LEN, "%s",
+				 octep_gstrings_global_stats[i]);
+			strings += ETH_GSTRING_LEN;
+		}
+
+		for (i = 0; i < num_queues; i++) {
+			for (j = 0; j < OCTEP_TX_Q_STATS_CNT; j++) {
+				snprintf(strings, ETH_GSTRING_LEN,
+					 octep_gstrings_tx_q_stats[j], i);
+				strings += ETH_GSTRING_LEN;
+			}
+		}
+
+		for (i = 0; i < num_queues; i++) {
+			for (j = 0; j < OCTEP_RX_Q_STATS_CNT; j++) {
+				snprintf(strings, ETH_GSTRING_LEN,
+					 octep_gstrings_rx_q_stats[j], i);
+				strings += ETH_GSTRING_LEN;
+			}
+		}
+		break;
+	default:
+		break;
+	}
+}
+
+static int octep_get_sset_count(struct net_device *netdev, int sset)
+{
+	struct octep_device *oct = netdev_priv(netdev);
+	u16 num_queues = CFG_GET_PORTS_ACTIVE_IO_RINGS(oct->conf);
+
+	switch (sset) {
+	case ETH_SS_STATS:
+		return OCTEP_GLOBAL_STATS_CNT + (num_queues *
+		       (OCTEP_TX_Q_STATS_CNT + OCTEP_RX_Q_STATS_CNT));
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static void
+octep_get_ethtool_stats(struct net_device *netdev,
+			struct ethtool_stats *stats, u64 *data)
+{
+	struct octep_device *oct = netdev_priv(netdev);
+	struct octep_iface_tx_stats *iface_tx_stats;
+	struct octep_iface_rx_stats *iface_rx_stats;
+	u64 rx_packets, rx_bytes, rx_errors;
+	u64 tx_packets, tx_bytes, tx_errors;
+	u64 rx_alloc_errors, tx_busy_errors;
+	int q, i;
+
+	rx_packets = 0;
+	rx_bytes = 0;
+	rx_errors = 0;
+	tx_packets = 0;
+	tx_bytes = 0;
+	tx_errors = 0;
+	rx_alloc_errors = 0;
+	tx_busy_errors = 0;
+
+	octep_get_if_stats(oct);
+	iface_tx_stats = &oct->iface_tx_stats;
+	iface_rx_stats = &oct->iface_rx_stats;
+
+	for (q = 0; q < oct->num_oqs; q++) {
+		struct octep_iq *iq = oct->iq[q];
+		struct octep_oq *oq = oct->oq[q];
+
+		tx_packets += iq->stats.instr_completed;
+		tx_bytes += iq->stats.bytes_sent;
+		tx_busy_errors += iq->stats.tx_busy;
+
+		rx_packets += oq->stats.packets;
+		rx_bytes += oq->stats.bytes;
+		rx_alloc_errors += oq->stats.alloc_failures;
+	}
+	i = 0;
+	data[i++] = rx_packets;
+	data[i++] = tx_packets;
+	data[i++] = rx_bytes;
+	data[i++] = tx_bytes;
+	data[i++] = rx_alloc_errors;
+	data[i++] = tx_busy_errors;
+	data[i++] = iface_rx_stats->dropped_pkts_fifo_full +
+		    iface_rx_stats->err_pkts;
+	data[i++] = iface_tx_stats->xscol +
+		    iface_tx_stats->xsdef;
+	data[i++] = iface_tx_stats->pkts;
+	data[i++] = iface_tx_stats->octs;
+	data[i++] = iface_tx_stats->bcst;
+	data[i++] = iface_tx_stats->mcst;
+	data[i++] = iface_tx_stats->undflw;
+	data[i++] = iface_tx_stats->ctl;
+	data[i++] = iface_tx_stats->hist_lt64;
+	data[i++] = iface_tx_stats->hist_eq64;
+	data[i++] = iface_tx_stats->hist_65to127;
+	data[i++] = iface_tx_stats->hist_128to255;
+	data[i++] = iface_tx_stats->hist_256to511;
+	data[i++] = iface_tx_stats->hist_512to1023;
+	data[i++] = iface_tx_stats->hist_1024to1518;
+	data[i++] = iface_tx_stats->hist_gt1518;
+	data[i++] = iface_rx_stats->pkts;
+	data[i++] = iface_rx_stats->octets;
+	data[i++] = iface_rx_stats->bcast_pkts;
+	data[i++] = iface_rx_stats->mcast_pkts;
+	data[i++] = iface_rx_stats->pause_pkts;
+	data[i++] = iface_rx_stats->pause_octets;
+	data[i++] = iface_rx_stats->dropped_pkts_fifo_full;
+	data[i++] = iface_rx_stats->dropped_octets_fifo_full;
+	data[i++] = iface_rx_stats->err_pkts;
+
+	/* Per Tx Queue stats */
+	for (q = 0; q < oct->num_iqs; q++) {
+		struct octep_iq *iq = oct->iq[q];
+
+		data[i++] = iq->stats.instr_posted;
+		data[i++] = iq->stats.instr_completed;
+		data[i++] = iq->stats.bytes_sent;
+		data[i++] = iq->stats.tx_busy;
+	}
+
+	/* Per Rx Queue stats */
+	for (q = 0; q < oct->num_oqs; q++) {
+		struct octep_oq *oq = oct->oq[q];
+
+		data[i++] = oq->stats.packets;
+		data[i++] = oq->stats.bytes;
+		data[i++] = oq->stats.alloc_failures;
+	}
+}
+
+static int octep_get_link_ksettings(struct net_device *netdev,
+				    struct ethtool_link_ksettings *cmd)
+{
+	struct octep_device *oct = netdev_priv(netdev);
+	struct octep_iface_link_info *link_info;
+	u32 advertised, supported;
+
+	ethtool_link_ksettings_zero_link_mode(cmd, supported);
+	ethtool_link_ksettings_zero_link_mode(cmd, advertising);
+
+	octep_get_link_info(oct);
+
+	advertised = oct->link_info.advertised_modes;
+	supported = oct->link_info.supported_modes;
+	link_info = &oct->link_info;
+
+	if (supported & BIT(OCTEP_LINK_MODE_10GBASE_T))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseT_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_10GBASE_R))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseR_FEC);
+	if (supported & BIT(OCTEP_LINK_MODE_10GBASE_CR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseCR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_10GBASE_KR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseKR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_10GBASE_LR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseLR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_10GBASE_SR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseSR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_25GBASE_CR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 25000baseCR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_25GBASE_KR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 25000baseKR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_25GBASE_SR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 25000baseSR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_40GBASE_CR4))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 40000baseCR4_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_40GBASE_KR4))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 40000baseKR4_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_40GBASE_LR4))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 40000baseLR4_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_40GBASE_SR4))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 40000baseSR4_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_50GBASE_CR2))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 50000baseCR2_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_50GBASE_KR2))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 50000baseKR2_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_50GBASE_SR2))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 50000baseSR2_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_50GBASE_CR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 50000baseCR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_50GBASE_KR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 50000baseKR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_50GBASE_LR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 50000baseLR_ER_FR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_50GBASE_SR))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 50000baseSR_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_100GBASE_CR4))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 100000baseCR4_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_100GBASE_KR4))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 100000baseKR4_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_100GBASE_LR4))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 100000baseLR4_ER4_Full);
+	if (supported & BIT(OCTEP_LINK_MODE_100GBASE_SR4))
+		ethtool_link_ksettings_add_link_mode(cmd, supported, 100000baseSR4_Full);
+
+	if (advertised & BIT(OCTEP_LINK_MODE_10GBASE_T))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseT_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_10GBASE_R))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseR_FEC);
+	if (advertised & BIT(OCTEP_LINK_MODE_10GBASE_CR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseCR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_10GBASE_KR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseKR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_10GBASE_LR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseLR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_10GBASE_SR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseSR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_25GBASE_CR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 25000baseCR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_25GBASE_KR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 25000baseKR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_25GBASE_SR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 25000baseSR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_40GBASE_CR4))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 40000baseCR4_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_40GBASE_KR4))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 40000baseKR4_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_40GBASE_LR4))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 40000baseLR4_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_40GBASE_SR4))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 40000baseSR4_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_50GBASE_CR2))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 50000baseCR2_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_50GBASE_KR2))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 50000baseKR2_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_50GBASE_SR2))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 50000baseSR2_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_50GBASE_CR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 50000baseCR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_50GBASE_KR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 50000baseKR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_50GBASE_LR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 50000baseLR_ER_FR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_50GBASE_SR))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 50000baseSR_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_100GBASE_CR4))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 100000baseCR4_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_100GBASE_KR4))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 100000baseKR4_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_100GBASE_LR4))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 100000baseLR4_ER4_Full);
+	if (advertised & BIT(OCTEP_LINK_MODE_100GBASE_SR4))
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, 100000baseSR4_Full);
+
+	if (link_info->autoneg) {
+		if (link_info->autoneg & OCTEP_LINK_MODE_AUTONEG_SUPPORTED)
+			ethtool_link_ksettings_add_link_mode(cmd, supported, Autoneg);
+		if (link_info->autoneg & OCTEP_LINK_MODE_AUTONEG_ADVERTISED) {
+			ethtool_link_ksettings_add_link_mode(cmd, advertising, Autoneg);
+			cmd->base.autoneg = AUTONEG_ENABLE;
+		} else {
+			cmd->base.autoneg = AUTONEG_DISABLE;
+		}
+	} else {
+		cmd->base.autoneg = AUTONEG_DISABLE;
+	}
+
+	if (link_info->pause) {
+		if (link_info->pause & OCTEP_LINK_MODE_PAUSE_SUPPORTED)
+			ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
+		if (link_info->pause & OCTEP_LINK_MODE_PAUSE_ADVERTISED)
+			ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+	}
+
+	cmd->base.port = PORT_FIBRE;
+	ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
+	ethtool_link_ksettings_add_link_mode(cmd, advertising, FIBRE);
+
+	if (netif_carrier_ok(netdev)) {
+		cmd->base.speed = link_info->speed;
+		cmd->base.duplex = DUPLEX_FULL;
+	} else {
+		cmd->base.speed = SPEED_UNKNOWN;
+		cmd->base.duplex = DUPLEX_UNKNOWN;
+	}
+	return 0;
+}
+
+static int octep_set_link_ksettings(struct net_device *netdev,
+				    const struct ethtool_link_ksettings *cmd)
+{
+	struct octep_device *oct = netdev_priv(netdev);
+	struct octep_iface_link_info link_info_new;
+	struct octep_iface_link_info *link_info;
+	u64 advertised = 0;
+	u8 autoneg = 0;
+	int err;
+
+	link_info = &oct->link_info;
+	memcpy(&link_info_new, link_info, sizeof(struct octep_iface_link_info));
+
+	/* Only Full duplex is supported;
+	 * Assume full duplex when duplex is unknown.
+	 */
+	if (cmd->base.duplex != DUPLEX_FULL &&
+	    cmd->base.duplex != DUPLEX_UNKNOWN)
+		return -EOPNOTSUPP;
+
+	if (cmd->base.autoneg == AUTONEG_ENABLE) {
+		if (!(link_info->autoneg & OCTEP_LINK_MODE_AUTONEG_SUPPORTED))
+			return -EOPNOTSUPP;
+		autoneg = 1;
+	}
+
+	if (!bitmap_subset(cmd->link_modes.advertising,
+			   cmd->link_modes.supported,
+			   __ETHTOOL_LINK_MODE_MASK_NBITS))
+		return -EINVAL;
+
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  10000baseT_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_10GBASE_T);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  10000baseR_FEC))
+		advertised |= BIT(OCTEP_LINK_MODE_10GBASE_R);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  10000baseCR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_10GBASE_CR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  10000baseKR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_10GBASE_KR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  10000baseLR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_10GBASE_LR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  10000baseSR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_10GBASE_SR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  25000baseCR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_25GBASE_CR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  25000baseKR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_25GBASE_KR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  25000baseSR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_25GBASE_SR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  40000baseCR4_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_40GBASE_CR4);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  40000baseKR4_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_40GBASE_KR4);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  40000baseLR4_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_40GBASE_LR4);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  40000baseSR4_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_40GBASE_SR4);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  50000baseCR2_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_50GBASE_CR2);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  50000baseKR2_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_50GBASE_KR2);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  50000baseSR2_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_50GBASE_SR2);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  50000baseCR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_50GBASE_CR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  50000baseKR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_50GBASE_KR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  50000baseLR_ER_FR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_50GBASE_LR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  50000baseSR_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_50GBASE_SR);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  100000baseCR4_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_100GBASE_CR4);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  100000baseKR4_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_100GBASE_KR4);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  100000baseLR4_ER4_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_100GBASE_LR4);
+	if (ethtool_link_ksettings_test_link_mode(cmd, advertising,
+						  100000baseSR4_Full))
+		advertised |= BIT(OCTEP_LINK_MODE_100GBASE_SR4);
+
+	if (advertised == link_info->advertised_modes &&
+	    cmd->base.speed == link_info->speed &&
+	    cmd->base.autoneg == link_info->autoneg)
+		return 0;
+
+	link_info_new.advertised_modes = advertised;
+	link_info_new.speed = cmd->base.speed;
+	link_info_new.autoneg = cmd->base.autoneg;
+
+	err = octep_set_link_info(oct, &link_info_new);
+	if (err)
+		return err;
+
+	memcpy(link_info, &link_info_new, sizeof(struct octep_iface_link_info));
+	return 0;
+}
+
+const struct ethtool_ops octep_ethtool_ops = {
+	.get_drvinfo = octep_get_drvinfo,
+	.get_link = ethtool_op_get_link,
+	.get_strings = octep_get_strings,
+	.get_sset_count = octep_get_sset_count,
+	.get_ethtool_stats = octep_get_ethtool_stats,
+	.get_link_ksettings = octep_get_link_ksettings,
+	.set_link_ksettings = octep_set_link_ksettings,
+};
+
+void octep_set_ethtool_ops(struct net_device *netdev)
+{
+	netdev->ethtool_ops = &octep_ethtool_ops;
+}
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
index 700852fd4c3a..00c6ca047332 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
@@ -827,7 +827,7 @@ static int octep_set_mac(struct net_device *netdev, void *p)
 		return err;
 
 	memcpy(oct->mac_addr, addr->sa_data, ETH_ALEN);
-	memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
+	eth_hw_addr_set(netdev, addr->sa_data);
 
 	return 0;
 }
@@ -1058,6 +1058,7 @@ static int octep_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	INIT_WORK(&octep_dev->ctrl_mbox_task, octep_ctrl_mbox_task);
 
 	netdev->netdev_ops = &octep_netdev_ops;
+	octep_set_ethtool_ops(netdev);
 	netif_carrier_off(netdev);
 
 	netdev->hw_features = NETIF_F_SG;
@@ -1067,7 +1068,7 @@ static int octep_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	netdev->mtu = OCTEP_DEFAULT_MTU;
 
 	octep_get_mac_addr(octep_dev, octep_dev->mac_addr);
-	memcpy(netdev->dev_addr, octep_dev->mac_addr, netdev->addr_len);
+	eth_hw_addr_set(netdev, octep_dev->mac_addr);
 
 	if (register_netdev(netdev)) {
 		dev_err(&pdev->dev, "Failed to register netdev\n");
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH 1/4] octeon_ep: Add driver framework and device initiazliation.
       [not found] ` <20220210213306.3599-2-vburru@marvell.com>
@ 2022-02-11  2:29   ` Jakub Kicinski
  0 siblings, 0 replies; 7+ messages in thread
From: Jakub Kicinski @ 2022-02-11  2:29 UTC (permalink / raw)
  To: Veerasenareddy Burru
  Cc: davem, corbet, netdev, linux-doc, linux-kernel, Abhijit Ayarekar,
	Satananda Burla

On Thu, 10 Feb 2022 13:33:03 -0800 Veerasenareddy Burru wrote:
>  20 files changed, 4249 insertions(+)

Please break this down into smaller logically separate changes.

The driver must build cleanly with W=1 C=1 and with clang.
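For reference, a typical invocation covering both requirements might look like
this, assuming the driver is built in-tree and sparse is installed (paths and
toolchain availability are assumptions about the build environment):

```shell
# Extra warnings (W=1) plus sparse static analysis (C=1), scoped to the driver
make W=1 C=1 M=drivers/net/ethernet/marvell/octeon_ep modules

# Repeat the build with the clang/LLVM toolchain
make LLVM=1 W=1 M=drivers/net/ethernet/marvell/octeon_ep modules
```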

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 4/4] octeon_ep: add ethtool support for Octeon PCI Endpoint NIC.
  2022-02-10 21:33 ` [PATCH 4/4] octeon_ep: add ethtool support for Octeon PCI Endpoint NIC Veerasenareddy Burru
@ 2022-02-11 21:40   ` Andrew Lunn
  0 siblings, 0 replies; 7+ messages in thread
From: Andrew Lunn @ 2022-02-11 21:40 UTC (permalink / raw)
  To: Veerasenareddy Burru
  Cc: davem, kuba, corbet, netdev, linux-doc, linux-kernel,
	Abhijit Ayarekar, Satananda Burla

> +static void octep_get_drvinfo(struct net_device *netdev,
> +			      struct ethtool_drvinfo *info)
> +{
> +	struct octep_device *oct = netdev_priv(netdev);
> +
> +	strscpy(info->driver, OCTEP_DRV_NAME, sizeof(info->driver));
> +	strscpy(info->version, OCTEP_DRV_VERSION_STR, sizeof(info->version));

A driver version string is meaningless. If you don't set it, the core
will fill in the kernel version, which is actually usable information.

> +static int octep_get_link_ksettings(struct net_device *netdev,
> +				    struct ethtool_link_ksettings *cmd)
> +{
> +	struct octep_device *oct = netdev_priv(netdev);
> +	struct octep_iface_link_info *link_info;
> +	u32 advertised, supported;
> +
> +	ethtool_link_ksettings_zero_link_mode(cmd, supported);
> +	ethtool_link_ksettings_zero_link_mode(cmd, advertising);
> +
> +	octep_get_link_info(oct);
> +
> +	advertised = oct->link_info.advertised_modes;
> +	supported = oct->link_info.supported_modes;
> +	link_info = &oct->link_info;
> +
> +	if (supported & BIT(OCTEP_LINK_MODE_10GBASE_T))
> +		ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseT_Full);
> +	if (supported & BIT(OCTEP_LINK_MODE_10GBASE_R))
> +		ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseR_FEC);

....

> +
> +	if (advertised & BIT(OCTEP_LINK_MODE_10GBASE_T))
> +		ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseT_Full);
> +	if (advertised & BIT(OCTEP_LINK_MODE_10GBASE_R))
> +		ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseR_FEC);

It looks like you are doing the same thing twice, just different
variables. Pull this out into a helper.

Do you know what the link partner is advertising? It is useful debug
information if your firmware will tell you.

> diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
> index 700852fd4c3a..00c6ca047332 100644
> --- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
> +++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
> @@ -827,7 +827,7 @@ static int octep_set_mac(struct net_device *netdev, void *p)
>  		return err;
>  
>  	memcpy(oct->mac_addr, addr->sa_data, ETH_ALEN);
> -	memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
> +	eth_hw_addr_set(netdev, addr->sa_data);
>  
>  	return 0;
>  }
> @@ -1067,7 +1068,7 @@ static int octep_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>  	netdev->mtu = OCTEP_DEFAULT_MTU;
>  
>  	octep_get_mac_addr(octep_dev, octep_dev->mac_addr);
> -	memcpy(netdev->dev_addr, octep_dev->mac_addr, netdev->addr_len);
> +	eth_hw_addr_set(netdev, octep_dev->mac_addr);

These two changes don't belong here.

      Andrew

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 2/4] octeon_ep: add support for ndo ops.
  2022-02-10 21:33 ` [PATCH 2/4] octeon_ep: add support for ndo ops Veerasenareddy Burru
@ 2022-02-13 15:16   ` Leon Romanovsky
  0 siblings, 0 replies; 7+ messages in thread
From: Leon Romanovsky @ 2022-02-13 15:16 UTC (permalink / raw)
  To: Veerasenareddy Burru
  Cc: davem, kuba, corbet, netdev, linux-doc, linux-kernel,
	Abhijit Ayarekar, Satananda Burla

On Thu, Feb 10, 2022 at 01:33:04PM -0800, Veerasenareddy Burru wrote:
> Add support for ndo ops to set MAC address, change MTU, get stats.
> Add control path support to set MAC address, change MTU, get stats,
> set speed, get and set link mode.
> 
> Signed-off-by: Veerasenareddy Burru <vburru@marvell.com>
> Signed-off-by: Abhijit Ayarekar <aayarekar@marvell.com>
> Signed-off-by: Satananda Burla <sburla@marvell.com>
> ---
>  .../marvell/octeon_ep/octep_ctrl_net.c        | 105 ++++++++++++++++++
>  .../ethernet/marvell/octeon_ep/octep_main.c   |  67 +++++++++++
>  2 files changed, 172 insertions(+)

Please don't put "." in end of patch title.

> 
> diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c b/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c
> index 1f0d8ba3c8ee..be9b0f31c754 100644
> --- a/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c
> +++ b/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c
> @@ -87,3 +87,108 @@ int octep_get_mac_addr(struct octep_device *oct, u8 *addr)
>  
>  	return 0;
>  }
> +
> +int octep_set_mac_addr(struct octep_device *oct, u8 *addr)
> +{
> +	struct octep_ctrl_mbox_msg msg = { 0 };
> +	struct octep_ctrl_net_h2f_req req = { 0 };

It is enough to write { } without the 0.

> +
> +	req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_MAC;
> +	req.mac.cmd = OCTEP_CTRL_NET_CMD_SET;
> +	memcpy(&req.mac.addr, addr, ETH_ALEN);
> +
> +	msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
> +	msg.hdr.sizew = OCTEP_CTRL_NET_H2F_MAC_REQ_SZW;
> +	msg.msg = &req;
> +	return octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
> +}
> +
> +int octep_set_mtu(struct octep_device *oct, int mtu)
> +{
> +	struct octep_ctrl_mbox_msg msg = { 0 };
> +	struct octep_ctrl_net_h2f_req req = { 0 };
> +
> +	req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_MTU;
> +	req.mtu.cmd = OCTEP_CTRL_NET_CMD_SET;
> +	req.mtu.val = mtu;
> +
> +	msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
> +	msg.hdr.sizew = OCTEP_CTRL_NET_H2F_MTU_REQ_SZW;
> +	msg.msg = &req;
> +	return octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
> +}
> +
> +int octep_get_if_stats(struct octep_device *oct)
> +{
> +	struct octep_ctrl_mbox_msg msg = { 0 };
> +	struct octep_ctrl_net_h2f_req req = { 0 };
> +	struct octep_iface_rx_stats *iface_rx_stats;
> +	struct octep_iface_tx_stats *iface_tx_stats;
> +	int err;

Please use reverse Christmas tree ordering for the local variable declarations, in all functions.

> +
> +	req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_GET_IF_STATS;
> +	req.mac.cmd = OCTEP_CTRL_NET_CMD_GET;
> +	req.get_stats.offset = oct->ctrl_mbox_ifstats_offset;
> +
> +	msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
> +	msg.hdr.sizew = OCTEP_CTRL_NET_H2F_GET_STATS_REQ_SZW;
> +	msg.msg = &req;
> +	err = octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
> +	if (!err) {

Please use a success-oriented approach (bail out early on error), in all places.

if (err)
   return err;

....

> +		iface_rx_stats = (struct octep_iface_rx_stats *)(oct->ctrl_mbox.barmem +
> +								 oct->ctrl_mbox_ifstats_offset);
> +		iface_tx_stats = (struct octep_iface_tx_stats *)(oct->ctrl_mbox.barmem +
> +								 oct->ctrl_mbox_ifstats_offset +
> +								 sizeof(struct octep_iface_rx_stats)
> +								);
> +		memcpy(&oct->iface_rx_stats, iface_rx_stats, sizeof(struct octep_iface_rx_stats));
> +		memcpy(&oct->iface_tx_stats, iface_tx_stats, sizeof(struct octep_iface_tx_stats));
> +	}
> +
> +	return 0;
> +}
> +

Thanks


end of thread, other threads:[~2022-02-13 15:16 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-10 21:33 [PATCH 0/4] Add octeon_ep driver Veerasenareddy Burru
2022-02-10 21:33 ` [PATCH 2/4] octeon_ep: add support for ndo ops Veerasenareddy Burru
2022-02-13 15:16   ` Leon Romanovsky
2022-02-10 21:33 ` [PATCH 3/4] octeon_ep: add Tx/Rx and interrupt support Veerasenareddy Burru
2022-02-10 21:33 ` [PATCH 4/4] octeon_ep: add ethtool support for Octeon PCI Endpoint NIC Veerasenareddy Burru
2022-02-11 21:40   ` Andrew Lunn
     [not found] ` <20220210213306.3599-2-vburru@marvell.com>
2022-02-11  2:29   ` [PATCH 1/4] octeon_ep: Add driver framework and device initiazliation Jakub Kicinski
