* [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver
@ 2024-05-31  6:40 Yojana Mallik
  2024-05-31  6:40 ` [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver Yojana Mallik
                   ` (3 more replies)
  0 siblings, 4 replies; 31+ messages in thread
From: Yojana Mallik @ 2024-05-31  6:40 UTC (permalink / raw)
  To: y-mallik, schnelle, wsa+renesas, diogo.ivo, rdunlap, horms,
	vigneshr, rogerq, danishanwar, pabeni, kuba, edumazet, davem
  Cc: netdev, linux-kernel, srk, rogerq

virtio-net provides a solution for a virtual ethernet interface in a
virtualized environment.

There might also be a use case for traffic tunnelling between heterogeneous
processors in a non-virtualized environment, such as TI's AM64x, which
has a Cortex-A53 core running Linux and a Cortex-R5 core running a
flavour of RTOS (FreeRTOS), where the ethernet controller is managed by
the R5 core and needs to pass some low-priority data to the A53 core.

One solution for such a use case, where the ethernet controller does
not support DMA for the Tx/Rx channels, is an RPMsg-based shared memory
ethernet driver: the data plane runs over the shared memory while the
control plane runs over an RPMsg endpoint channel.

Two separate regions can be carved out in the shared memory: one for the
A53 -> R5 data path, and the other for the R5 -> A53 data path.

The shared memory layout is modelled as a circular buffer:
-------------------------
|          HEAD         |
-------------------------
|          TAIL         |
-------------------------
|       PKT_1_LEN       |
|         PKT_1         |
-------------------------
|       PKT_2_LEN       |
|         PKT_2         |
-------------------------
|           .           |
|           .           |
-------------------------
|       PKT_N_LEN       |
|         PKT_N         |
-------------------------

A polling mechanism can be used by both cores to check the offset between
the head and tail indices and process the pending packets.
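
For illustration, a minimal sketch of that pending-packet computation
(hypothetical helper, not part of the patches; HEAD and TAIL are slot
indices into a ring of num_bufs slots):

        /* Count packets pending between producer (head) and consumer
         * (tail). The indices wrap at num_bufs, so a negative delta is
         * corrected by one full ring length.
         */
        static u32 icve_pending_pkts(u32 head, u32 tail, u32 num_bufs)
        {
                s32 pending = head - tail;

                return pending >= 0 ? pending : pending + num_bufs;
        }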

This is v2 of this series. It addresses comments made on v1.

Changes from v1 to v2:
*) Addressed open comments on v1.
*) Added patch 3/3 to add support for multicast filtering

v1:
https://lore.kernel.org/all/20240130110944.26771-1-r-gunasekaran@ti.com/

Ravi Gunasekaran (1):
  net: ethernet: ti: RPMsg based shared memory ethernet driver

Yojana Mallik (2):
  net: ethernet: ti: Register the RPMsg driver as network device
  net: ethernet: ti: icve: Add support for multicast filtering

 drivers/net/ethernet/ti/Kconfig               |   9 +
 drivers/net/ethernet/ti/Makefile              |   1 +
 drivers/net/ethernet/ti/icve_rpmsg_common.h   | 137 ++++
 drivers/net/ethernet/ti/inter_core_virt_eth.c | 591 ++++++++++++++++++
 drivers/net/ethernet/ti/inter_core_virt_eth.h |  62 ++
 5 files changed, 800 insertions(+)
 create mode 100644 drivers/net/ethernet/ti/icve_rpmsg_common.h
 create mode 100644 drivers/net/ethernet/ti/inter_core_virt_eth.c
 create mode 100644 drivers/net/ethernet/ti/inter_core_virt_eth.h

-- 
2.40.1


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver
  2024-05-31  6:40 [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver Yojana Mallik
@ 2024-05-31  6:40 ` Yojana Mallik
  2024-05-31 15:30   ` Randy Dunlap
                     ` (2 more replies)
  2024-05-31  6:40 ` [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device Yojana Mallik
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 31+ messages in thread
From: Yojana Mallik @ 2024-05-31  6:40 UTC (permalink / raw)
  To: y-mallik, schnelle, wsa+renesas, diogo.ivo, rdunlap, horms,
	vigneshr, rogerq, danishanwar, pabeni, kuba, edumazet, davem
  Cc: netdev, linux-kernel, srk, rogerq

From: Ravi Gunasekaran <r-gunasekaran@ti.com>

TI's K3 SoCs comprise heterogeneous processors (Cortex-A, Cortex-R).
When the ethernet controller is completely managed by a core (Cortex-R)
running a flavor of RTOS, in a non-virtualized environment, network traffic
tunnelling between the heterogeneous processors can be realized by means of
an RPMsg-based shared memory ethernet driver, with the shared memory used
for the data plane and the RPMsg endpoint channel used for the control
plane.

The inter-core-virt-eth driver is modelled as an RPMsg-based shared
memory ethernet driver for such a use case.

As a first step, register inter-core-virt-eth as an RPMsg driver,
and introduce basic control messages for querying and responding.
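
For example, a request could be built and sent over the RPMsg endpoint
roughly as below (an illustrative sketch using the structures added
here; the request type value is hypothetical, concrete types come in a
later patch):

        struct message msg = {};

        msg.msg_hdr.src_id = 0;                  /* sending port id */
        msg.msg_hdr.msg_type = ICVE_REQUEST_MSG;
        msg.req_msg.type = 0;                    /* e.g. a query request */
        msg.req_msg.id = 0;
        rpmsg_send(rpdev->ept, &msg, sizeof(msg));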

Signed-off-by: Ravi Gunasekaran <r-gunasekaran@ti.com>
Signed-off-by: Yojana Mallik <y-mallik@ti.com>
---
 drivers/net/ethernet/ti/Kconfig               |  9 +++
 drivers/net/ethernet/ti/Makefile              |  1 +
 drivers/net/ethernet/ti/icve_rpmsg_common.h   | 47 +++++++++++
 drivers/net/ethernet/ti/inter_core_virt_eth.c | 81 +++++++++++++++++++
 drivers/net/ethernet/ti/inter_core_virt_eth.h | 27 +++++++
 5 files changed, 165 insertions(+)
 create mode 100644 drivers/net/ethernet/ti/icve_rpmsg_common.h
 create mode 100644 drivers/net/ethernet/ti/inter_core_virt_eth.c
 create mode 100644 drivers/net/ethernet/ti/inter_core_virt_eth.h

diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
index 1729eb0e0b41..4f00cb8fe9f1 100644
--- a/drivers/net/ethernet/ti/Kconfig
+++ b/drivers/net/ethernet/ti/Kconfig
@@ -145,6 +145,15 @@ config TI_AM65_CPSW_QOS
 	  The EST scheduler runs on CPTS and the TAS/EST schedule is
 	  updated in the Fetch RAM memory of the CPSW.
 
+config TI_K3_INTERCORE_VIRT_ETH
+	tristate "TI K3 Intercore Virtual Ethernet driver"
+	help
+	  This driver provides intercore virtual ethernet driver
+	  capability.Intercore Virtual Ethernet driver is modelled as
+	  a RPMsg based shared memory ethernet driver for network traffic
+	  tunnelling between heterogeneous processors Cortex A and Cortex R
+	  used in TI's K3 SoCs.
+
 config TI_KEYSTONE_NETCP
 	tristate "TI Keystone NETCP Core Support"
 	select TI_DAVINCI_MDIO
diff --git a/drivers/net/ethernet/ti/Makefile b/drivers/net/ethernet/ti/Makefile
index 6e086b4c0384..24b14ca73407 100644
--- a/drivers/net/ethernet/ti/Makefile
+++ b/drivers/net/ethernet/ti/Makefile
@@ -49,3 +49,4 @@ icssg-prueth-sr1-y := icssg/icssg_prueth_sr1.o \
 		      icssg/icssg_stats.o \
 		      icssg/icssg_ethtool.o
 obj-$(CONFIG_TI_ICSS_IEP) += icssg/icss_iep.o
+obj-$(CONFIG_TI_K3_INTERCORE_VIRT_ETH) += inter_core_virt_eth.o
diff --git a/drivers/net/ethernet/ti/icve_rpmsg_common.h b/drivers/net/ethernet/ti/icve_rpmsg_common.h
new file mode 100644
index 000000000000..7cd157479d4d
--- /dev/null
+++ b/drivers/net/ethernet/ti/icve_rpmsg_common.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Texas Instruments K3 Inter Core Virtual Ethernet Driver common header
+ *
+ * Copyright (C) 2024 Texas Instruments Incorporated - https://www.ti.com/
+ */
+
+#ifndef __ICVE_RPMSG_COMMON_H__
+#define __ICVE_RPMSG_COMMON_H__
+
+#include <linux/if_ether.h>
+
+enum icve_msg_type {
+	ICVE_REQUEST_MSG = 0,
+	ICVE_RESPONSE_MSG,
+	ICVE_NOTIFY_MSG,
+};
+
+struct request_message {
+	u32 type; /* Request Type */
+	u32 id;	  /* Request ID */
+} __packed;
+
+struct response_message {
+	u32 type;	/* Response Type */
+	u32 id;		/* Response ID */
+} __packed;
+
+struct notify_message {
+	u32 type;	/* Notify Type */
+	u32 id;		/* Notify ID */
+} __packed;
+
+struct message_header {
+	u32 src_id;
+	u32 msg_type; /* Do not use enum type, as enum size is compiler dependent */
+} __packed;
+
+struct message {
+	struct message_header msg_hdr;
+	union {
+		struct request_message req_msg;
+		struct response_message resp_msg;
+		struct notify_message notify_msg;
+	};
+} __packed;
+
+#endif /* __ICVE_RPMSG_COMMON_H__ */
diff --git a/drivers/net/ethernet/ti/inter_core_virt_eth.c b/drivers/net/ethernet/ti/inter_core_virt_eth.c
new file mode 100644
index 000000000000..bea822d2373a
--- /dev/null
+++ b/drivers/net/ethernet/ti/inter_core_virt_eth.c
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Texas Instruments K3 Inter Core Virtual Ethernet Driver
+ *
+ * Copyright (C) 2024 Texas Instruments Incorporated - https://www.ti.com/
+ */
+
+#include "inter_core_virt_eth.h"
+
+static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
+			 void *priv, u32 src)
+{
+	struct icve_common *common = dev_get_drvdata(&rpdev->dev);
+	struct message *msg = (struct message *)data;
+	u32 msg_type = msg->msg_hdr.msg_type;
+	u32 rpmsg_type;
+
+	switch (msg_type) {
+	case ICVE_REQUEST_MSG:
+		rpmsg_type = msg->req_msg.type;
+		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
+			msg_type, rpmsg_type);
+		break;
+	case ICVE_RESPONSE_MSG:
+		rpmsg_type = msg->resp_msg.type;
+		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
+			msg_type, rpmsg_type);
+		break;
+	case ICVE_NOTIFY_MSG:
+		rpmsg_type = msg->notify_msg.type;
+		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
+			msg_type, rpmsg_type);
+		break;
+	default:
+		dev_err(common->dev, "Invalid msg type\n");
+		break;
+	}
+	return 0;
+}
+
+static int icve_rpmsg_probe(struct rpmsg_device *rpdev)
+{
+	struct device *dev = &rpdev->dev;
+	struct icve_common *common;
+
+	common = devm_kzalloc(&rpdev->dev, sizeof(*common), GFP_KERNEL);
+	if (!common)
+		return -ENOMEM;
+
+	dev_set_drvdata(dev, common);
+
+	common->port = devm_kzalloc(dev, sizeof(*common->port), GFP_KERNEL);
+	common->dev = dev;
+	common->rpdev = rpdev;
+
+	return 0;
+}
+
+static void icve_rpmsg_remove(struct rpmsg_device *rpdev)
+{
+	dev_info(&rpdev->dev, "icve rpmsg client driver is removed\n");
+}
+
+static struct rpmsg_device_id icve_rpmsg_id_table[] = {
+	{ .name = "ti.icve" },
+	{},
+};
+MODULE_DEVICE_TABLE(rpmsg, icve_rpmsg_id_table);
+
+static struct rpmsg_driver icve_rpmsg_client = {
+	.drv.name = KBUILD_MODNAME,
+	.id_table = icve_rpmsg_id_table,
+	.probe = icve_rpmsg_probe,
+	.callback = icve_rpmsg_cb,
+	.remove = icve_rpmsg_remove,
+};
+module_rpmsg_driver(icve_rpmsg_client);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Siddharth Vadapalli <s-vadapalli@ti.com>");
+MODULE_AUTHOR("Ravi Gunasekaran <r-gunasekaran@ti.com");
+MODULE_DESCRIPTION("TI Inter Core Virtual Ethernet driver");
diff --git a/drivers/net/ethernet/ti/inter_core_virt_eth.h b/drivers/net/ethernet/ti/inter_core_virt_eth.h
new file mode 100644
index 000000000000..91a3aba96996
--- /dev/null
+++ b/drivers/net/ethernet/ti/inter_core_virt_eth.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Texas Instruments K3 Inter Core Virtual Ethernet Driver
+ *
+ * Copyright (C) 2024 Texas Instruments Incorporated - https://www.ti.com/
+ */
+
+#ifndef __INTER_CORE_VIRT_ETH_H__
+#define __INTER_CORE_VIRT_ETH_H__
+
+#include <linux/etherdevice.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/rpmsg.h>
+#include "icve_rpmsg_common.h"
+
+struct icve_port {
+	struct icve_common *common;
+} __packed;
+
+struct icve_common {
+	struct rpmsg_device *rpdev;
+	struct icve_port *port;
+	struct device *dev;
+} __packed;
+
+#endif /* __INTER_CORE_VIRT_ETH_H__ */
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-05-31  6:40 [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver Yojana Mallik
  2024-05-31  6:40 ` [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver Yojana Mallik
@ 2024-05-31  6:40 ` Yojana Mallik
  2024-06-01  3:13   ` kernel test robot
                     ` (3 more replies)
  2024-05-31  6:40 ` [PATCH net-next v2 3/3] net: ethernet: ti: icve: Add support for multicast filtering Yojana Mallik
  2024-06-02 15:45 ` [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver Andrew Lunn
  3 siblings, 4 replies; 31+ messages in thread
From: Yojana Mallik @ 2024-05-31  6:40 UTC (permalink / raw)
  To: y-mallik, schnelle, wsa+renesas, diogo.ivo, rdunlap, horms,
	vigneshr, rogerq, danishanwar, pabeni, kuba, edumazet, davem
  Cc: netdev, linux-kernel, srk, rogerq

Register the RPMsg driver as a network device and add support for
basic ethernet functionality by using the shared memory for the data
plane.

The shared memory layout is as below, with the region from
PKT_1_LEN to PKT_N modelled as a circular buffer.

-------------------------
|          HEAD         |
-------------------------
|          TAIL         |
-------------------------
|       PKT_1_LEN       |
|         PKT_1         |
-------------------------
|       PKT_2_LEN       |
|         PKT_2         |
-------------------------
|           .           |
|           .           |
-------------------------
|       PKT_N_LEN       |
|         PKT_N         |
-------------------------

The offset between the HEAD and TAIL is polled to process the Rx packets.
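
Each ring slot stores a magic number, the packet length and the packet
data, so a slot's fields can be located as below (an illustrative sketch
mirroring the pointer arithmetic in this patch; "i" is a slot index):

        /* Per-slot layout: | MAGIC_NUM | PKT_LEN | PKT (max size) | */
        void __iomem *slot = port->rx_buffer->buf->base_addr +
                             ((i % port->icve_rx_max_buffers) *
                              ICVE_BUFFER_SIZE);
        void __iomem *len_field = slot + MAGIC_NUM_SIZE_TYPE;
        void __iomem *pkt_field = len_field + PKT_LEN_SIZE_TYPE;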

Signed-off-by: Yojana Mallik <y-mallik@ti.com>
---
 drivers/net/ethernet/ti/icve_rpmsg_common.h   |  86 ++++
 drivers/net/ethernet/ti/inter_core_virt_eth.c | 453 +++++++++++++++++-
 drivers/net/ethernet/ti/inter_core_virt_eth.h |  35 +-
 3 files changed, 570 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/ti/icve_rpmsg_common.h b/drivers/net/ethernet/ti/icve_rpmsg_common.h
index 7cd157479d4d..2e3833de14bd 100644
--- a/drivers/net/ethernet/ti/icve_rpmsg_common.h
+++ b/drivers/net/ethernet/ti/icve_rpmsg_common.h
@@ -15,14 +15,58 @@ enum icve_msg_type {
 	ICVE_NOTIFY_MSG,
 };
 
+enum icve_rpmsg_type {
+	/* Request types */
+	ICVE_REQ_SHM_INFO = 0,
+	ICVE_REQ_SET_MAC_ADDR,
+
+	/* Response types */
+	ICVE_RESP_SHM_INFO,
+	ICVE_RESP_SET_MAC_ADDR,
+
+	/* Notification types */
+	ICVE_NOTIFY_PORT_UP,
+	ICVE_NOTIFY_PORT_DOWN,
+	ICVE_NOTIFY_PORT_READY,
+	ICVE_NOTIFY_REMOTE_READY,
+};
+
+struct icve_shm_info {
+	/* Total shared memory size */
+	u32 total_shm_size;
+	/* Total number of buffers */
+	u32 num_pkt_bufs;
+	/* Per buffer slot size, i.e. MTU size + 4 bytes for magic number +
+	 * 4 bytes for packet length
+	 */
+	u32 buff_slot_size;
+	/* Base Address for Tx or Rx shared memory */
+	u32 base_addr;
+} __packed;
+
+struct icve_shm {
+	struct icve_shm_info shm_info_tx;
+	struct icve_shm_info shm_info_rx;
+} __packed;
+
+struct icve_mac_addr {
+	char addr[ETH_ALEN];
+} __packed;
+
 struct request_message {
 	u32 type; /* Request Type */
 	u32 id;	  /* Request ID */
+	union {
+		struct icve_mac_addr mac_addr;
+	};
 } __packed;
 
 struct response_message {
 	u32 type;	/* Response Type */
 	u32 id;		/* Response ID */
+	union {
+		struct icve_shm shm_info;
+	};
 } __packed;
 
 struct notify_message {
@@ -44,4 +88,46 @@ struct message {
 	};
 } __packed;
 
+/*      Shared Memory Layout
+ *
+ *	---------------------------	*****************
+ *	|        MAGIC_NUM        |	 icve_shm_head
+ *	|          HEAD           |
+ *	---------------------------	*****************
+ *	|        MAGIC_NUM        |
+ *	|        PKT_1_LEN        |
+ *	|          PKT_1          |
+ *	---------------------------
+ *	|        MAGIC_NUM        |
+ *	|        PKT_2_LEN        |	 icve_shm_buf
+ *	|          PKT_2          |
+ *	---------------------------
+ *	|           .             |
+ *	|           .             |
+ *	---------------------------
+ *	|        MAGIC_NUM        |
+ *	|        PKT_N_LEN        |
+ *	|          PKT_N          |
+ *	---------------------------	****************
+ *	|        MAGIC_NUM        |      icve_shm_tail
+ *	|          TAIL           |
+ *	---------------------------	****************
+ */
+
+struct icve_shm_index {
+	u32 magic_num;
+	u32 index;
+}  __packed;
+
+struct icve_shm_buf {
+	char __iomem *base_addr;	/* start addr of first buffer */
+	u32 magic_num;
+} __packed;
+
+struct icve_shared_mem {
+	struct icve_shm_index __iomem *head;
+	struct icve_shm_buf __iomem *buf;
+	struct icve_shm_index __iomem *tail;
+} __packed;
+
 #endif /* __ICVE_RPMSG_COMMON_H__ */
diff --git a/drivers/net/ethernet/ti/inter_core_virt_eth.c b/drivers/net/ethernet/ti/inter_core_virt_eth.c
index bea822d2373a..d96547d317fe 100644
--- a/drivers/net/ethernet/ti/inter_core_virt_eth.c
+++ b/drivers/net/ethernet/ti/inter_core_virt_eth.c
@@ -6,11 +6,145 @@
 
 #include "inter_core_virt_eth.h"
 
+#define ICVE_MIN_PACKET_SIZE ETH_ZLEN
+#define ICVE_MAX_PACKET_SIZE 1540 //(ETH_FRAME_LEN + ETH_FCS_LEN)
+#define ICVE_MAX_TX_QUEUES 1
+#define ICVE_MAX_RX_QUEUES 1
+
+#define PKT_LEN_SIZE_TYPE sizeof(u32)
+#define MAGIC_NUM_SIZE_TYPE sizeof(u32)
+
+/* 4 bytes to hold packet length and ICVE_MAX_PACKET_SIZE to hold packet */
+#define ICVE_BUFFER_SIZE \
+	(ICVE_MAX_PACKET_SIZE + PKT_LEN_SIZE_TYPE + MAGIC_NUM_SIZE_TYPE)
+
+#define RX_POLL_TIMEOUT 1000 /* 1000usec */
+#define RX_POLL_JIFFIES (jiffies + usecs_to_jiffies(RX_POLL_TIMEOUT))
+
+#define STATE_MACHINE_TIME msecs_to_jiffies(100)
+#define ICVE_REQ_TIMEOUT msecs_to_jiffies(100)
+
+#define icve_ndev_to_priv(ndev) ((struct icve_ndev_priv *)netdev_priv(ndev))
+#define icve_ndev_to_port(ndev) (icve_ndev_to_priv(ndev)->port)
+#define icve_ndev_to_common(ndev) (icve_ndev_to_port(ndev)->common)
+
+static int create_request(struct icve_common *common,
+			  enum icve_rpmsg_type rpmsg_type)
+{
+	struct message *msg = &common->send_msg;
+	int ret = 0;
+
+	msg->msg_hdr.src_id = common->port->port_id;
+	msg->req_msg.type = rpmsg_type;
+
+	switch (rpmsg_type) {
+	case ICVE_REQ_SHM_INFO:
+		msg->msg_hdr.msg_type = ICVE_REQUEST_MSG;
+		break;
+	case ICVE_REQ_SET_MAC_ADDR:
+		msg->msg_hdr.msg_type = ICVE_REQUEST_MSG;
+		ether_addr_copy(msg->req_msg.mac_addr.addr,
+				common->port->ndev->dev_addr);
+		break;
+	case ICVE_NOTIFY_PORT_UP:
+	case ICVE_NOTIFY_PORT_DOWN:
+		msg->msg_hdr.msg_type = ICVE_NOTIFY_MSG;
+		break;
+	default:
+		ret = -EINVAL;
+		dev_err(common->dev, "Invalid RPMSG request\n");
+	}
+	return ret;
+}
+
+static int icve_create_send_request(struct icve_common *common,
+				    enum icve_rpmsg_type rpmsg_type,
+				    bool wait)
+{
+	unsigned long flags;
+	int ret;
+
+	if (wait)
+		reinit_completion(&common->sync_msg);
+
+	spin_lock_irqsave(&common->send_msg_lock, flags);
+	create_request(common, rpmsg_type);
+	rpmsg_send(common->rpdev->ept, (void *)(&common->send_msg),
+		   sizeof(common->send_msg));
+	spin_unlock_irqrestore(&common->send_msg_lock, flags);
+
+	if (wait) {
+		ret = wait_for_completion_timeout(&common->sync_msg,
+						  ICVE_REQ_TIMEOUT);
+
+		if (!ret) {
+			dev_err(common->dev, "Failed to receive response within %ld jiffies\n",
+				ICVE_REQ_TIMEOUT);
+			ret = -ETIMEDOUT;
+			return ret;
+		}
+	}
+	return ret;
+}
+
+static void icve_state_machine(struct work_struct *work)
+{
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct icve_common *common;
+	struct icve_port *port;
+
+	common = container_of(dwork, struct icve_common, state_work);
+	port = common->port;
+
+	mutex_lock(&common->state_lock);
+
+	switch (common->state) {
+	case ICVE_STATE_PROBE:
+		break;
+	case ICVE_STATE_OPEN:
+		icve_create_send_request(common, ICVE_REQ_SHM_INFO, false);
+		break;
+	case ICVE_STATE_CLOSE:
+		break;
+	case ICVE_STATE_READY:
+		icve_create_send_request(common, ICVE_REQ_SET_MAC_ADDR, false);
+		napi_enable(&port->rx_napi);
+		netif_carrier_on(port->ndev);
+		mod_timer(&port->rx_timer, RX_POLL_JIFFIES);
+		break;
+	case ICVE_STATE_RUNNING:
+		break;
+	}
+	mutex_unlock(&common->state_lock);
+}
+
+static void icve_rx_timer(struct timer_list *timer)
+{
+	struct icve_port *port = from_timer(port, timer, rx_timer);
+	struct napi_struct *napi;
+	int num_pkts = 0;
+	u32 head, tail;
+
+	head = port->rx_buffer->head->index;
+	tail = port->rx_buffer->tail->index;
+
+	num_pkts = tail - head;
+	num_pkts = num_pkts >= 0 ? num_pkts :
+				   (num_pkts + port->icve_rx_max_buffers);
+
+	napi = &port->rx_napi;
+	if (num_pkts && likely(napi_schedule_prep(napi)))
+		__napi_schedule(napi);
+	else
+		mod_timer(&port->rx_timer, RX_POLL_JIFFIES);
+}
+
 static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
 			 void *priv, u32 src)
 {
 	struct icve_common *common = dev_get_drvdata(&rpdev->dev);
 	struct message *msg = (struct message *)data;
+	struct icve_port *port = common->port;
 	u32 msg_type = msg->msg_hdr.msg_type;
 	u32 rpmsg_type;
 
@@ -24,11 +158,79 @@ static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
 		rpmsg_type = msg->resp_msg.type;
 		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
 			msg_type, rpmsg_type);
+		switch (rpmsg_type) {
+		case ICVE_RESP_SHM_INFO:
+			/* Retrieve Tx and Rx shared memory info from msg */
+			port->tx_buffer->head =
+				ioremap(msg->resp_msg.shm_info.shm_info_tx.base_addr,
+					sizeof(*port->tx_buffer->head));
+
+			port->tx_buffer->buf->base_addr =
+				ioremap((msg->resp_msg.shm_info.shm_info_tx.base_addr +
+					sizeof(*port->tx_buffer->head)),
+					(msg->resp_msg.shm_info.shm_info_tx.num_pkt_bufs *
+					 msg->resp_msg.shm_info.shm_info_tx.buff_slot_size));
+
+			port->tx_buffer->tail =
+				ioremap(msg->resp_msg.shm_info.shm_info_tx.base_addr +
+					sizeof(*port->tx_buffer->head) +
+					(msg->resp_msg.shm_info.shm_info_tx.num_pkt_bufs *
+					msg->resp_msg.shm_info.shm_info_tx.buff_slot_size),
+					sizeof(*port->tx_buffer->tail));
+
+			port->icve_tx_max_buffers = msg->resp_msg.shm_info.shm_info_tx.num_pkt_bufs;
+
+			port->rx_buffer->head =
+				ioremap(msg->resp_msg.shm_info.shm_info_rx.base_addr,
+					sizeof(*port->rx_buffer->head));
+
+			port->rx_buffer->buf->base_addr =
+				ioremap(msg->resp_msg.shm_info.shm_info_rx.base_addr +
+					sizeof(*port->rx_buffer->head),
+					(msg->resp_msg.shm_info.shm_info_rx.num_pkt_bufs *
+					 msg->resp_msg.shm_info.shm_info_rx.buff_slot_size));
+
+			port->rx_buffer->tail =
+				ioremap(msg->resp_msg.shm_info.shm_info_rx.base_addr +
+					sizeof(*port->rx_buffer->head) +
+					(msg->resp_msg.shm_info.shm_info_rx.num_pkt_bufs *
+					msg->resp_msg.shm_info.shm_info_rx.buff_slot_size),
+					sizeof(*port->rx_buffer->tail));
+
+			port->icve_rx_max_buffers =
+				msg->resp_msg.shm_info.shm_info_rx.num_pkt_bufs;
+
+			mutex_lock(&common->state_lock);
+			common->state = ICVE_STATE_READY;
+			mutex_unlock(&common->state_lock);
+
+			mod_delayed_work(system_wq,
+					 &common->state_work,
+					 STATE_MACHINE_TIME);
+
+			break;
+		case ICVE_RESP_SET_MAC_ADDR:
+			break;
+		}
+
 		break;
+
 	case ICVE_NOTIFY_MSG:
 		rpmsg_type = msg->notify_msg.type;
-		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
-			msg_type, rpmsg_type);
+		switch (rpmsg_type) {
+		case ICVE_NOTIFY_REMOTE_READY:
+			mutex_lock(&common->state_lock);
+			common->state = ICVE_STATE_RUNNING;
+			mutex_unlock(&common->state_lock);
+
+			mod_delayed_work(system_wq,
+					 &common->state_work,
+					 STATE_MACHINE_TIME);
+			break;
+		case ICVE_NOTIFY_PORT_UP:
+		case ICVE_NOTIFY_PORT_DOWN:
+			break;
+		}
 		break;
 	default:
 		dev_err(common->dev, "Invalid msg type\n");
@@ -37,10 +239,242 @@ static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
 	return 0;
 }
 
+static int icve_rx_packets(struct napi_struct *napi, int budget)
+{
+	struct icve_port *port = container_of(napi, struct icve_port, rx_napi);
+	u32 count, process_pkts;
+	struct sk_buff *skb;
+	u32 head, tail;
+	int num_pkts;
+	u32 pkt_len;
+
+	head = port->rx_buffer->head->index;
+	tail = port->rx_buffer->tail->index;
+
+	num_pkts = head - tail;
+
+	num_pkts = num_pkts >= 0 ? num_pkts :
+				   (num_pkts + port->icve_rx_max_buffers);
+	process_pkts = min(num_pkts, budget);
+	count = 0;
+	while (count < process_pkts) {
+		memcpy_fromio((void *)&pkt_len,
+			      (void *)(port->rx_buffer->buf->base_addr +
+			      MAGIC_NUM_SIZE_TYPE +
+			      (((tail + count) % port->icve_rx_max_buffers) *
+			      ICVE_BUFFER_SIZE)),
+			      PKT_LEN_SIZE_TYPE);
+		/* Start building the skb */
+		skb = napi_alloc_skb(napi, pkt_len);
+		if (!skb) {
+			port->ndev->stats.rx_dropped++;
+			goto rx_dropped;
+		}
+
+		skb->dev = port->ndev;
+		skb_put(skb, pkt_len);
+		memcpy_fromio((void *)skb->data,
+			      (void *)(port->rx_buffer->buf->base_addr +
+			      PKT_LEN_SIZE_TYPE + MAGIC_NUM_SIZE_TYPE +
+			      (((tail + count) % port->icve_rx_max_buffers) *
+			      ICVE_BUFFER_SIZE)),
+			      pkt_len);
+
+		skb->protocol = eth_type_trans(skb, port->ndev);
+
+		/* Push skb into network stack */
+		napi_gro_receive(napi, skb);
+
+		count++;
+		port->ndev->stats.rx_packets++;
+		port->ndev->stats.rx_bytes += skb->len;
+	}
+
+rx_dropped:
+
+	if (num_pkts) {
+		port->rx_buffer->tail->index =
+			(port->rx_buffer->tail->index + count) %
+			port->icve_rx_max_buffers;
+
+		if (num_pkts < budget && napi_complete_done(napi, count))
+			mod_timer(&port->rx_timer, RX_POLL_JIFFIES);
+	}
+
+	return count;
+}
+
+static int icve_ndo_open(struct net_device *ndev)
+{
+	struct icve_common *common = icve_ndev_to_common(ndev);
+
+	mutex_lock(&common->state_lock);
+	common->state = ICVE_STATE_OPEN;
+	mutex_unlock(&common->state_lock);
+	mod_delayed_work(system_wq, &common->state_work, msecs_to_jiffies(100));
+
+	return 0;
+}
+
+static int icve_ndo_stop(struct net_device *ndev)
+{
+	struct icve_common *common = icve_ndev_to_common(ndev);
+	struct icve_port *port = icve_ndev_to_port(ndev);
+
+	mutex_lock(&common->state_lock);
+	common->state = ICVE_STATE_CLOSE;
+	mutex_unlock(&common->state_lock);
+
+	netif_carrier_off(port->ndev);
+
+	__dev_mc_unsync(ndev, icve_del_mc_addr);
+	__hw_addr_init(&common->mc_list);
+
+	cancel_delayed_work_sync(&common->state_work);
+	del_timer_sync(&port->rx_timer);
+	napi_disable(&port->rx_napi);
+
+	cancel_work_sync(&common->rx_mode_work);
+
+	return 0;
+}
+
+static netdev_tx_t icve_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+{
+	struct icve_port *port = icve_ndev_to_port(ndev);
+	u32 head, tail;
+	int num_pkts;
+	u32 len;
+
+	len = skb_headlen(skb);
+	head = port->tx_buffer->head->index;
+	tail = port->tx_buffer->tail->index;
+
+	/* If the buffer queue is full, then drop packet */
+	num_pkts = head - tail;
+	num_pkts = num_pkts >= 0 ? num_pkts :
+				   (num_pkts + port->icve_tx_max_buffers);
+
+	if ((num_pkts + 1) == port->icve_tx_max_buffers) {
+		netdev_warn(ndev, "Tx buffer full %d\n", num_pkts);
+		goto ring_full;
+	}
+	/* Copy length */
+	memcpy_toio((void *)port->tx_buffer->buf->base_addr +
+			    MAGIC_NUM_SIZE_TYPE +
+			    (port->tx_buffer->head->index * ICVE_BUFFER_SIZE),
+		    (void *)&len, PKT_LEN_SIZE_TYPE);
+	/* Copy data to shared mem */
+	memcpy_toio((void *)(port->tx_buffer->buf->base_addr +
+			     MAGIC_NUM_SIZE_TYPE + PKT_LEN_SIZE_TYPE +
+			     (port->tx_buffer->head->index * ICVE_BUFFER_SIZE)),
+		    (void *)skb->data, len);
+	port->tx_buffer->head->index =
+		(port->tx_buffer->head->index + 1) % port->icve_tx_max_buffers;
+
+	ndev->stats.tx_packets++;
+	ndev->stats.tx_bytes += skb->len;
+
+	dev_consume_skb_any(skb);
+	return NETDEV_TX_OK;
+
+ring_full:
+	return NETDEV_TX_BUSY;
+}
+
+static int icve_set_mac_address(struct net_device *ndev, void *addr)
+{
+	struct icve_common *common = icve_ndev_to_common(ndev);
+	int ret;
+
+	ret = eth_mac_addr(ndev, addr);
+
+	if (ret < 0)
+		return ret;
+	icve_create_send_request(common, ICVE_REQ_SET_MAC_ADDR, false);
+	return ret;
+}
+
+static const struct net_device_ops icve_netdev_ops = {
+	.ndo_open = icve_ndo_open,
+	.ndo_stop = icve_ndo_stop,
+	.ndo_start_xmit = icve_start_xmit,
+	.ndo_set_mac_address = icve_set_mac_address,
+};
+
+static int icve_init_ndev(struct icve_common *common)
+{
+	struct device *dev = &common->rpdev->dev;
+	struct icve_ndev_priv *ndev_priv;
+	struct icve_port *port;
+	static u32 port_id;
+	int err;
+
+	port = common->port;
+	port->common = common;
+	port->port_id = port_id++;
+
+	port->ndev = devm_alloc_etherdev_mqs(common->dev, sizeof(*ndev_priv),
+					     ICVE_MAX_TX_QUEUES,
+					     ICVE_MAX_RX_QUEUES);
+
+	if (!port->ndev) {
+		dev_err(dev, "error allocating net_device\n");
+		return -ENOMEM;
+	}
+
+	ndev_priv = netdev_priv(port->ndev);
+	ndev_priv->port = port;
+	SET_NETDEV_DEV(port->ndev, dev);
+
+	port->ndev->min_mtu = ICVE_MIN_PACKET_SIZE;
+	port->ndev->max_mtu = ICVE_MAX_PACKET_SIZE;
+	port->ndev->netdev_ops = &icve_netdev_ops;
+
+	/* Allocate memory to test without actual RPMsg handshaking */
+	port->tx_buffer =
+		devm_kzalloc(dev, sizeof(*port->tx_buffer), GFP_KERNEL);
+	if (!port->tx_buffer) {
+		dev_err(dev, "Memory not available\n");
+		return -ENOMEM;
+	}
+
+	port->tx_buffer->buf =
+		devm_kzalloc(dev, sizeof(*port->tx_buffer->buf), GFP_KERNEL);
+	if (!port->tx_buffer->buf) {
+		dev_err(dev, "Memory not available\n");
+		return -ENOMEM;
+	}
+
+	port->rx_buffer =
+		devm_kzalloc(dev, sizeof(*port->rx_buffer), GFP_KERNEL);
+	if (!port->rx_buffer) {
+		dev_err(dev, "Memory not available\n");
+		return -ENOMEM;
+	}
+
+	port->rx_buffer->buf =
+		devm_kzalloc(dev, sizeof(*port->rx_buffer->buf), GFP_KERNEL);
+	if (!port->rx_buffer->buf) {
+		dev_err(dev, "Memory not available\n");
+		return -ENOMEM;
+	}
+	netif_carrier_off(port->ndev);
+
+	netif_napi_add(port->ndev, &port->rx_napi, icve_rx_packets);
+	timer_setup(&port->rx_timer, icve_rx_timer, 0);
+	err = register_netdev(port->ndev);
+
+	if (err)
+		dev_err(dev, "error registering icve net device %d\n", err);
+	return 0;
+}
+
 static int icve_rpmsg_probe(struct rpmsg_device *rpdev)
 {
 	struct device *dev = &rpdev->dev;
 	struct icve_common *common;
+	int ret = 0;
 
 	common = devm_kzalloc(&rpdev->dev, sizeof(*common), GFP_KERNEL);
 	if (!common)
@@ -51,12 +485,27 @@ static int icve_rpmsg_probe(struct rpmsg_device *rpdev)
 	common->port = devm_kzalloc(dev, sizeof(*common->port), GFP_KERNEL);
 	common->dev = dev;
 	common->rpdev = rpdev;
+	common->state = ICVE_STATE_PROBE;
+	spin_lock_init(&common->send_msg_lock);
+	spin_lock_init(&common->recv_msg_lock);
+	mutex_init(&common->state_lock);
+	INIT_DELAYED_WORK(&common->state_work, icve_state_machine);
+	init_completion(&common->sync_msg);
 
+	/* Register the network device */
+	ret = icve_init_ndev(common);
+	if (ret)
+		return ret;
 	return 0;
 }
 
 static void icve_rpmsg_remove(struct rpmsg_device *rpdev)
 {
+	struct icve_common *common = dev_get_drvdata(&rpdev->dev);
+	struct icve_port *port = common->port;
+
+	netif_napi_del(&port->rx_napi);
+	del_timer_sync(&port->rx_timer);
 	dev_info(&rpdev->dev, "icve rpmsg client driver is removed\n");
 }
 
diff --git a/drivers/net/ethernet/ti/inter_core_virt_eth.h b/drivers/net/ethernet/ti/inter_core_virt_eth.h
index 91a3aba96996..4fc420cb9eab 100644
--- a/drivers/net/ethernet/ti/inter_core_virt_eth.h
+++ b/drivers/net/ethernet/ti/inter_core_virt_eth.h
@@ -14,14 +14,45 @@
 #include <linux/rpmsg.h>
 #include "icve_rpmsg_common.h"
 
+enum icve_state {
+	ICVE_STATE_PROBE,
+	ICVE_STATE_OPEN,
+	ICVE_STATE_CLOSE,
+	ICVE_STATE_READY,
+	ICVE_STATE_RUNNING,
+
+};
+
 struct icve_port {
+	struct icve_shared_mem *tx_buffer; /* Write buffer for data to be consumed by the remote side */
+	struct icve_shared_mem *rx_buffer; /* Read buffer for data to be consumed by this driver */
+	struct timer_list rx_timer;
 	struct icve_common *common;
-} __packed;
+	struct napi_struct rx_napi;
+	u8 local_mac_addr[ETH_ALEN];
+	struct net_device *ndev;
+	u32 icve_tx_max_buffers;
+	u32 icve_rx_max_buffers;
+	u32 port_id;
+};
 
 struct icve_common {
 	struct rpmsg_device *rpdev;
+	spinlock_t send_msg_lock; /* Acquire this lock while sending RPMsg */
+	spinlock_t recv_msg_lock; /* Acquire this lock while processing received RPMsg */
+	struct message send_msg;
+	struct message recv_msg;
 	struct icve_port *port;
 	struct device *dev;
-} __packed;
+	enum icve_state	state;
+	struct mutex state_lock; /* Lock to be used while changing the interface state */
+	struct delayed_work state_work;
+	struct completion sync_msg;
+};
+
+struct icve_ndev_priv {
+	struct icve_port *port;
+};
+
 
 #endif /* __INTER_CORE_VIRT_ETH_H__ */
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH net-next v2 3/3] net: ethernet: ti: icve: Add support for multicast filtering
  2024-05-31  6:40 [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver Yojana Mallik
  2024-05-31  6:40 ` [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver Yojana Mallik
  2024-05-31  6:40 ` [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device Yojana Mallik
@ 2024-05-31  6:40 ` Yojana Mallik
  2024-06-03 21:27   ` kernel test robot
  2024-06-02 15:45 ` [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver Andrew Lunn
  3 siblings, 1 reply; 31+ messages in thread
From: Yojana Mallik @ 2024-05-31  6:40 UTC (permalink / raw)
  To: y-mallik, schnelle, wsa+renesas, diogo.ivo, rdunlap, horms,
	vigneshr, rogerq, danishanwar, pabeni, kuba, edumazet, davem
  Cc: netdev, linux-kernel, srk, rogerq

Add support for multicast filtering to the ICVE driver. Implement the
ndo_set_rx_mode callback as icve_set_rx_mode(). The rx_mode work is
initialized in icve_rpmsg_probe() and queued from icve_set_rx_mode().
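
.ndo_set_rx_mode is called in atomic context (with the netdev address
lock held), while sending an RPMsg and waiting for a response can sleep,
hence the deferral to a workqueue. The worker then follows the usual
address-sync pattern, roughly (sketch; callback names as in this patch):

        /* Snapshot ndev->mc, then let __hw_addr_sync_dev() invoke the
         * add/del callback for each address that changed; the callbacks
         * send ICVE_REQ_ADD_MC_ADDR/ICVE_REQ_DEL_MC_ADDR requests.
         */
        __hw_addr_sync(&common->mc_list, &ndev->mc, ndev->addr_len);
        __hw_addr_sync_dev(&common->mc_list, ndev,
                           icve_add_mc_addr, icve_del_mc_addr);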

Signed-off-by: Yojana Mallik <y-mallik@ti.com>
---
 drivers/net/ethernet/ti/icve_rpmsg_common.h   |  4 ++
 drivers/net/ethernet/ti/inter_core_virt_eth.c | 63 ++++++++++++++++++-
 drivers/net/ethernet/ti/inter_core_virt_eth.h |  4 ++
 3 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/icve_rpmsg_common.h b/drivers/net/ethernet/ti/icve_rpmsg_common.h
index 2e3833de14bd..793baa93e135 100644
--- a/drivers/net/ethernet/ti/icve_rpmsg_common.h
+++ b/drivers/net/ethernet/ti/icve_rpmsg_common.h
@@ -19,10 +19,14 @@ enum icve_rpmsg_type {
 	/* Request types */
 	ICVE_REQ_SHM_INFO = 0,
 	ICVE_REQ_SET_MAC_ADDR,
+	ICVE_REQ_ADD_MC_ADDR,
+	ICVE_REQ_DEL_MC_ADDR,
 
 	/* Response types */
 	ICVE_RESP_SHM_INFO,
 	ICVE_RESP_SET_MAC_ADDR,
+	ICVE_RESP_ADD_MC_ADDR,
+	ICVE_RESP_DEL_MC_ADDR,
 
 	/* Notification types */
 	ICVE_NOTIFY_PORT_UP,
diff --git a/drivers/net/ethernet/ti/inter_core_virt_eth.c b/drivers/net/ethernet/ti/inter_core_virt_eth.c
index d96547d317fe..908425af0014 100644
--- a/drivers/net/ethernet/ti/inter_core_virt_eth.c
+++ b/drivers/net/ethernet/ti/inter_core_virt_eth.c
@@ -46,6 +46,11 @@ static int create_request(struct icve_common *common,
 		ether_addr_copy(msg->req_msg.mac_addr.addr,
 				common->port->ndev->dev_addr);
 		break;
+	case ICVE_REQ_ADD_MC_ADDR:
+	case ICVE_REQ_DEL_MC_ADDR:
+		ether_addr_copy(msg->req_msg.mac_addr.addr,
+				common->mcast_addr);
+		break;
 	case ICVE_NOTIFY_PORT_UP:
 	case ICVE_NOTIFY_PORT_DOWN:
 		msg->msg_hdr.msg_type = ICVE_NOTIFY_MSG;
@@ -87,6 +92,26 @@ static int icve_create_send_request(struct icve_common *common,
 	return ret;
 }
 
+static int icve_add_mc_addr(struct net_device *ndev, const u8 *addr)
+{
+	struct icve_common *common = icve_ndev_to_common(ndev);
+	int ret = 0;
+
+	ether_addr_copy(common->mcast_addr, addr);
+	icve_create_send_request(common, ICVE_REQ_ADD_MC_ADDR, true);
+	return ret;
+}
+
+static int icve_del_mc_addr(struct net_device *ndev, const u8 *addr)
+{
+	struct icve_common *common = icve_ndev_to_common(ndev);
+	int ret = 0;
+
+	ether_addr_copy(common->mcast_addr, addr);
+	icve_create_send_request(common, ICVE_REQ_DEL_MC_ADDR, true);
+	return ret;
+}
+
 static void icve_state_machine(struct work_struct *work)
 {
 	struct delayed_work *dwork = to_delayed_work(work);
@@ -211,6 +236,10 @@ static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
 			break;
 		case ICVE_RESP_SET_MAC_ADDR:
 			break;
+		case ICVE_RESP_ADD_MC_ADDR:
+		case ICVE_RESP_DEL_MC_ADDR:
+			complete(&common->sync_msg);
+			break;
 		}
 
 		break;
@@ -395,10 +424,35 @@ static int icve_set_mac_address(struct net_device *ndev, void *addr)
 	return ret;
 }
 
+static void icve_ndo_set_rx_mode_work(struct work_struct *work)
+{
+	struct icve_common *common;
+	struct net_device *ndev;
+
+	common = container_of(work, struct icve_common, rx_mode_work);
+	ndev = common->port->ndev;
+
+	/* make a mc list copy */
+	netif_addr_lock_bh(ndev);
+	__hw_addr_sync(&common->mc_list, &ndev->mc, ndev->addr_len);
+	netif_addr_unlock_bh(ndev);
+
+	__hw_addr_sync_dev(&common->mc_list, ndev, icve_add_mc_addr,
+			   icve_del_mc_addr);
+}
+
+static void icve_set_rx_mode(struct net_device *ndev)
+{
+	struct icve_common *common = icve_ndev_to_common(ndev);
+
+	queue_work(common->cmd_wq, &common->rx_mode_work);
+}
+
 static const struct net_device_ops icve_netdev_ops = {
 	.ndo_open = icve_ndo_open,
 	.ndo_stop = icve_ndo_stop,
 	.ndo_start_xmit = icve_start_xmit,
+	.ndo_set_rx_mode = icve_set_rx_mode,
 	.ndo_set_mac_address = icve_set_mac_address,
 };
 
@@ -491,7 +545,13 @@ static int icve_rpmsg_probe(struct rpmsg_device *rpdev)
 	mutex_init(&common->state_lock);
 	INIT_DELAYED_WORK(&common->state_work, icve_state_machine);
 	init_completion(&common->sync_msg);
-
+	__hw_addr_init(&common->mc_list);
+	INIT_WORK(&common->rx_mode_work, icve_ndo_set_rx_mode_work);
+	common->cmd_wq = create_singlethread_workqueue("icve_rx_work");
+	if (!common->cmd_wq) {
+		dev_err(dev, "Failure requesting workqueue\n");
+		return -ENOMEM;
+	}
 	/* Register the network device */
 	ret = icve_init_ndev(common);
 	if (ret)
@@ -506,6 +566,7 @@ static void icve_rpmsg_remove(struct rpmsg_device *rpdev)
 
 	netif_napi_del(&port->rx_napi);
 	del_timer_sync(&port->rx_timer);
+	destroy_workqueue(common->cmd_wq);
 	dev_info(&rpdev->dev, "icve rpmsg client driver is removed\n");
 }
 
diff --git a/drivers/net/ethernet/ti/inter_core_virt_eth.h b/drivers/net/ethernet/ti/inter_core_virt_eth.h
index 4fc420cb9eab..02c4d23395f5 100644
--- a/drivers/net/ethernet/ti/inter_core_virt_eth.h
+++ b/drivers/net/ethernet/ti/inter_core_virt_eth.h
@@ -47,7 +47,11 @@ struct icve_common {
 	enum icve_state	state;
 	struct mutex state_lock; /* Lock to be used while changing the interface state */
 	struct delayed_work state_work;
+	struct work_struct rx_mode_work;
+	struct workqueue_struct *cmd_wq;
+	struct netdev_hw_addr_list mc_list;
 	struct completion sync_msg;
+	u8 mcast_addr[ETH_ALEN];
 };
 
 struct icve_ndev_priv {
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver
  2024-05-31  6:40 ` [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver Yojana Mallik
@ 2024-05-31 15:30   ` Randy Dunlap
  2024-06-03  5:50     ` Yojana Mallik
  2024-06-02  7:01   ` Siddharth Vadapalli
  2024-06-02 16:21   ` Andrew Lunn
  2 siblings, 1 reply; 31+ messages in thread
From: Randy Dunlap @ 2024-05-31 15:30 UTC (permalink / raw)
  To: Yojana Mallik, schnelle, wsa+renesas, diogo.ivo, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem
  Cc: netdev, linux-kernel, srk, rogerq



On 5/30/24 11:40 PM, Yojana Mallik wrote:
> diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
> index 1729eb0e0b41..4f00cb8fe9f1 100644
> --- a/drivers/net/ethernet/ti/Kconfig
> +++ b/drivers/net/ethernet/ti/Kconfig
> @@ -145,6 +145,15 @@ config TI_AM65_CPSW_QOS
>  	  The EST scheduler runs on CPTS and the TAS/EST schedule is
>  	  updated in the Fetch RAM memory of the CPSW.
>  
> +config TI_K3_INTERCORE_VIRT_ETH
> +	tristate "TI K3 Intercore Virtual Ethernet driver"
> +	help
> +	  This driver provides intercore virtual ethernet driver
> +	  capability.Intercore Virtual Ethernet driver is modelled as

	  capability. Intercore

> +	  a RPMsg based shared memory ethernet driver for network traffic

	  a RPMsg-based

> +	  tunnelling between heterogeneous processors Cortex A and Cortex R
> +	  used in TI's K3 SoCs.


OK, the darned British spellings can stay. ;)
(the double-l words)

-- 
#Randy
https://people.kernel.org/tglx/notes-about-netiquette
https://subspace.kernel.org/etiquette.html

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-05-31  6:40 ` [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device Yojana Mallik
@ 2024-06-01  3:13   ` kernel test robot
  2024-06-03  9:26     ` Yojana Mallik
  2024-06-02  7:22   ` Siddharth Vadapalli
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 31+ messages in thread
From: kernel test robot @ 2024-06-01  3:13 UTC (permalink / raw)
  To: Yojana Mallik, schnelle, wsa+renesas, diogo.ivo, rdunlap, horms,
	vigneshr, rogerq, danishanwar, pabeni, kuba, edumazet, davem
  Cc: llvm, oe-kbuild-all, netdev, linux-kernel, srk, rogerq

Hi Yojana,

kernel test robot noticed the following build warnings:

[auto build test WARNING on net-next/main]

url:    https://github.com/intel-lab-lkp/linux/commits/Yojana-Mallik/net-ethernet-ti-RPMsg-based-shared-memory-ethernet-driver/20240531-144258
base:   net-next/main
patch link:    https://lore.kernel.org/r/20240531064006.1223417-3-y-mallik%40ti.com
patch subject: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20240601/202406011038.AwLZhQpy-lkp@intel.com/config)
compiler: clang version 18.1.5 (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240601/202406011038.AwLZhQpy-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202406011038.AwLZhQpy-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/net/ethernet/ti/inter_core_virt_eth.c:76:6: warning: variable 'ret' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
      76 |         if (wait) {
         |             ^~~~
   drivers/net/ethernet/ti/inter_core_virt_eth.c:87:9: note: uninitialized use occurs here
      87 |         return ret;
         |                ^~~
   drivers/net/ethernet/ti/inter_core_virt_eth.c:76:2: note: remove the 'if' if its condition is always true
      76 |         if (wait) {
         |         ^~~~~~~~~
   drivers/net/ethernet/ti/inter_core_virt_eth.c:65:9: note: initialize the variable 'ret' to silence this warning
      65 |         int ret;
         |                ^
         |                 = 0
   drivers/net/ethernet/ti/inter_core_virt_eth.c:330:24: error: use of undeclared identifier 'icve_del_mc_addr'
     330 |         __dev_mc_unsync(ndev, icve_del_mc_addr);
         |                               ^
   drivers/net/ethernet/ti/inter_core_virt_eth.c:331:26: error: no member named 'mc_list' in 'struct icve_common'
     331 |         __hw_addr_init(&common->mc_list);
         |                         ~~~~~~  ^
   drivers/net/ethernet/ti/inter_core_virt_eth.c:337:28: error: no member named 'rx_mode_work' in 'struct icve_common'
     337 |         cancel_work_sync(&common->rx_mode_work);
         |                           ~~~~~~  ^
   1 warning and 3 errors generated.


vim +76 drivers/net/ethernet/ti/inter_core_virt_eth.c

    59	
    60	static int icve_create_send_request(struct icve_common *common,
    61					    enum icve_rpmsg_type rpmsg_type,
    62					    bool wait)
    63	{
    64		unsigned long flags;
    65		int ret;
    66	
    67		if (wait)
    68			reinit_completion(&common->sync_msg);
    69	
    70		spin_lock_irqsave(&common->send_msg_lock, flags);
    71		create_request(common, rpmsg_type);
    72		rpmsg_send(common->rpdev->ept, (void *)(&common->send_msg),
    73			   sizeof(common->send_msg));
    74		spin_unlock_irqrestore(&common->send_msg_lock, flags);
    75	
  > 76		if (wait) {
    77			ret = wait_for_completion_timeout(&common->sync_msg,
    78							  ICVE_REQ_TIMEOUT);
    79	
    80			if (!ret) {
    81				dev_err(common->dev, "Failed to receive response within %ld jiffies\n",
    82					ICVE_REQ_TIMEOUT);
    83				ret = -ETIMEDOUT;
    84				return ret;
    85			}
    86		}
    87		return ret;
    88	}
    89	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver
  2024-05-31  6:40 ` [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver Yojana Mallik
  2024-05-31 15:30   ` Randy Dunlap
@ 2024-06-02  7:01   ` Siddharth Vadapalli
  2024-06-03  6:16     ` Yojana Mallik
  2024-06-02 16:21   ` Andrew Lunn
  2 siblings, 1 reply; 31+ messages in thread
From: Siddharth Vadapalli @ 2024-06-02  7:01 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, s-vadapalli

On Fri, May 31, 2024 at 12:10:04PM +0530, Yojana Mallik wrote:
> From: Ravi Gunasekaran <r-gunasekaran@ti.com>
> 
> TI's K3 SoCs comprise heterogeneous processors (Cortex-A, Cortex-R).
> When the ethernet controller is completely managed by a core (Cortex-R)
> running a flavor of RTOS, in a non-virtualized environment, network traffic
> tunnelling between the heterogeneous processors can be realized by means of
> an RPMsg-based shared memory ethernet driver, with the shared memory used
> for the data plane and the RPMsg endpoint channel used for the control
> plane.
> 
> The inter-core-virt-eth driver is modelled as an RPMsg-based shared
> memory ethernet driver for such a use case.
> 
> As a first step, register inter-core-virt-eth as an RPMsg driver,
> and introduce basic control messages for querying and responding.
> 

Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>

My "Signed-off-by" tag was present in the RFC patch at:
https://lore.kernel.org/r/20240130110944.26771-2-r-gunasekaran@ti.com/

Any reason for dropping it? Also, I was in the Cc list of the RFC series.
Please ensure that you don't drop emails which were present in earlier
versions of the series (unless the email is no longer valid), and also
ensure that you Cc all individuals who have commented on the series when
you post a new version of the series.

> Signed-off-by: Ravi Gunasekaran <r-gunasekaran@ti.com>
> Signed-off-by: Yojana Mallik <y-mallik@ti.com>
> ---
>  drivers/net/ethernet/ti/Kconfig               |  9 +++
>  drivers/net/ethernet/ti/Makefile              |  1 +
>  drivers/net/ethernet/ti/icve_rpmsg_common.h   | 47 +++++++++++
>  drivers/net/ethernet/ti/inter_core_virt_eth.c | 81 +++++++++++++++++++

[...]

Regards,
Siddharth.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-05-31  6:40 ` [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device Yojana Mallik
  2024-06-01  3:13   ` kernel test robot
@ 2024-06-02  7:22   ` Siddharth Vadapalli
  2024-06-02 15:54     ` Andrew Lunn
  2024-06-02  7:35   ` Siddharth Vadapalli
  2024-06-02 16:45   ` Andrew Lunn
  3 siblings, 1 reply; 31+ messages in thread
From: Siddharth Vadapalli @ 2024-06-02  7:22 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, s-vadapalli

On Fri, May 31, 2024 at 12:10:05PM +0530, Yojana Mallik wrote:
> Register the RPMsg driver as a network device and add support for
> basic ethernet functionality by using the shared memory for the data
> plane.
> 
> The shared memory layout is as below, with the region from
> PKT_1_LEN to PKT_N modelled as a circular buffer.
> 
> -------------------------
> |          HEAD         |
> -------------------------
> |          TAIL         |
> -------------------------
> |       PKT_1_LEN       |
> |         PKT_1         |
> -------------------------
> |       PKT_2_LEN       |
> |         PKT_2         |
> -------------------------
> |           .           |
> |           .           |
> -------------------------
> |       PKT_N_LEN       |
> |         PKT_N         |
> -------------------------
> 
> The offset between the HEAD and TAIL is polled to process the Rx packets.

The author of this patch from the RFC series is:
Ravi Gunasekaran <r-gunasekaran@ti.com>
The authorship should be preserved across versions unless the
implementation has changed drastically requiring major rework
(which doesn't seem to be the case for this patch based on the
Changelog).

> 
> Signed-off-by: Yojana Mallik <y-mallik@ti.com>
> ---
>  drivers/net/ethernet/ti/icve_rpmsg_common.h   |  86 ++++
>  drivers/net/ethernet/ti/inter_core_virt_eth.c | 453 +++++++++++++++++-
>  drivers/net/ethernet/ti/inter_core_virt_eth.h |  35 +-
>  3 files changed, 570 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ti/icve_rpmsg_common.h b/drivers/net/ethernet/ti/icve_rpmsg_common.h
> index 7cd157479d4d..2e3833de14bd 100644
> --- a/drivers/net/ethernet/ti/icve_rpmsg_common.h
> +++ b/drivers/net/ethernet/ti/icve_rpmsg_common.h
> @@ -15,14 +15,58 @@ enum icve_msg_type {
>  	ICVE_NOTIFY_MSG,
>  };

[...]

>  
> +
>  #endif /* __ICVE_RPMSG_COMMON_H__ */
> diff --git a/drivers/net/ethernet/ti/inter_core_virt_eth.c b/drivers/net/ethernet/ti/inter_core_virt_eth.c
> index bea822d2373a..d96547d317fe 100644
> --- a/drivers/net/ethernet/ti/inter_core_virt_eth.c
> +++ b/drivers/net/ethernet/ti/inter_core_virt_eth.c
> @@ -6,11 +6,145 @@
>  
>  #include "inter_core_virt_eth.h"
>  
> +#define ICVE_MIN_PACKET_SIZE ETH_ZLEN
> +#define ICVE_MAX_PACKET_SIZE 1540 //(ETH_FRAME_LEN + ETH_FCS_LEN)

Is the commented portion above required?

> +#define ICVE_MAX_TX_QUEUES 1
> +#define ICVE_MAX_RX_QUEUES 1
> +
> +#define PKT_LEN_SIZE_TYPE sizeof(u32)
> +#define MAGIC_NUM_SIZE_TYPE sizeof(u32)
> +
> +/* 4 bytes to hold packet length and ICVE_MAX_PACKET_SIZE to hold packet */
> +#define ICVE_BUFFER_SIZE \
> +	(ICVE_MAX_PACKET_SIZE + PKT_LEN_SIZE_TYPE + MAGIC_NUM_SIZE_TYPE)
> +
> +#define RX_POLL_TIMEOUT 1000 /* 1000usec */

The macro name could be updated to contain "USEC" to make it clear that
the units are in microseconds. Same comment applies to other macros
below where they can be named to contain the units.
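
For instance (illustrative naming only):

        #define ICVE_RX_POLL_TIMEOUT_USEC	1000
        #define ICVE_REQ_TIMEOUT_MSEC		100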

> +#define RX_POLL_JIFFIES (jiffies + usecs_to_jiffies(RX_POLL_TIMEOUT))
> +
> +#define STATE_MACHINE_TIME msecs_to_jiffies(100)
> +#define ICVE_REQ_TIMEOUT msecs_to_jiffies(100)
> +
> +#define icve_ndev_to_priv(ndev) ((struct icve_ndev_priv *)netdev_priv(ndev))
> +#define icve_ndev_to_port(ndev) (icve_ndev_to_priv(ndev)->port)

[...]

>  	u32 msg_type = msg->msg_hdr.msg_type;
>  	u32 rpmsg_type;
>  
> @@ -24,11 +158,79 @@ static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
>  		rpmsg_type = msg->resp_msg.type;
>  		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
>  			msg_type, rpmsg_type);
> +		switch (rpmsg_type) {
> +		case ICVE_RESP_SHM_INFO:
> +			/* Retrieve Tx and Rx shared memory info from msg */
> +			port->tx_buffer->head =

[...]

> +					sizeof(*port->rx_buffer->tail));
> +
> +			port->icve_rx_max_buffers =
> +				msg->resp_msg.shm_info.shm_info_rx.num_pkt_bufs;
> +
> +			mutex_lock(&common->state_lock);
> +			common->state = ICVE_STATE_READY;
> +			mutex_unlock(&common->state_lock);
> +
> +			mod_delayed_work(system_wq,
> +					 &common->state_work,
> +					 STATE_MACHINE_TIME);
> +
> +			break;
> +		case ICVE_RESP_SET_MAC_ADDR:
> +			break;
> +		}
> +
>  		break;
> +
>  	case ICVE_NOTIFY_MSG:
>  		rpmsg_type = msg->notify_msg.type;
> -		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
> -			msg_type, rpmsg_type);

Why does the debug message above have to be deleted? If it was not
required, it could have been omitted in the previous patch itself,
rather than adding it in the previous patch and removing it here.

> +		switch (rpmsg_type) {
> +		case ICVE_NOTIFY_REMOTE_READY:
> +			mutex_lock(&common->state_lock);
> +			common->state = ICVE_STATE_RUNNING;
> +			mutex_unlock(&common->state_lock);
> +
> +			mod_delayed_work(system_wq,
> +					 &common->state_work,
> +					 STATE_MACHINE_TIME);
> +			break;
> +		case ICVE_NOTIFY_PORT_UP:
> +		case ICVE_NOTIFY_PORT_DOWN:
> +			break;
> +		}
>  		break;
>  	default:

[...]

>  }
>  
> diff --git a/drivers/net/ethernet/ti/inter_core_virt_eth.h b/drivers/net/ethernet/ti/inter_core_virt_eth.h
> index 91a3aba96996..4fc420cb9eab 100644
> --- a/drivers/net/ethernet/ti/inter_core_virt_eth.h
> +++ b/drivers/net/ethernet/ti/inter_core_virt_eth.h
> @@ -14,14 +14,45 @@
>  #include <linux/rpmsg.h>
>  #include "icve_rpmsg_common.h"
>  
> +enum icve_state {
> +	ICVE_STATE_PROBE,
> +	ICVE_STATE_OPEN,
> +	ICVE_STATE_CLOSE,
> +	ICVE_STATE_READY,
> +	ICVE_STATE_RUNNING,
> +
> +};
> +
>  struct icve_port {
> +	struct icve_shared_mem *tx_buffer; /* Write buffer for data to be consumed by the remote side */
> +	struct icve_shared_mem *rx_buffer; /* Read buffer for data to be consumed by this driver */
> +	struct timer_list rx_timer;
>  	struct icve_common *common;
> -} __packed;

Is the "__packed" attribute no longer required, or was it overlooked?

> +	struct napi_struct rx_napi;
> +	u8 local_mac_addr[ETH_ALEN];
> +	struct net_device *ndev;
> +	u32 icve_tx_max_buffers;
> +	u32 icve_rx_max_buffers;
> +	u32 port_id;
> +};
>  
>  struct icve_common {
>  	struct rpmsg_device *rpdev;
> +	spinlock_t send_msg_lock; /* Acquire this lock while sending RPMsg */
> +	spinlock_t recv_msg_lock; /* Acquire this lock while processing received RPMsg */
> +	struct message send_msg;
> +	struct message recv_msg;
>  	struct icve_port *port;
>  	struct device *dev;
> -} __packed;

Same comment here as well.

[...]

There seem to be a lot of changes compared to the RFC patch which
haven't been mentioned in the Changelog. Please mention all the changes
when posting new versions.

Regards,
Siddharth.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-05-31  6:40 ` [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device Yojana Mallik
  2024-06-01  3:13   ` kernel test robot
  2024-06-02  7:22   ` Siddharth Vadapalli
@ 2024-06-02  7:35   ` Siddharth Vadapalli
  2024-06-02 16:45   ` Andrew Lunn
  3 siblings, 0 replies; 31+ messages in thread
From: Siddharth Vadapalli @ 2024-06-02  7:35 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, s-vadapalli

On Fri, May 31, 2024 at 12:10:05PM +0530, Yojana Mallik wrote:
> Register the RPMsg driver as a network device and add support for
> basic ethernet functionality by using the shared memory for the data
> plane.
> 
> The shared memory layout is as below, with the region from
> PKT_1_LEN to PKT_N modelled as a circular buffer.
> 
> -------------------------
> |          HEAD         |
> -------------------------
> |          TAIL         |
> -------------------------
> |       PKT_1_LEN       |
> |         PKT_1         |
> -------------------------
> |       PKT_2_LEN       |
> |         PKT_2         |
> -------------------------
> |           .           |
> |           .           |
> -------------------------
> |       PKT_N_LEN       |
> |         PKT_N         |
> -------------------------
> 
> The offset between the HEAD and TAIL is polled to process the Rx packets.
> 
> Signed-off-by: Yojana Mallik <y-mallik@ti.com>
> ---
>  drivers/net/ethernet/ti/icve_rpmsg_common.h   |  86 ++++
>  drivers/net/ethernet/ti/inter_core_virt_eth.c | 453 +++++++++++++++++-
>  drivers/net/ethernet/ti/inter_core_virt_eth.h |  35 +-
>  3 files changed, 570 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ti/icve_rpmsg_common.h b/drivers/net/ethernet/ti/icve_rpmsg_common.h
> index 7cd157479d4d..2e3833de14bd 100644
> --- a/drivers/net/ethernet/ti/icve_rpmsg_common.h
> +++ b/drivers/net/ethernet/ti/icve_rpmsg_common.h
> @@ -15,14 +15,58 @@ enum icve_msg_type {
>  	ICVE_NOTIFY_MSG,
>  };

[...]

>  
>  #endif /* __ICVE_RPMSG_COMMON_H__ */
> diff --git a/drivers/net/ethernet/ti/inter_core_virt_eth.c b/drivers/net/ethernet/ti/inter_core_virt_eth.c
> index bea822d2373a..d96547d317fe 100644
> --- a/drivers/net/ethernet/ti/inter_core_virt_eth.c
> +++ b/drivers/net/ethernet/ti/inter_core_virt_eth.c
> @@ -6,11 +6,145 @@
>  
>  #include "inter_core_virt_eth.h"

[...]

>  
> +static int create_request(struct icve_common *common,
> +			  enum icve_rpmsg_type rpmsg_type)
> +{
> +	struct message *msg = &common->send_msg;
> +	int ret = 0;
> +
> +	msg->msg_hdr.src_id = common->port->port_id;
> +	msg->req_msg.type = rpmsg_type;
> +
> +	switch (rpmsg_type) {
> +	case ICVE_REQ_SHM_INFO:
> +		msg->msg_hdr.msg_type = ICVE_REQUEST_MSG;
> +		break;
> +	case ICVE_REQ_SET_MAC_ADDR:
> +		msg->msg_hdr.msg_type = ICVE_REQUEST_MSG;
> +		ether_addr_copy(msg->req_msg.mac_addr.addr,
> +				common->port->ndev->dev_addr);
> +		break;
> +	case ICVE_NOTIFY_PORT_UP:
> +	case ICVE_NOTIFY_PORT_DOWN:
> +		msg->msg_hdr.msg_type = ICVE_NOTIFY_MSG;
> +		break;
> +	default:
> +		ret = -EINVAL;
> +		dev_err(common->dev, "Invalid RPMSG request\n");
> +	};
> +	return ret;
> +}
> +
> +static int icve_create_send_request(struct icve_common *common,
> +				    enum icve_rpmsg_type rpmsg_type,
> +				    bool wait)
> +{
> +	unsigned long flags;
> +	int ret;
> +
> +	if (wait)
> +		reinit_completion(&common->sync_msg);
> +
> +	spin_lock_irqsave(&common->send_msg_lock, flags);
> +	create_request(common, rpmsg_type);

Why isn't the return value of create_request() being checked?
If it is guaranteed to always return 0 based on the design, convert it
to a void function.
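
For illustration, checking it here could look something like this
(untested sketch, reusing the names already in the patch):

	spin_lock_irqsave(&common->send_msg_lock, flags);
	ret = create_request(common, rpmsg_type);
	if (ret) {
		spin_unlock_irqrestore(&common->send_msg_lock, flags);
		return ret;
	}
	ret = rpmsg_send(common->rpdev->ept, (void *)(&common->send_msg),
			 sizeof(common->send_msg));
	spin_unlock_irqrestore(&common->send_msg_lock, flags);
	if (ret)
		return ret;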

> +	rpmsg_send(common->rpdev->ept, (void *)(&common->send_msg),
> +		   sizeof(common->send_msg));
> +	spin_unlock_irqrestore(&common->send_msg_lock, flags);
> +
> +	if (wait) {
> +		ret = wait_for_completion_timeout(&common->sync_msg,
> +						  ICVE_REQ_TIMEOUT);
> +
> +		if (!ret) {
> +			dev_err(common->dev, "Failed to receive response within %ld jiffies\n",
> +				ICVE_REQ_TIMEOUT);
> +			ret = -ETIMEDOUT;
> +			return ret;
> +		}
> +	}
> +	return ret;
> +}
> +
> +static void icve_state_machine(struct work_struct *work)
> +{
> +	struct delayed_work *dwork = to_delayed_work(work);
> +	struct icve_common *common;
> +	struct icve_port *port;
> +
> +	common = container_of(dwork, struct icve_common, state_work);
> +	port = common->port;
> +
> +	mutex_lock(&common->state_lock);
> +
> +	switch (common->state) {
> +	case ICVE_STATE_PROBE:
> +		break;
> +	case ICVE_STATE_OPEN:
> +		icve_create_send_request(common, ICVE_REQ_SHM_INFO, false);

The return value of icve_create_send_request() is not being checked. Is
it guaranteed to succeed? Where is the error handling path if
icve_create_send_request() fails?
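
One option (sketch; "ret" would be a new local here, and the retry is
only an example policy):

	case ICVE_STATE_OPEN:
		ret = icve_create_send_request(common, ICVE_REQ_SHM_INFO, false);
		if (ret)
			/* e.g. retry on the next state machine run */
			mod_delayed_work(system_wq, &common->state_work,
					 STATE_MACHINE_TIME);
		break;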

> +		break;
> +	case ICVE_STATE_CLOSE:
> +		break;
> +	case ICVE_STATE_READY:
> +		icve_create_send_request(common, ICVE_REQ_SET_MAC_ADDR, false);

Same here and at all other places where icve_create_send_request() is
being invoked. icve_create_send_request() seems to be newly added in
this version of the series and wasn't there in the RFC patch. This
should be mentioned in the Changelog.

[...]

Regards,
Siddharth.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver
  2024-05-31  6:40 [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver Yojana Mallik
                   ` (2 preceding siblings ...)
  2024-05-31  6:40 ` [PATCH net-next v2 3/3] net: ethernet: ti: icve: Add support for multicast filtering Yojana Mallik
@ 2024-06-02 15:45 ` Andrew Lunn
  2024-06-03 23:54   ` Jakub Kicinski
  2024-06-12 12:48   ` Yojana Mallik
  3 siblings, 2 replies; 31+ messages in thread
From: Andrew Lunn @ 2024-06-02 15:45 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq

On Fri, May 31, 2024 at 12:10:03PM +0530, Yojana Mallik wrote:
> virtio-net provides a solution for virtual ethernet interface in a
> virtualized environment.
> 
> There might be a use-case for traffic tunneling between heterogeneous
> processors in a non virtualized environment such as TI's AM64x that has
> Cortex A53 and Cortex R5 where Linux runs on A53 and a flavour of RTOS
> on R5(FreeRTOS) and the ethernet controller is managed by R5 and needs
> to pass some low priority data to A53.
> 
> One solution for such an use case where the ethernet controller does
> not support DMA for Tx/Rx channel, could be a RPMsg based shared memory
> ethernet driver.

virtio-net is very generic and vendor agnostic.

Looking at icve, what is TI specific? Why not define a generic
solution which could be used for any heterogeneous system? We are
seeing more and more such systems, and there is no point in everybody
re-inventing the wheel. So what I would like to see is something
similar to drivers/tty/rpmsg_tty.c, a drivers/net/ethernet/rpmsg_eth.c,
with good documentation of the protocol used, so that others can
implement it. And since you say you have FreeRTOS on the other end,
you could also contribute that side to FreeRTOS as well. A complete
open source solution everybody can use.

	Andrew

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-02  7:22   ` Siddharth Vadapalli
@ 2024-06-02 15:54     ` Andrew Lunn
  0 siblings, 0 replies; 31+ messages in thread
From: Andrew Lunn @ 2024-06-02 15:54 UTC (permalink / raw)
  To: Siddharth Vadapalli
  Cc: Yojana Mallik, schnelle, wsa+renesas, diogo.ivo, rdunlap, horms,
	vigneshr, rogerq, danishanwar, pabeni, kuba, edumazet, davem,
	netdev, linux-kernel, srk, rogerq

> > +#define ICVE_MIN_PACKET_SIZE ETH_ZLEN
> > +#define ICVE_MAX_PACKET_SIZE 1540 //(ETH_FRAME_LEN + ETH_FCS_LEN)
> 
> Is the commented portion above required?

I would actually say the comment part is better, since it gives an
idea of where the number comes from. However, 1514 + 4 != 1540. So there
is something missing here.

> >  struct icve_port {
> > +	struct icve_shared_mem *tx_buffer; /* Write buffer for data to be consumed remote side */
> > +	struct icve_shared_mem *rx_buffer; /* Read buffer for data to be consumed by this driver */
> > +	struct timer_list rx_timer;
> >  	struct icve_common *common;
> > -} __packed;
> 
> Is the "__packed" attribute no longer required, or was it overlooked?

Why is packed even needed? This is not a message structure to be
passed over the RPC, is it?

I think this is the second time code has been added in one patch, and
then removed in the next. That is bad practice and suggests the
overall code quality is not good. Please do some internal reviews.

	Andrew

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver
  2024-05-31  6:40 ` [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver Yojana Mallik
  2024-05-31 15:30   ` Randy Dunlap
  2024-06-02  7:01   ` Siddharth Vadapalli
@ 2024-06-02 16:21   ` Andrew Lunn
  2024-06-03  8:56     ` Yojana Mallik
  2 siblings, 1 reply; 31+ messages in thread
From: Andrew Lunn @ 2024-06-02 16:21 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq

> +struct request_message {
> +	u32 type; /* Request Type */
> +	u32 id;	  /* Request ID */
> +} __packed;
> +
> +struct response_message {
> +	u32 type;	/* Response Type */
> +	u32 id;		/* Response ID */
> +} __packed;
> +
> +struct notify_message {
> +	u32 type;	/* Notify Type */
> +	u32 id;		/* Notify ID */
> +} __packed;

These are basically identical.

The packed should not be needed, since these structures are naturally
aligned. The compiler will do the right thing without the
__packed. And there is a general dislike for __packed. It is better to
lay out your structures correctly so that __packed is not needed.
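
To illustrate, two u32 members are already naturally aligned, so the
attribute changes nothing here:

struct request_message {
	u32 type;	/* offset 0 */
	u32 id;		/* offset 4 */
};			/* sizeof() == 8, no padding, with or without __packed */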

> +struct message_header {
> +	u32 src_id;
> +	u32 msg_type; /* Do not use enum type, as enum size is compiler dependent */
> +} __packed;
> +
> +struct message {
> +	struct message_header msg_hdr;
> +	union {
> +		struct request_message req_msg;
> +		struct response_message resp_msg;
> +		struct notify_message notify_msg;
> +	};

Since they are identical, why bother with a union?  It could be argued
it allows future extensions, but I don't see any sort of protocol
version here that would let you tell if extra fields have been added.

> +static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
> +			 void *priv, u32 src)
> +{
> +	struct icve_common *common = dev_get_drvdata(&rpdev->dev);
> +	struct message *msg = (struct message *)data;
> +	u32 msg_type = msg->msg_hdr.msg_type;
> +	u32 rpmsg_type;
> +
> +	switch (msg_type) {
> +	case ICVE_REQUEST_MSG:
> +		rpmsg_type = msg->req_msg.type;
> +		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
> +			msg_type, rpmsg_type);
> +		break;
> +	case ICVE_RESPONSE_MSG:
> +		rpmsg_type = msg->resp_msg.type;
> +		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
> +			msg_type, rpmsg_type);
> +		break;
> +	case ICVE_NOTIFY_MSG:
> +		rpmsg_type = msg->notify_msg.type;
> +		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
> +			msg_type, rpmsg_type);

This can be flattened to:

> +	case ICVE_REQUEST_MSG:
> +	case ICVE_RESPONSE_MSG:
> +	case ICVE_NOTIFY_MSG:
> +		rpmsg_type = msg->notify_msg.type;
> +		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
> +			msg_type, rpmsg_type);

which makes me wonder about the value of this. Yes, later patches are
going to flesh this out, but what value is there in printing the
numerical value of msg_type, when you could easily have the text
"Request", "Response", and "Notify"? And why not include src_id and id
in this debug output? If you are going to add debug output, please
make it complete, otherwise it is often not useful.
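
For example (sketch; the helper is made up):

static const char *icve_msg_type_str(u32 msg_type)
{
	switch (msg_type) {
	case ICVE_REQUEST_MSG:	return "Request";
	case ICVE_RESPONSE_MSG:	return "Response";
	case ICVE_NOTIFY_MSG:	return "Notify";
	default:		return "Unknown";
	}
}

	/* type and id sit at the same offsets in all three union members */
	dev_dbg(common->dev, "%s msg: src_id %u, type %u, id %u\n",
		icve_msg_type_str(msg_type), msg->msg_hdr.src_id,
		msg->req_msg.type, msg->req_msg.id);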

> +		break;
> +	default:
> +		dev_err(common->dev, "Invalid msg type\n");
> +		break;

That is a potential way for the other end to DoS you. It also makes
changes to the protocol difficult, since you cannot add new messages
without DoSing a machine using the old protocol. It would be better to
just increment a counter and keep going.
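
Something like this, say (the counter field is made up, not in the
patch):

	default:
		/* Unknown message, possibly from a newer protocol on the
		 * remote side: count it and keep going rather than let
		 * the peer flood the kernel log.
		 */
		common->unknown_msg_cnt++;
		break;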

> +static void icve_rpmsg_remove(struct rpmsg_device *rpdev)
> +{
> +	dev_info(&rpdev->dev, "icve rpmsg client driver is removed\n");

Please don't spam the logs. dev_dbg(), or nothing at all.

	Andrew

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-05-31  6:40 ` [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device Yojana Mallik
                     ` (2 preceding siblings ...)
  2024-06-02  7:35   ` Siddharth Vadapalli
@ 2024-06-02 16:45   ` Andrew Lunn
  2024-06-04  6:23     ` Yojana Mallik
  3 siblings, 1 reply; 31+ messages in thread
From: Andrew Lunn @ 2024-06-02 16:45 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq

> +enum icve_rpmsg_type {
> +	/* Request types */
> +	ICVE_REQ_SHM_INFO = 0,
> +	ICVE_REQ_SET_MAC_ADDR,
> +
> +	/* Response types */
> +	ICVE_RESP_SHM_INFO,
> +	ICVE_RESP_SET_MAC_ADDR,
> +
> +	/* Notification types */
> +	ICVE_NOTIFY_PORT_UP,
> +	ICVE_NOTIFY_PORT_DOWN,
> +	ICVE_NOTIFY_PORT_READY,
> +	ICVE_NOTIFY_REMOTE_READY,
> +};

+struct message_header {
+       u32 src_id;
+       u32 msg_type; /* Do not use enum type, as enum size is compiler dependent */
+} __packed;


Given how you have defined icve_rpmsg_type, what is the point of
message_header.msg_type?

It seems like this would make more sense:

enum icve_rpmsg_request_type {
	ICVE_REQ_SHM_INFO = 0,
	ICVE_REQ_SET_MAC_ADDR,
}

enum icve_rpmsg_response_type {
	ICVE_RESP_SHM_INFO,
	ICVE_RESP_SET_MAC_ADDR,
}
enum icve_rpmsg_notify_type {
	ICVE_NOTIFY_PORT_UP,
	ICVE_NOTIFY_PORT_DOWN,
	ICVE_NOTIFY_PORT_READY,
	ICVE_NOTIFY_REMOTE_READY,
};

Also, why SET_MAC_ADDR? It would be good to document where the MAC
address is coming from, and what address this is setting.

In fact, please put all the protocol documentation into a .rst
file. That will help us discuss the protocol independently of the
implementation. The protocol is an ABI, so it needs to be reviewed well.

> +struct icve_shm_info {
> +	/* Total shared memory size */
> +	u32 total_shm_size;
> +	/* Total number of buffers */
> +	u32 num_pkt_bufs;
> +	/* Per buff slot size i.e MTU Size + 4 bytes for magic number + 4 bytes
> +	 * for Pkt len
> +	 */

What is your definition of MTU?

enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000

Typically, MTU does not include the Ethernet header or checksum. Is
that what you mean here?
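
For reference, the standard definitions from
include/uapi/linux/if_ether.h:

#define ETH_ZLEN	60	/* Min. octets in frame sans FCS */
#define ETH_DATA_LEN	1500	/* Max. octets in payload (classic MTU) */
#define ETH_FRAME_LEN	1514	/* Max. octets in frame sans FCS */
#define ETH_FCS_LEN	4	/* Octets in the FCS */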

> +	u32 buff_slot_size;
> +	/* Base Address for Tx or Rx shared memory */
> +	u32 base_addr;
> +} __packed;

What do you mean by address here? Virtual address, physical address,
DMA address? And whose address is this? You have two CPUs here, with no
guarantee that the shared memory is mapped to the same address in both
address spaces.

	Andrew

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver
  2024-05-31 15:30   ` Randy Dunlap
@ 2024-06-03  5:50     ` Yojana Mallik
  0 siblings, 0 replies; 31+ messages in thread
From: Yojana Mallik @ 2024-06-03  5:50 UTC (permalink / raw)
  To: Randy Dunlap, schnelle, wsa+renesas, diogo.ivo, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem
  Cc: netdev, linux-kernel, srk, rogerq



On 5/31/24 21:00, Randy Dunlap wrote:
> 
> 
> On 5/30/24 11:40 PM, Yojana Mallik wrote:
>> diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
>> index 1729eb0e0b41..4f00cb8fe9f1 100644
>> --- a/drivers/net/ethernet/ti/Kconfig
>> +++ b/drivers/net/ethernet/ti/Kconfig
>> @@ -145,6 +145,15 @@ config TI_AM65_CPSW_QOS
>>   	  The EST scheduler runs on CPTS and the TAS/EST schedule is
>>   	  updated in the Fetch RAM memory of the CPSW.
>>   
>> +config TI_K3_INTERCORE_VIRT_ETH
>> +	tristate "TI K3 Intercore Virtual Ethernet driver"
>> +	help
>> +	  This driver provides intercore virtual ethernet driver
>> +	  capability.Intercore Virtual Ethernet driver is modelled as
> 
> 	  capability. Intercore

I will fix this.

> 
>> +	  a RPMsg based shared memory ethernet driver for network traffic
> 
> 	  a RPMsg-based
> 

I will fix this.

>> +	  tunnelling between heterogeneous processors Cortex A and Cortex R
>> +	  used in TI's K3 SoCs.
> 
> 
> OK, the darned British spellings can stay. ;)
> (the double-l words)
>

I will fix this. Thank you for pointing out the errors.

Regards,
Yojana Mallik


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver
  2024-06-02  7:01   ` Siddharth Vadapalli
@ 2024-06-03  6:16     ` Yojana Mallik
  0 siblings, 0 replies; 31+ messages in thread
From: Yojana Mallik @ 2024-06-03  6:16 UTC (permalink / raw)
  To: Siddharth Vadapalli
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, y-mallik



On 6/2/24 12:31, Siddharth Vadapalli wrote:
> On Fri, May 31, 2024 at 12:10:04PM +0530, Yojana Mallik wrote:
>> From: Ravi Gunasekaran <r-gunasekaran@ti.com>
>>
>> TI's K3 SoCs comprises heterogeneous processors (Cortex A, Cortex R).
>> When the ethernet controller is completely managed by a core (Cortex R)
>> running a flavor of RTOS, in a non virtualized environment, network traffic
>> tunnelling between heterogeneous processors can be realized by means of
>> RPMsg based shared memory ethernet driver. With the shared memory used
>> for the data plane and the RPMsg end point channel used for control plane.
>>
>> inter-core-virt-eth driver is modelled as a RPMsg based shared
>> memory ethernet driver for such an use case.
>>
>> As a first step, register the inter-core-virt-eth as a RPMsg driver.
>> And introduce basic control messages for querying and responding.
>>
> 
> Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
> 
> My "Signed-off-by" tag was present in the RFC patch at:
> https://lore.kernel.org/r/20240130110944.26771-2-r-gunasekaran@ti.com/
> 

Sorry for the mistake. I will add it.

> Any reason for dropping it? Also, I was in the Cc list of the RFC series.
> Please ensure that you don't drop emails which were present in earlier
> versions of the series (unless the email is no longer valid), and also
> ensure that you Cc all individuals who have commented on the series when
> you post a new version of the series.
> 

Sorry for the mistake. I will ensure that I Cc all the necessary individuals.

>> Signed-off-by: Ravi Gunasekaran <r-gunasekaran@ti.com>
>> Signed-off-by: Yojana Mallik <y-mallik@ti.com>
>> ---
>>  drivers/net/ethernet/ti/Kconfig               |  9 +++
>>  drivers/net/ethernet/ti/Makefile              |  1 +
>>  drivers/net/ethernet/ti/icve_rpmsg_common.h   | 47 +++++++++++
>>  drivers/net/ethernet/ti/inter_core_virt_eth.c | 81 +++++++++++++++++++
> 
> [...]
> 
> Regards,
> Siddharth.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver
  2024-06-02 16:21   ` Andrew Lunn
@ 2024-06-03  8:56     ` Yojana Mallik
  2024-06-03 12:54       ` Andrew Lunn
  0 siblings, 1 reply; 31+ messages in thread
From: Yojana Mallik @ 2024-06-03  8:56 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, s-vadapalli

Hi Andrew,

On 6/2/24 21:51, Andrew Lunn wrote:
>> +struct request_message {
>> +	u32 type; /* Request Type */
>> +	u32 id;	  /* Request ID */
>> +} __packed;
>> +
>> +struct response_message {
>> +	u32 type;	/* Response Type */
>> +	u32 id;		/* Response ID */
>> +} __packed;
>> +
>> +struct notify_message {
>> +	u32 type;	/* Notify Type */
>> +	u32 id;		/* Notify ID */
>> +} __packed;
> 
> These are basically identical.
> 

The first patch introduces only the RPMsg-based driver; the RPMsg
driver is registered as a network device in the second patch. In the
second patch, struct icve_mac_addr mac_addr is added as a member to
struct request_message, and similarly struct icve_shm shm_info is added
as a member to struct response_message. From the second patch onward,
struct request_message and struct response_message are no longer
identical. These members are used by the network device driver. As this
patch introduces only the RPMsg-based ethernet driver, they are not
used here and hence not mentioned in this patch. I understand this has
led to the confusion of the structures looking similar in this patch.
Kindly suggest whether I should add these members in this patch itself
instead of introducing them in the next patch.
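
For reference, this is roughly how the structures diverge once the
members from patch 2/3 are added (sketch, using the member names
described above):

struct request_message {
	u32 type;			/* Request Type */
	u32 id;				/* Request ID */
	struct icve_mac_addr mac_addr;	/* added in patch 2/3 */
} __packed;

struct response_message {
	u32 type;			/* Response Type */
	u32 id;				/* Response ID */
	struct icve_shm shm_info;	/* added in patch 2/3 */
} __packed;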

> The packed should not be needed, since these structures are naturally
> aligned. The compiler will do the right thing without the
> __packet. And there is a general dislike for __packed. It is better to
> layout your structures correctly so they are not needed.
> 

>> +struct message_header {
>> +	u32 src_id;
>> +	u32 msg_type; /* Do not use enum type, as enum size is compiler dependent */
>> +} __packed;
>> +
>> +struct message {
>> +	struct message_header msg_hdr;
>> +	union {
>> +		struct request_message req_msg;
>> +		struct response_message resp_msg;
>> +		struct notify_message notify_msg;
>> +	};
> 
> Since they are identical, why bother with a union?  It could be argued
> it allows future extensions, but i don't see any sort of protocol
> version here so you can tell if extra fields have been added.
> 

struct icve_mac_addr mac_addr is added as a member to
struct request_message in the second patch, and similarly struct
icve_shm shm_info is added as a member to struct response_message. So
the sizes of the structures differ, and hence a union is used. I had
moved those newly added members to the second patch because they are
not used in the first patch. I understand this has led to the confusion
of the structures looking identical in this patch. If you suggest it, I
will move the newly added members of the structures from the second
patch to the first patch, and then the structures will not look
identical in this patch.

>> +static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
>> +			 void *priv, u32 src)
>> +{
>> +	struct icve_common *common = dev_get_drvdata(&rpdev->dev);
>> +	struct message *msg = (struct message *)data;
>> +	u32 msg_type = msg->msg_hdr.msg_type;
>> +	u32 rpmsg_type;
>> +
>> +	switch (msg_type) {
>> +	case ICVE_REQUEST_MSG:
>> +		rpmsg_type = msg->req_msg.type;
>> +		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
>> +			msg_type, rpmsg_type);
>> +		break;
>> +	case ICVE_RESPONSE_MSG:
>> +		rpmsg_type = msg->resp_msg.type;
>> +		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
>> +			msg_type, rpmsg_type);
>> +		break;
>> +	case ICVE_NOTIFY_MSG:
>> +		rpmsg_type = msg->notify_msg.type;
>> +		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
>> +			msg_type, rpmsg_type);
> 
> This can be flattened to:
> 
>> +	case ICVE_REQUEST_MSG:
>> +	case ICVE_RESPONSE_MSG:
>> +	case ICVE_NOTIFY_MSG:
>> +		rpmsg_type = msg->notify_msg.type;
>> +		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
>> +			msg_type, rpmsg_type);
> 

New switch case statements have been added in the second patch for rpmsg_type
under the case ICVE_RESPONSE_MSG. This makes case ICVE_REQUEST_MSG, case
ICVE_RESPONSE_MSG and case ICVE_NOTIFY_MSG different in the second patch. I
have kept icve_rpmsg_cb simple in this patch as it is called by the .callback.
Do you suggest flattening these switch cases only for this patch?

> which makes me wounder about the value of this. Yes, later patches are
> going to flesh this out, but what value is there in printing the
> numerical value of msg_type, when you could easily have the text
> "Request", "Response", and "Notify". And why not include src_id and id
> in this debug output? If you are going to add debug output, please
> make it complete, otherwise it is often not useful.
> 

I will modify the debug output to print text such as "Request", "Response",
and "Notify" instead of the numerical value of msg_type. I will include src_id
and id and try to make this debug output complete and meaningful.

>> +		break;
>> +	default:
>> +		dev_err(common->dev, "Invalid msg type\n");
>> +		break;

> That is a potential way for the other end to DoS you. It also makes
> changes to the protocol difficult, since you cannot add new messages
> without DoS a machine using the old protocol. It would be better to
> just increment a counter and keep going.
> 

I will modify the default case to return -EINVAL.

>> +static void icve_rpmsg_remove(struct rpmsg_device *rpdev)
>> +{
>> +	dev_info(&rpdev->dev, "icve rpmsg client driver is removed\n");
> 
> Please don't spam the logs. dev_dbg(), or nothing at all.
> 
> 	Andrew

I will remove the dev_info from icve_rpmsg_remove.

Thanks for your feedback.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-01  3:13   ` kernel test robot
@ 2024-06-03  9:26     ` Yojana Mallik
  0 siblings, 0 replies; 31+ messages in thread
From: Yojana Mallik @ 2024-06-03  9:26 UTC (permalink / raw)
  To: kernel test robot, schnelle, wsa+renesas, diogo.ivo, rdunlap,
	horms, vigneshr, rogerq, danishanwar, pabeni, kuba, edumazet,
	davem
  Cc: llvm, oe-kbuild-all, netdev, linux-kernel, srk, rogerq,
	s-vadapalli, y-mallik



On 6/1/24 08:43, kernel test robot wrote:
> Hi Yojana,
> 
> kernel test robot noticed the following build warnings:
> 
> [auto build test WARNING on net-next/main]
> 
> url:    https://github.com/intel-lab-lkp/linux/commits/Yojana-Mallik/net-ethernet-ti-RPMsg-based-shared-memory-ethernet-driver/20240531-144258
> base:   net-next/main
> patch link:    https://lore.kernel.org/r/20240531064006.1223417-3-y-mallik%40ti.com
> patch subject: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
> config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20240601/202406011038.AwLZhQpy-lkp@intel.com/config)
> compiler: clang version 18.1.5 (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240601/202406011038.AwLZhQpy-lkp@intel.com/reproduce)
> 
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202406011038.AwLZhQpy-lkp@intel.com/
> 
> All warnings (new ones prefixed by >>):
> 
>>> drivers/net/ethernet/ti/inter_core_virt_eth.c:76:6: warning: variable 'ret' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
>       76 |         if (wait) {
>          |             ^~~~
>    drivers/net/ethernet/ti/inter_core_virt_eth.c:87:9: note: uninitialized use occurs here
>       87 |         return ret;
>          |                ^~~
>    drivers/net/ethernet/ti/inter_core_virt_eth.c:76:2: note: remove the 'if' if its condition is always true
>       76 |         if (wait) {
>          |         ^~~~~~~~~
>    drivers/net/ethernet/ti/inter_core_virt_eth.c:65:9: note: initialize the variable 'ret' to silence this warning
>       65 |         int ret;
>          |                ^
>          |                 = 0
>    drivers/net/ethernet/ti/inter_core_virt_eth.c:330:24: error: use of undeclared identifier 'icve_del_mc_addr'
>      330 |         __dev_mc_unsync(ndev, icve_del_mc_addr);
>          |                               ^
>    drivers/net/ethernet/ti/inter_core_virt_eth.c:331:26: error: no member named 'mc_list' in 'struct icve_common'
>      331 |         __hw_addr_init(&common->mc_list);
>          |                         ~~~~~~  ^
>    drivers/net/ethernet/ti/inter_core_virt_eth.c:337:28: error: no member named 'rx_mode_work' in 'struct icve_common'
>      337 |         cancel_work_sync(&common->rx_mode_work);
>          |                           ~~~~~~  ^
>    1 warning and 3 errors generated.
> 
> 
> vim +76 drivers/net/ethernet/ti/inter_core_virt_eth.c
> 
>     59	
>     60	static int icve_create_send_request(struct icve_common *common,
>     61					    enum icve_rpmsg_type rpmsg_type,
>     62					    bool wait)
>     63	{
>     64		unsigned long flags;
>     65		int ret;
>     66	
>     67		if (wait)
>     68			reinit_completion(&common->sync_msg);
>     69	
>     70		spin_lock_irqsave(&common->send_msg_lock, flags);
>     71		create_request(common, rpmsg_type);
>     72		rpmsg_send(common->rpdev->ept, (void *)(&common->send_msg),
>     73			   sizeof(common->send_msg));
>     74		spin_unlock_irqrestore(&common->send_msg_lock, flags);
>     75	
>   > 76		if (wait) {
>     77			ret = wait_for_completion_timeout(&common->sync_msg,
>     78							  ICVE_REQ_TIMEOUT);
>     79	
>     80			if (!ret) {
>     81				dev_err(common->dev, "Failed to receive response within %ld jiffies\n",
>     82					ICVE_REQ_TIMEOUT);
>     83				ret = -ETIMEDOUT;
>     84				return ret;
>     85			}
>     86		}
>     87		return ret;
>     88	}
>     89	
> 

I will fix all these issues in v3.
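
For the uninitialized 'ret' warning, the minimal fix is to initialize
the variable so the !wait path returns success (sketch):

	int ret = 0;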

Regards,
Yojana Mallik

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared memory ethernet driver
  2024-06-03  8:56     ` Yojana Mallik
@ 2024-06-03 12:54       ` Andrew Lunn
  0 siblings, 0 replies; 31+ messages in thread
From: Andrew Lunn @ 2024-06-03 12:54 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, s-vadapalli

On Mon, Jun 03, 2024 at 02:26:06PM +0530, Yojana Mallik wrote:
> Hi Andrew,
> 
> On 6/2/24 21:51, Andrew Lunn wrote:
> >> +struct request_message {
> >> +	u32 type; /* Request Type */
> >> +	u32 id;	  /* Request ID */
> >> +} __packed;
> >> +
> >> +struct response_message {
> >> +	u32 type;	/* Response Type */
> >> +	u32 id;		/* Response ID */
> >> +} __packed;
> >> +
> >> +struct notify_message {
> >> +	u32 type;	/* Notify Type */
> >> +	u32 id;		/* Notify ID */
> >> +} __packed;
> > 
> > These are basically identical.
> > 
> 
> The first patch introduces only the RPMsg-based driver.
> The RPMsg driver is registered as a network device in the second patch.
> struct icve_mac_addr mac_addr is added as a member to
> struct request_message in the second patch. Similarly struct icve_shm shm_info
> is added as a member to struct response_message in the second patch. From
> second patch onward struct request_message and struct response_message are not
> identical. These members are used for the network device driver. As this patch
> introduces only RPMsg-based ethernet driver these members were not used in this
> patch and hence not mentioned in this patch. I understand this has led to the
> confusion of the structures looking similar in this patch. Kindly suggest if I
> should add these members in this patch itself instead of introducing them in
> the next patch.

I think your first patch should add documentation of the whole
protocol. With a clear understanding of what the end goal is, it
becomes easier to understand the step-by-step implementation stages.

	Andrew

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH net-next v2 3/3] net: ethernet: ti: icve: Add support for multicast filtering
  2024-05-31  6:40 ` [PATCH net-next v2 3/3] net: ethernet: ti: icve: Add support for multicast filtering Yojana Mallik
@ 2024-06-03 21:27   ` kernel test robot
  0 siblings, 0 replies; 31+ messages in thread
From: kernel test robot @ 2024-06-03 21:27 UTC (permalink / raw)
  To: Yojana Mallik, schnelle, wsa+renesas, diogo.ivo, rdunlap, horms,
	vigneshr, rogerq, danishanwar, pabeni, kuba, edumazet, davem
  Cc: oe-kbuild-all, netdev, linux-kernel, srk, rogerq

Hi Yojana,

kernel test robot noticed the following build warnings:

[auto build test WARNING on net-next/main]

url:    https://github.com/intel-lab-lkp/linux/commits/Yojana-Mallik/net-ethernet-ti-RPMsg-based-shared-memory-ethernet-driver/20240531-144258
base:   net-next/main
patch link:    https://lore.kernel.org/r/20240531064006.1223417-4-y-mallik%40ti.com
patch subject: [PATCH net-next v2 3/3] net: ethernet: ti: icve: Add support for multicast filtering
config: powerpc64-randconfig-r112-20240604 (https://download.01.org/0day-ci/archive/20240604/202406040524.rKAgLczS-lkp@intel.com/config)
compiler: clang version 19.0.0git (https://github.com/llvm/llvm-project d7d2d4f53fc79b4b58e8d8d08151b577c3699d4a)
reproduce: (https://download.01.org/0day-ci/archive/20240604/202406040524.rKAgLczS-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202406040524.rKAgLczS-lkp@intel.com/

sparse warnings: (new ones prefixed by >>)
>> drivers/net/ethernet/ti/inter_core_virt_eth.c:291:32: sparse: sparse: cast removes address space '__iomem' of expression
>> drivers/net/ethernet/ti/inter_core_virt_eth.c:291:32: sparse: sparse: incorrect type in argument 2 (different address spaces) @@     expected void const volatile [noderef] __iomem *s @@     got void * @@
   drivers/net/ethernet/ti/inter_core_virt_eth.c:291:32: sparse:     expected void const volatile [noderef] __iomem *s
   drivers/net/ethernet/ti/inter_core_virt_eth.c:291:32: sparse:     got void *
   drivers/net/ethernet/ti/inter_core_virt_eth.c:306:32: sparse: sparse: cast removes address space '__iomem' of expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:306:32: sparse: sparse: incorrect type in argument 2 (different address spaces) @@     expected void const volatile [noderef] __iomem *s @@     got void * @@
   drivers/net/ethernet/ti/inter_core_virt_eth.c:306:32: sparse:     expected void const volatile [noderef] __iomem *s
   drivers/net/ethernet/ti/inter_core_virt_eth.c:306:32: sparse:     got void *
   drivers/net/ethernet/ti/inter_core_virt_eth.c:392:22: sparse: sparse: cast removes address space '__iomem' of expression
>> drivers/net/ethernet/ti/inter_core_virt_eth.c:393:49: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected void volatile [noderef] __iomem *d @@     got void * @@
   drivers/net/ethernet/ti/inter_core_virt_eth.c:393:49: sparse:     expected void volatile [noderef] __iomem *d
   drivers/net/ethernet/ti/inter_core_virt_eth.c:393:49: sparse:     got void *
   drivers/net/ethernet/ti/inter_core_virt_eth.c:397:22: sparse: sparse: cast removes address space '__iomem' of expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:397:22: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected void volatile [noderef] __iomem *d @@     got void * @@
   drivers/net/ethernet/ti/inter_core_virt_eth.c:397:22: sparse:     expected void volatile [noderef] __iomem *d
   drivers/net/ethernet/ti/inter_core_virt_eth.c:397:22: sparse:     got void *
>> drivers/net/ethernet/ti/inter_core_virt_eth.c:496:30: sparse: sparse: incorrect type in assignment (different address spaces) @@     expected struct icve_shm_buf [noderef] __iomem *buf @@     got void * @@
   drivers/net/ethernet/ti/inter_core_virt_eth.c:496:30: sparse:     expected struct icve_shm_buf [noderef] __iomem *buf
   drivers/net/ethernet/ti/inter_core_virt_eth.c:496:30: sparse:     got void *
   drivers/net/ethernet/ti/inter_core_virt_eth.c:510:30: sparse: sparse: incorrect type in assignment (different address spaces) @@     expected struct icve_shm_buf [noderef] __iomem *buf @@     got void * @@
   drivers/net/ethernet/ti/inter_core_virt_eth.c:510:30: sparse:     expected struct icve_shm_buf [noderef] __iomem *buf
   drivers/net/ethernet/ti/inter_core_virt_eth.c:510:30: sparse:     got void *
   drivers/net/ethernet/ti/inter_core_virt_eth.c: note: in included file (through include/linux/mmzone.h, include/linux/gfp.h, include/linux/xarray.h, ...):
   include/linux/page-flags.h:240:46: sparse: sparse: self-comparison always evaluates to false
   include/linux/page-flags.h:240:46: sparse: sparse: self-comparison always evaluates to false
>> drivers/net/ethernet/ti/inter_core_virt_eth.c:153:31: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:154:31: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:193:40: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:212:40: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:280:31: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:281:31: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:291:55: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:291:55: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:306:55: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:306:55: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:325:32: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:326:41: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:379:31: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:380:31: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:392:44: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:394:45: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:392:44: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:394:45: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:397:45: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:399:46: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:397:45: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:399:46: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:401:24: sparse: sparse: dereference of noderef expression
   drivers/net/ethernet/ti/inter_core_virt_eth.c:402:33: sparse: sparse: dereference of noderef expression

vim +/__iomem +291 drivers/net/ethernet/ti/inter_core_virt_eth.c

5655a9b008b088 Yojana Mallik    2024-05-31  145  
5655a9b008b088 Yojana Mallik    2024-05-31  146  static void icve_rx_timer(struct timer_list *timer)
5655a9b008b088 Yojana Mallik    2024-05-31  147  {
5655a9b008b088 Yojana Mallik    2024-05-31  148  	struct icve_port *port = from_timer(port, timer, rx_timer);
5655a9b008b088 Yojana Mallik    2024-05-31  149  	struct napi_struct *napi;
5655a9b008b088 Yojana Mallik    2024-05-31  150  	int num_pkts = 0;
5655a9b008b088 Yojana Mallik    2024-05-31  151  	u32 head, tail;
5655a9b008b088 Yojana Mallik    2024-05-31  152  
5655a9b008b088 Yojana Mallik    2024-05-31 @153  	head = port->rx_buffer->head->index;
5655a9b008b088 Yojana Mallik    2024-05-31  154  	tail = port->rx_buffer->tail->index;
5655a9b008b088 Yojana Mallik    2024-05-31  155  
5655a9b008b088 Yojana Mallik    2024-05-31  156  	num_pkts = tail - head;
5655a9b008b088 Yojana Mallik    2024-05-31  157  	num_pkts = num_pkts >= 0 ? num_pkts :
5655a9b008b088 Yojana Mallik    2024-05-31  158  				   (num_pkts + port->icve_rx_max_buffers);
5655a9b008b088 Yojana Mallik    2024-05-31  159  
5655a9b008b088 Yojana Mallik    2024-05-31  160  	napi = &port->rx_napi;
5655a9b008b088 Yojana Mallik    2024-05-31  161  	if (num_pkts && likely(napi_schedule_prep(napi)))
5655a9b008b088 Yojana Mallik    2024-05-31  162  		__napi_schedule(napi);
5655a9b008b088 Yojana Mallik    2024-05-31  163  	else
5655a9b008b088 Yojana Mallik    2024-05-31  164  		mod_timer(&port->rx_timer, RX_POLL_JIFFIES);
5655a9b008b088 Yojana Mallik    2024-05-31  165  }
5655a9b008b088 Yojana Mallik    2024-05-31  166  
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  167  static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  168  			 void *priv, u32 src)
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  169  {
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  170  	struct icve_common *common = dev_get_drvdata(&rpdev->dev);
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  171  	struct message *msg = (struct message *)data;
5655a9b008b088 Yojana Mallik    2024-05-31  172  	struct icve_port *port = common->port;
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  173  	u32 msg_type = msg->msg_hdr.msg_type;
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  174  	u32 rpmsg_type;
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  175  
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  176  	switch (msg_type) {
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  177  	case ICVE_REQUEST_MSG:
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  178  		rpmsg_type = msg->req_msg.type;
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  179  		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  180  			msg_type, rpmsg_type);
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  181  		break;
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  182  	case ICVE_RESPONSE_MSG:
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  183  		rpmsg_type = msg->resp_msg.type;
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  184  		dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  185  			msg_type, rpmsg_type);
5655a9b008b088 Yojana Mallik    2024-05-31  186  		switch (rpmsg_type) {
5655a9b008b088 Yojana Mallik    2024-05-31  187  		case ICVE_RESP_SHM_INFO:
5655a9b008b088 Yojana Mallik    2024-05-31  188  			/* Retrieve Tx and Rx shared memory info from msg */
5655a9b008b088 Yojana Mallik    2024-05-31  189  			port->tx_buffer->head =
5655a9b008b088 Yojana Mallik    2024-05-31  190  				ioremap(msg->resp_msg.shm_info.shm_info_tx.base_addr,
5655a9b008b088 Yojana Mallik    2024-05-31  191  					sizeof(*port->tx_buffer->head));
5655a9b008b088 Yojana Mallik    2024-05-31  192  
5655a9b008b088 Yojana Mallik    2024-05-31  193  			port->tx_buffer->buf->base_addr =
5655a9b008b088 Yojana Mallik    2024-05-31  194  				ioremap((msg->resp_msg.shm_info.shm_info_tx.base_addr +
5655a9b008b088 Yojana Mallik    2024-05-31  195  					sizeof(*port->tx_buffer->head)),
5655a9b008b088 Yojana Mallik    2024-05-31  196  					(msg->resp_msg.shm_info.shm_info_tx.num_pkt_bufs *
5655a9b008b088 Yojana Mallik    2024-05-31  197  					 msg->resp_msg.shm_info.shm_info_tx.buff_slot_size));
5655a9b008b088 Yojana Mallik    2024-05-31  198  
5655a9b008b088 Yojana Mallik    2024-05-31  199  			port->tx_buffer->tail =
5655a9b008b088 Yojana Mallik    2024-05-31  200  				ioremap(msg->resp_msg.shm_info.shm_info_tx.base_addr +
5655a9b008b088 Yojana Mallik    2024-05-31  201  					sizeof(*port->tx_buffer->head) +
5655a9b008b088 Yojana Mallik    2024-05-31  202  					(msg->resp_msg.shm_info.shm_info_tx.num_pkt_bufs *
5655a9b008b088 Yojana Mallik    2024-05-31  203  					msg->resp_msg.shm_info.shm_info_tx.buff_slot_size),
5655a9b008b088 Yojana Mallik    2024-05-31  204  					sizeof(*port->tx_buffer->tail));
5655a9b008b088 Yojana Mallik    2024-05-31  205  
5655a9b008b088 Yojana Mallik    2024-05-31  206  			port->icve_tx_max_buffers = msg->resp_msg.shm_info.shm_info_tx.num_pkt_bufs;
5655a9b008b088 Yojana Mallik    2024-05-31  207  
5655a9b008b088 Yojana Mallik    2024-05-31  208  			port->rx_buffer->head =
5655a9b008b088 Yojana Mallik    2024-05-31  209  				ioremap(msg->resp_msg.shm_info.shm_info_rx.base_addr,
5655a9b008b088 Yojana Mallik    2024-05-31  210  					sizeof(*port->rx_buffer->head));
5655a9b008b088 Yojana Mallik    2024-05-31  211  
5655a9b008b088 Yojana Mallik    2024-05-31  212  			port->rx_buffer->buf->base_addr =
5655a9b008b088 Yojana Mallik    2024-05-31  213  				ioremap(msg->resp_msg.shm_info.shm_info_rx.base_addr +
5655a9b008b088 Yojana Mallik    2024-05-31  214  					sizeof(*port->rx_buffer->head),
5655a9b008b088 Yojana Mallik    2024-05-31  215  					(msg->resp_msg.shm_info.shm_info_rx.num_pkt_bufs *
5655a9b008b088 Yojana Mallik    2024-05-31  216  					 msg->resp_msg.shm_info.shm_info_rx.buff_slot_size));
5655a9b008b088 Yojana Mallik    2024-05-31  217  
5655a9b008b088 Yojana Mallik    2024-05-31  218  			port->rx_buffer->tail =
5655a9b008b088 Yojana Mallik    2024-05-31  219  				ioremap(msg->resp_msg.shm_info.shm_info_rx.base_addr +
5655a9b008b088 Yojana Mallik    2024-05-31  220  					sizeof(*port->rx_buffer->head) +
5655a9b008b088 Yojana Mallik    2024-05-31  221  					(msg->resp_msg.shm_info.shm_info_rx.num_pkt_bufs *
5655a9b008b088 Yojana Mallik    2024-05-31  222  					msg->resp_msg.shm_info.shm_info_rx.buff_slot_size),
5655a9b008b088 Yojana Mallik    2024-05-31  223  					sizeof(*port->rx_buffer->tail));
5655a9b008b088 Yojana Mallik    2024-05-31  224  
5655a9b008b088 Yojana Mallik    2024-05-31  225  			port->icve_rx_max_buffers =
5655a9b008b088 Yojana Mallik    2024-05-31  226  				msg->resp_msg.shm_info.shm_info_rx.num_pkt_bufs;
5655a9b008b088 Yojana Mallik    2024-05-31  227  
5655a9b008b088 Yojana Mallik    2024-05-31  228  			mutex_lock(&common->state_lock);
5655a9b008b088 Yojana Mallik    2024-05-31  229  			common->state = ICVE_STATE_READY;
5655a9b008b088 Yojana Mallik    2024-05-31  230  			mutex_unlock(&common->state_lock);
5655a9b008b088 Yojana Mallik    2024-05-31  231  
5655a9b008b088 Yojana Mallik    2024-05-31  232  			mod_delayed_work(system_wq,
5655a9b008b088 Yojana Mallik    2024-05-31  233  					 &common->state_work,
5655a9b008b088 Yojana Mallik    2024-05-31  234  					 STATE_MACHINE_TIME);
5655a9b008b088 Yojana Mallik    2024-05-31  235  
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  236  			break;
5655a9b008b088 Yojana Mallik    2024-05-31  237  		case ICVE_RESP_SET_MAC_ADDR:
5655a9b008b088 Yojana Mallik    2024-05-31  238  			break;
9ebbebae44242d Yojana Mallik    2024-05-31  239  		case ICVE_RESP_ADD_MC_ADDR:
9ebbebae44242d Yojana Mallik    2024-05-31  240  		case ICVE_RESP_DEL_MC_ADDR:
9ebbebae44242d Yojana Mallik    2024-05-31  241  			complete(&common->sync_msg);
9ebbebae44242d Yojana Mallik    2024-05-31  242  			break;
5655a9b008b088 Yojana Mallik    2024-05-31  243  		}
5655a9b008b088 Yojana Mallik    2024-05-31  244  
5655a9b008b088 Yojana Mallik    2024-05-31  245  		break;
5655a9b008b088 Yojana Mallik    2024-05-31  246  
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  247  	case ICVE_NOTIFY_MSG:
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  248  		rpmsg_type = msg->notify_msg.type;
5655a9b008b088 Yojana Mallik    2024-05-31  249  		switch (rpmsg_type) {
5655a9b008b088 Yojana Mallik    2024-05-31  250  		case ICVE_NOTIFY_REMOTE_READY:
5655a9b008b088 Yojana Mallik    2024-05-31  251  			mutex_lock(&common->state_lock);
5655a9b008b088 Yojana Mallik    2024-05-31  252  			common->state = ICVE_STATE_RUNNING;
5655a9b008b088 Yojana Mallik    2024-05-31  253  			mutex_unlock(&common->state_lock);
5655a9b008b088 Yojana Mallik    2024-05-31  254  
5655a9b008b088 Yojana Mallik    2024-05-31  255  			mod_delayed_work(system_wq,
5655a9b008b088 Yojana Mallik    2024-05-31  256  					 &common->state_work,
5655a9b008b088 Yojana Mallik    2024-05-31  257  					 STATE_MACHINE_TIME);
5655a9b008b088 Yojana Mallik    2024-05-31  258  			break;
5655a9b008b088 Yojana Mallik    2024-05-31  259  		case ICVE_NOTIFY_PORT_UP:
5655a9b008b088 Yojana Mallik    2024-05-31  260  		case ICVE_NOTIFY_PORT_DOWN:
5655a9b008b088 Yojana Mallik    2024-05-31  261  			break;
5655a9b008b088 Yojana Mallik    2024-05-31  262  		}
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  263  		break;
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  264  	default:
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  265  		dev_err(common->dev, "Invalid msg type\n");
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  266  		break;
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  267  	}
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  268  	return 0;
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  269  }
c7f4ad97418d24 Ravi Gunasekaran 2024-05-31  270  
5655a9b008b088 Yojana Mallik    2024-05-31  271  static int icve_rx_packets(struct napi_struct *napi, int budget)
5655a9b008b088 Yojana Mallik    2024-05-31  272  {
5655a9b008b088 Yojana Mallik    2024-05-31  273  	struct icve_port *port = container_of(napi, struct icve_port, rx_napi);
5655a9b008b088 Yojana Mallik    2024-05-31  274  	u32 count, process_pkts;
5655a9b008b088 Yojana Mallik    2024-05-31  275  	struct sk_buff *skb;
5655a9b008b088 Yojana Mallik    2024-05-31  276  	u32 head, tail;
5655a9b008b088 Yojana Mallik    2024-05-31  277  	int num_pkts;
5655a9b008b088 Yojana Mallik    2024-05-31  278  	u32 pkt_len;
5655a9b008b088 Yojana Mallik    2024-05-31  279  
5655a9b008b088 Yojana Mallik    2024-05-31  280  	head = port->rx_buffer->head->index;
5655a9b008b088 Yojana Mallik    2024-05-31  281  	tail = port->rx_buffer->tail->index;
5655a9b008b088 Yojana Mallik    2024-05-31  282  
5655a9b008b088 Yojana Mallik    2024-05-31  283  	num_pkts = head - tail;
5655a9b008b088 Yojana Mallik    2024-05-31  284  
5655a9b008b088 Yojana Mallik    2024-05-31  285  	num_pkts = num_pkts >= 0 ? num_pkts :
5655a9b008b088 Yojana Mallik    2024-05-31  286  				   (num_pkts + port->icve_rx_max_buffers);
5655a9b008b088 Yojana Mallik    2024-05-31  287  	process_pkts = min(num_pkts, budget);
5655a9b008b088 Yojana Mallik    2024-05-31  288  	count = 0;
5655a9b008b088 Yojana Mallik    2024-05-31  289  	while (count < process_pkts) {
5655a9b008b088 Yojana Mallik    2024-05-31  290  		memcpy_fromio((void *)&pkt_len,
5655a9b008b088 Yojana Mallik    2024-05-31 @291  			      (void *)(port->rx_buffer->buf->base_addr +
5655a9b008b088 Yojana Mallik    2024-05-31  292  			      MAGIC_NUM_SIZE_TYPE +
5655a9b008b088 Yojana Mallik    2024-05-31  293  			      (((tail + count) % port->icve_rx_max_buffers) *
5655a9b008b088 Yojana Mallik    2024-05-31  294  			      ICVE_BUFFER_SIZE)),
5655a9b008b088 Yojana Mallik    2024-05-31  295  			      PKT_LEN_SIZE_TYPE);
5655a9b008b088 Yojana Mallik    2024-05-31  296  		/* Start building the skb */
5655a9b008b088 Yojana Mallik    2024-05-31  297  		skb = napi_alloc_skb(napi, pkt_len);
5655a9b008b088 Yojana Mallik    2024-05-31  298  		if (!skb) {
5655a9b008b088 Yojana Mallik    2024-05-31  299  			port->ndev->stats.rx_dropped++;
5655a9b008b088 Yojana Mallik    2024-05-31  300  			goto rx_dropped;
5655a9b008b088 Yojana Mallik    2024-05-31  301  		}
5655a9b008b088 Yojana Mallik    2024-05-31  302  
5655a9b008b088 Yojana Mallik    2024-05-31  303  		skb->dev = port->ndev;
5655a9b008b088 Yojana Mallik    2024-05-31  304  		skb_put(skb, pkt_len);
5655a9b008b088 Yojana Mallik    2024-05-31  305  		memcpy_fromio((void *)skb->data,
5655a9b008b088 Yojana Mallik    2024-05-31  306  			      (void *)(port->rx_buffer->buf->base_addr +
5655a9b008b088 Yojana Mallik    2024-05-31  307  			      PKT_LEN_SIZE_TYPE + MAGIC_NUM_SIZE_TYPE +
5655a9b008b088 Yojana Mallik    2024-05-31  308  			      (((tail + count) % port->icve_rx_max_buffers) *
5655a9b008b088 Yojana Mallik    2024-05-31  309  			      ICVE_BUFFER_SIZE)),
5655a9b008b088 Yojana Mallik    2024-05-31  310  			      pkt_len);
5655a9b008b088 Yojana Mallik    2024-05-31  311  
5655a9b008b088 Yojana Mallik    2024-05-31  312  		skb->protocol = eth_type_trans(skb, port->ndev);
5655a9b008b088 Yojana Mallik    2024-05-31  313  
5655a9b008b088 Yojana Mallik    2024-05-31  314  		/* Push skb into network stack */
5655a9b008b088 Yojana Mallik    2024-05-31  315  		napi_gro_receive(napi, skb);
5655a9b008b088 Yojana Mallik    2024-05-31  316  
5655a9b008b088 Yojana Mallik    2024-05-31  317  		count++;
5655a9b008b088 Yojana Mallik    2024-05-31  318  		port->ndev->stats.rx_packets++;
5655a9b008b088 Yojana Mallik    2024-05-31  319  		port->ndev->stats.rx_bytes += skb->len;
5655a9b008b088 Yojana Mallik    2024-05-31  320  	}
5655a9b008b088 Yojana Mallik    2024-05-31  321  
5655a9b008b088 Yojana Mallik    2024-05-31  322  rx_dropped:
5655a9b008b088 Yojana Mallik    2024-05-31  323  
5655a9b008b088 Yojana Mallik    2024-05-31  324  	if (num_pkts) {
5655a9b008b088 Yojana Mallik    2024-05-31  325  		port->rx_buffer->tail->index =
5655a9b008b088 Yojana Mallik    2024-05-31  326  			(port->rx_buffer->tail->index + count) %
5655a9b008b088 Yojana Mallik    2024-05-31  327  			port->icve_rx_max_buffers;
5655a9b008b088 Yojana Mallik    2024-05-31  328  
5655a9b008b088 Yojana Mallik    2024-05-31  329  		if (num_pkts < budget && napi_complete_done(napi, count))
5655a9b008b088 Yojana Mallik    2024-05-31  330  			mod_timer(&port->rx_timer, RX_POLL_TIMEOUT);
5655a9b008b088 Yojana Mallik    2024-05-31  331  	}
5655a9b008b088 Yojana Mallik    2024-05-31  332  
5655a9b008b088 Yojana Mallik    2024-05-31  333  	return count;
5655a9b008b088 Yojana Mallik    2024-05-31  334  }
5655a9b008b088 Yojana Mallik    2024-05-31  335  
5655a9b008b088 Yojana Mallik    2024-05-31  336  static int icve_ndo_open(struct net_device *ndev)
5655a9b008b088 Yojana Mallik    2024-05-31  337  {
5655a9b008b088 Yojana Mallik    2024-05-31  338  	struct icve_common *common = icve_ndev_to_common(ndev);
5655a9b008b088 Yojana Mallik    2024-05-31  339  
5655a9b008b088 Yojana Mallik    2024-05-31  340  	mutex_lock(&common->state_lock);
5655a9b008b088 Yojana Mallik    2024-05-31  341  	common->state = ICVE_STATE_OPEN;
5655a9b008b088 Yojana Mallik    2024-05-31  342  	mutex_unlock(&common->state_lock);
5655a9b008b088 Yojana Mallik    2024-05-31  343  	mod_delayed_work(system_wq, &common->state_work, msecs_to_jiffies(100));
5655a9b008b088 Yojana Mallik    2024-05-31  344  
5655a9b008b088 Yojana Mallik    2024-05-31  345  	return 0;
5655a9b008b088 Yojana Mallik    2024-05-31  346  }
5655a9b008b088 Yojana Mallik    2024-05-31  347  
5655a9b008b088 Yojana Mallik    2024-05-31  348  static int icve_ndo_stop(struct net_device *ndev)
5655a9b008b088 Yojana Mallik    2024-05-31  349  {
5655a9b008b088 Yojana Mallik    2024-05-31  350  	struct icve_common *common = icve_ndev_to_common(ndev);
5655a9b008b088 Yojana Mallik    2024-05-31  351  	struct icve_port *port = icve_ndev_to_port(ndev);
5655a9b008b088 Yojana Mallik    2024-05-31  352  
5655a9b008b088 Yojana Mallik    2024-05-31  353  	mutex_lock(&common->state_lock);
5655a9b008b088 Yojana Mallik    2024-05-31  354  	common->state = ICVE_STATE_CLOSE;
5655a9b008b088 Yojana Mallik    2024-05-31  355  	mutex_unlock(&common->state_lock);
5655a9b008b088 Yojana Mallik    2024-05-31  356  
5655a9b008b088 Yojana Mallik    2024-05-31  357  	netif_carrier_off(port->ndev);
5655a9b008b088 Yojana Mallik    2024-05-31  358  
5655a9b008b088 Yojana Mallik    2024-05-31  359  	__dev_mc_unsync(ndev, icve_del_mc_addr);
5655a9b008b088 Yojana Mallik    2024-05-31  360  	__hw_addr_init(&common->mc_list);
5655a9b008b088 Yojana Mallik    2024-05-31  361  
5655a9b008b088 Yojana Mallik    2024-05-31  362  	cancel_delayed_work_sync(&common->state_work);
5655a9b008b088 Yojana Mallik    2024-05-31  363  	del_timer_sync(&port->rx_timer);
5655a9b008b088 Yojana Mallik    2024-05-31  364  	napi_disable(&port->rx_napi);
5655a9b008b088 Yojana Mallik    2024-05-31  365  
5655a9b008b088 Yojana Mallik    2024-05-31  366  	cancel_work_sync(&common->rx_mode_work);
5655a9b008b088 Yojana Mallik    2024-05-31  367  
5655a9b008b088 Yojana Mallik    2024-05-31  368  	return 0;
5655a9b008b088 Yojana Mallik    2024-05-31  369  }
5655a9b008b088 Yojana Mallik    2024-05-31  370  
5655a9b008b088 Yojana Mallik    2024-05-31  371  static netdev_tx_t icve_start_xmit(struct sk_buff *skb, struct net_device *ndev)
5655a9b008b088 Yojana Mallik    2024-05-31  372  {
5655a9b008b088 Yojana Mallik    2024-05-31  373  	struct icve_port *port = icve_ndev_to_port(ndev);
5655a9b008b088 Yojana Mallik    2024-05-31  374  	u32 head, tail;
5655a9b008b088 Yojana Mallik    2024-05-31  375  	int num_pkts;
5655a9b008b088 Yojana Mallik    2024-05-31  376  	u32 len;
5655a9b008b088 Yojana Mallik    2024-05-31  377  
5655a9b008b088 Yojana Mallik    2024-05-31  378  	len = skb_headlen(skb);
5655a9b008b088 Yojana Mallik    2024-05-31  379  	head = port->tx_buffer->head->index;
5655a9b008b088 Yojana Mallik    2024-05-31  380  	tail = port->tx_buffer->tail->index;
5655a9b008b088 Yojana Mallik    2024-05-31  381  
5655a9b008b088 Yojana Mallik    2024-05-31  382  	/* If the buffer queue is full, then drop packet */
5655a9b008b088 Yojana Mallik    2024-05-31  383  	num_pkts = head - tail;
5655a9b008b088 Yojana Mallik    2024-05-31  384  	num_pkts = num_pkts >= 0 ? num_pkts :
5655a9b008b088 Yojana Mallik    2024-05-31  385  				   (num_pkts + port->icve_tx_max_buffers);
5655a9b008b088 Yojana Mallik    2024-05-31  386  
5655a9b008b088 Yojana Mallik    2024-05-31  387  	if ((num_pkts + 1) == port->icve_tx_max_buffers) {
5655a9b008b088 Yojana Mallik    2024-05-31  388  		netdev_warn(ndev, "Tx buffer full %d\n", num_pkts);
5655a9b008b088 Yojana Mallik    2024-05-31  389  		goto ring_full;
5655a9b008b088 Yojana Mallik    2024-05-31  390  	}
5655a9b008b088 Yojana Mallik    2024-05-31  391  	/* Copy length */
5655a9b008b088 Yojana Mallik    2024-05-31  392  	memcpy_toio((void *)port->tx_buffer->buf->base_addr +
5655a9b008b088 Yojana Mallik    2024-05-31 @393  			    MAGIC_NUM_SIZE_TYPE +
5655a9b008b088 Yojana Mallik    2024-05-31  394  			    (port->tx_buffer->head->index * ICVE_BUFFER_SIZE),
5655a9b008b088 Yojana Mallik    2024-05-31  395  		    (void *)&len, PKT_LEN_SIZE_TYPE);
5655a9b008b088 Yojana Mallik    2024-05-31  396  	/* Copy data to shared mem */
5655a9b008b088 Yojana Mallik    2024-05-31  397  	memcpy_toio((void *)(port->tx_buffer->buf->base_addr +
5655a9b008b088 Yojana Mallik    2024-05-31  398  			     MAGIC_NUM_SIZE_TYPE + PKT_LEN_SIZE_TYPE +
5655a9b008b088 Yojana Mallik    2024-05-31  399  			     (port->tx_buffer->head->index * ICVE_BUFFER_SIZE)),
5655a9b008b088 Yojana Mallik    2024-05-31  400  		    (void *)skb->data, len);
5655a9b008b088 Yojana Mallik    2024-05-31  401  	port->tx_buffer->head->index =
5655a9b008b088 Yojana Mallik    2024-05-31  402  		(port->tx_buffer->head->index + 1) % port->icve_tx_max_buffers;
5655a9b008b088 Yojana Mallik    2024-05-31  403  
5655a9b008b088 Yojana Mallik    2024-05-31  404  	ndev->stats.tx_packets++;
5655a9b008b088 Yojana Mallik    2024-05-31  405  	ndev->stats.tx_bytes += skb->len;
5655a9b008b088 Yojana Mallik    2024-05-31  406  
5655a9b008b088 Yojana Mallik    2024-05-31  407  	dev_consume_skb_any(skb);
5655a9b008b088 Yojana Mallik    2024-05-31  408  	return NETDEV_TX_OK;
5655a9b008b088 Yojana Mallik    2024-05-31  409  
5655a9b008b088 Yojana Mallik    2024-05-31  410  ring_full:
5655a9b008b088 Yojana Mallik    2024-05-31  411  	return NETDEV_TX_BUSY;
5655a9b008b088 Yojana Mallik    2024-05-31  412  }
5655a9b008b088 Yojana Mallik    2024-05-31  413  
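
As an aside on the ring-full path quoted above: a driver would more
typically stop its TX queue when the ring fills and wake it once the
consumer advances, rather than relying on repeated NETDEV_TX_BUSY
retries. A minimal sketch of that pattern, not taken from the patch
(the wake-up would live in the driver's completion/poll path):

	if ((num_pkts + 1) == port->icve_tx_max_buffers) {
		netif_stop_queue(ndev);	/* pause the stack's TX path */
		return NETDEV_TX_BUSY;	/* skb is requeued and retried */
	}

	/* later, in the poll/completion path, once tail has advanced: */
	if (netif_queue_stopped(ndev))
		netif_wake_queue(ndev);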

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver
  2024-06-02 15:45 ` [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver Andrew Lunn
@ 2024-06-03 23:54   ` Jakub Kicinski
  2024-06-12 12:48   ` Yojana Mallik
  1 sibling, 0 replies; 31+ messages in thread
From: Jakub Kicinski @ 2024-06-03 23:54 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: Andrew Lunn, schnelle, wsa+renesas, diogo.ivo, rdunlap, horms,
	vigneshr, rogerq, danishanwar, pabeni, edumazet, davem, netdev,
	linux-kernel, srk, rogerq

On Sun, 2 Jun 2024 17:45:29 +0200 Andrew Lunn wrote:
> On Fri, May 31, 2024 at 12:10:03PM +0530, Yojana Mallik wrote:
> > virtio-net provides a solution for virtual ethernet interface in a
> > virtualized environment.
> > 
> > There might be a use-case for traffic tunneling between heterogeneous
> > processors in a non virtualized environment such as TI's AM64x that has
> > Cortex A53 and Cortex R5 where Linux runs on A53 and a flavour of RTOS
> > on R5(FreeRTOS) and the ethernet controller is managed by R5 and needs
> > to pass some low priority data to A53.
> > 
> > One solution for such a use case where the ethernet controller does
> > not support DMA for Tx/Rx channel, could be a RPMsg based shared memory
> > ethernet driver.  
> 
> virtio-net is very generic and vendor agnostic.
> 
> Looking at icve, what is TI specific? Why not define a generic
> solution which could be used for any heterogeneous system? We are
> seeing more and more such systems, and there is no point in everybody
> re-inventing the wheel. So what i would like to see is something
> similar to drivers/tty/rpmsg_tty.c, a drivers/net/ethernet/rpmsg_eth.c,
> with good documentation of the protocol used, so that others can
> implement it. And since you say you have FreeRTOS on the other end,
> you could also contribute that side to FreeRTOS as well. A complete
> open source solution everybody can use.

100% agreed! FWIW there's also a PCIe NTB driver which provides very
similar functionality.


* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-02 16:45   ` Andrew Lunn
@ 2024-06-04  6:23     ` Yojana Mallik
  2024-06-04 12:54       ` Andrew Lunn
  2024-06-04 13:00       ` Andrew Lunn
  0 siblings, 2 replies; 31+ messages in thread
From: Yojana Mallik @ 2024-06-04  6:23 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, Siddharth Vadapalli, y-mallik

Hi Andrew,

On 6/2/24 22:15, Andrew Lunn wrote:
>> +enum icve_rpmsg_type {
>> +	/* Request types */
>> +	ICVE_REQ_SHM_INFO = 0,
>> +	ICVE_REQ_SET_MAC_ADDR,
>> +
>> +	/* Response types */
>> +	ICVE_RESP_SHM_INFO,
>> +	ICVE_RESP_SET_MAC_ADDR,
>> +
>> +	/* Notification types */
>> +	ICVE_NOTIFY_PORT_UP,
>> +	ICVE_NOTIFY_PORT_DOWN,
>> +	ICVE_NOTIFY_PORT_READY,
>> +	ICVE_NOTIFY_REMOTE_READY,
>> +};
> 
> +struct message_header {
> +       u32 src_id;
> +       u32 msg_type; /* Do not use enum type, as enum size is compiler dependent */
> +} __packed;
> 
> 
> Given how you have defined icve_rpmsg_type, what is the point of
> message_header.msg_type?
> 


+	rpmsg_send(common->rpdev->ept, (void *)(&common->send_msg),
+		   sizeof(common->send_msg));

rpmsg_send() in icve_create_send_request() sends the whole message, including
message_header.msg_type, to the R5 core via (void *)(&common->send_msg).
In the icve_rpmsg_cb() function, a switch statement on msg_type is used, and
its cases come from enum icve_rpmsg_type. A sketch of the send side follows
below.
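
For concreteness, a minimal sketch of how such a request is built and
sent, using the names quoted above (the exact layout of common->send_msg
and the src_id field are assumptions):

	common->send_msg.msg_hdr.src_id = port->port_id;	/* assumed */
	common->send_msg.msg_hdr.msg_type = ICVE_REQ_SHM_INFO;
	rpmsg_send(common->rpdev->ept, (void *)&common->send_msg,
		   sizeof(common->send_msg));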

> It seems like this would make more sense:
> 
> enum icve_rpmsg_request_type {
> 	ICVE_REQ_SHM_INFO = 0,
> 	ICVE_REQ_SET_MAC_ADDR,
> }
> 
> enum icve_rpmsg_response_type {
> 	ICVE_RESP_SHM_INFO,
> 	ICVE_RESP_SET_MAC_ADDR,
> }
> enum icve_rpmsg_notify_type {
> 	ICVE_NOTIFY_PORT_UP,
> 	ICVE_NOTIFY_PORT_DOWN,
> 	ICVE_NOTIFY_PORT_READY,
> 	ICVE_NOTIFY_REMOTE_READY,
> };
> 

Since msg_hdr.msg_type, which takes its value from enum icve_msg_type, is used
in rpmsg_send() and icve_rpmsg_cb(), I would prefer to continue with the two
separate enums, that is enum icve_msg_type and enum icve_rpmsg_type. However, I
am always open to making improvements and would be happy to discuss this
further if you have additional insights.

> Also, why SET_MAC_ADDR? It would be good to document where the MAC
> address are coming from. And what address this is setting.
> 

The interface which is controlled by Linux and interacts with the R5 core is
assigned the MAC address 00:00:00:00:00:00 by default. To ensure reliable
communication and compliance with network standards, a different MAC address is
set for this interface using icve_set_mac_address().

> In fact, please put all the protocol documentation into a .rst
> file. That will help us discuss the protocol independent of the
> implementation. The protocol is an ABI, so needs to be reviewed well.
> 

I will do it.

>> +struct icve_shm_info {
>> +	/* Total shared memory size */
>> +	u32 total_shm_size;
>> +	/* Total number of buffers */
>> +	u32 num_pkt_bufs;
>> +	/* Per buff slot size i.e MTU Size + 4 bytes for magic number + 4 bytes
>> +	 * for Pkt len
>> +	 */
> 
> What is your definition of MTU?
> 
> enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
> 
> Typically, MTU does not include the Ethernet header or checksum. Is
> that what you mean here?
> 

MTU is only the payload. I have realized the macro names are misleading and I
will change them to:
#define ICVE_PACKET_BUFFER_SIZE   1540
#define MAX_MTU   (ICVE_PACKET_BUFFER_SIZE - (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN))
Hence the value of MAX_MTU is 1518 bytes.
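
A small sketch of how those bounds would then typically be applied to
the netdev (the exact placement in the probe path is an assumption):

	ndev->min_mtu = ETH_MIN_MTU;
	ndev->max_mtu = MAX_MTU;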

>> +	u32 buff_slot_size;
>> +	/* Base Address for Tx or Rx shared memory */
>> +	u32 base_addr;
>> +} __packed;
> 
> What do you mean by address here? Virtual address, physical address,
> DMA address? And whose address is this? You have two CPUs here, with no
> guarantee the shared memory is mapped to the same address in both
> address spaces.
> 
> 	Andrew

The address referred to above is a physical address. It is the address of the
Tx and Rx buffers under the control of Linux running on the A53 core. Whether
the shared memory is mapped to the same address in both address spaces is
checked by the R5 core.

Thanks for the feedback.
Regards
Yojana Mallik


* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-04  6:23     ` Yojana Mallik
@ 2024-06-04 12:54       ` Andrew Lunn
  2024-06-12 12:52         ` Yojana Mallik
  2024-06-04 13:00       ` Andrew Lunn
  1 sibling, 1 reply; 31+ messages in thread
From: Andrew Lunn @ 2024-06-04 12:54 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, Siddharth Vadapalli

> >> +	u32 buff_slot_size;
> >> +	/* Base Address for Tx or Rx shared memory */
> >> +	u32 base_addr;
> >> +} __packed;
> > 
> > What do you mean by address here? Virtual address, physical address,
> > DMA address? And whose address is this? You have two CPUs here, with no
> > guarantee the shared memory is mapped to the same address in both
> > address spaces.
> > 
> > 	Andrew
> 
> The address referred to above is a physical address. It is the address of the
> Tx and Rx buffers under the control of Linux running on the A53 core. Whether
> the shared memory is mapped to the same address in both address spaces is
> checked by the R5 core.

u32 is too small for a physical address. I'm sure there are systems
with more than 4G of address space. Also, i would not assume both CPUs
map the memory to the same physical address.

	Andrew


* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-04  6:23     ` Yojana Mallik
  2024-06-04 12:54       ` Andrew Lunn
@ 2024-06-04 13:00       ` Andrew Lunn
  2024-06-12 12:49         ` Yojana Mallik
  1 sibling, 1 reply; 31+ messages in thread
From: Andrew Lunn @ 2024-06-04 13:00 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, Siddharth Vadapalli

> > Also, why SET_MAC_ADDR? It would be good to document where the MAC
> > address are coming from. And what address this is setting.
> > 
> 
> The interface which is controlled by Linux and interacts with the R5 core is
> assigned the MAC address 00:00:00:00:00:00 by default. To ensure reliable
> communication and compliance with network standards, a different MAC address is
> set for this interface using icve_set_mac_address().

So this is the peer telling the Linux machine what MAC address to use?
As i said, it is not clear what direction this message is flowing. Or
even if it can be used the other way around. Can Linux tell the peer
what address it should use?

Also, what is the big picture here. Is this purely a point to point
link? There is no intention that one or both ends could bridge packets
to another network? Does this link always carry IP? If so, why do
you need the Ethernet header? Why not just do the same as SLIP, PPP,
other point to point network protocols.

	Andrew


* Re: [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver
  2024-06-02 15:45 ` [PATCH net-next v2 0/3] Introducing Intercore Virtual Ethernet (ICVE) driver Andrew Lunn
  2024-06-03 23:54   ` Jakub Kicinski
@ 2024-06-12 12:48   ` Yojana Mallik
  1 sibling, 0 replies; 31+ messages in thread
From: Yojana Mallik @ 2024-06-12 12:48 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, y-mallik, Siddharth Vadapalli



On 6/2/24 21:15, Andrew Lunn wrote:
> On Fri, May 31, 2024 at 12:10:03PM +0530, Yojana Mallik wrote:
>> virtio-net provides a solution for virtual ethernet interface in a
>> virtualized environment.
>>
>> There might be a use-case for traffic tunneling between heterogeneous
>> processors in a non virtualized environment such as TI's AM64x that has
>> Cortex A53 and Cortex R5 where Linux runs on A53 and a flavour of RTOS
>> on R5(FreeRTOS) and the ethernet controller is managed by R5 and needs
>> to pass some low priority data to A53.
>>
>> One solution for such a use case where the ethernet controller does
>> not support DMA for Tx/Rx channel, could be a RPMsg based shared memory
>> ethernet driver.
> 
> virtio-net is very generic and vendor agnostic.
> 
> Looking at icve, what is TI specific? Why not define a generic
> solution which could be used for any heterogeneous system? We are
> seeing more and more such systems, and there is no point in everybody
> re-inventing the wheel. So what i would like to see is something
> similar to drivers/tty/rpmsg_tty.c, a drivers/net/ethernet/rpmsg_eth.c,
> with good documentation of the protocol used, so that others can
> implement it. And since you say you have FreeRTOS on the other end,
> you could also contribute that side to FreeRTOS as well. A complete
> open source solution everybody can use.
> 
> 	Andrew

+static struct rpmsg_device_id icve_rpmsg_id_table[] = {
+	{ .name = "ti.icve" },
+	{},
+};
+MODULE_DEVICE_TABLE(rpmsg, icve_rpmsg_id_table);
+
+static struct rpmsg_driver icve_rpmsg_client = {
+	.drv.name = KBUILD_MODNAME,
+	.id_table = icve_rpmsg_id_table,
+	.probe = icve_rpmsg_probe,
+	.callback = icve_rpmsg_cb,
+	.remove = icve_rpmsg_remove,
+};
+module_rpmsg_driver(icve_rpmsg_client);
+
When the Linux kernel detects an rpmsg device (the communication channel), it
looks for a matching driver that can handle this device with the help of the
name string in icve_rpmsg_id_table in the driver structure. I will change the
name string to make the driver generic. Apart from the name string, the other
entities don't look TI specific.
Thank you for the suggestion to make the inter-core virtual ethernet driver
generic, like drivers/tty/rpmsg_tty.c, for a complete open source solution. I
will be working on it; a sketch of such a generic match follows below.
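
For instance, a generic match could look like the following sketch (the
name string is hypothetical; no firmware advertises it today):

static struct rpmsg_device_id rpmsg_eth_id_table[] = {
	{ .name = "rpmsg-eth" },
	{},
};
MODULE_DEVICE_TABLE(rpmsg, rpmsg_eth_id_table);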

Regards,
Yojana Mallik


* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-04 13:00       ` Andrew Lunn
@ 2024-06-12 12:49         ` Yojana Mallik
  2024-06-12 14:36           ` Andrew Lunn
  0 siblings, 1 reply; 31+ messages in thread
From: Yojana Mallik @ 2024-06-12 12:49 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, Siddharth Vadapalli, y-mallik



On 6/4/24 18:30, Andrew Lunn wrote:
>>> Also, why SET_MAC_ADDR? It would be good to document where the MAC
>>> address are coming from. And what address this is setting.
>>>
>>
>> The interface which is controlled by Linux and interacts with the R5 core is
>> assigned the MAC address 00:00:00:00:00:00 by default. To ensure reliable
>> communication and compliance with network standards, a different MAC address is
>> set for this interface using icve_set_mac_address().
> 
> So this is the peer telling the Linux machine what MAC address to use?
> As i said, it is not clear what direction this message is flowing. Or
> even if it can be used the other way around. Can Linux tell the peer
> what address it should use?
> 

No, the RTOS peer is not telling the Linux machine what MAC address to use. The
user can add a unicast MAC address to the virtual network interface using the
following steps:

Steps to add MAC Address
# Bring down the virtual port interface
$ ifconfig eth1 down
# Set MAC address for the virtual port interface, ex 01:02:03:04:05:06
$ ifconfig eth1 hw ether 01:02:03:04:05:06
# Bring the interface up
$ ifconfig eth1 up

These commands will call the following net_device_ops:
.ndo_stop = icve_ndo_stop
.ndo_set_mac_address = icve_set_mac_address
.ndo_open = icve_ndo_open

While adding the MAC address, the ndo ops will call icve_set_mac_address(),
which calls icve_create_send_request(), which creates a request with the add
MAC address command and sends an rpmsg to the R5 core. The resulting ops
wiring is sketched below.
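
A minimal sketch of that wiring, using only the callbacks named above
(field order is illustrative, not taken from the patch):

static const struct net_device_ops icve_netdev_ops = {
	.ndo_open		= icve_ndo_open,
	.ndo_stop		= icve_ndo_stop,
	.ndo_set_mac_address	= icve_set_mac_address,
};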


> Also, what is the big picture here. Is this purely a point to point
> link? There is no intention that one or both ends could bridge packets
> to another network? Does this link always carry IP? If so, why do
> you need the Ethernet header? Why not just do the same as SLIP, PPP,
> other point to point network protocols.
> 
> 	Andrew

We want the shared memory to be accessed by two cores only, so the inter-core
virtual ethernet driver is indeed a point-to-point link. There is a bridging
application in the R5 core. If there is a cluster of cores, like multiple A53
cores and R5 cores, and one A53 core has to send packets to another A53 core,
then the first A53 core can send packets to the R5 core using one icve virtual
driver instance, and the R5 core can transmit these packets to the other A53
core using another instance of the icve virtual driver. Multiple instances of
the virtual driver are used by the bridging application, but the virtual
driver by itself is a point-to-point link.

The goal of the inter-core virtual ethernet driver is network traffic
tunneling between heterogeneous processors. The R5 core sends Ethernet packets
to Linux because it wants Linux to process those packets further. The intent
is that the R5 core spends its resources on other processes, while the A53
core spends its resources acting on the Ethernet packets. How the A53 core
handles the packets depends on the Ethernet header, so it is necessary for the
A53 core to receive the Ethernet header.

Regards,
Yojana Mallik



* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-04 12:54       ` Andrew Lunn
@ 2024-06-12 12:52         ` Yojana Mallik
  2024-06-12 14:59           ` Andrew Lunn
  0 siblings, 1 reply; 31+ messages in thread
From: Yojana Mallik @ 2024-06-12 12:52 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, Siddharth Vadapalli, y-mallik



On 6/4/24 18:24, Andrew Lunn wrote:
>>>> +	u32 buff_slot_size;
>>>> +	/* Base Address for Tx or Rx shared memory */
>>>> +	u32 base_addr;
>>>> +} __packed;
>>>
>>> What do you mean by address here? Virtual address, physical address,
>>> DMA address? And whose address is this? You have two CPUs here, with no
>>> guarantee the shared memory is mapped to the same address in both
>>> address spaces.
>>>
>>> 	Andrew
>>
>> The address referred to above is a physical address. It is the address of the
>> Tx and Rx buffers under the control of Linux running on the A53 core. Whether
>> the shared memory is mapped to the same address in both address spaces is
>> checked by the R5 core.
> 
> u32 is too small for a physical address. I'm sure there are systems
> with more than 4G of address space. Also, i would not assume both CPUs
> map the memory to the same physical address.
> 
> 	Andrew

The shared memory address space on the AM64x board is 2G, and the u32 data
type for addresses is enough to cover it. In order to make the driver generic,
to work with systems that have more than 4G of address space, we can change
the base_addr data type to u64 in the virtual driver code, and the
corresponding necessary changes have to be made in the firmware. A sketch of
the widened layout follows below.
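
A minimal sketch of the widened structure under that assumption; only
base_addr changes relative to the struct quoted earlier:

struct icve_shm_info {
	/* Total shared memory size */
	u32 total_shm_size;
	/* Total number of buffers */
	u32 num_pkt_bufs;
	/* Per buffer slot size */
	u32 buff_slot_size;
	/* Base address for Tx or Rx shared memory, widened for >4G systems */
	u64 base_addr;
} __packed;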

During the handshake between Linux and the remote core, the remote core
advertises the Tx and Rx shared memory info to Linux using the rpmsg
framework. Linux retrieves the info related to shared memory from the response
received in the icve_rpmsg_cb() function.

+		case ICVE_RESP_SHM_INFO:
+			/* Retrieve Tx and Rx shared memory info from msg */
+			port->tx_buffer->head =
+				ioremap(msg->resp_msg.shm_info.shm_info_tx.base_addr,
+					sizeof(*port->tx_buffer->head));
+
+			port->tx_buffer->buf->base_addr =
+				ioremap((msg->resp_msg.shm_info.shm_info_tx.base_addr +
+					sizeof(*port->tx_buffer->head)),
+					(msg->resp_msg.shm_info.shm_info_tx.num_pkt_bufs *
+					 msg->resp_msg.shm_info.shm_info_tx.buff_slot_size));
+
+			port->tx_buffer->tail =
+				ioremap(msg->resp_msg.shm_info.shm_info_tx.base_addr +
+					sizeof(*port->tx_buffer->head) +
+					(msg->resp_msg.shm_info.shm_info_tx.num_pkt_bufs *
+					msg->resp_msg.shm_info.shm_info_tx.buff_slot_size),
+					sizeof(*port->tx_buffer->tail));
+
+

	
The shared memory layout is modeled as a circular buffer.
/*      Shared Memory Layout
 *
 *	---------------------------	*****************
 *	|        MAGIC_NUM        |	 icve_shm_head
 *	|          HEAD           |
 *	---------------------------	*****************
 *	|        MAGIC_NUM        |
 *	|        PKT_1_LEN        |
 *	|          PKT_1          |
 *	---------------------------
 *	|        MAGIC_NUM        |
 *	|        PKT_2_LEN        |	 icve_shm_buf
 *	|          PKT_2          |
 *	---------------------------
 *	|           .             |
 *	|           .             |
 *	---------------------------
 *	|        MAGIC_NUM        |
 *	|        PKT_N_LEN        |
 *	|          PKT_N          |
 *	---------------------------	****************
 *	|        MAGIC_NUM        |      icve_shm_tail
 *	|          TAIL           |
 *	---------------------------	****************
 */

Linux retrieves the following info provided in the response by the R5 core:

Tx buffer head address, which is stored in port->tx_buffer->head

Tx buffer's base address, which is stored in port->tx_buffer->buf->base_addr

Tx buffer tail address, which is stored in port->tx_buffer->tail

The number of packets that can be put into the Tx buffer, which is stored in
port->icve_tx_max_buffers

Rx buffer head address, which is stored in port->rx_buffer->head

Rx buffer's base address, which is stored in port->rx_buffer->buf->base_addr

Rx buffer tail address, which is stored in port->rx_buffer->tail

The number of packets that are put into the Rx buffer, which is stored in
port->icve_rx_max_buffers

Linux trusts these addresses sent by the R5 core to send and receive Ethernet
packets. In this way both CPUs map the same physical addresses.

Regards,
Yojana Mallik


* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-12 12:49         ` Yojana Mallik
@ 2024-06-12 14:36           ` Andrew Lunn
  0 siblings, 0 replies; 31+ messages in thread
From: Andrew Lunn @ 2024-06-12 14:36 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, Siddharth Vadapalli

> No, the RTOS peer is not telling the Linux machine what MAC address to use. The
> user can add a unicast MAC address to the virtual network interface using the
> following steps:
> 
> Steps to add MAC Address
> # Bring down the virtual port interface
> $ ifconfig eth1 down
> # Set MAC address for the virtual port interface, ex 01:02:03:04:05:06
> $ ifconfig eth1 hw ether 01:02:03:04:05:06
> # Bring the interface up
> $ ifconfig eth1 up

ifconfig is long deprecated, replaced by iproute2. Please don't use it
for anything modern.
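
For reference, the equivalent iproute2 sequence for the steps quoted
above (same interface name and example address):

$ ip link set eth1 down
$ ip link set dev eth1 address 01:02:03:04:05:06
$ ip link set eth1 up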

> While adding the MAC address, the ndo ops will call icve_set_mac_address(),
> which calls icve_create_send_request(), which creates a request with the add
> MAC address command and sends an rpmsg to the R5 core.

The protocol documentation should be at a level that you can give it
to somebody else and they can implement it, e.g. for a different
OS. So please make the documentation clearer, what direction are these
messages flowing. And make the protocol description OS agnostic.

	Andrew


* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-12 12:52         ` Yojana Mallik
@ 2024-06-12 14:59           ` Andrew Lunn
  2024-06-14  9:08             ` Yojana Mallik
  0 siblings, 1 reply; 31+ messages in thread
From: Andrew Lunn @ 2024-06-12 14:59 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, Siddharth Vadapalli

> The shared memory address space on the AM64x board is 2G, and the u32 data
> type for addresses is enough to cover it. In order to make the driver generic,
> to work with systems that have more than 4G of address space, we can change
> the base_addr data type to u64 in the virtual driver code, and the
> corresponding necessary changes have to be made in the firmware.

You probably need to think about this concept in a more generic
way. You have a block of memory which is physically shared between two
CPUs. Does each have its own MMU involved in accessing this memory? Why
would each see the memory at the same physical address? Why does one
CPU actually know anything about the memory layout of another CPU, and
can tell it how to use its own memory? Do not think about your AM64x
board when answering these questions. Think about an abstract system,
two CPUs with a block of shared memory. Maybe it is even a CPU and a
GPU with shared memory, etc. 

> The shared memory layout is modeled as a circular buffer.
> /*      Shared Memory Layout
>  *
>  *	---------------------------	*****************
>  *	|        MAGIC_NUM        |	 icve_shm_head
>  *	|          HEAD           |
>  *	---------------------------	*****************
>  *	|        MAGIC_NUM        |
>  *	|        PKT_1_LEN        |
>  *	|          PKT_1          |
>  *	---------------------------
>  *	|        MAGIC_NUM        |
>  *	|        PKT_2_LEN        |	 icve_shm_buf
>  *	|          PKT_2          |
>  *	---------------------------
>  *	|           .             |
>  *	|           .             |
>  *	---------------------------
>  *	|        MAGIC_NUM        |
>  *	|        PKT_N_LEN        |
>  *	|          PKT_N          |
>  *	---------------------------	****************
>  *	|        MAGIC_NUM        |      icve_shm_tail
>  *	|          TAIL           |
>  *	---------------------------	****************
>  */
> 
> Linux retrieves the following info provided in response by R5 core:
> 
> Tx buffer head address which is stored in port->tx_buffer->head
> 
> Tx buffer's base address, which is stored in port->tx_buffer->buf->base_addr
> 
> Tx buffer tail address which is stored in port->tx_buffer->tail
> 
> The number of packets that can be put into Tx buffer which is stored in
> port->icve_tx_max_buffers
> 
> Rx buffer head address which is stored in port->rx_buffer->head
> 
> Rx buffer's base address, which is stored in port->rx_buffer->buf->base_addr
> 
> Rx buffer tail address which is stored in port->rx_buffer->tail
> 
> The number of packets that are put into Rx buffer which is stored in
> port->icve_rx_max_buffers

I think most of these should not be pointers, but offsets from the
base of the shared memory. It then does not matter if they are mapped
at different physical addresses on each CPU.

> Linux trusts these addresses sent by the R5 core to send and receive Ethernet
> packets. In this way both CPUs map the same physical addresses.

I'm not sure Linux should trust the R5. For a generic implementation,
the trust should be held to a minimum. There needs to be an agreement
about how the shared memory is partitioned, but each end needs to
verify that the memory is in fact valid, that none of the data
structures point outside of the shared memory etc. Otherwise one
system can cause memory corruption on the other, and that sort of bug
is going to be very hard to debug.
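
A minimal sketch of an offset-based descriptor with that kind of bounds
checking (all names hypothetical, not taken from the patch):

/* Every field is an offset from the base of the shared memory window,
 * so the layout stays valid no matter where each CPU maps the window.
 */
struct shm_ring_desc {
	u32 magic;		/* layout/version check */
	u32 head_offset;	/* producer index location, from base */
	u32 tail_offset;	/* consumer index location, from base */
	u32 buf_offset;		/* first packet slot, from base */
	u32 num_pkt_bufs;
	u32 buff_slot_size;
} __packed;

/* Each side validates the advertised region against the window it
 * actually mapped before touching anything inside it. */
static bool shm_ring_desc_valid(const struct shm_ring_desc *d, size_t shm_size)
{
	u64 end = (u64)d->buf_offset +
		  (u64)d->num_pkt_bufs * d->buff_slot_size;

	return d->head_offset < shm_size &&
	       d->tail_offset < shm_size &&
	       end <= shm_size;
}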

	Andrew



* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-12 14:59           ` Andrew Lunn
@ 2024-06-14  9:08             ` Yojana Mallik
  2024-06-16 16:19               ` Andrew Lunn
  0 siblings, 1 reply; 31+ messages in thread
From: Yojana Mallik @ 2024-06-14  9:08 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, Siddharth Vadapalli, y-mallik



On 6/12/24 20:29, Andrew Lunn wrote:
>> The shared memory address space on the AM64x board is 2G, and the u32 data
>> type for addresses is enough to cover it. In order to make the driver generic,
>> to work with systems that have more than 4G of address space, we can change
>> the base_addr data type to u64 in the virtual driver code, and the
>> corresponding necessary changes have to be made in the firmware.
> 
> You probably need to think about this concept in a more generic
> way. You have a block of memory which is physically shared between two
> CPUs. Does each have its own MMU involved in accessing this memory? Why
> would each see the memory at the same physical address? Why does one
> CPU actually know anything about the memory layout of another CPU, and
> can tell it how to use its own memory? Do not think about your AM64x
> board when answering these questions. Think about an abstract system,
> two CPUs with a block of shared memory. Maybe it is even a CPU and a
> GPU with shared memory, etc. 
> 
>> The shared memory layout is modeled as a circular buffer.
>> /*      Shared Memory Layout
>>  *
>>  *	---------------------------	*****************
>>  *	|        MAGIC_NUM        |	 icve_shm_head
>>  *	|          HEAD           |
>>  *	---------------------------	*****************
>>  *	|        MAGIC_NUM        |
>>  *	|        PKT_1_LEN        |
>>  *	|          PKT_1          |
>>  *	---------------------------
>>  *	|        MAGIC_NUM        |
>>  *	|        PKT_2_LEN        |	 icve_shm_buf
>>  *	|          PKT_2          |
>>  *	---------------------------
>>  *	|           .             |
>>  *	|           .             |
>>  *	---------------------------
>>  *	|        MAGIC_NUM        |
>>  *	|        PKT_N_LEN        |
>>  *	|          PKT_N          |
>>  *	---------------------------	****************
>>  *	|        MAGIC_NUM        |      icve_shm_tail
>>  *	|          TAIL           |
>>  *	---------------------------	****************
>>  */
>>
>> Linux retrieves the following info provided in response by R5 core:
>>
>> Tx buffer head address which is stored in port->tx_buffer->head
>>
>> Tx buffer's base address, which is stored in port->tx_buffer->buf->base_addr
>>
>> Tx buffer tail address which is stored in port->tx_buffer->tail
>>
>> The number of packets that can be put into Tx buffer which is stored in
>> port->icve_tx_max_buffers
>>
>> Rx buffer head address which is stored in port->rx_buffer->head
>>
>> Rx buffer's base address, which is stored in port->rx_buffer->buf->base_addr
>>
>> Rx buffer tail address which is stored in port->rx_buffer->tail
>>
>> The number of packets that are put into Rx buffer which is stored in
>> port->icve_rx_max_buffers
> 
> I think most of these should not be pointers, but offsets from the
> base of the shared memory. It then does not matter if they are mapped
> at different physical addresses on each CPU.
> 
>> Linux trusts these addresses sent by the R5 core to send and receive Ethernet
>> packets. In this way both CPUs map the same physical addresses.
> 
> I'm not sure Linux should trust the R5. For a generic implementation,
> the trust should be held to a minimum. There needs to be an agreement
> about how the shared memory is partitioned, but each end needs to
> verify that the memory is in fact valid, that none of the data
> structures point outside of the shared memory etc. Otherwise one
> system can cause memory corruption on the other, and that sort of bug
> is going to be very hard to debug.
> 
> 	Andrew
> 

The Linux remoteproc driver, which initializes remote processor cores, carves
out a section of DDR memory as reserved memory for each remote processor on
the SoC. This memory region is reserved in the Linux device tree file as
reserved-memory. Out of this memory reserved for the R5 core, some is set
aside as shared memory.

The shared memory is divided into two distinct regions:
one for the A53 -> R5 data path (Tx buffer for Linux), and the other for the
R5 -> A53 data path (Rx buffer for Linux).

Four entities, namely the total shared memory size, the number of packets, the
buffer slot size, and the base address of the buffer, have been hardcoded into
the firmware code for both the Tx and Rx buffers. These four entities are
communicated by the R5 core, and Linux retrieves them from the message
received in icve_rpmsg_cb.

Using the received base address of the Tx or Rx shared memory, together with
the received number of packets and buffer slot size, the driver calculates the
buffer's head address, the shared memory buffer's base address, and the tail
address.

The Linux driver uses the ioremap() function to translate these calculated
physical addresses into virtual addresses. Linux uses these virtual addresses
to send packets to the remote core in the icve_start_xmit() function; the
arithmetic is sketched below.
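
Roughly, the address arithmetic described above (variable names are
hypothetical; the struct name follows the layout diagram quoted earlier):

	head_pa = base_addr;
	buf_pa  = base_addr + sizeof(struct icve_shm_head);
	tail_pa = buf_pa + num_pkt_bufs * buff_slot_size;
	/* each region is then ioremap()ed before use */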

It has been agreed upon by design that the remote core will use a particular
start address for the buffer and Linux will also use it, and this address has
been hardcoded in the firmware on the remote core. Since this address is
hardcoded in the firmware, it could also be hardcoded in the Linux driver
code, and then a check could be made that the address received from the remote
core matches the address hardcoded in the Linux driver.

But viewing the driver from a generic perspective, the driver may interact
with different firmware whose start address for the shared memory region will
not match the one hardcoded into the Linux driver.

This is why it has been decided to hardcode the start address only in the
firmware: it is sent by the remote core to Linux, and Linux uses it.

Kindly suggest in what other ways the driver can learn the start address of
the shared memory if it is not informed by the remote core, and also how to
check whether the address is valid.

Thanks and regards,
Yojana Mallik


* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-14  9:08             ` Yojana Mallik
@ 2024-06-16 16:19               ` Andrew Lunn
  2024-06-16 19:03                 ` Andrew Lunn
  0 siblings, 1 reply; 31+ messages in thread
From: Andrew Lunn @ 2024-06-16 16:19 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, Siddharth Vadapalli

> The Linux remoteproc driver, which initializes remote processor cores, carves
> out a section of DDR memory as reserved memory for each remote processor on
> the SoC. This memory region is reserved in the Linux device tree file as
> reserved-memory. Out of this memory reserved for the R5 core, some is set
> aside as shared memory.

I don't know much about rpmsg, so i read the documentation:

https://docs.kernel.org/staging/rpmsg.html

There is no mention of carving out a region of DDR memory. It says
nothing about memory reserved in DT.

The API is pretty much: Please send these bytes to the peer. Call this
callback when bytes have been received from a peer.

> The shared memory is divided into two distinct regions:
> one for the A53 -> R5 data path (Tx buffer for Linux), and the other for the
> R5 -> A53 data path (Rx buffer for Linux).

As i said in my previous message, forget about the A53->R5. Think
about a generic architecture. You have an RPMSG facility using the API
described in the documentation. You have a block of shared
memory. That memory could be VRAM, it could be a dual port SRAM, a PCI
BAR region, or some DDR which both CPUs have mapped into their address
space. Design a protocol around that. This might mean you need to
modify your firmware. It might mean you need to throw away your
firmware and start again. But for mainline, we want something generic
which any vendor can use with a wide range of hardware, with as few
assumptions as possible.

	Andrew


* Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
  2024-06-16 16:19               ` Andrew Lunn
@ 2024-06-16 19:03                 ` Andrew Lunn
  0 siblings, 0 replies; 31+ messages in thread
From: Andrew Lunn @ 2024-06-16 19:03 UTC (permalink / raw)
  To: Yojana Mallik
  Cc: schnelle, wsa+renesas, diogo.ivo, rdunlap, horms, vigneshr,
	rogerq, danishanwar, pabeni, kuba, edumazet, davem, netdev,
	linux-kernel, srk, rogerq, Siddharth Vadapalli

> As i said in my previous message, forget about the A53->R5. Think
> about a generic architecture. You have an RPMSG facility using the API
> described in the documentation. You have a block of shared
> memory. That memory could be VRAM, it could be a dual port SRAM, a PCI
> BAR region, or some DDR which both CPUs have mapped into their address
> space. Design a protocol around that. This might mean you need to
> modify your firmware. It might mean you need to throw away your
> firmware and start again. But for mainline, we want something generic
> which any vendor can use with a wide range of hardware, with as few
> assumptions as possible.

Just adding to my own email. You might want to split the overall
problem into two. It could be that you have 95% of the code in a generic
driver framework. You instantiate it by calling something like:

void *rpmsg_ethernet_shared_mem_create(struct rpmsg_endpoint *ept,
				       void __iomem *shared_memory,
				       size_t size);

The returned cookie is then used with

int rpmsg_ethernet_cb(void *cookie, void *data, int len);

which gets called from the rpmsg .callback function.

You then have a vendor specific part which is responsible for creating
the rpmsg endpoint, finding the shared memory, and mapping it into the
address space ready for use.
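
A rough sketch of that vendor-specific part (all names and constants are
hypothetical):

static int vendor_icve_probe(struct rpmsg_device *rpdev)
{
	void __iomem *shm;
	void *cookie;

	/* vendor glue: locate and map this SoC's shared memory window */
	shm = ioremap(VENDOR_SHM_BASE, VENDOR_SHM_SIZE);
	if (!shm)
		return -ENOMEM;

	/* hand the window and the endpoint to the generic framework */
	cookie = rpmsg_ethernet_shared_mem_create(rpdev->ept, shm,
						  VENDOR_SHM_SIZE);
	if (!cookie) {
		iounmap(shm);
		return -ENODEV;
	}

	dev_set_drvdata(&rpdev->dev, cookie);
	return 0;
}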

All data structures inside the shared memory need to be position
independent, since there is no guarantee the CPUs have it mapped to
the same address, or even have the same size of addresses. The easy way
to do that is to specify everything as offsets from the base of the memory,
making it easy to validate that an offset is actually within the shared
memory.

As i said, i've not used rpmsg, so this might in practice need to be
different. But the basic idea should be O.K.: a vendor specific chunk
of code to do the basic setup, which then hands over the memory and RPMsg
endpoint to generic code that does all the real work.

	 Andrew
