From: Yojana Mallik <y-mallik@ti.com>
To: Andrew Lunn <andrew@lunn.ch>
Cc: <schnelle@linux.ibm.com>, <wsa+renesas@sang-engineering.com>,
<diogo.ivo@siemens.com>, <rdunlap@infradead.org>,
<horms@kernel.org>, <vigneshr@ti.com>, <rogerq@ti.com>,
<danishanwar@ti.com>, <pabeni@redhat.com>, <kuba@kernel.org>,
<edumazet@google.com>, <davem@davemloft.net>,
<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<srk@ti.com>, <rogerq@kernel.org>,
Siddharth Vadapalli <s-vadapalli@ti.com>, <y-mallik@ti.com>
Subject: Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg driver as network device
Date: Wed, 12 Jun 2024 18:22:29 +0530
Message-ID: <b07cfdfe-dce4-484b-b8a8-9d0e49985c60@ti.com>
In-Reply-To: <f14a554c-555f-4830-8be5-13988ddbf0ba@lunn.ch>
On 6/4/24 18:24, Andrew Lunn wrote:
>>>> + u32 buff_slot_size;
>>>> + /* Base Address for Tx or Rx shared memory */
>>>> + u32 base_addr;
>>>> +} __packed;
>>>
>>> What do you mean by address here? Virtual address, physical address,
>>> DMA address? And whos address is this, you have two CPUs here, with no
>>> guaranteed the shared memory is mapped to the same address in both
>>> address spaces.
>>>
>>> Andrew
>>
>> The address referred to above is a physical address. It is the address of the
>> Tx and Rx buffers under the control of Linux running on the A53 core. Whether
>> the shared memory is mapped to the same address in both address spaces is
>> checked by the R5 core.
>
> u32 is too small for a physical address. I'm sure there are systems
> with more than 4G of address space. Also, i would not assume both CPUs
> map the memory to the same physical address.
>
> Andrew
The shared memory address space on the AM64x board is 2G, so a u32 data type is
sufficient to address it. To make the driver generic and support systems with
more than 4G of address space, we can change the base address data type to u64
in the virtual driver code, with the corresponding changes made in the
firmware.
During the handshake between Linux and the remote core, the remote core
advertises the Tx and Rx shared memory info to Linux using the rpmsg framework.
Linux retrieves the shared memory info from the response received in the
icve_rpmsg_cb function.
+ case ICVE_RESP_SHM_INFO:
+ /* Retrieve Tx and Rx shared memory info from msg */
+ port->tx_buffer->head =
+ ioremap(msg->resp_msg.shm_info.shm_info_tx.base_addr,
+ sizeof(*port->tx_buffer->head));
+
+ port->tx_buffer->buf->base_addr =
+ ioremap((msg->resp_msg.shm_info.shm_info_tx.base_addr +
+ sizeof(*port->tx_buffer->head)),
+ (msg->resp_msg.shm_info.shm_info_tx.num_pkt_bufs *
+ msg->resp_msg.shm_info.shm_info_tx.buff_slot_size));
+
+ port->tx_buffer->tail =
+ ioremap(msg->resp_msg.shm_info.shm_info_tx.base_addr +
+ sizeof(*port->tx_buffer->head) +
+ (msg->resp_msg.shm_info.shm_info_tx.num_pkt_bufs *
+ msg->resp_msg.shm_info.shm_info_tx.buff_slot_size),
+ sizeof(*port->tx_buffer->tail));
+
The shared memory layout is modeled as a circular buffer.
/* Shared Memory Layout
*
* --------------------------- *****************
* | MAGIC_NUM | icve_shm_head
* | HEAD |
* --------------------------- *****************
* | MAGIC_NUM |
* | PKT_1_LEN |
* | PKT_1 |
* ---------------------------
* | MAGIC_NUM |
* | PKT_2_LEN | icve_shm_buf
* | PKT_2 |
* ---------------------------
* | . |
* | . |
* ---------------------------
* | MAGIC_NUM |
* | PKT_N_LEN |
* | PKT_N |
* --------------------------- ****************
* | MAGIC_NUM | icve_shm_tail
* | TAIL |
* --------------------------- ****************
*/
Linux retrieves the following info provided in the response by the R5 core:
- Tx buffer head address, stored in port->tx_buffer->head
- Tx buffer's base address, stored in port->tx_buffer->buf->base_addr
- Tx buffer tail address, stored in port->tx_buffer->tail
- The number of packets that can be put into the Tx buffer, stored in
  port->icve_tx_max_buffers
- Rx buffer head address, stored in port->rx_buffer->head
- Rx buffer's base address, stored in port->rx_buffer->buf->base_addr
- Rx buffer tail address, stored in port->rx_buffer->tail
- The number of packets that can be put into the Rx buffer, stored in
  port->icve_rx_max_buffers
Linux trusts these addresses sent by the R5 core to send and receive Ethernet
packets. In this way, both CPUs refer to the same physical addresses.
Regards,
Yojana Mallik