Subject: Implement initial driver for virtio-RDMA device (kernel)
From: Xiong Weimin @ 2025-12-18  9:09 UTC
  To: Michael S . Tsirkin, David Hildenbrand, Jason Wang,
	Stefano Garzarella, Thomas Monjalon, David Marchand,
	Luca Boccassi, Kevin Traynor, Christian Ehrhardt, Xuan Zhuo,
	Eugenio Pérez, Xueming Li, Maxime Coquelin, Chenbo Xia,
	Bruce Richardson
  Cc: kvm, virtualization, netdev

Hi all,

These testing instructions describe how to emulate a soft RoCE device
on top of a normal NIC (no RDMA hardware). We have completed a
vhost-user RDMA device demo that supports RDMA features such as CM and
the UC/UD QP types.

The testing instructions for the demo follow:

1. Test Environment Configuration
Hardware Environment
Servers: 1 server

CPU: HUAWEI Kunpeng 920 (96 cores)

Memory: 3 TB DDR4

NIC: TAP (paired with a virtio-net device for RDMA)

Software Environment
Host OS kernel: 6.4.0-10.1.0.20.oe2309.aarch64

Guest kernel: linux-6.16.8 (with the vrdma module)

QEMU: 9.0.2 (built with vhost-user-rdma virtual device support)

DPDK: 24.07.0-rc2

Dependencies:

	rdma-core
	rdma_rxe
	libibverbs-dev
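
On a Debian-like guest, the user-space pieces can be installed and the
soft-RoCE module loaded as follows (package names vary by distribution;
a sketch, not verified on the exact images above):

# Install user-space RDMA dependencies (Debian/Ubuntu package names)
sudo apt install rdma-core libibverbs-dev

# rdma_rxe is the in-kernel soft-RoCE module; load it when needed
sudo modprobe rdma_rxe
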
2. Test Procedures
a. Start the DPDK vhost-user-rdma backend first:
1). Configure hugepages:
   echo 2048 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
2). Start the example app:
  /DPDKDIR/build/examples/dpdk-vhost_user_rdma -l 1-4 -n 4 --vdev "net_tap0" -- --socket-file /tmp/vhost-rdma0
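
Before launching QEMU, it is worth confirming that the hugepages were
actually reserved and that the backend created its vhost-user socket
(a quick sanity check using the defaults above):

# Confirm the hugepage reservation took effect
grep HugePages_ /proc/meminfo

# The backend should have created the vhost-user socket
ls -l /tmp/vhost-rdma0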

b. Boot the guest kernel with QEMU; relevant command-line fragment:
...
-netdev tap,id=hostnet1,ifname=tap1,script=no,downscript=no 
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:14:72:30,bus=pci.3,addr=0x0.0,multifunction=on 
-chardev socket,path=/tmp/vhost-rdma0,id=vurdma 
-device vhost-user-rdma-pci,bus=pci.3,addr=0x0.1,page-per-vq=on,disable-legacy=on,chardev=vurdma
...
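
Note that vhost-user requires the guest RAM to be shared with the
backend process; if not already configured, a memory-backend fragment
along these lines is needed (size and path are illustrative):

-object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on
-numa node,memdev=mem0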

c. Guest Kernel Module Loading and Validation
# Load the vrdma kernel module
sudo modprobe vrdma

# Verify the module is loaded
lsmod | grep vrdma

# Check kernel logs
dmesg | grep -i rdma

# Expected output:
[    4.935473] vrdma_init_device: Initializing vRDMA device with max_cq=64, max_qp=64
[    4.949888] [vrdma_init_device]: Successfully initialized, last qp_vq index=192
[    4.949907] [vrdma_init_netdev]: Found paired net_device 'enp3s0f0' (on 0000:03:00.0)
[    4.949924] Bound vRDMA device to net_device 'enp3s0f0'
[    5.026032] vrdma virtio2: vrdma_alloc_pd: allocated PD 1
[    5.028006] Successfully registered vRDMA device as 'vrdma0'
[    5.028020] [vrdma_probe]: Successfully probed VirtIO RDMA device (index=2)
[    5.028104] VirtIO RDMA driver initialized successfully
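
The newly registered device should also be visible through the standard
sysfs interface (device name vrdma0 taken from the probe log above):

# The IB core exposes registered devices under sysfs
ls /sys/class/infiniband/
# Expected: vrdma0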

d. Inside the VM, RDMA device nodes are created under /dev/infiniband:
[root@localhost ~]# ll -h /dev/infiniband/
total 0
drwxr-xr-x. 2 root root       60 Dec 17 11:24 by-ibdev
drwxr-xr-x. 2 root root       60 Dec 17 11:24 by-path
crw-rw-rw-. 1 root root  10, 259 Dec 17 11:24 rdma_cm
crw-rw-rw-. 1 root root 231, 192 Dec 17 11:24 uverbs0
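
With rdma-core installed in the guest, the device can also be queried
from user space (vrdma0 is the device name from the kernel log above):

# List RDMA devices and dump basic attributes
ibv_devices
ibv_devinfo -d vrdma0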

e. The following are planned for a future version:
1). SRQ support
2). DPDK support for physical RDMA NICs to handle the datapath between frontend and backend
3). VirtQueue reset
4). Larger VirtQueue sizes for the PCI transport
5). Performance testing

f. Test Results
1). Functional Test Results:

Test                    Result  Notes
Kernel module loading   PASS    Module loaded without errors
DPDK startup            PASS    vhost-user-rdma backend initialized
QEMU VM launch          PASS    VM booted with the RDMA device attached
Network connectivity    PASS    Host-VM communication established
RDMA device detection   PASS    Virtual RDMA device recognized
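
Since the demo advertises CM and UC/UD QP support, a basic data-path
check can use the UD ping-pong example shipped with libibverbs (a
sketch, assuming device vrdma0 and GID index 0; adjust -g to match the
actual GID table):

# Server side
ibv_ud_pingpong -d vrdma0 -g 0

# Client side (<server_ip> is the server's address)
ibv_ud_pingpong -d vrdma0 -g 0 <server_ip>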

g. Test Conclusion
1). Full functional compliance with the specification
2). Stable operation under extended stress conditions

Recommendations:
1). Optimize memory copy paths for higher throughput
2). Enhance error handling and recovery mechanisms 



Thread overview: 18+ messages
2025-12-18  9:09 Implement initial driver for virtio-RDMA device(kernel) Xiong Weimin
2025-12-18  9:09 ` [PATCH 01/10] drivers/infiniband/hw/virtio: Initial driver for virtio RDMA devices Xiong Weimin
2025-12-21  9:11   ` Leon Romanovsky
2025-12-18  9:09 ` [PATCH 02/10] drivers/infiniband/hw/virtio: add vrdma_exec_verbs_cmd to construct verbs sgs using virtio Xiong Weimin
2025-12-18  9:09 ` [PATCH 03/10] drivers/infiniband/hw/virtio: Implement core device and key resource management Xiong Weimin
2025-12-18  9:09 ` [PATCH 04/10] drivers/infiniband/hw/virtio: Implement MR, GID, ucontext and AH resource management verbs Xiong Weimin
2025-12-18  9:09 ` [PATCH 05/10] drivers/infiniband/hw/virtio: Implement memory mapping and MR scatter-gather support Xiong Weimin
2025-12-18  9:09 ` [PATCH 06/10] drivers/infiniband/hw/virtio: Implement port management and QP modification verbs Xiong Weimin
2025-12-18  9:09 ` [PATCH 07/10] drivers/infiniband/hw/virtio: Implement Completion Queue (CQ) polling support Xiong Weimin
2025-12-18  9:09 ` [PATCH 08/10] drivers/infiniband/hw/virtio: Implement send/receive verb support Xiong Weimin
2025-12-18  9:09 ` [PATCH 09/10] drivers/infiniband/hw/virtio: Implement P_key, QP query and user MR resource management verbs Xiong Weimin
2025-12-18  9:09 ` [PATCH 10/10] drivers/infiniband/hw/virtio: Add completion queue notification support Xiong Weimin
2025-12-18 16:30 ` Implement initial driver for virtio-RDMA device(kernel) Leon Romanovsky
2025-12-19  2:27   ` Xiong Weimin
     [not found]   ` <6ef11502.4847.19b34677a76.Coremail.15927021679@163.com>
2025-12-21  8:46     ` Leon Romanovsky
2025-12-23  1:16 ` Jason Wang
2025-12-24  9:31   ` Xiong Weimin
2025-12-25  2:13     ` Jason Wang
