From: James Yu
Subject: one directional traffic from SR-IOV port using l2fwd
Date: Fri, 10 Jan 2014 20:30:53 -0800
To: dev-VfR2kkLFssw@public.gmane.org

I am trying to make SR-IOV + DPDK l2fwd work together, but I can only send
traffic in one direction, not bi-directionally. The traffic is looped back
from one Spirent port to the other by DPDK l2fwd, as illustrated below:

Spirent port 1 --> KVM host PF --> VF (virtual function) --> DPDK l2fwd (loops back to the other port) --+
Spirent port 2 <-- KVM host PF <-- VF <-------------------------------------------------------------------+

When I send traffic from port 2 to the KVM host, I do not receive traffic on
port 1. I think the code should use rte_ixgbevf_pmd_init() as shown in
l2fwd-vf in the DPDK 1.2.3 release
(http://www.dpdk.org/browse/dpdk/tree/examples/l2fwd-vf/main.c?h=1.2.3);
a rough sketch of that init sequence is at the end of this message.

Does anyone know how to send/receive traffic to/from SR-IOV ports?

Thanks

James

---- I have the following setup. Is anything mis-configured? ----

I did the following setup to turn on SR-IOV:

Test HW config:
CPU = Intel® Xeon® Processor E5506, 4-core, 2.13 GHz (VT-d, VT-x capable), Hyper-Threading not supported
Mem = 16 GB RAM (800 MHz), single slot (no NUMA)
NIC = Intel 82599EB 10-Gigabit Ethernet (SR-IOV capable)

Steps to set up the KVM host and guest:

1) Hypervisor
   - Enable VT-d and virtualization support

2) Host kernel: RHEL 6.1 (2.6.32-431.el6.x86_64), qemu 0.12.1.2, ixgbe 3.15.1-k

   a) Grub kernel configuration
      - Intel: add "intel_iommu=on"
      - AMD:   add "iommu=on iommu=pt"

   b) In the host kernel, enable virtual functions for ixgbe:

      modprobe -r ixgbe
      modprobe -v ixgbe max_vfs=2

      Here is the list of 10G PCI devices after loading ixgbe with max_vfs=2.
      The virtual functions 1a:10.2 and 1a:10.3 (marked with asterisks below)
      will be used in the guest VM hostdev XML configuration in 2(d).

      [root@rh188 ~]# lspci | grep Eth
      1a:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
      1a:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
      1a:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
      1a:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
      *1a:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)*
      *1a:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)*

   c) Blacklist the ixgbevf driver in the host by adding the two lines below
      to /etc/modprobe.d/blacklist.conf:

      # Intel SR-IOV virtual function driver (ixgbe)
      blacklist ixgbevf

   d) Add the PCIe virtual functions to the KVM guest, either graphically or
      through the "virsh edit" command. Two <hostdev> entries are added to the
      guest XML configuration: host functions 0x2 and 0x3 (the virtual
      functions marked above) are mapped to guest PCI slots 0x05 and 0x08.
      You will use these two PCI devices on the guest.
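      A <hostdev> entry for each VF would look roughly like the following
      (a sketch only; the host addresses come from the lspci output above,
      mapped to guest slots 0x05 and 0x08):

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
          <!-- host VF 1a:10.2 -->
          <address domain='0x0000' bus='0x1a' slot='0x10' function='0x2'/>
        </source>
        <!-- guest PCI address, slot 0x05 -->
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
          <!-- host VF 1a:10.3 -->
          <address domain='0x0000' bus='0x1a' slot='0x10' function='0x3'/>
        </source>
        <!-- guest PCI address, slot 0x08 -->
        <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
      </hostdev>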
3) Guest kernel

   To run the DPDK l2fwd loopback, run the following script under the DPDK
   source code directory:

      modprobe -r igb_uio
      modprobe uio
      insmod ./build/kmod/igb_uio.ko
      modprobe -r ixgbevf
      insmod /root/rpmbuild/BUILD/ixgbevf-2.12.1/src/ixgbevf.ko

      # Reserve huge page memory.
      mkdir -p /mnt/huge
      mount -t hugetlbfs nodev /mnt/huge
      echo 196 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

      # Bind the two guest VF devices (slots 0x05 and 0x08) to igb_uio.
      ./tools/pci_unbind.py --bind=igb_uio 00:05.0
      ./tools/pci_unbind.py --bind=igb_uio 00:08.0

      ./examples/l2fwd/build/l2fwd -c 3 -n 1 -b 000:00:03.0 -b 000:00:07.0 -b 000:00:0a.0 -- -q 1 -p 3

   NOTE: ixgbevf.ko is built from ixgbevf-2.12.1, which includes bug fixes
   that improve performance.
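P.S. For reference, here is my rough reading of the init sequence in the
1.2.3 l2fwd-vf example linked above, assuming the 1.2.3-era API where each
poll-mode driver is registered explicitly before the PCI probe. This is a
sketch from memory, not code copied from the tree, and the port/queue setup
and forwarding loop are omitted:

   #include <rte_eal.h>
   #include <rte_pci.h>
   #include <rte_ethdev.h>
   #include <rte_debug.h>

   int
   main(int argc, char **argv)
   {
           int ret = rte_eal_init(argc, argv);
           if (ret < 0)
                   rte_panic("Cannot init EAL\n");

           /* Register the ixgbe VF poll-mode driver explicitly so that the
            * VF devices bound to igb_uio are picked up by the PCI probe. */
           if (rte_ixgbevf_pmd_init() < 0)
                   rte_panic("Cannot init ixgbevf pmd\n");

           if (rte_eal_pci_probe() < 0)
                   rte_panic("Cannot probe PCI\n");

           /* ... port/queue setup and the l2fwd forwarding loop follow ... */
           return 0;
   }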