* l3fwd doesn't seem to work
@ 2013-06-14  5:10 Patrick Mahan
       [not found] ` <51BAA5BC.1020506-5dHXHCkEAVbYtjvyW6yDsg@public.gmane.org>
  0 siblings, 1 reply; 3+ messages in thread

From: Patrick Mahan @ 2013-06-14  5:10 UTC (permalink / raw)
To: dev@dpdk.org

All,

I am trying to run the l3fwd example.

The system is a single-socket E5-2690 (8 cores) with 64 GB of DDR3 memory and
an 82599 2-port NIC connected to an Ixia traffic generator.

I have successfully run test, testpmd and l2fwd. Both testpmd and l2fwd seem
to work as documented. However, l3fwd doesn't seem to be working.

I made two changes in the code:

1. I changed the hard-coded lcore configuration to the following:

        {0, 0, 0},
        {0, 1, 1},
        {0, 2, 2},
        {0, 3, 3},
        {1, 0, 4},
        {1, 1, 5},
        {1, 2, 6},
        {1, 3, 7}

2. I changed the hard-coded LPM routes to the following:

        {IPv4(192,168,10,0), 24, 0},
        {IPv4(192,168,11,0), 24, 0},
        {IPv4(192,168,12,0), 24, 0},
        {IPv4(192,168,13,0), 24, 0},
        {IPv4(192,168,14,0), 24, 0},
        {IPv4(192,168,15,0), 24, 0},
        {IPv4(192,168,16,0), 24, 0},
        {IPv4(192,168,17,0), 24, 0},
        {IPv4(192,168,18,0), 24, 0},
        {IPv4(192,168,19,0), 24, 0},
        {IPv4(192,168,20,0), 24, 1},
        {IPv4(192,168,21,0), 24, 1},
        {IPv4(192,168,22,0), 24, 1},
        {IPv4(192,168,23,0), 24, 1},
        {IPv4(192,168,24,0), 24, 1},
        {IPv4(192,168,25,0), 24, 1},
        {IPv4(192,168,26,0), 24, 1},
        {IPv4(192,168,27,0), 24, 1},
        {IPv4(192,168,28,0), 24, 1},
        {IPv4(192,168,29,0), 24, 1},

The Ixia is set up to generate packets varying from 192.168.20.1-192.168.29.256
to port 0 and 192.168.10.1-192.168.19.256 to port 1.

I started l3fwd with the following command:

        sudo ./build/l3fwd -b 0000:06:00.0 -b 0000:06:00.1 -c ff -n 3 -- -p 3

Ports 06:00.0 and 06:00.1, blacklisted above, are the Intel i350 ports on the
motherboard.

If I understand the code, this should set up 4 lcores listening for packets
on port 0 using 4 queues, and 4 lcores listening for packets on port 1 using
4 queues.

The l3fwd example starts up (see the output below) but just sits there and no
traffic is routed.
I even put in a timeout count to print the interface stats (using
rte_eth_stats_get()) to see if there was some issue at play. The reads always
return 0 (polling) and it is as if no packets are seen on the interface. When
I run testpmd or l2fwd with the same packet stream, I see packets going
through.

Any ideas what is up? Did I misconfigure something?

Thanks,

Patrick

Here is the output when I run l3fwd:

EAL: coremask set to ff
EAL: Using native RDTSC
EAL: Detected lcore 0 on socket 0
EAL: Detected lcore 1 on socket 0
EAL: Detected lcore 2 on socket 0
EAL: Detected lcore 3 on socket 0
EAL: Detected lcore 4 on socket 0
EAL: Detected lcore 5 on socket 0
EAL: Detected lcore 6 on socket 0
EAL: Detected lcore 7 on socket 0
EAL: Detected lcore 8 on socket 0
EAL: Detected lcore 9 on socket 0
EAL: Detected lcore 10 on socket 0
EAL: Detected lcore 11 on socket 0
EAL: Detected lcore 12 on socket 0
EAL: Detected lcore 13 on socket 0
EAL: Detected lcore 14 on socket 0
EAL: Detected lcore 15 on socket 0
EAL: Requesting 4 pages of size 1073741824
EAL: Ask a virtual area of 0x40000000 bytes
EAL: Virtual area found at 0x7f43c0000000 (size = 0x40000000)
EAL: Ask a virtual area of 0xc0000000 bytes
EAL: Virtual area found at 0x7f42c0000000 (size = 0xc0000000)
EAL: Master core 0 is ready (tid=b19a800)
EAL: Core 1 is ready (tid=9d97700)
EAL: Core 2 is ready (tid=9396700)
EAL: Core 3 is ready (tid=3fff700)
EAL: Core 4 is ready (tid=bffff700)
EAL: Core 5 is ready (tid=35fe700)
EAL: Core 6 is ready (tid=2bfd700)
EAL: Core 7 is ready (tid=21fc700)
Allocated mbuf pool on socket 0
LPM: Allocated LPM with 1024 rules, tbl24: 16777216 entries, tbl8: 65536
groups x 256 entries
LPM: Adding route 0xc0a80a00 / 24 (0)
LPM: Adding route 0xc0a80b00 / 24 (0)
LPM: Adding route 0xc0a80c00 / 24 (0)
LPM: Adding route 0xc0a80d00 / 24 (0)
LPM: Adding route 0xc0a80e00 / 24 (0)
LPM: Adding route 0xc0a80f00 / 24 (0)
LPM: Adding route 0xc0a81000 / 24 (0)
LPM: Adding route 0xc0a81100 / 24 (0)
LPM: Adding route 0xc0a81200 / 24 (0)
LPM: Adding route 0xc0a81300 / 24 (0)
LPM: Adding route 0xc0a81400 / 24 (1)
LPM: Adding route 0xc0a81500 / 24 (1)
LPM: Adding route 0xc0a81600 / 24 (1)
LPM: Adding route 0xc0a81700 / 24 (1)
LPM: Adding route 0xc0a81800 / 24 (1)
LPM: Adding route 0xc0a81900 / 24 (1)
LPM: Adding route 0xc0a81a00 / 24 (1)
LPM: Adding route 0xc0a81b00 / 24 (1)
LPM: Adding route 0xc0a81c00 / 24 (1)
LPM: Adding route 0xc0a81d00 / 24 (1)
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: unbind kernel driver /sys/bus/pci/devices/0000:03:00.0/driver/unbind
EAL: bind PCI device 0000:03:00.0 to igb_uio driver
EAL: Device bound
EAL: map PCI resource for device 0000:03:00.0
EAL: Mapping resources for '/dev/uio0' starting at 0x00000000 for 524288 bytes
EAL: PCI memory mapped at 0x7f4408916000
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: unbind kernel driver /sys/bus/pci/devices/0000:03:00.1/driver/unbind
EAL: bind PCI device 0000:03:00.1 to igb_uio driver
EAL: Device bound
EAL: map PCI resource for device 0000:03:00.1
EAL: Mapping resources for '/dev/uio1' starting at 0x00000000 for 524288 bytes
EAL: PCI memory mapped at 0x7f4408896000
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: probe driver: 8086:1521 rte_igb_pmd
Initializing port 0 ... Creating queues: nb_rxq=4 nb_txq=8...
Address:00:1B:21:6B:8D:D4, txq=0,0,0 txq=1,1,0 txq=2,2,0 txq=3,3,0 txq=4,4,0
txq=5,5,0 txq=6,6,0 txq=7,7,0
Initializing port 1 ... Creating queues: nb_rxq=4 nb_txq=8...
Address:00:1B:21:6B:8D:D5, txq=0,0,0 txq=1,1,0 txq=2,2,0 txq=3,3,0 txq=4,4,0
txq=5,5,0 txq=6,6,0 txq=7,7,0

Initializing rx queues on lcore 0 ... rxq=0,0,0
Initializing rx queues on lcore 1 ... rxq=0,1,0
Initializing rx queues on lcore 2 ... rxq=0,2,0
Initializing rx queues on lcore 3 ... rxq=0,3,0
Initializing rx queues on lcore 4 ... rxq=1,0,0
Initializing rx queues on lcore 5 ... rxq=1,1,0
Initializing rx queues on lcore 6 ... rxq=1,2,0
Initializing rx queues on lcore 7 ... rxq=1,3,0
done: Port 0 Link Up - speed 10000 Mbps - full-duplex
done: Port 1 Link Up - speed 10000 Mbps - full-duplex
L3FWD: entering main loop on lcore 1
L3FWD: -- lcoreid=1 portid=0 rxqueueid=1
L3FWD: entering main loop on lcore 3
L3FWD: -- lcoreid=3 portid=0 rxqueueid=3
L3FWD: entering main loop on lcore 2
L3FWD: -- lcoreid=2 portid=0 rxqueueid=2
L3FWD: entering main loop on lcore 4
L3FWD: -- lcoreid=4 portid=1 rxqueueid=0
L3FWD: entering main loop on lcore 5
L3FWD: -- lcoreid=5 portid=1 rxqueueid=1
L3FWD: entering main loop on lcore 0
L3FWD: -- lcoreid=0 portid=0 rxqueueid=0
L3FWD: entering main loop on lcore 7
L3FWD: -- lcoreid=7 portid=1 rxqueueid=3
L3FWD: entering main loop on lcore 6
L3FWD: -- lcoreid=6 portid=1 rxqueueid=2
* Re: l3fwd doesn't seem to work
  2013-06-14  5:17 ` Jia.Sui(贾睢)
  0 siblings, 1 reply; 3+ messages in thread

From: Jia.Sui(贾睢) @ 2013-06-14  5:17 UTC (permalink / raw)
To: Patrick Mahan, dev@dpdk.org

[-- Attachment #1: Type: text/plain, Size: 7030 bytes --]

Hi Patrick,

For l3fwd you also need to set the destination MAC address of the generated
packets to the MAC address of the receive port on the DUT (which runs l3fwd).
Please refer to the attachment.

Thanks

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Patrick Mahan
Sent: Friday, June 14, 2013 1:10 PM
To: dev@dpdk.org
Subject: [dpdk-dev] l3fwd doesn't seem to work

[-- quoted original message trimmed; see the first message in the thread --]

[-- Attachment #2: DPDK_L3fwd.png --]
[-- Type: image/png, Size: 23201 bytes --]
* Re: l3fwd doesn't seem to work
  2013-06-14  7:41 ` Patrick Mahan
  0 siblings, 0 replies; 3+ messages in thread

From: Patrick Mahan @ 2013-06-14  7:41 UTC (permalink / raw)
To: "Jia.Sui(贾睢)"; +Cc: dev@dpdk.org

Wow, thanks very much for that. I guess I missed that in the documentation.
I'll fix that up first thing in the morning.

Patrick

On 6/13/13 10:17 PM, Jia.Sui(贾睢) wrote:
> Hi Patrick
>
> for l3fwd you need also set the destination mac address to the receive port
> mac address on DUT(which run l3fwd)
> Please refer the attachment.
>
> thanks
>
> [-- remainder of quoted message trimmed; see the first message in the
> thread --]
end of thread, other threads: [~2013-06-14 7:41 UTC | newest]

Thread overview: 3+ messages
2013-06-14  5:10 l3fwd doesn't seem to work -- Patrick Mahan
2013-06-14  5:17 ` Re: l3fwd doesn't seem to work -- Jia.Sui(贾睢)
2013-06-14  7:41 `   Re: l3fwd doesn't seem to work -- Patrick Mahan