From: bugzilla@dpdk.org
Subject: [Bug 183] Problem using cloned rte_mbuf buffers with KNI interface
Date: Mon, 07 Jan 2019 15:29:42 +0000
To: dev@dpdk.org
https://bugs.dpdk.org/show_bug.cgi?id=183
Bug ID: 183
Summary: Problem using cloned rte_mbuf buffers with KNI
interface
Product: DPDK
Version: 18.11
Hardware: All
OS: Linux
Status: CONFIRMED
Severity: normal
Priority: Normal
Component: other
Assignee: dev@dpdk.org
Reporter: dinesh.kp78@gmail.com
Target Milestone: ---
Problem appears in DPDK 18.11.
We have a scenario where cloned rte_mbuf packets are sent to a kernel virtual
interface via the KNI API. Things were working fine up to DPDK 18.05, but after
upgrading to DPDK 18.11 we noticed that empty packets are getting delivered via
the KNI interface.
Environment setup
--------------------------
dpdk-devbind.py --status-dev net
Network devices using DPDK-compatible driver
============================================
0000:00:0b.0 '82540EM Gigabit Ethernet Controller 100e' drv=igb_uio unused=e1000
0000:00:0c.0 '82540EM Gigabit Ethernet Controller 100e' drv=igb_uio unused=e1000

Network devices using kernel driver
===================================
0000:00:03.0 'Virtio network device 1000' if=eth0 drv=virtio-pci unused=virtio_pci,igb_uio *Active*
0000:00:04.0 'Virtio network device 1000' if=eth1 drv=virtio-pci unused=virtio_pci,igb_uio *Active*
0000:00:05.0 'Virtio network device 1000' if=eth2 drv=virtio-pci unused=virtio_pci,igb_uio *Active*
DPDK kernel modules loaded
--------------------------
lsmod | grep igb_uio
igb_uio                13506  2
uio 19259 5 igb_uio
lsmod | grep rte_kni
rte_kni                28122  1
Red Hat Linux 7
uname -r
3.10.0-862.9.1.el7.x86_64 #1 SMP Wed Jun 27 04:30:39 EDT 2018 x86_64 x86_64
x86_64 GNU/Linux
Problem simulation
--------------------------
To simulate the scenario, I modified the kni_ingress() function in
dpdk-18.11/examples/kni/main.c to use rte_pktmbuf_clone() before sending packets
to the KNI interface:
static void
kni_ingress(struct kni_port_params *p)
{
	uint8_t i;
	uint16_t port_id;
	unsigned nb_rx, num;
	unsigned k;
	uint32_t nb_kni;
	struct rte_mbuf *pkts_burst[PKT_BURST_SZ];
	struct rte_mbuf *pkt;

	if (p == NULL)
		return;

	nb_kni = p->nb_kni;
	port_id = p->port_id;
	for (i = 0; i < nb_kni; i++) {
		/* Burst rx from eth */
		nb_rx = rte_eth_rx_burst(port_id, 0, pkts_burst, PKT_BURST_SZ);
		if (unlikely(nb_rx > PKT_BURST_SZ)) {
			RTE_LOG(ERR, APP, "Error receiving from eth\n");
			return;
		}

		/* ----------- clone pkt start ----------- */
		for (k = 0; k < nb_rx; k++) {
			pkt = pkts_burst[k];
			/*
			 * Using 'pkt->pool' for the clones is not an efficient
			 * use of memory; a separate pool with no data room
			 * reserved would be better, since a clone only needs
			 * new metadata plus a reference to the original data.
			 * For this test simulation it is fine to reuse the
			 * same buffer pool.
			 */
			pkts_burst[k] = rte_pktmbuf_clone(pkt, pkt->pool);
			rte_pktmbuf_free(pkt);
		}
		/* ----------- clone pkt end ----------- */

		/* Burst tx to kni */
		num = rte_kni_tx_burst(p->kni[i], pkts_burst, nb_rx);
		if (num)
			kni_stats[port_id].rx_packets += num;
		rte_kni_handle_request(p->kni[i]);
		if (unlikely(num < nb_rx)) {
			/* Free mbufs not tx to kni interface */
			kni_burst_free_mbufs(&pkts_burst[num], nb_rx - num);
			kni_stats[port_id].rx_dropped += nb_rx - num;
		}
	}
}
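As noted in the comment above, a dedicated pool with no data room would be a
better fit for clone metadata than reusing pkt->pool. A minimal sketch of such
a pool follows; the names clone_pool, NB_CLONE_MBUF and CLONE_CACHE_SZ are
illustrative and not part of the example application:

#include <rte_mbuf.h>
#include <rte_lcore.h>

#define NB_CLONE_MBUF  4096
#define CLONE_CACHE_SZ 32

static struct rte_mempool *clone_pool;

static int
create_clone_pool(void)
{
	/*
	 * data_room_size = 0: each clone carries only mbuf metadata plus a
	 * reference to the original packet data, no payload buffer.
	 */
	clone_pool = rte_pktmbuf_pool_create("clone_pool", NB_CLONE_MBUF,
			CLONE_CACHE_SZ, 0, 0, rte_socket_id());
	return clone_pool == NULL ? -1 : 0;
}

With such a pool, the clone call in kni_ingress() would become
rte_pktmbuf_clone(pkt, clone_pool) instead of reusing pkt->pool.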
# /tmp/18.11/kni -l 0-1 -n 4 -b 0000:00:03.0 -b 0000:00:04.0 -b 0000:00:05.0
--proc-type=auto -m 512 -- -p 0x1 -P --config="(0,0,1)"
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL: Device is blacklisted, not initializing
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL: Device is blacklisted, not initializing
EAL: PCI device 0000:00:05.0 on NUMA socket -1
EAL: Device is blacklisted, not initializing
EAL: PCI device 0000:00:0b.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100e net_e1000_em
EAL: PCI device 0000:00:0c.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100e net_e1000_em
APP: Initialising port 0 ...
KNI: pci: 00:0b:00 8086:100e
Checking link status
.....done
Port0 Link Up - speed 1000Mbps - full-duplex
APP: ========================
APP: KNI Running
APP: kill -SIGUSR1 8903
APP: Show KNI Statistics.
APP: kill -SIGUSR2 8903
APP: Zero KNI Statistics.
APP: ========================
APP: Lcore 1 is writing to port 0
APP: Lcore 0 is reading from port 0
APP: Configure network interface of 0 up
KNI: Configure promiscuous mode of 0 to 1
Bring up vEth0 interface created by kni example app
# ifconfig vEth0 up
Dump the content received on vEth0 interface
# tcpdump -vv -i vEth0
tcpdump: listening on vEth0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:38:31.982085 [|ether]
13:38:32.050576 [|ether]
13:38:32.099805 [|ether]
13:38:32.151790 [|ether]
13:38:32.206755 [|ether]
13:38:32.253135 [|ether]
13:38:32.298773 [|ether]
13:38:32.345555 [|ether]
13:38:32.388859 [|ether]
13:38:32.467562 [|ether]
When sending packets to the "00:0b:00" interface using tcpreplay, I could see
packets with empty content received on "vEth0". Sometimes I have also seen the
kni example app crash with a segmentation fault.
After analysing the rte_kni net driver, it appears that the physical-to-virtual
address conversion is not done properly, perhaps due to the memory management
changes in recent DPDK versions. (I can also confirm that the modified kni
example works perfectly fine on DPDK 18.02.)
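Not from the original analysis, but one way to narrow this down is to check
which IOVA mode the EAL selected at start-up, since the memory subsystem rework
changed how physical addresses are laid out. A small sketch using
rte_eal_iova_mode(), to be called after rte_eal_init() in the example app:

#include <stdio.h>
#include <rte_eal.h>

/*
 * Report whether the EAL is running in physical-address (PA) or
 * virtual-address (VA) IOVA mode; rte_kni relies on address translation
 * between kernel and user space, so the mode is relevant when debugging
 * this kind of corruption.
 */
static void
print_iova_mode(void)
{
	enum rte_iova_mode mode = rte_eal_iova_mode();

	printf("EAL IOVA mode: %s\n",
	       mode == RTE_IOVA_PA ? "PA" :
	       mode == RTE_IOVA_VA ? "VA" : "DC");
}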
As a workaround I used the --legacy-mem switch during kni app start-up. It
seems promising: I could receive and dump the cloned packets without any issue.
# /tmp/18.11/kni -l 0-1 -n 4 -b 0000:00:03.0 -b 0000:00:04.0 -b 0000:00:05.0
--proc-type=auto --legacy-mem -m 512 -- -p 0x1 -P --config="(0,0,1)"
Could someone confirm whether this is a bug in DPDK 18.11?
Thanks,
Dinesh