From: "Michael Hu (NSBU)"
Subject: dpdk starting issue with descending virtual address allocation in new kernel
Date: Wed, 10 Sep 2014 22:40:36 +0000
To: dev-VfR2kkLFssw@public.gmane.org
List-Id: patches and discussions about DPDK

Hi All,

We have a kernel config question to consult you on.

DPDK failed to start, due to an mbuf creation failure, with the new kernel 3.14.17 + grsecurity patches. We tried to trace down the issue, and it seems that the virtual addresses of the huge pages are allocated from high to low by this kernel, whereas DPDK expects them to run from low to high in order to treat them as consecutive. See the dumped virtual addresses below: the first is 0x710421400000, then 0x710421200000, where previously it would be 0x710421200000 first, then 0x710421400000. Either way, the mappings are still consecutive.
---- Initialize Port 0 -- TxQ 1, RxQ 1, Src MAC 00:0c:29:b3:30:db

Create: Default RX 0:0 - Memory used (MBUFs 4096 x (size 1984 + Hdr 64)) + 790720 = 8965 KB

Zone 0: name:, phys:0x6ac00000, len:0x2080, virt:0x710421400000, socket_id:0, flags:0
Zone 1: name:, phys:0x6ac02080, len:0x1d10c0, virt:0x710421402080, socket_id:0, flags:0
Zone 2: name:, phys:0x6ae00000, len:0x160000, virt:0x710421200000, socket_id:0, flags:0
Zone 3: name:, phys:0x6add3140, len:0x11a00, virt:0x7104215d3140, socket_id:0, flags:0
Zone 4: name:, phys:0x6ade4b40, len:0x300, virt:0x7104215e4b40, socket_id:0, flags:0
Zone 5: name:, phys:0x6ade4e80, len:0x200, virt:0x7104215e4e80, socket_id:0, flags:0
Zone 6: name:, phys:0x6ade5080, len:0x10080, virt:0x7104215e5080, socket_id:0, flags:0
Segment 0: phys:0x6ac00000, len:2097152, virt:0x710421400000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x6ae00000, len:2097152, virt:0x710421200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: phys:0x6b000000, len:2097152, virt:0x710421000000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: phys:0x6b200000, len:2097152, virt:0x710420e00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: phys:0x6b400000, len:2097152, virt:0x710420c00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 5: phys:0x6b600000, len:2097152, virt:0x710420a00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 6: phys:0x6b800000, len:2097152, virt:0x710420800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 7: phys:0x6ba00000, len:2097152, virt:0x710420600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 8: phys:0x6bc00000, len:2097152, virt:0x710420400000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 9: phys:0x6be00000, len:2097152, virt:0x710420200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
---

The related DPDK code is in
dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c :: rte_eal_hugepage_init():

    for (i = 0; i < nr_hugefiles; i++) {
        new_memseg = 0;

        /* if this is a new section, create a new memseg */
        if (i == 0)
            new_memseg = 1;
        else if (hugepage[i].socket_id != hugepage[i-1].socket_id)
            new_memseg = 1;
        else if (hugepage[i].size != hugepage[i-1].size)
            new_memseg = 1;
        else if ((hugepage[i].physaddr - hugepage[i-1].physaddr) !=
            hugepage[i].size)
            new_memseg = 1;
        else if (((unsigned long)hugepage[i].final_va -
            (unsigned long)hugepage[i-1].final_va) != hugepage[i].size) {
            new_memseg = 1;
        }

Is this a known issue? Is there any workaround? Or could you advise which kernel config option may relate to this kernel behavior change?

Thanks,
Michael