From: François-Frédéric Ozog
Subject: Receive queue full
Date: Tue, 18 Feb 2014 14:14:51 +0100
List-Id: patches and discussions about DPDK

Hi,

I am bumping into a problem similar to the one explained here
(https://www.mail-archive.com/e1000-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org/msg07684.html):
at some point, a receive queue gets "FULL", i.e. tail == head (reading the
NIC registers), and the thread associated with that queue cannot retrieve
any packet from it.

The test program, derived from L2FWD, echoes packets received on one port
back to the same port: it reads up to 32 packets and sends them back (a
minimal sketch of the loop is appended below). The echo works nicely for a
few seconds, then the queues get full and stall.

I have found that setting rx_conf.rx_free_thresh down from 32 to 28 (the
multiple of 4 just below 32) avoids the problem and can handle close to
10 Mpps per port; see the queue-setup sketch appended below.

Test context:
- 4-socket Xeon E7-4800 v2 with 256 GB RAM
- 32 GB of hugepages reserved, 104 lcores reserved
- DPDK 1.5.0; testing with the latest from git shows a performance glitch
  I can't pinpoint at present
- two ports (either 82599ES or X540) loaded at 10 Mpps
- various tests with 2 to 15 receive queues per port
- various tests with different combinations of RX_PTHRESH, RX_HTHRESH and
  RX_WTHRESH

It really looks like a race condition (32 packets read per burst against a
32-descriptor refresh cycle), but I can't figure out whether one actually
exists.

I'd be glad to get any comments or questions on the issue.

Cordially,

François-Frédéric
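
For reference, here is a minimal sketch of the echo loop described above.
It assumes the DPDK 1.5-era burst API (rte_eth_rx_burst/rte_eth_tx_burst);
the function name, port/queue ids and error handling are illustrative, not
the actual test program:

#include <rte_mbuf.h>
#include <rte_ethdev.h>

#define BURST_SIZE 32 /* up to 32 packets per read, as in the test */

/* Illustrative echo loop: read a burst from a port's RX queue and
 * transmit it straight back out of the same port. */
static void
echo_loop(uint8_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t nb_rx, nb_tx;

	for (;;) {
		nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);
		if (nb_rx == 0)
			continue;

		nb_tx = rte_eth_tx_burst(port_id, queue_id, bufs, nb_rx);

		/* Drop whatever the TX queue could not take. */
		while (nb_tx < nb_rx)
			rte_pktmbuf_free(bufs[nb_tx++]);
	}
}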
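
And a sketch of the RX queue setup, to show where rx_free_thresh comes in.
The ring size and the pthresh/hthresh/wthresh values are just example
l2fwd-style defaults, not taken from the test; only the rx_free_thresh = 28
line is the workaround discussed above:

#include <rte_ethdev.h>

#define RX_RING_SIZE 128 /* example descriptor count, not from the test */

/* Illustrative RX queue setup: a stock l2fwd-style configuration,
 * except that rx_free_thresh is lowered from 32 to 28. */
static int
setup_rx_queue(uint8_t port_id, uint16_t queue_id, unsigned int socket_id,
	       struct rte_mempool *pool)
{
	struct rte_eth_rxconf rx_conf = {
		.rx_thresh = {
			.pthresh = 8, /* RX_PTHRESH (example value) */
			.hthresh = 8, /* RX_HTHRESH (example value) */
			.wthresh = 4, /* RX_WTHRESH (example value) */
		},
		.rx_free_thresh = 28, /* was 32; 28 avoids the stall */
	};

	return rte_eth_rx_queue_setup(port_id, queue_id, RX_RING_SIZE,
				      socket_id, &rx_conf, pool);
}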