From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bruce Richardson
Subject: Re: one worker reading multiple ports
Date: Fri, 21 Nov 2014 14:44:30 +0000
Message-ID: <20141121144430.GA9404@bricha3-MOBL3>
References: <20141120215233.GA15551@mhcomputing.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
To: Newman Poborsky
Cc: "dev-VfR2kkLFssw@public.gmane.org"
List-Id: patches and discussions about DPDK
Sender: "dev"

On Fri, Nov 21, 2014 at 03:03:25PM +0100, Newman Poborsky wrote:
> So, since a mempool is multi-consumer (by default), if one is used to
> configure queues on multiple NICs that have different socket owners, then
> mbuf allocation will fail? But if 2 NICs have the same socket owner,
> everything should work fine? Since I'm talking about 2 ports on the same
> NIC, they must have the same owner, so RX should work with RX queues
> configured with the same mempool, right? But in my case it doesn't, so I
> guess I'm missing something.

Actually, the mempools will work with NICs on multiple sockets - it's just
that performance is likely to suffer due to QPI usage. The mempools being on
one socket or the other is not going to break your application.

>
> Any idea how I can troubleshoot why allocation fails with one mempool and
> works fine with each queue having its own mempool?

At a guess, I'd say that your mempools just aren't big enough. Try doubling
the size of the mempool in the single-pool case and see if it helps things.

/Bruce

>
> Thank you,
>
> Newman
>
> On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall
> wrote:
>
> > On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > > Thank you for your answer.
> > >
> > > I just realized that the reason rte_eth_rx_burst() returns 0 is
> > > because inside ixgbe_recv_pkts() this fails:
> > > nmb = rte_rxmbuf_alloc(rxq->mb_pool); => nmb is NULL
> > >
> > > Does this mean that every RX queue should have its own rte_mempool?
> > > If so, are there any optimal values for: number of RX descriptors,
> > > per-queue rte_mempool size, number of hugepages (from what I
> > > understand, these 3 are correlated)?
> > >
> > > If I'm wrong, please explain why.
> > >
> > > Thanks!
> > >
> > > BR,
> > > Newman
> >
> > Newman,
> >
> > Mempools are created per NUMA node (ordinarily this means per processor
> > socket if sockets > 1).
> >
> > When doing Tx / Rx queue setup, one should determine the socket which
> > owns the given PCI NIC, and try to use memory on that same socket to
> > handle traffic for that NIC and its queues.
> >
> > So, for N cards with Q * N Tx / Rx queues, you only need S mempools,
> > where S is the number of sockets in use.
> >
> > Then each of the Q * N queues will use the mempool from the socket
> > closest to the card.
> >
> > Matthew.
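
To make Bruce's sizing point concrete: a single pool shared by several RX
queues has to cover, worst case, every descriptor of every queue, plus the
per-lcore mempool caches and whatever mbufs the application holds in
flight. A minimal sizing sketch follows, assuming the
rte_pktmbuf_pool_create() helper (added in later DPDK releases; the
port/queue/descriptor counts below are made-up examples, not values from
this thread):

    #include <stdio.h>
    #include <rte_errno.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Example values only -- not taken from this thread. */
    #define NB_PORTS     2
    #define NB_RX_QUEUES 2      /* per port */
    #define NB_RX_DESC   512    /* per queue */
    #define MBUF_CACHE   256    /* per-lcore mempool cache */
    #define NB_LCORES    4

    static struct rte_mempool *
    create_shared_pool(int socket_id)
    {
        /* Worst case, every descriptor of every queue holds an mbuf;
         * add slack for per-lcore caches and bursts held by the app. */
        unsigned nb_mbufs = NB_PORTS * NB_RX_QUEUES * NB_RX_DESC
                + 2 * NB_LCORES * MBUF_CACHE;

        struct rte_mempool *mp = rte_pktmbuf_pool_create("rx_pool",
                nb_mbufs, MBUF_CACHE, 0,
                RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
        if (mp == NULL)
            fprintf(stderr, "mempool creation failed: %s\n",
                    rte_strerror(rte_errno));
        return mp;
    }

A pool sized for a single queue's ring will intermittently return NULL
from the allocator once several rings and caches draw from it, which
matches the rte_rxmbuf_alloc() failure seen above.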
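
Matthew's per-socket layout can be sketched the same way:
rte_eth_dev_socket_id() reports the NUMA node owning a port's PCI device,
and each RX queue is then fed from the pool on that node. The helper name
and error handling here are assumptions for illustration (and on 2014-era
DPDK the port id would be a uint8_t):

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* One pool per NUMA socket, filled in at startup. */
    static struct rte_mempool *socket_pool[RTE_MAX_NUMA_NODES];

    static int
    setup_rx_queues(uint16_t port_id, uint16_t nb_queues, uint16_t nb_desc)
    {
        int socket = rte_eth_dev_socket_id(port_id);
        if (socket < 0)
            socket = 0;     /* unknown socket, e.g. a virtual device */

        uint16_t q;
        for (q = 0; q < nb_queues; q++) {
            /* Descriptor ring and mbufs both live on the NIC's socket. */
            int ret = rte_eth_rx_queue_setup(port_id, q, nb_desc,
                    socket, NULL, socket_pool[socket]);
            if (ret < 0)
                return ret;
        }
        return 0;
    }

With two ports on the same NIC, as in the original question, both ports
report the same socket, so sharing one sufficiently large pool between
them is fine; the per-socket split only matters once ports span sockets.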