From: Olivier MATZ
Subject: Re: [PATCH] rte_mbuf: scattered pktmbufs freeing optimization
Date: Mon, 09 Mar 2015 09:38:40 +0100
Message-ID: <54FD5C10.7060701@6wind.com>
References: <1424992506-20484-1-git-send-email-vadim.suraev@gmail.com> <2601191342CEEE43887BDE71AB977258213F2C93@irsmsx105.ger.corp.intel.com> <54F06F3A.40401@6wind.com> <54F6C832.4070505@6wind.com>
To: Vadim Suraev
Cc: dev@dpdk.org

Hi Vadim,

On 03/07/2015 12:24 AM, Vadim Suraev wrote:
> Hi, Olivier,
> I realized that if the local cache for the mempool is enabled and its
> size is greater than 0, then if, say, the mempool size is X and the
> local cache length is Y (and it is not empty, Y > 0), an attempt to
> allocate a bulk whose size is greater than the local cache size (max)
> and greater than X - Y (which is the number of entries in the ring)
> will fail.
> The reason is:
> __mempool_get_bulk will check whether the bulk to be allocated is
> greater than mp->cache_size and will fall back to ring_dequeue.
> And the ring does not contain enough entries in this case, while the
> sum of ring entries + cache length may be greater than or equal to
> the bulk's size, so theoretically the bulk could be allocated.
> Is it an expected behaviour? Am I missing something?

I think it's the expected behavior, as the code of mempool_get() tries
to minimize the number of tests. In this situation, even if
len(mempool) + len(cache) is greater than the number of requested
objects, we are almost out of buffers, so returning ENOBUFS is not a
problem (see the sketch at the end of this mail).

If the user wants to ensure that he can allocate at least X buffers,
he can create the pool with:

  mempool_create(X + cache_size * RTE_MAX_LCORE)

> By the way, rte_mempool_count returns the ring count + the sum of all
> local caches; IMHO it could mislead, even twice.

Right, today rte_mempool_count() cannot really be used for anything
other than debug or stats. Adding rte_mempool_common_count() and
rte_mempool_cache_len() may be useful to give the user better control
(and they would be faster because they wouldn't browse the cache
lengths of all lcores). But we have to keep in mind that for
multi-consumer pools, checking the common count before retrieving
objects is useless, because the other lcores can retrieve objects at
the same time.

Regards,
Olivier
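
A minimal sketch of the scenario discussed in this thread, assuming a
single lcore and made-up sizes (a 64-object pool with a 32-object
per-lcore cache). The exact cache occupancy after the get/put pair
depends on the cache refill and flush thresholds, but the bulk request
below is larger than both the cache size and what remains in the ring:

/*
 * Sketch only (illustrative sizes, single lcore): shows a bulk get
 * failing although the pool still holds enough free objects in total.
 */
#include <stdio.h>

#include <rte_eal.h>
#include <rte_mempool.h>

#define POOL_SIZE  64   /* X: total number of objects in the pool */
#define CACHE_SIZE 32   /* per-lcore cache size */
#define ELT_SIZE   128  /* arbitrary element size */

int main(int argc, char **argv)
{
	struct rte_mempool *mp;
	void *table[40];
	void *obj;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	mp = rte_mempool_create("test_pool", POOL_SIZE, ELT_SIZE,
				CACHE_SIZE, 0,
				NULL, NULL, NULL, NULL, /* no constructors */
				SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return -1;

	/* A single get/put pair leaves objects parked in this lcore's
	 * cache, so the ring now holds fewer than POOL_SIZE entries. */
	if (rte_mempool_get(mp, &obj) == 0)
		rte_mempool_put(mp, obj);

	/* The total free count still reports the whole pool... */
	printf("rte_mempool_count() = %u\n", rte_mempool_count(mp));

	/* ...but a bulk larger than CACHE_SIZE bypasses the cache and
	 * dequeues directly from the ring, which cannot satisfy it on
	 * its own, so this call is expected to fail (-ENOENT). */
	if (rte_mempool_get_bulk(mp, table, 40) < 0)
		printf("bulk of 40 failed although %u objects are free\n",
		       rte_mempool_count(mp));
	else
		rte_mempool_put_bulk(mp, table, 40);

	return 0;
}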
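
In the same spirit, a rough sketch of the sizing rule above. The
element size, cache size and target count are made-up values, and the
helper create_sized_pool() is not a DPDK API, just an illustration:

/*
 * Sketch only: over-provision the pool so that 'x' objects can always
 * be allocated, even in the worst case where every lcore cache holds
 * cache_size objects. Sizes and the helper name are illustrative.
 */
#include <rte_lcore.h>
#include <rte_memory.h>
#include <rte_mempool.h>

#define ELT_SIZE 2048	/* assumed element size */

static struct rte_mempool *
create_sized_pool(const char *name, unsigned x, unsigned cache_size)
{
	/* worst case: cache_size objects stuck in each per-lcore cache */
	unsigned n = x + cache_size * RTE_MAX_LCORE;

	return rte_mempool_create(name, n, ELT_SIZE, cache_size, 0,
				  NULL, NULL, NULL, NULL, /* no constructors */
				  SOCKET_ID_ANY, 0);
}

Note that this over-provisions the pool by cache_size * RTE_MAX_LCORE
objects, which can be a significant amount of memory when the cache is
large, so it is only worth doing when the X-object guarantee really
matters.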
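
Finally, the two helpers mentioned above do not exist in the mempool
API; this is only a rough sketch of what they could look like,
assuming the DPDK 2.0-era struct rte_mempool layout (a ring-backed
pool with the per-lcore local_cache[] array compiled in):

/*
 * Hypothetical helpers, not an existing DPDK API: a sketch against the
 * DPDK 2.0 struct rte_mempool internals.
 */
#include <rte_lcore.h>
#include <rte_mempool.h>
#include <rte_ring.h>

/* Objects currently sitting in the shared ring only. */
static inline unsigned
rte_mempool_common_count(const struct rte_mempool *mp)
{
	return rte_ring_count(mp->ring);
}

/* Objects parked in the local cache of one given lcore. */
static inline unsigned
rte_mempool_cache_len(const struct rte_mempool *mp, unsigned lcore_id)
{
#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
	return mp->local_cache[lcore_id].len;
#else
	(void)mp;
	(void)lcore_id;
	return 0;
#endif
}

As said above, on a multi-consumer pool these values are only a
snapshot: other lcores may take or return objects between the check
and the actual get.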