From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Burakov, Anatoly"
Subject: Re: CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES: no difference in memory pool allocations, when enabling/disabling this configuration
Date: Mon, 26 Nov 2018 11:09:58 +0000
Message-ID: <2b09cec8-0883-2ed2-0264-aeef871ea6a9@intel.com>
To: Asaf Sinai, "dev@dpdk.org"
List-Id: DPDK patches and discussions

On 26-Nov-18 9:15 AM, Asaf Sinai wrote:
> Hi,
>
> We have two NUMA nodes in our system, and we try to allocate a single DPDK memory pool on each node.
> However, we see no difference when enabling or disabling the "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" configuration option.
> We expected that disabling it would allocate pools on only one node (probably NUMA 0), but pools are actually allocated on both nodes, according to the "socket_id" parameter passed to the "rte_mempool_create" API.
> We have 192GB of memory, so NUMA 1 memory starts from address 0x1800000000.
> As you can see below, "undDpdkPoolNameSocket_1" was indeed allocated on NUMA 1, as we wanted, even though "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" is disabled:
>
> CONFIG_RTE_LIBRTE_VHOST_NUMA=n
> CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
>
> created poolName=undDpdkPoolNameSocket_0, nbufs=887808, bufferSize=2432, total=2059MB
> (memZone: name=MP_undDpdkPoolNameSocket_0, socket_id=0, vaddr=0x1f2c0427d00-0x1f2c05abe00, paddr=0x178e627d00-0x178e7abe00, len=1589504, hugepage_sz=2MB)
> created poolName=undDpdkPoolNameSocket_1, nbufs=887808, bufferSize=2432, total=2059MB
> (memZone: name=MP_undDpdkPoolNameSocket_1, socket_id=1, vaddr=0x1f57fa7be40-0x1f57fbfff40, paddr=0x2f8247be40-0x2f825fff40, len=1589504, hugepage_sz=2MB)
>
> Does anyone know what the "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" configuration option is used for?
>
> Thanks,
> Asaf

Hi Asaf,

I cannot reproduce this behavior. I just tried running testpmd with DPDK 18.08 as well as the latest master [1], and DPDK could not successfully allocate a mempool on socket 1.

Did you reconfigure and recompile DPDK after changing this config option?

[1] Latest master will crash on init in this configuration; fix: http://patches.dpdk.org/patch/48338/

--
Thanks,
Anatoly
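[Editor's note: the CONFIG_ lines quoted above are build-time options, which is why recompilation matters. A minimal sketch of what toggling the option looks like in a make-based DPDK release such as 18.08 — the file path and build target are assumptions on my part, not taken from this thread:

    # config/common_base (excerpt; assumed location of build-time config)
    CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=y   # reserve hugepages on each NUMA node
    CONFIG_RTE_LIBRTE_VHOST_NUMA=n

    # DPDK must then be reconfigured and rebuilt for the change to take effect, e.g.:
    #   make config T=x86_64-native-linuxapp-gcc
    #   make

Because these are compile-time flags, editing them without rebuilding DPDK (and relinking the application) would indeed produce no observable difference, which matches the question Anatoly asks above.]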