From: Andi Kleen
Subject: Re: Page allocator bottleneck
Date: Thu, 14 Sep 2017 13:19:17 -0700
Message-ID: <87vaklyqwq.fsf@linux.intel.com>
To: Tariq Toukan
Cc: David Miller, Jesper Dangaard Brouer, Mel Gorman, Eric Dumazet,
    Alexei Starovoitov, Saeed Mahameed, Eran Ben Elisha,
    Linux Kernel Network Developers, Andrew Morton, Michal Hocko, linux-mm
In-Reply-To: (Tariq Toukan's message of "Thu, 14 Sep 2017 19:49:31 +0300")

Tariq Toukan writes:
>
> Congestion in this case is very clear.
> When monitored in perf top:
> 85.58% [kernel] [k] queued_spin_lock_slowpath

Please look at the callers. Spinlock profiles without callers are
usually useless because they just blame the messenger.

Most likely the PCP lists are too small for your extreme allocation
rate, so the allocator goes back to the shared pool too often. You can
play with the vm.percpu_pagelist_fraction setting.

-Andi
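[The two suggestions above might look like this in practice. A sketch only:
the 10-second sample window and the fraction value 8 are illustrative, and
the sysctl write needs root. On kernels from 5.14 onward this knob was
replaced by vm.percpu_pagelist_high_fraction.]

```shell
# Record with call graphs so the spinlock time can be attributed to its
# callers instead of to queued_spin_lock_slowpath itself.
perf record -a -g -- sleep 10
perf report --no-children   # see which call paths take the contended lock

# Inspect the current per-CPU pagelist tuning (0 = kernel default).
sysctl vm.percpu_pagelist_fraction

# Each per-CPU list may then hold up to 1/8 of each zone's pages.
# A smaller fraction means larger PCP lists, hence fewer trips back to
# the zone's shared lock. 8 is the minimum accepted (largest lists).
sysctl -w vm.percpu_pagelist_fraction=8
```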