Date: Mon, 9 Oct 2017 15:53:38 +0800
From: Aaron Lu
To: Anshuman Khandual
Cc: linux-mm, lkml, Andrew Morton, Andi Kleen, Dave Hansen, Huang Ying,
 Tim Chen, Kemi Wang
Subject: Re: [PATCH] page_alloc.c: inline __rmqueue()
Message-ID: <20171009075338.GC1798@intel.com>
References: <20171009054434.GA1798@intel.com>
In-Reply-To:
User-Agent: Mutt/1.9.1 (2017-09-22)

On Mon, Oct 09, 2017 at 01:07:36PM +0530, Anshuman Khandual wrote:
> On 10/09/2017 11:14 AM, Aaron Lu wrote:
> > __rmqueue() is called by rmqueue_bulk() and rmqueue() under zone->lock
> > and that lock can be heavily contended with memory intensive applications.
> >
> > Since __rmqueue() is a small function, inlining it can save us some time.
> > With the will-it-scale/page_fault1/process benchmark, when using nr_cpu
> > processes to stress the buddy allocator:
> >
> > On a 2 sockets Intel-Skylake machine:
> >
> >      base     %change      head
> >     77342       +6.3%     82203    will-it-scale.per_process_ops
> >
> > On a 4 sockets Intel-Skylake machine:
> >
> >      base     %change      head
> >     75746       +4.6%     79248    will-it-scale.per_process_ops
> >
> > This patch adds inline to __rmqueue().
> >
> > Signed-off-by: Aaron Lu
>
> Ran it through the kernbench and ebizzy micro benchmarks. Results
> were comparable with and without the patch. Maybe these are not
> the appropriate tests for this inlining improvement. Anyway, it

I think so. The benefit only shows up when the lock contention is heavy
enough, e.g. perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
is as high as 80% with the workload I used.

> does not cause any performance degradation either.
>
> Reviewed-by: Anshuman Khandual
> Tested-by: Anshuman Khandual

Thanks!
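
For reference, the change under discussion amounts to roughly the below.
This is a sketch only: the actual hunk is not quoted in this mail, and the
function body is an approximation of mm/page_alloc.c from this period, not
the literal patch.

/*
 * Remove a page of the requested order from the buddy free lists of
 * @zone, trying @migratetype first and falling back to other migrate
 * types.  Both callers, rmqueue() and rmqueue_bulk(), invoke it with
 * zone->lock held, so dropping the function call overhead via the
 * inline hint helps most when that lock is heavily contended.
 */
static inline
struct page *__rmqueue(struct zone *zone, unsigned int order,
		       int migratetype)
{
	struct page *page;

retry:
	page = __rmqueue_smallest(zone, order, migratetype);
	if (unlikely(!page)) {
		if (migratetype == MIGRATE_MOVABLE)
			page = __rmqueue_cma_fallback(zone, order);

		if (!page && !__rmqueue_fallback(zone, order, migratetype))
			goto retry;
	}

	trace_mm_page_alloc_zone_locked(page, order, migratetype);
	return page;
}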