Date: Fri, 8 Aug 2014 06:30:08 +0800
From: Yuyang Du
To: Jason Low
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel@vger.kernel.org, Ben Segall,
	Waiman Long, Mel Gorman, Mike Galbraith, Rik van Riel,
	Aswin Chandramouleeswaran
Subject: Re: [PATCH] sched: Reduce contention in update_cfs_rq_blocked_load
Message-ID: <20140807223007.GD2480@intel.com>
References: <1407184118.11407.11.camel@j-VirtualBox>
	<20140804191526.GA2480@intel.com>
	<1407349295.2384.14.camel@j-VirtualBox>
	<20140807180239.GC2480@intel.com>
	<1407471532.8365.18.camel@j-VirtualBox>
In-Reply-To: <1407471532.8365.18.camel@j-VirtualBox>

On Thu, Aug 07, 2014 at 09:18:52PM -0700, Jason Low wrote:
> On Fri, 2014-08-08 at 02:02 +0800, Yuyang Du wrote:
> > On Wed, Aug 06, 2014 at 11:21:35AM -0700, Jason Low wrote:
> > > I ran these tests with most of the AIM7 workloads to compare their
> > > performance between a 3.16 kernel and the kernel with these patches
> > > applied.
> > >
> > > The table below contains the percent difference between the baseline
> > > kernel and the kernel with the patches at various user counts. A
> > > positive percent means the kernel with the patches performed better,
> > > while a negative percent means the baseline performed better.
> > >
> > > Based on these numbers, for many of the workloads, the change was
> > > beneficial in the highly contended cases, while it had a negative
> > > impact in many of the lightly/moderately contended cases (10 to 90
> > > users).
> > >
> > > -----------------------------------------------------
> > >              |  10-90  | 100-1000 | 1100-2000
> > >              |  users  |  users   |  users
> > > -----------------------------------------------------
> > > alltests     |  -3.37% | -10.64%  |  -2.25%
> > > -----------------------------------------------------
> > > all_utime    |  +0.33% |  +3.73%  |  +3.33%
> > > -----------------------------------------------------
> > > compute      |  -5.97% |  +2.34%  |  +3.22%
> > > -----------------------------------------------------
> > > custom       | -31.61% | -10.29%  | +15.23%
> > > -----------------------------------------------------
> > > disk         | +24.64% | +28.96%  | +21.28%
> > > -----------------------------------------------------
> > > fserver      |  -1.35% |  +4.82%  |  +9.35%
> > > -----------------------------------------------------
> > > high_systime |  -6.73% |  -6.28%  | +12.36%
> > > -----------------------------------------------------
> > > shared       | -28.31% | -19.99%  |  -7.10%
> > > -----------------------------------------------------
> > > short        | -44.63% | -37.48%  | -33.62%
> > > -----------------------------------------------------
> > >
> > Thanks, Jason. Sorry for the late response.
> >
> > What about the variation of the tests? The machine you tested on?
>
> Hi Yuyang,
>
> These tests were also done on an 8 socket machine (80 cores).
> In terms of variation between the average throughputs, typically the
> noise range is about 2% in many of the workloads.
>

Thanks a lot, Jason. So for this particular set of workloads on a big
machine, I think the result is mixed and overall "neutral", though I would
have expected the run-to-run variation to be bigger, especially for the
light workloads.

Any comments from the maintainers and others? Ping Peter and Ben, I haven't
heard from you on the 5th version.

Yuyang
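
P.S. To make explicit the arithmetic I am reading from the table, below is a
minimal sketch (in Python) of how the percent difference and the noise range
can be computed from raw AIM7 jobs/min throughputs. The sample numbers and
run counts are made up for illustration only, not taken from Jason's runs.

# Sketch only: derive percent-difference and noise-range figures from
# repeated AIM7 throughput samples (jobs/min). All numbers are hypothetical.
from statistics import mean

baseline_runs = [41200.0, 41550.0, 40980.0]   # 3.16 kernel, 3 runs
patched_runs  = [42100.0, 42400.0, 41900.0]   # 3.16 + these patches, 3 runs

def pct_diff(baseline, patched):
    # Positive means the patched kernel performed better than the baseline.
    return (mean(patched) - mean(baseline)) / mean(baseline) * 100.0

def noise_range(runs):
    # Spread of the repeated runs as a percentage of their mean throughput.
    return (max(runs) - min(runs)) / mean(runs) * 100.0

print("percent difference:   %+.2f%%" % pct_diff(baseline_runs, patched_runs))
print("baseline noise range:  %.2f%%" % noise_range(baseline_runs))
print("patched noise range:   %.2f%%" % noise_range(patched_runs))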