From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Ingo Molnar <mingo@kernel.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 0/18] Basic scheduler support for automatic NUMA balancing V5
Date: Tue, 16 Jul 2013 20:40:06 +0530	[thread overview]
Message-ID: <20130716151006.GA13058@linux.vnet.ibm.com> (raw)
In-Reply-To: <1373901620-2021-1-git-send-email-mgorman@suse.de>



Summary:
Seeing improvement on a 2-node box when running the autonuma benchmark,
but seeing a regression for specjbb on the same box.

Also seeing a huge regression when running the autonuma benchmark
on both the 4-node and 8-node boxes.


Below are the autonuma benchmark results on a 2-node machine.
mainline v3.9: (HT enabled)
	Testcase:      Min      Max      Avg   StdDev
	  numa01:   220.12   246.96   239.18     9.69
	  numa02:    41.85    43.02    42.43     0.47
v3.9 + Mel's v5 patches: (HT enabled)
	Testcase:      Min      Max      Avg   StdDev  %Change
	  numa01:   239.52   242.99   241.61     1.26   -1.00%
	  numa02:    37.94    38.12    38.05     0.06   11.49%

mainline v3.9:
	Testcase:      Min      Max      Avg   StdDev
	  numa01:   118.72   121.04   120.23     0.83
	  numa02:    36.64    37.56    36.99     0.34
v3.9 + Mel's v5 patches:
	Testcase:      Min      Max      Avg   StdDev  %Change
	  numa01:   111.34   122.28   118.61     3.77    1.32%
	  numa02:    36.23    37.27    36.55     0.37    1.18%
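For reference, the summary columns above can be reproduced from the raw per-run timings with a short sketch like the one below. The run lists here are hypothetical, and the %Change formula is my inference: (baseline avg − patched avg) / patched max × 100 reproduces every %Change value in these tables to two decimals, with positive meaning the patched kernel is faster.

```python
import statistics

def summarize(times):
    """Min/Max/Avg/StdDev columns for one testcase's runs."""
    return (min(times), max(times),
            statistics.mean(times), statistics.pstdev(times))

def pct_change(base_avg, patched_avg, patched_max):
    """%Change as it appears to be computed in the tables above:
    (baseline avg - patched avg) / patched max * 100."""
    return (base_avg - patched_avg) / patched_max * 100

# numa02 and numa01, 2-node HT-enabled runs from the tables above:
print(round(pct_change(42.43, 38.05, 38.12), 2))     # 11.49
print(round(pct_change(239.18, 241.61, 242.99), 2))  # -1.0
```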

Here are the results of a specjbb run on a 2-node machine.
Specjbb was run in 3 VMs.
In the fit case, one VM was sized to fit within one node.
In the no-fit case, one VM was bigger than the node size.


Specjbb results.
---------------------------------------------------------------------------------------
|               |   vm|                          nofit|                            fit|
|               |   vm|          noksm|            ksm|          noksm|            ksm|
|               |   vm|  nothp|    thp|  nothp|    thp|  nothp|    thp|  nothp|    thp|
---------------------------------------------------------------------------------------
| mainline_v39+ | vm_1| 136056| 189423| 135359| 186722| 136983| 191669| 136728| 184253|
| mainline_v39+ | vm_2|  66041|  84779|  64564|  86645|  67426|  84427|  63657|  85043|
| mainline_v39+ | vm_3|  67322|  83301|  63731|  85394|  65015|  85156|  63838|  84199|
| mel_numa_balan| vm_1| 133170| 177883| 136385| 176716| 140650| 174535| 132811| 190120|
| mel_numa_balan| vm_2|  65021|  81707|  62876|  81826|  63635|  84943|  58313|  78997|
| mel_numa_balan| vm_3|  61915|  82198|  60106|  81723|  64222|  81123|  59559|  78299|
| change  %     | vm_1|  -2.12|  -6.09|   0.76|  -5.36|   2.68|  -8.94|  -2.86|   3.18|
| change  %     | vm_2|  -1.54|  -3.62|  -2.61|  -5.56|  -5.62|   0.61|  -8.39|  -7.11|
| change  %     | vm_3|  -8.03|  -1.32|  -5.69|  -4.30|  -1.22|  -4.74|  -6.70|  -7.01|
---------------------------------------------------------------------------------------
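The "change %" rows in the table follow the usual (patched − baseline) / baseline convention; a minimal check against the vm_1 nofit/noksm columns:

```python
def specjbb_change(baseline, patched):
    """change % rows: (patched - baseline) / baseline * 100."""
    return (patched - baseline) / baseline * 100

# vm_1, nofit / noksm / nothp column from the table above:
print(round(specjbb_change(136056, 133170), 2))  # -2.12
# vm_1, nofit / noksm / thp:
print(round(specjbb_change(189423, 177883), 2))  # -6.09
```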

numactl output:

available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 12276 MB
node 0 free: 10574 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 12288 MB
node 1 free: 9697 MB
node distances:
node   0   1 
  0:  10  21 
  1:  21  10 


Autonuma benchmark results on a 4-node machine.

KernelVersion: 3.9.0(HT)
	Testcase:      Min      Max      Avg   StdDev
	  numa01:   569.80   624.94   593.12    19.14
	  numa02:    18.65    21.32    19.69     0.98

KernelVersion: 3.9.0 + Mel's v5 patches(HT)
	Testcase:      Min      Max      Avg   StdDev  %Change
	  numa01:   718.83   750.46   740.10    11.42  -19.59%
	  numa02:    20.07    22.36    20.97     0.81   -5.72%

KernelVersion: 3.9.0
	Testcase:      Min      Max      Avg   StdDev
	  numa01:   586.75   628.65   604.15    16.13
	  numa02:    19.67    20.49    19.93     0.29

KernelVersion: 3.9.0 + Mel's v5 patches
	Testcase:      Min      Max      Avg   StdDev  %Change
	  numa01:   741.48   759.37   747.23     6.36  -18.84%
	  numa02:    20.55    22.06    21.21     0.52   -5.80%



	System x3750 M4 -[8722C1A]-

numactl output:
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 32 33 34 35 36 37 38 39
node 0 size: 65468 MB
node 0 free: 63069 MB
node 1 cpus: 8 9 10 11 12 13 14 15 40 41 42 43 44 45 46 47
node 1 size: 65536 MB
node 1 free: 63497 MB
node 2 cpus: 16 17 18 19 20 21 22 23 48 49 50 51 52 53 54 55
node 2 size: 65536 MB
node 2 free: 63515 MB
node 3 cpus: 24 25 26 27 28 29 30 31 56 57 58 59 60 61 62 63
node 3 size: 65536 MB
node 3 free: 63659 MB
node distances:
node   0   1   2   3 
  0:  10  11  11  12 
  1:  11  10  12  11 
  2:  11  12  10  11 
  3:  12  11  11  10 
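The distance matrices shown by numactl come from the firmware's SLIT data (the same values exposed in /sys/devices/system/node/node*/distance); to sanity-check topology programmatically, the table can be parsed with a small sketch like this (the embedded text is the 4-node matrix from above):

```python
NUMACTL_DISTANCES = """\
node   0   1   2   3
  0:  10  11  11  12
  1:  11  10  12  11
  2:  11  12  10  11
  3:  12  11  11  10
"""

def parse_distances(text):
    """Parse numactl's 'node distances' table into {(src, dst): distance}."""
    lines = text.strip().splitlines()
    cols = [int(n) for n in lines[0].split()[1:]]  # header row: destination nodes
    dist = {}
    for line in lines[1:]:
        src_s, *vals = line.split()
        src = int(src_s.rstrip(':'))
        for dst, d in zip(cols, vals):
            dist[(src, dst)] = int(d)
    return dist

d = parse_distances(NUMACTL_DISTANCES)
print(d[(0, 3)])  # 12 -- the farthest node pair on this box
print(d[(1, 1)])  # 10 -- local access
```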

The results on the 8-node box look similar to the 4-node box.
-- 
Thanks and Regards
Srikar Dronamraju


Thread overview: 99+ messages
2013-07-15 15:20 [PATCH 0/18] Basic scheduler support for automatic NUMA balancing V5 Mel Gorman
2013-07-15 15:20 ` [PATCH 01/18] mm: numa: Document automatic NUMA balancing sysctls Mel Gorman
2013-07-15 15:20 ` [PATCH 02/18] sched: Track NUMA hinting faults on per-node basis Mel Gorman
2013-07-17 10:50   ` Peter Zijlstra
2013-07-31  7:54     ` Mel Gorman
2013-07-29 10:10   ` Peter Zijlstra
2013-07-31  7:54     ` Mel Gorman
2013-07-15 15:20 ` [PATCH 03/18] mm: numa: Account for THP numa hinting faults on the correct node Mel Gorman
2013-07-17  0:33   ` Hillf Danton
2013-07-15 15:20 ` [PATCH 04/18] mm: numa: Do not migrate or account for hinting faults on the zero page Mel Gorman
2013-07-17 11:00   ` Peter Zijlstra
2013-07-31  8:11     ` Mel Gorman
2013-07-15 15:20 ` [PATCH 05/18] sched: Select a preferred node with the most numa hinting faults Mel Gorman
2013-07-15 15:20 ` [PATCH 06/18] sched: Update NUMA hinting faults once per scan Mel Gorman
2013-07-15 15:20 ` [PATCH 07/18] sched: Favour moving tasks towards the preferred node Mel Gorman
2013-07-25 10:40   ` [PATCH] sched, numa: migrates_degrades_locality() Peter Zijlstra
2013-07-31  8:44     ` Mel Gorman
2013-07-31  8:50       ` Peter Zijlstra
2013-07-15 15:20 ` [PATCH 08/18] sched: Reschedule task on preferred NUMA node once selected Mel Gorman
2013-07-17  1:31   ` Hillf Danton
2013-07-31  9:07     ` Mel Gorman
2013-07-31  9:38       ` Srikar Dronamraju
2013-08-01  4:47   ` Srikar Dronamraju
2013-08-01 15:38     ` Mel Gorman
2013-07-15 15:20 ` [PATCH 09/18] sched: Add infrastructure for split shared/private accounting of NUMA hinting faults Mel Gorman
2013-07-17  2:17   ` Hillf Danton
2013-07-31  9:08     ` Mel Gorman
2013-07-15 15:20 ` [PATCH 10/18] sched: Increase NUMA PTE scanning when a new preferred node is selected Mel Gorman
2013-07-15 15:20 ` [PATCH 11/18] sched: Check current->mm before allocating NUMA faults Mel Gorman
2013-07-15 15:20 ` [PATCH 12/18] sched: Set the scan rate proportional to the size of the task being scanned Mel Gorman
2013-07-15 15:20 ` [PATCH 13/18] mm: numa: Scan pages with elevated page_mapcount Mel Gorman
2013-07-17  5:22   ` Sam Ben
2013-07-31  9:13     ` Mel Gorman
2013-07-15 15:20 ` [PATCH 14/18] sched: Remove check that skips small VMAs Mel Gorman
2013-07-15 15:20 ` [PATCH 15/18] sched: Set preferred NUMA node based on number of private faults Mel Gorman
2013-07-18  1:53   ` [PATCH 15/18] fix compilation with !CONFIG_NUMA_BALANCING Rik van Riel
2013-07-31  9:19     ` Mel Gorman
2013-07-26 11:20   ` [PATCH 15/18] sched: Set preferred NUMA node based on number of private faults Peter Zijlstra
2013-07-31  9:29     ` Mel Gorman
2013-07-31  9:34       ` Peter Zijlstra
2013-07-31 10:10         ` Mel Gorman
2013-07-15 15:20 ` [PATCH 16/18] sched: Avoid overloading CPUs on a preferred NUMA node Mel Gorman
2013-07-15 20:03   ` Peter Zijlstra
2013-07-16  8:23     ` Mel Gorman
2013-07-16 10:35       ` Peter Zijlstra
2013-07-16 15:55   ` Hillf Danton
2013-07-16 16:01     ` Mel Gorman
2013-07-17 10:54   ` Peter Zijlstra
2013-07-31  9:49     ` Mel Gorman
2013-08-01  7:10   ` Srikar Dronamraju
2013-08-01 15:42     ` Mel Gorman
2013-07-15 15:20 ` [PATCH 17/18] sched: Retry migration of tasks to CPU on a preferred node Mel Gorman
2013-07-25 10:33   ` Peter Zijlstra
2013-07-31 10:03     ` Mel Gorman
2013-07-31 10:05       ` Peter Zijlstra
2013-07-31 10:07         ` Mel Gorman
2013-07-25 10:35   ` Peter Zijlstra
2013-08-01  5:13   ` Srikar Dronamraju
2013-08-01 15:46     ` Mel Gorman
2013-07-15 15:20 ` [PATCH 18/18] sched: Swap tasks when reschuling if a CPU on a target node is imbalanced Mel Gorman
2013-07-15 20:11   ` Peter Zijlstra
2013-07-16  9:41     ` Mel Gorman
2013-08-01  4:59   ` Srikar Dronamraju
2013-08-01 15:48     ` Mel Gorman
2013-07-15 20:14 ` [PATCH 0/18] Basic scheduler support for automatic NUMA balancing V5 Peter Zijlstra
2013-07-16 15:10 ` Srikar Dronamraju [this message]
2013-07-25 10:36 ` Peter Zijlstra
2013-07-31 10:30   ` Mel Gorman
2013-07-31 10:48     ` Peter Zijlstra
2013-07-31 11:57       ` Mel Gorman
2013-07-31 15:30         ` Peter Zijlstra
2013-07-31 16:11           ` Mel Gorman
2013-07-31 16:39             ` Peter Zijlstra
2013-08-01 15:51               ` Mel Gorman
2013-07-25 10:38 ` [PATCH] mm, numa: Sanitize task_numa_fault() callsites Peter Zijlstra
2013-07-31 11:25   ` Mel Gorman
2013-07-25 10:41 ` [PATCH] sched, numa: Improve scanner Peter Zijlstra
2013-07-25 10:46 ` [PATCH] mm, sched, numa: Create a per-task MPOL_INTERLEAVE policy Peter Zijlstra
2013-07-26  9:55   ` Peter Zijlstra
2013-08-26 16:10     ` Peter Zijlstra
2013-08-26 16:14       ` Peter Zijlstra
2013-07-30 11:24 ` [PATCH] mm, numa: Change page last {nid,pid} into {cpu,pid} Peter Zijlstra
2013-08-01 22:33   ` Rik van Riel
2013-07-30 11:38 ` [PATCH] sched, numa: Use {cpu, pid} to create task groups for shared faults Peter Zijlstra
2013-07-31 15:07   ` Peter Zijlstra
2013-07-31 15:38     ` Peter Zijlstra
2013-07-31 15:45     ` Don Morris
2013-07-31 16:05       ` Peter Zijlstra
2013-08-02 16:47       ` [PATCH -v3] " Peter Zijlstra
2013-08-02 16:50         ` [PATCH] mm, numa: Do not group on RO pages Peter Zijlstra
2013-08-02 19:56           ` Peter Zijlstra
2013-08-05 19:36           ` [PATCH] numa,sched: use group fault statistics in numa placement Rik van Riel
2013-08-09 13:55             ` Don Morris
2013-08-28 16:41         ` [PATCH -v3] sched, numa: Use {cpu, pid} to create task groups for shared faults Peter Zijlstra
2013-08-28 17:10           ` Rik van Riel
2013-08-01  6:23   ` [PATCH,RFC] numa,sched: use group fault statistics in numa placement Rik van Riel
2013-08-01 10:37     ` Peter Zijlstra
2013-08-01 16:35       ` Rik van Riel
2013-08-01 22:36   ` [RFC PATCH -v2] " Rik van Riel
