* [PATCH 0/2] memcg: improving scalability by reducing lock contention at charge/uncharge
@ 2009-10-02  4:55 KAMEZAWA Hiroyuki
From: KAMEZAWA Hiroyuki @ 2009-10-02  4:55 UTC
  To: linux-mm@kvack.org
  Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
	balbir@linux.vnet.ibm.com, nishimura@mxp.nes.nec.co.jp

Hi,

This patch set is against mmotm plus the softlimit fix patches
(which are now in the -rc git tree).

In the latest -rc series, the kernel avoids accessing res_counter when
the cgroup is the root cgroup. This helps scalability when memcg is not used.

It's also necessary to improve scalability when memcg *is* used, and that
is what this patch set does. Balbir's previous work showed that the biggest
obstacle to better scalability is memcg's res_counter. There are then two ways:

(1) make the counter itself scale well.
(2) avoid accessing the core counter as much as possible.

My first direction was (1). But no counter design is free from false
sharing once it needs system-wide, fine-grained synchronization, and
res_counter bundles several pieces of functionality, which makes (1)
difficult. A spin_lock around the counter (in the slow path) means tons
of cacheline invalidations even when we only read the counter without
modifying it.
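
To illustrate the contention (a userspace model with made-up names, not
the kernel's res_counter): every charge on every CPU takes the same
spinlock, so the lock's cacheline bounces between CPUs whether or not
the limit is ever hit.
==
/* Illustrative only: a userspace model of a centrally locked counter.
 * Link with -lpthread. */
#include <pthread.h>

struct counter {
        pthread_spinlock_t lock;
        long usage;
        long limit;
};

/* every caller, on every CPU, serializes on the same lock cacheline */
int charge(struct counter *c, long pages)
{
        int ret = 0;

        pthread_spin_lock(&c->lock);    /* invalidates the line everywhere */
        if (c->usage + pages <= c->limit)
                c->usage += pages;
        else
                ret = -1;               /* over limit */
        pthread_spin_unlock(&c->lock);
        return ret;
}
==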

This patch series is for (2). It implements charge/uncharge in a batched
manner, coalescing accesses to res_counter at charge/uncharge by exploiting
the locality of those accesses.
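
A minimal sketch of the coalescing idea, again in userspace C with
made-up names (CHARGE_BATCH, struct charge_stock and the field names are
illustrative, not the patch's identifiers): the fast path consumes from
a locally cached, pre-charged stock, and only the slow path touches the
shared counter, roughly once per CHARGE_BATCH pages.
==
/* Illustrative sketch of coalesced charging, not the actual patch. */
#include <stdatomic.h>
#include <stdbool.h>

#define CHARGE_BATCH    32              /* pages pre-charged in one shot */

static _Atomic long usage;              /* the shared, contended counter */
static long limit = 256 * 1024;         /* hypothetical limit, in pages */

struct charge_stock {                   /* per-task (or per-cpu) cache */
        long cached;                    /* pages already charged to 'usage' */
};

static bool try_charge(long pages)
{
        long old = atomic_fetch_add(&usage, pages);

        if (old + pages > limit) {
                atomic_fetch_sub(&usage, pages);
                return false;
        }
        return true;
}

/* charge one page; the shared counter is hit ~1/CHARGE_BATCH of the time */
static bool charge_page(struct charge_stock *stock)
{
        if (stock->cached > 0) {
                stock->cached--;        /* fast path: no shared access at all */
                return true;
        }
        if (try_charge(CHARGE_BATCH)) { /* slow path: charge a whole batch */
                stock->cached = CHARGE_BATCH - 1;
                return true;
        }
        return try_charge(1);           /* near the limit: fall back to 1 page */
}
==
A real implementation also has to drain any leftover stock back to the
counter (on task exit, or when the group nears its limit); this sketch
omits that.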

Tested for a month, and I got good reports from Balbir and Nishimura, thanks.
One concern is that this adds some members to the bottom of task_struct.
Better ideas are welcome.
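
The members in question look roughly like this (a guess at the shape
only, for illustration; not the exact fields from the patch):
==
/* Hypothetical shape of the per-task batch state; illustrative only. */
struct memcg_batch_info_like {
        int do_batch;           /* inside a batched unmap/truncate section? */
        void *target_memcg;     /* group the coalesced uncharges belong to */
        unsigned long pages;    /* uncharges accumulated, flushed in one call */
};
==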

The following is the result of a continuous page-fault test on my 8-CPU box (x86-64).

A loop like this runs on all CPUs in parallel for 60 seconds.
==
#include <sys/mman.h>   /* mmap, munmap */

#define MEGA            (1024 * 1024)
#define PAGE_SIZE       4096    /* x86-64 */

        char *x;
        long off;

        while (1) {
                x = mmap(NULL, MEGA, PROT_READ|PROT_WRITE,
                        MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
                /* touch one byte per page so every page is faulted in */
                for (off = 0; off < MEGA; off += PAGE_SIZE)
                        x[off] = 0;
                munmap(x, MEGA);
        }
==
Please look at the number of page faults: it goes from about 18.4M to
about 33.2M. I think this is a good improvement.


[Before]
 Performance counter stats for './runpause.sh' (5 runs):

  474539.756944  task-clock-msecs         #      7.890 CPUs    ( +-   0.015% )
          10284  context-switches         #      0.000 M/sec   ( +-   0.156% )
             12  CPU-migrations           #      0.000 M/sec   ( +-   0.000% )
       18425800  page-faults              #      0.039 M/sec   ( +-   0.107% )
  1486296285360  cycles                   #   3132.080 M/sec   ( +-   0.029% )
   380334406216  instructions             #      0.256 IPC     ( +-   0.058% )
     3274206662  cache-references         #      6.900 M/sec   ( +-   0.453% )
     1272947699  cache-misses             #      2.682 M/sec   ( +-   0.118% )

   60.147907341  seconds time elapsed   ( +-   0.010% )

[After]
 Performance counter stats for './runpause.sh' (5 runs):

  474658.997489  task-clock-msecs         #      7.891 CPUs    ( +-   0.006% )
          10250  context-switches         #      0.000 M/sec   ( +-   0.020% )
             11  CPU-migrations           #      0.000 M/sec   ( +-   0.000% )
       33177858  page-faults              #      0.070 M/sec   ( +-   0.152% )
  1485264748476  cycles                   #   3129.120 M/sec   ( +-   0.021% )
   409847004519  instructions             #      0.276 IPC     ( +-   0.123% )
     3237478723  cache-references         #      6.821 M/sec   ( +-   0.574% )
     1182572827  cache-misses             #      2.491 M/sec   ( +-   0.179% )

   60.151786309  seconds time elapsed   ( +-   0.014% )

Regards,
-Kame

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .


Thread overview: 22+ messages
2009-10-02  4:55 [PATCH 0/2] memcg: improving scalability by reducing lock contention at charge/uncharge KAMEZAWA Hiroyuki
2009-10-02  5:01 ` [PATCH 1/2] memcg: coalescing uncharge at unmap and truncation KAMEZAWA Hiroyuki
2009-10-02  6:47   ` Hiroshi Shimamoto
2009-10-02  6:53     ` Hiroshi Shimamoto
2009-10-02  7:04       ` KAMEZAWA Hiroyuki
2009-10-02  7:02     ` [PATCH 1/2] memcg: coalescing uncharge at unmap and truncation (fixed coimpile bug) KAMEZAWA Hiroyuki
2009-10-08 22:17       ` Andrew Morton
2009-10-08 23:48         ` KAMEZAWA Hiroyuki
2009-10-09  4:01   ` [PATCH 1/2] memcg: coalescing uncharge at unmap and truncation Balbir Singh
2009-10-09  4:17     ` KAMEZAWA Hiroyuki
2009-10-02  5:03 ` [PATCH 2/2] memcg: coalescing charges per cpu KAMEZAWA Hiroyuki
2009-10-08 22:26   ` Andrew Morton
2009-10-08 23:54     ` KAMEZAWA Hiroyuki
2009-10-09  4:15   ` Balbir Singh
2009-10-09  4:25     ` KAMEZAWA Hiroyuki
2009-10-02  8:53 ` [PATCH 0/2] memcg: improving scalability by reducing lock contention at charge/uncharge KAMEZAWA Hiroyuki
2009-10-05  7:18   ` KAMEZAWA Hiroyuki
2009-10-05 10:37 ` Balbir Singh
     [not found] ` <604427e00910091737s52e11ce9p256c95d533dc2837@mail.gmail.com>
2009-10-11  2:33   ` KAMEZAWA Hiroyuki
     [not found]     ` <604427e00910111134o6f22f0ddg2b87124dd334ec02@mail.gmail.com>
2009-10-12 11:38       ` Balbir Singh
2009-10-13  0:29       ` KAMEZAWA Hiroyuki
     [not found]         ` <604427e00910121818w71dd4b7dl8781d7f5bc4f7dd9@mail.gmail.com>
2009-10-13  1:28           ` KAMEZAWA Hiroyuki
