From: Ying Han <yinghan@google.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Johannes Weiner <jweiner@redhat.com>,
Michal Hocko <mhocko@suse.cz>,
"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>,
"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
Greg Thelen <gthelen@google.com>
Subject: Re: [RFC][PATCH 0/7] memcg async reclaim
Date: Fri, 13 May 2011 17:29:43 -0700 [thread overview]
Message-ID: <BANLkTikG36NeyaMfOqu5CuLajX9C38+1tw@mail.gmail.com> (raw)
In-Reply-To: <BANLkTi=pzdnMj7ie6kZG8qRe32DhOx6Bsw@mail.gmail.com>
Sorry, I forgot to post the script I used to capture the results:
# Move this shell into memcg A and start the streaming read there.
echo $$ > /dev/cgroup/memory/A/tasks
time cat /export/hdc3/dd_A/tf0 > /dev/zero &
sleep 10
# Move the shell back to the root memcg so only the cat stays charged to A.
echo $$ > /dev/cgroup/memory/tasks
# Sample delay accounting for the cat every 10s; the loop ends once
# getdelays fails, i.e. when the cat has exited.
(
while /root/getdelays -dip `pidof cat`;
do
    sleep 10;
done
)
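(getdelays comes from Documentation/accounting/getdelays.c in the kernel
tree; if I remember the flags right, -d prints the delay accounting stats,
-i adds the per-task IO accounting, and -p selects the pid to query.)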
--Ying
On Fri, May 13, 2011 at 5:25 PM, Ying Han <yinghan@google.com> wrote:
> Here are the tests I ran and the results.
>
> On a 32G machine, I created a memcg with a 4G hard limit (limit_in_bytes)
> and ran cat on a 20G file. Then I used getdelays to measure the ttfp
> (try_to_free_pages) "delay average" under RECLAIM. When the workload
> reaches its hard limit without background reclaim, each ttfp is triggered
> by a page fault. I would like to demonstrate the ttfp delay average (and
> thus the page fault latency) on the streaming read/write workload and
> compare it with per-memcg background reclaim enabled.
>
> Note:
> 1. I applied a patch to getdelays.c from Fengguang which shows the
> average CPU/IO/SWAP/RECLAIM delays in ns.
>
> 2. I used my latest version of the per-memcg-per-kswapd patch for the
> following test. The patch may have been improved since then, and I can
> run the same test when Kame has his patch ready.
>
> Configuration:
> $ cat /proc/meminfo
> MemTotal: 33045832 kB
>
> $ cat /dev/cgroup/memory/A/memory.limit_in_bytes
> 4294967296
>
> $ cat /dev/cgroup/memory/A/memory.reclaim_wmarks
> low_wmark 4137680896
> high_wmark 4085252096
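>
> The memcg setup itself is the standard cgroup-v1 dance. A minimal sketch,
> assuming the memory controller is mounted at /dev/cgroup/memory
> (memory.reclaim_wmarks comes from the per-memcg-per-kswapd patches, so I
> leave its setup interface out here):
>
> $ mkdir /dev/cgroup/memory/A
> $ echo 4294967296 > /dev/cgroup/memory/A/memory.limit_in_bytes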
>
> Test:
> $ echo $$ >/dev/cgroup/memory/A/tasks
> $ cat /export/hdc3/dd_A/tf0 > /dev/zero
>
> Without per-memcg background reclaim:
>
> CPU        count      real total   virtual total    delay total   delay average
>           176589     17248377848     27344548685     1093693318      6193.440ns
> IO         count     delay total   delay average
>           160704    242072632962       1506326ns
> SWAP       count     delay total   delay average
>                0               0             0ns
> RECLAIM    count     delay total   delay average
>            15944      3512140153        220279ns
> cat: read=20947877888, write=0, cancelled_write=0
>
> real    4m26.912s
> user    0m0.227s
> sys     0m27.823s
>
> With per-memcg background reclaim:
>
> $ ps -ef | grep memcg
> root 5803 2 2 13:56 ? 00:04:20 [memcg_4]
>
> CPU        count      real total   virtual total    delay total   delay average
>           161085     13185995424     23863858944       72902585       452.572ns
> IO         count     delay total   delay average
>           160915    246145533109       1529661ns
> SWAP       count     delay total   delay average
>                0               0             0ns
> RECLAIM    count     delay total   delay average
>                0               0             0ns
> cat: read=20974891008, write=0, cancelled_write=0
>
> real    4m26.572s
> user    0m0.246s
> sys     0m24.192s
>
> memcg_4 cputime: 2.86sec
>
> Observation:
> 1. Without background reclaim, the cat hits ttfp heavily and the "delay
> average" goes above 220 microseconds.
>
> 2. With background reclaim, the ttfp delay average is always 0. Since
> ttfp happens synchronously, it adds directly to the application's latency
> over time.
>
> 3. The real time is slightly better w/ bg reclaim, and the sys time is
> about the same (adding the memcg_4 time on top of the sys time of cat).
> But I don't expect a big cpu benefit. The async reclaim uses spare cputime
> to proactively reclaim pages on the side, which guarantees less latency
> variation of the application over time.
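>
> Quick sanity checks on the numbers above: each "delay average" is just
> delay total / count, e.g. for the RECLAIM row of the first run:
>
> $ echo $((3512140153 / 15944))
> 220279
>
> i.e. the ~220 microseconds reported. And for point 3, the combined sys
> time w/ bg reclaim is 24.192s + 2.86s = 27.05s, vs 27.823s without.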
>
> --Ying
>
> On Thu, May 12, 2011 at 10:10 PM, Ying Han <yinghan@google.com> wrote:
>
>>
>>
>> On Thu, May 12, 2011 at 8:03 PM, KAMEZAWA Hiroyuki <
>> kamezawa.hiroyu@jp.fujitsu.com> wrote:
>>
>>> On Thu, 12 May 2011 17:17:25 +0900
>>> KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
>>>
>>> > On Thu, 12 May 2011 13:22:37 +0900
>>> > KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
>>> > I'll check what code in vmscan.c or /mm affects memcg and post the
>>> > required fixes step by step. I think I found some..
>>> >
>>>
>>> After some tests, I suspect that the 'automatic' one is unnecessary
>>> until memcg's dirty_ratio is supported. And as Andrew pointed out,
>>> total cpu consumption is unchanged, and I don't have workloads which
>>> show me a meaningful speedup.
>>>
>>
>> The total cpu consumption is one way to measure the background reclaim;
>> another thing I would like to measure is a histogram of page fault
>> latency for a heavy page-allocation application. I would expect that
>> with background reclaim we will get less variation in the page fault
>> latency than w/o it.
>>
>> Sorry, I haven't had a chance to run tests to back this up. I will try
>> to get some data; a sketch of how I'd collect it is below.
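>>
>> A hypothetical collection sketch, assuming getdelays keeps the RECLAIM
>> header/values layout shown earlier (sample the delay average once a
>> second for a minute, then bucket the samples):
>>
>> for i in `seq 1 60`; do
>>     /root/getdelays -dip `pidof cat` | awk '/^RECLAIM/ {getline; print $3}'
>>     sleep 1
>> done | sort -n | uniq -c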
>>
>>
>>> But I guess that with dirty_ratio, the amount of dirty pages in a memcg
>>> is limited, and background reclaim can work well enough without the
>>> noise of write_page() while applications are throttled by dirty_ratio.
>>>
>>
>> Definitely. I ran into this issue while debugging the soft_limit
>> reclaim. The background reclaim became very inefficient when we had
>> dirty pages greater than the soft_limit. Talking w/ Greg about his
>> per-memcg dirty page limit effort, we should consider setting the dirty
>> ratio so that dirty pages are not allowed to grow beyond the reclaim
>> watermarks (here, the soft_limit).
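>>
>> For reference, the existing system-wide knobs that a per-memcg dirty
>> ratio would mirror (these are the global sysctls, not Greg's new
>> interface):
>>
>> $ sysctl vm.dirty_ratio vm.dirty_background_ratio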
>>
>> --Ying
>>
>>
>>> Hmm, I'll study this for a while, but it seems better to start with an
>>> active soft limit (or some threshold users can set) first.
>>>
>>> Anyway, this work made me look at vmscan.c carefully, and I think I can
>>> post some patches with fixes and tuning.
>>>
>>> Thanks,
>>> -Kame
>>>
>>>
>>
>