linux-mm.kvack.org archive mirror
From: Balbir Singh <balbir@linux.vnet.ibm.com>
To: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: linux-kernel@vger.kernel.org, mingo@elte.hu, lwoodman@redhat.com,
	linux-mm@kvack.org, Tom Zanussi <tzanussi@gmail.com>,
	riel@redhat.com, rostedt@goodmis.org, akpm@linux-foundation.org,
	fweisbec@gmail.com, Munehiro Ikeda <m-ikeda@ds.jp.nec.com>,
	Atsushi Tsuji <a-tsuji@bk.jp.nec.com>
Subject: Re: [RFC PATCH -tip 0/2 v3] pagecache tracepoints proposal
Date: Mon, 8 Feb 2010 18:34:12 +0530	[thread overview]
Message-ID: <20100208130412.GB24467@balbir.in.ibm.com> (raw)
In-Reply-To: <4B6B7FBF.9090005@bx.jp.nec.com>

* Keiichi KII <k-keiichi@bx.jp.nec.com> [2010-02-04 21:17:35]:

> Hello,
> 
> This is v3 of a patchset to add some tracepoints for pagecache.
> 
> I would like to propose several tracepoints for tracing pagecache behavior,
> along with a script that uses them.
> By using both the tracepoints and the script, we can analyze pagecache
> behavior, such as usage or hit ratio, at a finer resolution, e.g. per
> process or per file.
> Example output of the script looks like:
> 
> [process list]
> o yum-3215
>                           cache find  cache hit  cache hit
>         device      inode      count      count      ratio
>   --------------------------------------------------------
>          253:0         16      34434      34130     99.12%
>          253:0        198       9692       9463     97.64%
>          253:0        639        647        628     97.06%
>          253:0        778         32         29     90.62%
>          253:0       7305      50225      49005     97.57%
>          253:0     144217         12         10     83.33%
>          253:0     262775         16         13     81.25%
> *snip*

Very nice. We should be able to sum these up to get a system-wide view.
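For example, summing just the yum-3215 rows shown above (the rest of the
table is snipped): the find counts are 34434 + 9692 + 647 + 32 + 50225 +
12 + 16 = 95058 and the hit counts are 34130 + 9463 + 628 + 29 + 49005 +
10 + 13 = 93278, so the aggregate hit ratio for those files is
93278 / 95058, roughly 98.1%. Doing the same across all processes and
all files would give the system-wide figure.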

> 
> -------------------------------------------------------------------------------
> 
> [file list]
>         device              cached
>      (maj:min)      inode    pages
>   --------------------------------
>          253:0         16     5752
>          253:0        198     2233
>          253:0        639       51
>          253:0        778       86
>          253:0       7305    12307
>          253:0     144217       11
>          253:0     262775       39
> *snip*
> 
> [process list]
> o yum-3215
>         device              cached    added  removed      indirect
>      (maj:min)      inode    pages    pages    pages removed pages
>   ----------------------------------------------------------------
>          253:0         16    34130     5752        0             0
>          253:0        198     9463     2233        0             0
>          253:0        639      628       51        0             0
>          253:0        778       29       78        0             0
>          253:0       7305    49005    12307        0             0
>          253:0     144217       10       11        0             0
>          253:0     262775       13       39        0             0
> *snip*
>   ----------------------------------------------------------------
>   total:                    102346    26165        1             0
                                                    ^^^
                                                Is this 1 stray?
> 
> We can currently see system-wide pagecache usage in /proc/meminfo.
> But we have no way to get higher-resolution information, such as per-file
> or per-process usage, beyond that system-wide number.

It would be really nice to see if we can distinguish the mapped from the
unmapped page cache.

> A process may share pagecache with other processes, add pages to the
> pagecache, or remove pages from it.
> If the pagecache miss ratio rises, it may lead to extra I/O and
> affect system performance.
> 
> So, by using the tracepoints we can get the following information:
>  1. how many pagecache pages each process has for each file
>  2. how many pages are cached for each file
>  3. how many pagecache pages each process shares
>  4. how often each process adds/removes pagecache pages
>  5. how long a page stays in the pagecache
>  6. pagecache hit ratio per file
> 
> In particular, monitoring pagecache usage and hit ratio per file would help
> us tune applications such as databases.
> It would also help us tune kernel parameters such as "vm.dirty_*".
> 
> Changelog since v2
>   o add new script to monitor pagecache hit ratio per process.
>   o use DECLARE_EVENT_CLASS
> 
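Good to see DECLARE_EVENT_CLASS being used so that the related pagecache
events can share one event definition. For readers who have not used it,
here is a minimal sketch of how such a shared event class might be
structured; the event names, arguments and fields below are purely
illustrative assumptions on my part, not the actual patch:

#undef TRACE_SYSTEM
#define TRACE_SYSTEM pagecache

#if !defined(_TRACE_PAGECACHE_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_PAGECACHE_H

#include <linux/tracepoint.h>
#include <linux/fs.h>

/* Fields shared by all pagecache events: device, inode and page offset. */
DECLARE_EVENT_CLASS(pagecache_page_op,

	TP_PROTO(struct address_space *mapping, unsigned long offset),

	TP_ARGS(mapping, offset),

	TP_STRUCT__entry(
		__field(dev_t,		dev)
		__field(unsigned long,	ino)
		__field(unsigned long,	offset)
	),

	TP_fast_assign(
		__entry->dev	= mapping->host->i_sb->s_dev;
		__entry->ino	= mapping->host->i_ino;
		__entry->offset	= offset;
	),

	TP_printk("dev %d:%d ino %lu offset %lu",
		  MAJOR(__entry->dev), MINOR(__entry->dev),
		  __entry->ino, __entry->offset)
);

/* Hypothetical event names; each DEFINE_EVENT reuses the class above. */
DEFINE_EVENT(pagecache_page_op, pagecache_find_page,
	TP_PROTO(struct address_space *mapping, unsigned long offset),
	TP_ARGS(mapping, offset));

DEFINE_EVENT(pagecache_page_op, pagecache_add_page,
	TP_PROTO(struct address_space *mapping, unsigned long offset),
	TP_ARGS(mapping, offset));

#endif /* _TRACE_PAGECACHE_H */

/* This part must be outside the multiple-inclusion protection. */
#include <trace/define_trace.h>

With something along these lines, the events show up under a "pagecache"
subsystem and the per-process script can aggregate them by (device, inode);
the real patch may of course name and lay things out differently.
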
> Changelog since v1
>   o Add a script based on "perf trace stream scripting support".
> 
> Any comments are welcome.

-- 
	Three Cheers,
	Balbir

Thread overview: 17+ messages
2010-02-05  2:17 [RFC PATCH -tip 0/2 v3] pagecache tracepoints proposal Keiichi KII
2010-02-05  2:24 ` [RFC PATCH -tip 1/2 v3] tracepoints: add tracepoints for pagecache Keiichi KII
2010-02-05  2:25 ` [RFC PATCH -tip 2/2 v3] add scripts for pagecache analysis per process Keiichi KII
2010-02-05  7:28 ` [RFC PATCH -tip 0/2 v3] pagecache tracepoints proposal Ingo Molnar
2010-02-05 21:19   ` Keiichi KII
2010-02-08 15:54   ` Wu Fengguang
2010-02-09 16:21     ` Wu Fengguang
2010-02-13 13:29       ` Balbir Singh
2010-02-14 10:52         ` Balbir Singh
2010-02-21  2:28         ` Wu Fengguang
2010-02-16  3:22       ` KOSAKI Motohiro
2010-02-17 22:38         ` Keiichi KII
2010-02-18  5:34     ` KAMEZAWA Hiroyuki
2010-02-18  9:58       ` Balbir Singh
2010-02-23 14:04         ` Wu Fengguang
2010-02-21  3:09       ` Wu Fengguang
2010-02-08 13:04 ` Balbir Singh [this message]
