linux-mm.kvack.org archive mirror
From: Wu Fengguang <fengguang.wu@intel.com>
To: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: "Li, Shaohua" <shaohua.li@intel.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	Rik van Riel <riel@redhat.com>
Subject: Re: [PATCH]vmscan: handle underflow for get_scan_ratio
Date: Tue, 6 Apr 2010 10:30:43 +0800	[thread overview]
Message-ID: <20100406023043.GA12420@localhost> (raw)
In-Reply-To: <20100406105324.7E30.A69D9226@jp.fujitsu.com>

On Tue, Apr 06, 2010 at 10:06:19AM +0800, KOSAKI Motohiro wrote:
> > On Tue, Apr 06, 2010 at 09:25:36AM +0800, Li, Shaohua wrote:
> > > On Sun, Apr 04, 2010 at 10:19:06PM +0800, KOSAKI Motohiro wrote:
> > > > > On Fri, Apr 02, 2010 at 05:14:38PM +0800, KOSAKI Motohiro wrote:
> > > > > > > > > This patch makes a lot more sense than the previous one. However, I think
> > > > > > > > > a <1% anon ratio shouldn't happen anyway, because the file lru doesn't
> > > > > > > > > have reclaimable pages. <1% seems like a poor reclaim rate.
> > > > > > > > 
> > > > > > > > Oops, the statement above is wrong, sorry. Even 1 page is still too big,
> > > > > > > > because under a streaming IO workload the number of anon pages scanned
> > > > > > > > should be zero. This is a very strong requirement; if not, a backup
> > > > > > > > operation will cause a lot of swapping out.
> > > > > > > It sounds like the patch has no big impact on the workload you mentioned;
> > > > > > > please see the description below.
> > > > > > > I updated the patch description as Fengguang suggested.
> > > > > > 
> > > > > > Umm.. sorry, no.
> > > > > > 
> > > > > > "one fix but introduce another one bug" is not good deal. instead, 
> > > > > > I'll revert the guilty commit at first as akpm mentioned.
> > > > > Even if we revert the commit, the patch still has its benefit, as it
> > > > > increases calculation precision, right?
> > > > 
> > > > no, you shouldn't ignore the regression case.
> > 
> > > I don't think this is serious. By my calculation, only 1 page gets swapped out
> > > for 6GB of anonymous memory. 1 page shouldn't have any performance impact.
> > 
> > 1 anon page scanned for every N file pages scanned?
> > 
> > Is N a _huge_ enough ratio that the anon list will only be very lightly scanned?
> > 
> > Rik: here is a little background.
> 
> The problem is, the VM is continuously discarding no-longer-used file
> cache. If we scan 1 extra anon page each time, we will observe tons of
> swap usage after a few days.
> 
> Please don't think only of benchmarks.

OK, the days-of-streaming-io case typically happens on file servers.  Suppose
a file server with 16GB of memory, 1GB of which is consumed by anonymous
pages while the rest is page cache.

Assume that the exact file:anon ratio computed by the get_scan_ratio()
algorithm is 1000:1. In that case percent[0] = 0.1, which is rounded down
to 0, keeping the anon pages in memory for those days.
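
Here is a minimal sketch of that rounding (simplified names and an
assumed 1000:1 weighting; not the actual mm/vmscan.c code):

	/* Sketch of the percent[] truncation in get_scan_ratio(). */
	#include <stdio.h>

	int main(void)
	{
		unsigned long ap = 1;		/* assumed anon weight */
		unsigned long fp = 1000;	/* assumed file weight */
		unsigned int percent[2];

		/* Integer division truncates: 100 * 1 / 1001 == 0. */
		percent[0] = 100 * ap / (ap + fp);	/* anon */
		percent[1] = 100 - percent[0];		/* file */

		printf("percent[0]=%u percent[1]=%u\n",
		       percent[0], percent[1]);
		/* percent[0] == 0 disables anon scanning entirely: good
		 * for keeping anon resident, bad when there is nothing
		 * else left to reclaim. */
		return 0;
	}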

Now with Shaohua's patch, nr[0] = (262144/4096)/1000 = 0.06 is also
rounded down to 0. It only becomes >= 1 when (see the sketch below)
- reclaim runs into trouble and the priority goes low
- the anon list grows huge
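
To illustrate (again a sketch with assumed numbers, not the patch
itself): with 262144 anon pages and the same 1000:1 weighting, the
directly computed nr[0] stays 0 until the scan priority has dropped
well below DEF_PRIORITY (12):

	/* Sketch of the directly-computed nr[0] across priorities. */
	#include <stdio.h>

	int main(void)
	{
		unsigned long anon = 262144;	/* 1GB anon in 4KB pages */
		unsigned long ap = 1, fp = 1000;

		/* The first reclaim pass scans list_size >> 12 pages. */
		for (int priority = 12; priority >= 0; priority--) {
			unsigned long scan = anon >> priority;
			/* Weight applied to the raw scan count instead
			 * of rounding through a 0..100 percent[] first. */
			unsigned long nr = scan * ap / (ap + fp);

			printf("priority=%2d scan=%6lu nr[0]=%lu\n",
			       priority, scan, nr);
		}
		/* nr[0] first reaches 1 at priority 8 (1024/1001), i.e.
		 * only after reclaim has already escalated quite a bit. */
		return 0;
	}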

So I guess Shaohua's patch still has a reasonable "underflow" threshold :)

Thanks,
Fengguang

> 
> > Under streaming IO, the current get_scan_ratio() will get a percent[0]
> > that is (much) less than 1, so it underflows to 0.
> >
> > That has the bad effect of completely disabling the scan of the anon
> > list, which leads to OOM in Shaohua's test case. OTOH, it also has the
> > good side effect of keeping anon pages in memory and totally preventing
> > swap IO.
> >
> > Shaohua's patch improves the computation precision by computing nr[]
> > directly in get_scan_ratio(). This is good in general; however, it will
> > enable light scanning of the anon list under streaming IO.
> 
> In such a case, percent[0] should be big; I think underflow is not the point.
> His test case is merely a streaming IO copy, so why can't we drop the tmpfs
> cached pages? His /proc/meminfo shows his machine didn't have droppable file
> cache, so a big percent[1] value seems to make no sense, no?
> 
> I'm not sure we need either of the detections below; I need to investigate more.
>  1) detect no discardable file cache
>  2) detect streaming IO on tmpfs (as a regular file)
> 
> 
> 



Thread overview: 49+ messages
2010-03-30  5:53 [PATCH]vmscan: handle underflow for get_scan_ratio Shaohua Li
2010-03-30  6:08 ` KOSAKI Motohiro
2010-03-30  6:32   ` Shaohua Li
2010-03-30  6:40     ` KOSAKI Motohiro
2010-03-30  6:53       ` Shaohua Li
2010-03-30  7:31         ` KOSAKI Motohiro
2010-03-30  8:13           ` Shaohua Li
2010-03-31  4:53   ` Shaohua Li
2010-03-31  5:38     ` KOSAKI Motohiro
2010-03-31  5:51       ` Wu Fengguang
2010-03-31  6:00         ` KOSAKI Motohiro
2010-03-31  6:03           ` Wu Fengguang
2010-04-01 22:16           ` Andrew Morton
2010-04-02  9:13             ` KOSAKI Motohiro
2010-04-06  1:22               ` Wu Fengguang
2010-04-06  3:36               ` Rik van Riel
2010-03-31  5:53       ` KOSAKI Motohiro
2010-04-02  6:50         ` Shaohua Li
2010-04-02  9:14           ` KOSAKI Motohiro
2010-04-02  9:24             ` Shaohua Li
2010-04-04 14:19               ` KOSAKI Motohiro
2010-04-06  1:25                 ` Shaohua Li
2010-04-06  1:36                   ` KOSAKI Motohiro
2010-04-06  1:50                   ` Wu Fengguang
2010-04-06  2:06                     ` KOSAKI Motohiro
2010-04-06  2:30                       ` Wu Fengguang [this message]
2010-04-06  2:58                         ` KOSAKI Motohiro
2010-04-06  3:31                           ` Wu Fengguang
2010-04-06  3:40                             ` Rik van Riel
2010-04-06  4:49                               ` Wu Fengguang
2010-04-06  5:09                                 ` Shaohua Li
2010-04-04  0:48           ` Wu Fengguang
2010-04-06  1:27             ` Shaohua Li
2010-04-06  5:03           ` Wu Fengguang
2010-04-06  5:36             ` Shaohua Li
2010-04-09  6:51             ` Shaohua Li
2010-04-09 21:20               ` Andrew Morton
2010-04-09 21:25                 ` Rik van Riel
2010-04-13  1:30                   ` KOSAKI Motohiro
2010-04-13  2:42                     ` Rik van Riel
2010-04-13  7:55                       ` KOSAKI Motohiro
2010-04-13  8:55                         ` KOSAKI Motohiro
2010-04-14  1:27                           ` Shaohua Li
2010-04-15  3:25                             ` KOSAKI Motohiro
2010-04-12  1:57                 ` Shaohua Li
2010-03-31  5:41     ` Wu Fengguang
2010-03-30 10:17 ` Minchan Kim
2010-03-30 10:25   ` KOSAKI Motohiro
2010-03-30 11:56 ` Balbir Singh
