linux-mm.kvack.org archive mirror
From: "Nikolay S." <nowhere@hakkenden.ath.cx>
To: Hillf Danton <dhillf@gmail.com>
Cc: Dave Chinner <david@fromorbit.com>, Michal Hocko <mhocko@suse.cz>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: Kswapd in 3.2.0-rc5 is a CPU hog
Date: Sun, 25 Dec 2011 14:21:59 +0400	[thread overview]
Message-ID: <1324808519.29243.8.camel@hakkenden.homenet> (raw)
In-Reply-To: <CAJd=RBDa4LT1gbh6zPx+bzoOtSUeX=puJe6DVC-WyKoF4nw-dg@mail.gmail.com>

On Sun., 25/12/2011 at 17:09 +0800, Hillf Danton wrote:
> On Sat, Dec 24, 2011 at 4:45 AM, Dave Chinner <david@fromorbit.com> wrote:
> [...]
> >
> > Ok, it's not a shrink_slab() problem - it's just being called every ~100us
> > by kswapd. The pattern is:
> >
> >        - reclaim 94 (batches of 32,32,30) pages from inactive list
> >          of zone 1, node 0, prio 12
> >        - call shrink_slab
> >                - scan all caches
> >                - all shrinkers return 0 saying nothing to shrink
> >        - 40us gap
> >        - reclaim 10-30 pages from inactive list of zone 2, node 0, prio 12
> >        - call shrink_slab
> >                - scan all caches
> >                - all shrinkers return 0 saying nothing to shrink
> >        - 40us gap
> >        - isolate 9 pages from LRU zone ?, node ?, none isolated, none freed
> >        - isolate 22 pages from LRU zone ?, node ?, none isolated, none freed
> >        - call shrink_slab
> >                - scan all caches
> >                - all shrinkers return 0 saying nothing to shrink
> >        - 40us gap
> >
> > And it just repeats over and over again. After a while, nid=0,zone=1
> > drops out of the traces, so reclaim only comes in batches of 10-30
> > pages from zone 2 between each shrink_slab() call.
> >
> > The trace starts at 111209.881s, with 944776 pages on the LRUs. It
> > finishes at 111216.1s with kswapd going to sleep on node 0 with
> > 930067 pages on the LRU. So 7 seconds to free 15,000 pages (call it
> > 2,000 pages/s), which is awfully slow....
> >
> Hi all,
> 
> Hopefully, the added debug info is helpful.
> 
> Hillf
> ---
> 
> --- a/mm/memcontrol.c	Fri Dec  9 21:57:40 2011
> +++ b/mm/memcontrol.c	Sun Dec 25 17:08:14 2011
> @@ -1038,7 +1038,11 @@ void mem_cgroup_lru_del_list(struct page
>  		memcg = root_mem_cgroup;
>  	mz = page_cgroup_zoneinfo(memcg, page);
>  	/* huge page split is done under lru_lock. so, we have no races. */
> -	MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
> +	if (WARN_ON_ONCE(MEM_CGROUP_ZSTAT(mz, lru) <
> +				(1 << compound_order(page))))
> +		MEM_CGROUP_ZSTAT(mz, lru) = 0;
> +	else
> +		MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
>  }
> 
>  void mem_cgroup_lru_del(struct page *page)

Hello,

Hmm, is this patch against 3.2-rc4? I cannot apply it: there is no
mem_cgroup_lru_del_list(), only mem_cgroup_del_lru_list(). Should I apply
the change there instead?

Also, -rc7 is out now. Might this problem already be addressed as part of
some ongoing work? Is there any point in trying -rc7 (the problem takes
several days of uptime to become obvious)?
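
As a rough cross-check of the rate quoted above (assuming the two LRU counts
and the two timestamps bound the whole window):

    944776 - 930067 = 14709 pages freed
    111216.1s - 111209.881s ~= 6.2s
    14709 / 6.2 ~= 2400 pages/s, i.e. a bit under 10 MB/s with 4 KiB pages

so the "call it 2,000 pages/s" ballpark holds, which really is very little
reclaim work for a kswapd that is pegging a CPU.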

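For illustration only, here is a tiny self-contained userspace mock-up of the
loop shape Dave's trace shows: a small batch of pages reclaimed from one
zone's inactive list, then a full walk over every registered shrinker even
though they all report nothing to shrink, then the next small batch. All
names and numbers below are made up; this is not the 3.2 mm/vmscan.c code:

#include <stdio.h>

#define NR_ZONES     2
#define NR_SHRINKERS 8
#define BATCH        32

struct zone { long nr_inactive; };

/* stand-in for shrinker->shrink(): in the trace every cache reports 0 */
static long query_shrinker(int idx)
{
    return 0;
}

/* stand-in for shrinking one zone's inactive list: at most BATCH pages */
static long shrink_zone(struct zone *z)
{
    long nr = z->nr_inactive < BATCH ? z->nr_inactive : BATCH;

    z->nr_inactive -= nr;
    return nr;
}

int main(void)
{
    struct zone zones[NR_ZONES] = { { 900 }, { 14000 } };
    long freed = 0, useless_calls = 0;

    while (zones[0].nr_inactive || zones[1].nr_inactive) {
        for (int i = 0; i < NR_ZONES; i++) {
            freed += shrink_zone(&zones[i]);
            /* one full shrinker walk per tiny batch, even when there is
             * nothing to shrink - this is where the time goes */
            for (int s = 0; s < NR_SHRINKERS; s++)
                useless_calls += !query_shrinker(s);
        }
    }
    printf("freed %ld pages, %ld shrinker calls that found nothing\n",
           freed, useless_calls);
    return 0;
}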

Thread overview: 31+ messages
     [not found] <1324437036.4677.5.camel@hakkenden.homenet>
2011-12-21  9:52 ` Kswapd in 3.2.0-rc5 is a CPU hog Michal Hocko
2011-12-21 10:15   ` nowhere
2011-12-21 10:24     ` Michal Hocko
2011-12-21 10:52       ` nowhere
2011-12-21 14:06       ` Alex Elder
2011-12-21 14:19         ` nowhere
2011-12-21 22:55   ` Dave Chinner
2011-12-23  9:01     ` nowhere
2011-12-23 10:20       ` Dave Chinner
2011-12-23 11:04         ` nowhere
2011-12-23 20:45           ` Dave Chinner
2011-12-25  9:09             ` Hillf Danton
2011-12-25 10:21               ` Nikolay S. [this message]
2011-12-26 12:35                 ` Hillf Danton
2011-12-27  0:20                   ` KAMEZAWA Hiroyuki
2011-12-27 13:33                     ` Hillf Danton
2011-12-28  0:06                       ` KAMEZAWA Hiroyuki
2011-12-27  2:15             ` KAMEZAWA Hiroyuki
2011-12-27  2:50               ` Nikolay S.
2011-12-27  4:44                 ` KAMEZAWA Hiroyuki
2011-12-27  6:06                   ` nowhere
2011-12-28 21:33                   ` Dave Chinner
2011-12-28 22:57                     ` KOSAKI Motohiro
2012-01-02  7:00                       ` Dave Chinner
2011-12-27  3:57               ` Minchan Kim
2011-12-27  4:56                 ` KAMEZAWA Hiroyuki
2012-01-10 22:33                   ` Andrew Morton
2012-01-11  3:25                     ` Nikolay S.
2012-01-11  4:42                       ` Andrew Morton
2012-01-11  0:33                   ` Dave Chinner
2012-01-11  1:17                 ` Rik van Riel

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1324808519.29243.8.camel@hakkenden.homenet \
    --to=nowhere@hakkenden.ath.cx \
    --cc=david@fromorbit.com \
    --cc=dhillf@gmail.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mhocko@suse.cz \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link.

  Be sure your reply has a Subject: header at the top and a blank line
  before the message body.
This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for the NNTP newsgroup(s).