linux-mm.kvack.org archive mirror
From: Larry Woodman <lwoodman@redhat.com>
To: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: "Rik van Riel" <riel@redhat.com>, "Ingo Molnar" <mingo@elte.hu>,
	"Fr馘駻ic Weisbecker" <fweisbec@gmail.com>,
	"Li Zefan" <lizf@cn.fujitsu.com>,
	"Pekka Enberg" <penberg@cs.helsinki.fi>,
	eduard.munteanu@linux360.ro, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, rostedt@goodmis.org
Subject: Re: [Patch] mm tracepoints update - use case.
Date: Mon, 22 Jun 2009 11:28:08 -0400	[thread overview]
Message-ID: <1245684488.3212.111.camel@dhcp-100-19-198.bos.redhat.com> (raw)
In-Reply-To: <20090622115756.21F3.A69D9226@jp.fujitsu.com>

On Mon, 2009-06-22 at 12:37 +0900, KOSAKI Motohiro wrote:

Thanks for the feedback, Kosaki!

> Hi
> 
> > Thanks for the feedback, Kosaki!
> > 
> > 
> > > Scenario 1. The OOM killer happened. Why, and who caused it?
> > 
> > Doesn't the showmem() output and stack trace written to the console
> > when the OOM kill occurred show enough in the majority of cases?  I
> > realize that direct alloc_pages() calls are not accounted for here,
> > but accounting for them could be really invasive.
> 
> showmem() displays the _result_ of memory usage and fragmentation,
> but administrators often need to know the _reason_.

Right, that's why I have mm tracepoints in locations like shrink_zone,
shrink_active and shrink_inactive: so we can drill down into exactly what
happened when either kswapd ran or a direct reclaim occurred out of the
page allocator.  Since we will know the timestamps and the number of
pages scanned and reclaimed, we can tell why page reclamation did not
supply enough pages and therefore why the OOM occurred.

Do you think this is enough information, or do we need more?
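
For reference, here is a minimal sketch of the kind of reclaim event I
have in mind.  The event name and fields are illustrative only, not
necessarily what the patch defines; note there is no explicit timestamp
field because the trace ring buffer stamps every event:

TRACE_EVENT(mm_directreclaim_end,	/* hypothetical name */

	TP_PROTO(unsigned long scanned, unsigned long reclaimed,
		 int priority),

	TP_ARGS(scanned, reclaimed, priority),

	TP_STRUCT__entry(
		__field(unsigned long,	scanned)	/* pages looked at */
		__field(unsigned long,	reclaimed)	/* pages freed */
		__field(int,		priority)	/* scan priority */
	),

	TP_fast_assign(
		__entry->scanned	= scanned;
		__entry->reclaimed	= reclaimed;
		__entry->priority	= priority;
	),

	TP_printk("scanned=%lu reclaimed=%lu priority=%d",
		  __entry->scanned, __entry->reclaimed, __entry->priority)
);

Pairing this with a matching _begin event gives the latency of each
reclaim pass, and a large scanned-to-reclaimed ratio just before the
OOM kill tells us reclaim was running but finding nothing.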

> 
> Plus, kmemtrace already traces slab allocate/free activity.
> Do you mean you think this is really invasive?
> 
> 
> > > Scenario 2. page allocation failure by memory fragmentation
> > 
> > Are you talking about order>0 allocation failures here?  Most of the
> > slabs are single-page allocations now.
> 
> Yes, order>0,
> but I'm confused. Why do you talk about slab, not the page allocator?
> 
> Note, non-x86 architectures frequently use order-1 allocations for
> kernel stacks.

OK, I can add a tracepoint in the lumpy reclaim logic when it fails to
get enough contiguous memory to satisfy a high-order allocation.
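
Roughly like this (the event name and variables are made up for
illustration; a real hook would use whatever counters the lumpy path
already maintains):

	/*
	 * In the lumpy reclaim path, when the scan for contiguous pages
	 * comes up short of what the high-order allocation needs.
	 */
	if (nr_reclaimed < nr_needed)
		trace_mm_lumpy_reclaim_fail(order, nr_scanned, nr_reclaimed);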

> 
> 
> 
> > > Scenario 3. try_to_free_pages() shows very long latency. Why?
> > 
> > This is available in the mm tracepoints; they all include timestamps.
> 
> Perhaps not.
> Administrators need to know the reason, not the accumulated time; that's
> just the result.
> 
> We can guess some reasons:
>   - IO congestion

This can be seen when the number of page scans is significantly greater
than the number of page frees and pageouts.  Do you think we need to
combine these tracepoints, or add one to throttle_vm_writeout() when it
needs to stall?
 
>   - memory consumption is faster than reclaim

The anonymous and filemapped tracepoints combined with the reclaim
tracepoints will tell us this.  Do you think we need more tracepoints to
pinpoint when allocations outpace reclaim?

>   - memory fragmentation

Would adding the order to the page_allocation tracepoint satisfy this?
Currently this tracepoint only triggers when the allocation fails and we
need to reclaim memory.  Another option would be to include the order
information in the direct reclaim tracepoint so we can tell if it was
triggered by memory fragmentation.  Sorry, but I haven't seen many
cases in which fragmented memory caused failures.
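
Adding the order would only take one more field; a sketch, again with a
hypothetical event name:

TRACE_EVENT(mm_directreclaim_start,

	TP_PROTO(int order, gfp_t gfp_mask),

	TP_ARGS(order, gfp_mask),

	TP_STRUCT__entry(
		__field(int,	order)		/* allocation order */
		__field(gfp_t,	gfp_mask)	/* allocation flags */
	),

	TP_fast_assign(
		__entry->order	  = order;
		__entry->gfp_mask = gfp_mask;
	),

	TP_printk("order=%d gfp_mask=0x%x",
		  __entry->order, (unsigned int)__entry->gfp_mask)
);

Seeing order > 0 on the reclaim entry, followed by a failed high-order
allocation, would point straight at fragmentation.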

> 
> but it's only a guess; we often need to get actual data.
> 
> 
> > > Scenario 4. sar output shows that free memory dropped dramatically 10 minutes ago,
> > >             and it has already recovered now. What happened?
> > 
> > Is this really important?  It would take buffering lots of data to
> > figure out what happened in the past.
> 
> OK, my scenario description was a bit wrong.
> 
> If a userland process explicitly consumes memory or explicitly writes
> a lot of data, that is true.
> 
> Is this more appropriate?
> 
> "A userland process takes the same action periodically, but free memory
> dropped only 10 minutes ago. Why?"
> 
We could have a user space script that enables specific tracepoints only
when it notices something like free pages falling below some threshold,
and disables them when free pages climb back above some other
threshold.  Would this help?
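
A rough sketch of such a watcher, written in C rather than shell for
concreteness.  The paths and thresholds are assumptions: it presumes
debugfs is mounted at /sys/kernel/debug and that the mm events live
under an events/mm/ directory, which depends on how the patch groups
them:

#include <stdio.h>
#include <unistd.h>

#define ENABLE_FILE	"/sys/kernel/debug/tracing/events/mm/enable"
#define LOW_KB		(64 * 1024)	/* start tracing below 64MB free */
#define HIGH_KB		(128 * 1024)	/* stop tracing above 128MB free */

/* Pull MemFree out of /proc/meminfo, in kilobytes. */
static long memfree_kb(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];
	long kb = -1;

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "MemFree: %ld kB", &kb) == 1)
			break;
	fclose(f);
	return kb;
}

/* Write 1/0 to the event group's enable file. */
static void set_tracing(int on)
{
	FILE *f = fopen(ENABLE_FILE, "w");

	if (!f)
		return;
	fputs(on ? "1" : "0", f);
	fclose(f);
}

int main(void)
{
	int tracing = 0;

	for (;;) {
		long kb = memfree_kb();

		if (kb >= 0 && kb < LOW_KB && !tracing) {
			set_tracing(1);
			tracing = 1;
		} else if (kb > HIGH_KB && tracing) {
			set_tracing(0);
			tracing = 0;
		}
		sleep(1);
	}
	return 0;
}

The hysteresis between the two thresholds keeps it from flapping the
tracepoints on and off when free memory hovers near the limit.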

> 
> 
> > >   - suspects
> > >     - kernel memory leak
> > 
> > Other than direct callers to the page allocator, isn't that covered by
> > the kmemtrace stuff?
> 
> Yeah.
> Perhaps enhancing kmemtrace to cover the page allocator is a good approach.
> 
> 
> > >     - userland memory leak
> > 
> > The mm tracepoints track all user space allocations and frees (perhaps
> > too many?).
> 
> hmhm.

Is this a yes?  Would the user space script described above help?

> 
> 
> > 
> > >     - a stupid driver uses too much memory
> > 
> > Hopefully kmemtrace will catch this?
> 
> ditto.
> I agree that enhancing kmemtrace is a good idea.
> 
> > 
> > >     - a userland application suddenly starts to use a lot of memory
> > 
> > The mm tracepoints track all user space allocations and frees.
> 
> ok.
> 
> 
> > >   - what information is valuable?
> > >     - slab usage information (kmemtrace already provides this)
> > >     - page allocator usage information
> > >     - RSS of all processes when the OOM happened
> > >     - why recent try_to_free_pages() calls couldn't reclaim any pages?
> > 
> > The counters in the mm tracepoints do give counts, but not the reasons
> > the page reclaim code fails.
> 
> That's a very important key point; please don't ignore it.

OK, would you suggest changing the code to count failures, or simply
adding a tracepoint to the failure path, which would potentially capture
much more data?
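
The tracepoint version could be as simple as this (hypothetical event
name; the variables stand in for whatever state the reclaim loop has at
that point):

	/* On the path where reclaim gives up without freeing anything. */
	if (!nr_reclaimed)
		trace_mm_pagereclaim_fail(priority, total_scanned);

That way we would capture the priority we bottomed out at and how much
was scanned for nothing, instead of just bumping a bare failure count.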

> 
> 
> 



Thread overview: 20+ messages
2009-04-21 22:45 [Patch] mm tracepoints update Larry Woodman
2009-04-22  1:00 ` KOSAKI Motohiro
2009-04-22  9:57   ` Ingo Molnar
2009-04-22 12:07     ` Larry Woodman
2009-04-22 19:22       ` [Patch] mm tracepoints update - use case Larry Woodman
2009-04-23  0:48         ` KOSAKI Motohiro
2009-04-23  4:50           ` Andrew Morton
2009-04-23  8:42             ` Ingo Molnar
2009-04-23 11:47               ` Larry Woodman
2009-04-24 20:48                 ` Larry Woodman
2009-06-15 18:26           ` Rik van Riel
2009-06-17 14:07             ` Larry Woodman
2009-06-18  7:57             ` KOSAKI Motohiro
2009-06-18 19:22               ` Larry Woodman
2009-06-18 19:40                 ` Rik van Riel
2009-06-22  3:37                   ` KOSAKI Motohiro
2009-06-22 15:04                     ` Larry Woodman
2009-06-23  5:52                       ` KOSAKI Motohiro
2009-06-22  3:37                 ` KOSAKI Motohiro
2009-06-22 15:28                   ` Larry Woodman [this message]
