* [GIT PULL 0/5] perf/core improvements and fixes @ 2015-04-13 22:14 Arnaldo Carvalho de Melo 2015-04-13 22:14 ` [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page Arnaldo Carvalho de Melo ` (5 more replies) 0 siblings, 6 replies; 26+ messages in thread From: Arnaldo Carvalho de Melo @ 2015-04-13 22:14 UTC (permalink / raw) To: Ingo Molnar Cc: linux-kernel, Arnaldo Carvalho de Melo, David Ahern, He Kuang, Jiri Olsa, Joonsoo Kim, linux-mm, Masami Hiramatsu, Minchan Kim, Namhyung Kim, Peter Zijlstra, Steven Rostedt, Wang Nan, Arnaldo Carvalho de Melo Hi Ingo, Please consider pulling, Best regards, - Arnaldo The following changes since commit 066450be419fa48007a9f29e19828f2a86198754: perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200) are available in the git repository at: git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo for you to fetch changes up to be8d5b1c6b468d10bd2928bbd1a5ca3fd2980402: perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 17:59:41 -0300) ---------------------------------------------------------------- perf/core improvements and fixes: New features: - Analyze page allocator events also in 'perf kmem' (Namhyung Kim) User visible fixes: - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang) - lazy_line probe fixes in 'perf probe' (He Kuang) Infrastructure: - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim) Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> ---------------------------------------------------------------- He Kuang (3): perf probe: Set retprobe flag when probe in address-based alternative mode perf probe: Make --source avaiable when probe with lazy_line perf probe: Fix segfault when probe with lazy_line to file Namhyung Kim (2): tracing, mm: Record pfn instead of pointer to struct page perf kmem: Analyze page allocator events also include/trace/events/filemap.h | 8 +- include/trace/events/kmem.h | 42 +-- include/trace/events/vmscan.h | 8 +- tools/perf/Documentation/perf-kmem.txt | 8 +- tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++-- tools/perf/util/probe-event.c | 3 +- tools/perf/util/probe-event.h | 2 + tools/perf/util/probe-finder.c | 20 +- 8 files changed, 540 insertions(+), 51 deletions(-) ^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page 2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo @ 2015-04-13 22:14 ` Arnaldo Carvalho de Melo 2017-07-31 7:43 ` Vlastimil Babka 2015-04-13 22:14 ` [PATCH 2/5] perf kmem: Analyze page allocator events also Arnaldo Carvalho de Melo ` (4 subsequent siblings) 5 siblings, 1 reply; 26+ messages in thread From: Arnaldo Carvalho de Melo @ 2015-04-13 22:14 UTC (permalink / raw) To: Ingo Molnar Cc: linux-kernel, Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim, Peter Zijlstra, linux-mm, Arnaldo Carvalho de Melo From: Namhyung Kim <namhyung@kernel.org> The struct page is opaque for userspace tools, so it'd be better to save pfn in order to identify page frames. The textual output of $debugfs/tracing/trace file remains unchanged and only raw (binary) data format is changed - but thanks to libtraceevent, userspace tools which deal with the raw data (like perf and trace-cmd) can parse the format easily. So impact on the userspace will also be minimal. Signed-off-by: Namhyung Kim <namhyung@kernel.org> Based-on-patch-by: Joonsoo Kim <js1304@gmail.com> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/1428298576-9785-3-git-send-email-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> --- include/trace/events/filemap.h | 8 ++++---- include/trace/events/kmem.h | 42 +++++++++++++++++++++--------------------- include/trace/events/vmscan.h | 8 ++++---- 3 files changed, 29 insertions(+), 29 deletions(-) diff --git a/include/trace/events/filemap.h b/include/trace/events/filemap.h index 0421f49a20f7..42febb6bc1d5 100644 --- a/include/trace/events/filemap.h +++ b/include/trace/events/filemap.h @@ -18,14 +18,14 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache, TP_ARGS(page), TP_STRUCT__entry( - __field(struct page *, page) + __field(unsigned long, pfn) __field(unsigned long, i_ino) __field(unsigned long, index) __field(dev_t, s_dev) ), TP_fast_assign( - __entry->page = page; + __entry->pfn = page_to_pfn(page); __entry->i_ino = page->mapping->host->i_ino; __entry->index = page->index; if (page->mapping->host->i_sb) @@ -37,8 +37,8 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache, TP_printk("dev %d:%d ino %lx page=%p pfn=%lu ofs=%lu", MAJOR(__entry->s_dev), MINOR(__entry->s_dev), __entry->i_ino, - __entry->page, - page_to_pfn(__entry->page), + pfn_to_page(__entry->pfn), + __entry->pfn, __entry->index << PAGE_SHIFT) ); diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h index 4ad10baecd4d..81ea59812117 100644 --- a/include/trace/events/kmem.h +++ b/include/trace/events/kmem.h @@ -154,18 +154,18 @@ TRACE_EVENT(mm_page_free, TP_ARGS(page, order), TP_STRUCT__entry( - __field( struct page *, page ) + __field( unsigned long, pfn ) __field( unsigned int, order ) ), TP_fast_assign( - __entry->page = page; + __entry->pfn = page_to_pfn(page); __entry->order = order; ), TP_printk("page=%p pfn=%lu order=%d", - __entry->page, - page_to_pfn(__entry->page), + pfn_to_page(__entry->pfn), + __entry->pfn, __entry->order) ); @@ -176,18 +176,18 @@ TRACE_EVENT(mm_page_free_batched, TP_ARGS(page, cold), TP_STRUCT__entry( - __field( struct page *, page ) + __field( unsigned long, pfn ) __field( int, cold ) ), TP_fast_assign( - __entry->page = page; 
+ __entry->pfn = page_to_pfn(page); __entry->cold = cold; ), TP_printk("page=%p pfn=%lu order=0 cold=%d", - __entry->page, - page_to_pfn(__entry->page), + pfn_to_page(__entry->pfn), + __entry->pfn, __entry->cold) ); @@ -199,22 +199,22 @@ TRACE_EVENT(mm_page_alloc, TP_ARGS(page, order, gfp_flags, migratetype), TP_STRUCT__entry( - __field( struct page *, page ) + __field( unsigned long, pfn ) __field( unsigned int, order ) __field( gfp_t, gfp_flags ) __field( int, migratetype ) ), TP_fast_assign( - __entry->page = page; + __entry->pfn = page ? page_to_pfn(page) : -1UL; __entry->order = order; __entry->gfp_flags = gfp_flags; __entry->migratetype = migratetype; ), TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s", - __entry->page, - __entry->page ? page_to_pfn(__entry->page) : 0, + __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL, + __entry->pfn != -1UL ? __entry->pfn : 0, __entry->order, __entry->migratetype, show_gfp_flags(__entry->gfp_flags)) @@ -227,20 +227,20 @@ DECLARE_EVENT_CLASS(mm_page, TP_ARGS(page, order, migratetype), TP_STRUCT__entry( - __field( struct page *, page ) + __field( unsigned long, pfn ) __field( unsigned int, order ) __field( int, migratetype ) ), TP_fast_assign( - __entry->page = page; + __entry->pfn = page ? page_to_pfn(page) : -1UL; __entry->order = order; __entry->migratetype = migratetype; ), TP_printk("page=%p pfn=%lu order=%u migratetype=%d percpu_refill=%d", - __entry->page, - __entry->page ? page_to_pfn(__entry->page) : 0, + __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL, + __entry->pfn != -1UL ? __entry->pfn : 0, __entry->order, __entry->migratetype, __entry->order == 0) @@ -260,7 +260,7 @@ DEFINE_EVENT_PRINT(mm_page, mm_page_pcpu_drain, TP_ARGS(page, order, migratetype), TP_printk("page=%p pfn=%lu order=%d migratetype=%d", - __entry->page, page_to_pfn(__entry->page), + pfn_to_page(__entry->pfn), __entry->pfn, __entry->order, __entry->migratetype) ); @@ -275,7 +275,7 @@ TRACE_EVENT(mm_page_alloc_extfrag, alloc_migratetype, fallback_migratetype), TP_STRUCT__entry( - __field( struct page *, page ) + __field( unsigned long, pfn ) __field( int, alloc_order ) __field( int, fallback_order ) __field( int, alloc_migratetype ) @@ -284,7 +284,7 @@ TRACE_EVENT(mm_page_alloc_extfrag, ), TP_fast_assign( - __entry->page = page; + __entry->pfn = page_to_pfn(page); __entry->alloc_order = alloc_order; __entry->fallback_order = fallback_order; __entry->alloc_migratetype = alloc_migratetype; @@ -294,8 +294,8 @@ TRACE_EVENT(mm_page_alloc_extfrag, ), TP_printk("page=%p pfn=%lu alloc_order=%d fallback_order=%d pageblock_order=%d alloc_migratetype=%d fallback_migratetype=%d fragmenting=%d change_ownership=%d", - __entry->page, - page_to_pfn(__entry->page), + pfn_to_page(__entry->pfn), + __entry->pfn, __entry->alloc_order, __entry->fallback_order, pageblock_order, diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h index 69590b6ffc09..f66476b96264 100644 --- a/include/trace/events/vmscan.h +++ b/include/trace/events/vmscan.h @@ -336,18 +336,18 @@ TRACE_EVENT(mm_vmscan_writepage, TP_ARGS(page, reclaim_flags), TP_STRUCT__entry( - __field(struct page *, page) + __field(unsigned long, pfn) __field(int, reclaim_flags) ), TP_fast_assign( - __entry->page = page; + __entry->pfn = page_to_pfn(page); __entry->reclaim_flags = reclaim_flags; ), TP_printk("page=%p pfn=%lu flags=%s", - __entry->page, - page_to_pfn(__entry->page), + pfn_to_page(__entry->pfn), + __entry->pfn, show_reclaim_flags(__entry->reclaim_flags)) ); -- 1.9.3 ^ permalink 
raw reply related [flat|nested] 26+ messages in thread
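
[Editorial illustration] The commit message's claim that libtraceevent lets tools parse the new raw field deserves a concrete sketch of the consumer side. This is modeled on the perf_evsel__intval() calls that builtin-kmem.c adds in patch 2/5 of this series; the wrapper function name here is hypothetical, not part of the patch:

	/*
	 * Hedged sketch, not part of the patch: reading the new raw "pfn"
	 * field from a sample inside a perf tool.  perf_evsel__intval() is
	 * the accessor used by builtin-kmem.c in patch 2/5; it resolves the
	 * field by name against the event's format description via
	 * libtraceevent.
	 */
	static u64 sample_page_pfn(struct perf_evsel *evsel,
				   struct perf_sample *sample)
	{
		return perf_evsel__intval(evsel, sample, "pfn");
	}
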
* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2015-04-13 22:14 ` [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page Arnaldo Carvalho de Melo
@ 2017-07-31  7:43 ` Vlastimil Babka
  2017-08-31 11:38 ` Vlastimil Babka
  2017-08-31 13:43 ` Steven Rostedt
  0 siblings, 2 replies; 26+ messages in thread
From: Vlastimil Babka @ 2017-07-31 7:43 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim,
	Peter Zijlstra, linux-mm

On 04/14/2015 12:14 AM, Arnaldo Carvalho de Melo wrote:
> From: Namhyung Kim <namhyung@kernel.org>
>
> The struct page is opaque for userspace tools, so it'd be better to save
> pfn in order to identify page frames.
>
> The textual output of $debugfs/tracing/trace file remains unchanged and
> only raw (binary) data format is changed - but thanks to libtraceevent,
> userspace tools which deal with the raw data (like perf and trace-cmd)
> can parse the format easily.

Hmm, it seems trace-cmd doesn't work that well, at least on the current
x86_64 kernel where I noticed it:

 trace-cmd-22020 [003] 105219.542610: mm_page_alloc: [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1

I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
The events/kmem/mm_page_alloc/format file contains this for page:

 REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)

I think userspace can't know vmemmap_base nor the implied sizeof(struct
page) for pointer arithmetic?

On an older 4.4-based kernel:

 REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)

This also fails to parse, so it must be the struct page part?

I think the problem is, even if we solve this with some more
preprocessor trickery to make the format file contain only constant
numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
more complicated than simple arithmetic, and can't be exported in the
format file.

I'm afraid that to support userspace parsing of the trace data, we will
have to store both struct page and pfn... or perhaps give up on reporting
the struct page pointer completely. Thoughts?

> So impact on the userspace will also be
> minimal.
>
> [... rest of the quoted patch snipped ...]

^ permalink raw reply	[flat|nested] 26+ messages in thread
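
[Editorial illustration] To make the pointer-arithmetic problem concrete, here is a small standalone demonstration. The vmemmap base (the non-KASLR x86_64 default quoted in the thread) and the 64-byte struct page size are assumptions for illustration; the pfn is taken from the FAILED TO PARSE line above:

	#include <stdio.h>

	/* stand-in for struct page; assumed to be 64 bytes, as is typical
	 * on x86_64 -- this is an illustration, not kernel code */
	struct page_stub { unsigned long words[8]; };

	int main(void)
	{
		unsigned long vmemmap_base = 0xffffea0000000000UL; /* assumed */
		unsigned long pfn = 0x165cb4;  /* from the trace line above */
		struct page_stub *vmemmap = (struct page_stub *)vmemmap_base;

		/* C scales pointer arithmetic by sizeof(struct page_stub)... */
		printf("scaled: %p\n", (void *)(vmemmap + pfn));
		/* ...while a type-blind parser would compute base + pfn */
		printf("naive:  %#lx\n", vmemmap_base + pfn);
		return 0;
	}
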
* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-07-31  7:43 ` Vlastimil Babka
@ 2017-08-31 11:38 ` Vlastimil Babka
  2017-08-31 13:43 ` Steven Rostedt
  1 sibling, 0 replies; 26+ messages in thread
From: Vlastimil Babka @ 2017-08-31 11:38 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim,
	Peter Zijlstra, linux-mm

Ping?

On 07/31/2017 09:43 AM, Vlastimil Babka wrote:
> Hmm, it seems trace-cmd doesn't work that well, at least on the current
> x86_64 kernel where I noticed it:
>
>  trace-cmd-22020 [003] 105219.542610: mm_page_alloc: [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1
>
> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
>
> [... rest of the report and the quoted patch snipped ...]
>
> I'm afraid that to support userspace parsing of the trace data, we will
> have to store both struct page and pfn... or perhaps give up on reporting
> the struct page pointer completely. Thoughts?

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-07-31  7:43 ` Vlastimil Babka
  2017-08-31 11:38 ` Vlastimil Babka
@ 2017-08-31 13:43 ` Steven Rostedt
  2017-08-31 14:31 ` Vlastimil Babka
  1 sibling, 1 reply; 26+ messages in thread
From: Steven Rostedt @ 2017-08-31 13:43 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, linux-kernel, Namhyung Kim,
	David Ahern, Jiri Olsa, Minchan Kim, Peter Zijlstra, linux-mm

On Mon, 31 Jul 2017 09:43:41 +0200
Vlastimil Babka <vbabka@suse.cz> wrote:

> On 04/14/2015 12:14 AM, Arnaldo Carvalho de Melo wrote:
> > From: Namhyung Kim <namhyung@kernel.org>
> >
> > The struct page is opaque for userspace tools, so it'd be better to save
> > pfn in order to identify page frames.
> >
> > The textual output of $debugfs/tracing/trace file remains unchanged and
> > only raw (binary) data format is changed - but thanks to libtraceevent,
> > userspace tools which deal with the raw data (like perf and trace-cmd)
> > can parse the format easily.
>
> Hmm, it seems trace-cmd doesn't work that well, at least on the current
> x86_64 kernel where I noticed it:
>
>  trace-cmd-22020 [003] 105219.542610: mm_page_alloc: [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1

Which version of trace-cmd failed? It parses for me. Hmm, the
vmemmap_base isn't in the event format file, it's the actual address.
That's probably what failed to parse.

> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
> The events/kmem/mm_page_alloc/format file contains this for page:
>
>  REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)

But yeah, I think the output is wrong. I just ran this:

 page=0xffffea00000a62f4 pfn=680692 order=0 migratetype=0 gfp_flags=GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK

But running it with trace-cmd report -R (raw format):

 mm_page_alloc: pfn=0xa62f4 order=0 gfp_flags=24150208 migratetype=0

The parser currently ignores types, so it doesn't do pointer arithmetic
correctly, and it would be hard to do so here, as it doesn't know the
size of struct page. What could work is if we changed the printf fmt to
be:

 (unsigned long)(0xffffea0000000000UL) + (REC->pfn * sizeof(struct page))

> I think userspace can't know vmemmap_base nor the implied sizeof(struct
> page) for pointer arithmetic?
>
> On an older 4.4-based kernel:
>
>  REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)

This is what I have on 4.13-rc7

> This also fails to parse, so it must be the struct page part?

Again, what version of trace-cmd do you have?

> I think the problem is, even if we solve this with some more
> preprocessor trickery to make the format file contain only constant
> numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
> more complicated than simple arithmetic, and can't be exported in the
> format file.
>
> I'm afraid that to support userspace parsing of the trace data, we will
> have to store both struct page and pfn... or perhaps give up on reporting
> the struct page pointer completely. Thoughts?

Had some thoughts up above.

-- Steve

^ permalink raw reply	[flat|nested] 26+ messages in thread
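
[Editorial illustration] Rendered as a tracepoint print format, the suggestion above would look roughly like the sketch below. Treat it as a sketch only: the base constant is the non-KASLR x86_64 default, and whether sizeof(struct page) can really be exported as something userspace can evaluate across configurations is exactly what the rest of this thread questions.

	TP_printk("page=0x%lx pfn=%lu order=%d",
		  /* plain integer math instead of struct page pointer
		   * arithmetic, per the suggestion above */
		  (unsigned long)(0xffffea0000000000UL) +
			__entry->pfn * sizeof(struct page),
		  __entry->pfn,
		  __entry->order)
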
* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-08-31 13:43 ` Steven Rostedt
@ 2017-08-31 14:31 ` Vlastimil Babka
  2017-08-31 14:44 ` Steven Rostedt
  0 siblings, 1 reply; 26+ messages in thread
From: Vlastimil Babka @ 2017-08-31 14:31 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, linux-kernel, Namhyung Kim,
	David Ahern, Jiri Olsa, Minchan Kim, Peter Zijlstra, linux-mm

On 08/31/2017 03:43 PM, Steven Rostedt wrote:
> On Mon, 31 Jul 2017 09:43:41 +0200 Vlastimil Babka <vbabka@suse.cz> wrote:
>
>> Hmm, it seems trace-cmd doesn't work that well, at least on the current
>> x86_64 kernel where I noticed it:
>>
>>  trace-cmd-22020 [003] 105219.542610: mm_page_alloc: [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1
>
> Which version of trace-cmd failed? It parses for me. Hmm, the
> vmemmap_base isn't in the event format file, it's the actual address.
> That's probably what failed to parse.

Mine says 2.6. With 4.13-rc6 I get FAILED TO PARSE.

>> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
>> The events/kmem/mm_page_alloc/format file contains this for page:
>>
>>  REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)
>
> But yeah, I think the output is wrong. I just ran this:
>
>  page=0xffffea00000a62f4 pfn=680692 order=0 migratetype=0 gfp_flags=GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK
>
> But running it with trace-cmd report -R (raw format):
>
>  mm_page_alloc: pfn=0xa62f4 order=0 gfp_flags=24150208 migratetype=0
>
> The parser currently ignores types, so it doesn't do pointer arithmetic
> correctly, and it would be hard to do so here, as it doesn't know the
> size of struct page. What could work is if we changed the printf fmt to
> be:
>
>  (unsigned long)(0xffffea0000000000UL) + (REC->pfn * sizeof(struct page))
>
>> I think userspace can't know vmemmap_base nor the implied sizeof(struct
>> page) for pointer arithmetic?
>>
>> On an older 4.4-based kernel:
>>
>>  REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)
>
> This is what I have on 4.13-rc7
>
>> This also fails to parse, so it must be the struct page part?
>
> Again, what version of trace-cmd do you have?

On the older distro it was 2.0.4

>> I think the problem is, even if we solve this with some more
>> preprocessor trickery to make the format file contain only constant
>> numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
>> more complicated than simple arithmetic, and can't be exported in the
>> format file.
>>
>> I'm afraid that to support userspace parsing of the trace data, we will
>> have to store both struct page and pfn... or perhaps give up on reporting
>> the struct page pointer completely. Thoughts?
>
> Had some thoughts up above.

Yeah, it could be made to work for some configurations, but see the part
about "sparse memory model without vmemmap" above.

> -- Steve

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-08-31 14:31 ` Vlastimil Babka
@ 2017-08-31 14:44 ` Steven Rostedt
  2017-09-01  8:16 ` Vlastimil Babka
  0 siblings, 1 reply; 26+ messages in thread
From: Steven Rostedt @ 2017-08-31 14:44 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, linux-kernel, Namhyung Kim,
	David Ahern, Jiri Olsa, Minchan Kim, Peter Zijlstra, linux-mm

On Thu, 31 Aug 2017 16:31:36 +0200
Vlastimil Babka <vbabka@suse.cz> wrote:

> > Which version of trace-cmd failed? It parses for me. Hmm, the
> > vmemmap_base isn't in the event format file, it's the actual address.
> > That's probably what failed to parse.
>
> Mine says 2.6. With 4.13-rc6 I get FAILED TO PARSE.

Right, but you have the vmemmap_base in the event format, which can't
be parsed by userspace because it has no idea what the value of the
vmemmap_base is.

> >> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
> >> The events/kmem/mm_page_alloc/format file contains this for page:
> >>
> >>  REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)
>
> >> On an older 4.4-based kernel:
> >>
> >>  REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)
> >
> > This is what I have on 4.13-rc7
> >
> >> This also fails to parse, so it must be the struct page part?
> >
> > Again, what version of trace-cmd do you have?
>
> On the older distro it was 2.0.4

Right. That's probably why it failed to parse here. If you installed
the latest trace-cmd from the git repo, it probably will parse fine.

> >> I think the problem is, even if we solve this with some more
> >> preprocessor trickery to make the format file contain only constant
> >> numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
> >> more complicated than simple arithmetic, and can't be exported in the
> >> format file.
> >>
> >> I'm afraid that to support userspace parsing of the trace data, we will
> >> have to store both struct page and pfn... or perhaps give up on reporting
> >> the struct page pointer completely. Thoughts?
> >
> > Had some thoughts up above.
>
> Yeah, it could be made to work for some configurations, but see the part
> about "sparse memory model without vmemmap" above.

Right, but that should work with the latest trace-cmd. Does it?

-- Steve

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-08-31 14:44 ` Steven Rostedt
@ 2017-09-01  8:16 ` Vlastimil Babka
  2017-09-01 11:15 ` Steven Rostedt
  0 siblings, 1 reply; 26+ messages in thread
From: Vlastimil Babka @ 2017-09-01 8:16 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, linux-kernel, Namhyung Kim,
	David Ahern, Jiri Olsa, Minchan Kim, Peter Zijlstra, linux-mm

On 08/31/2017 04:44 PM, Steven Rostedt wrote:
> On Thu, 31 Aug 2017 16:31:36 +0200
> Vlastimil Babka <vbabka@suse.cz> wrote:
>
>>> Which version of trace-cmd failed? It parses for me. Hmm, the
>>> vmemmap_base isn't in the event format file, it's the actual address.
>>> That's probably what failed to parse.
>>
>> Mine says 2.6. With 4.13-rc6 I get FAILED TO PARSE.
>
> Right, but you have the vmemmap_base in the event format, which can't
> be parsed by userspace because it has no idea what the value of the
> vmemmap_base is.

This seems to be caused by CONFIG_RANDOMIZE_MEMORY. If we somehow put
the value in the format file, it's an info leak? (but I guess kernels
that care must have ftrace disabled anyway :)

>>>> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
>>>> The events/kmem/mm_page_alloc/format file contains this for page:
>>>>
>>>> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)
>>>>
>>>> I think the problem is, even if we solve this with some more
>>>> preprocessor trickery to make the format file contain only constant
>>>> numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
>>>> more complicated than simple arithmetic, and can't be exported in the
>>>> format file.
>>>>
>>>> I'm afraid that to support userspace parsing of the trace data, we will
>>>> have to store both struct page and pfn... or perhaps give up on reporting
>>>> the struct page pointer completely. Thoughts?
>>>
>>> Had some thoughts up above.
>>
>> Yeah, it could be made to work for some configurations, but see the part
>> about "sparse memory model without vmemmap" above.
>
> Right, but that should work with the latest trace-cmd. Does it?

Hmm, by "sparse memory model without vmemmap" I don't mean there's a
number instead of "vmemmap_base". I mean CONFIG_SPARSEMEM=y (without
CONFIG_SPARSEMEM_VMEMMAP).

Then __page_to_pfn() looks like this:

#define __page_to_pfn(pg)					\
({	const struct page *__pg = (pg);				\
	int __sec = page_to_section(__pg);			\
	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec)));	\
})

Then the relevant part of the format file looks like this:

REC->pfn != -1UL ? ({ unsigned long __pfn = (REC->pfn); struct mem_section *__sec = __pfn_to_section(__pfn); __section_mem_map_addr(__sec) + __pfn; }) : ((void *)0)

The section things involve some array lookups, so I don't see how we
could pass it to tracing userspace. Would we want to special-case this
config to store both pfn and struct page in the trace frame? And make
sure the simpler ones work despite all the existing gotchas?

I'd rather say we should either store both pfn and page pointer, or
just throw away the page pointer, as the pfn is enough to e.g. match
alloc and free, and also much more deterministic.

> -- Steve

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-09-01  8:16 ` Vlastimil Babka
@ 2017-09-01 11:15 ` Steven Rostedt
  0 siblings, 0 replies; 26+ messages in thread
From: Steven Rostedt @ 2017-09-01 11:15 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, linux-kernel, Namhyung Kim,
	David Ahern, Jiri Olsa, Minchan Kim, Peter Zijlstra, linux-mm

On Fri, 1 Sep 2017 10:16:21 +0200
Vlastimil Babka <vbabka@suse.cz> wrote:

> > Right, but that should work with the latest trace-cmd. Does it?
>
> Hmm, by "sparse memory model without vmemmap" I don't mean there's a
> number instead of "vmemmap_base". I mean CONFIG_SPARSEMEM=y (without
> CONFIG_SPARSEMEM_VMEMMAP).
>
> Then __page_to_pfn() looks like this:
>
> #define __page_to_pfn(pg)					\
> ({	const struct page *__pg = (pg);				\
> 	int __sec = page_to_section(__pg);			\
> 	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec)));	\
> })
>
> Then the relevant part of the format file looks like this:
>
> REC->pfn != -1UL ? ({ unsigned long __pfn = (REC->pfn); struct mem_section *__sec = __pfn_to_section(__pfn); __section_mem_map_addr(__sec) + __pfn; }) : ((void *)0)

Ouch.

> The section things involve some array lookups, so I don't see how we
> could pass it to tracing userspace. Would we want to special-case this
> config to store both pfn and struct page in the trace frame? And make
> sure the simpler ones work despite all the existing gotchas?
>
> I'd rather say we should either store both pfn and page pointer, or
> just throw away the page pointer, as the pfn is enough to e.g. match
> alloc and free, and also much more deterministic.

Write up a patch and we'll take a look.

-- Steve

^ permalink raw reply	[flat|nested] 26+ messages in thread
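
[Editorial illustration] No patch materialized in this subthread, but the "store both" option Vlastimil describes would look roughly as follows, reusing the event-definition style from the patch at the top of the thread; the exact field layout is illustrative, not a tested change:

	TP_STRUCT__entry(
		__field( unsigned long, pfn  )
		__field( struct page *, page )	/* recorded, never decoded */
	),

	TP_fast_assign(
		__entry->page = page;
		__entry->pfn  = page ? page_to_pfn(page) : -1UL;
	),

	/* userspace prints both fields verbatim; no pfn_to_page() needed */
	TP_printk("page=%p pfn=%lu", __entry->page, __entry->pfn)
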
* [PATCH 2/5] perf kmem: Analyze page allocator events also
  2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo
  2015-04-13 22:14 ` [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page Arnaldo Carvalho de Melo
@ 2015-04-13 22:14 ` Arnaldo Carvalho de Melo
  2015-04-13 22:15 ` [PATCH 3/5] perf probe: Set retprobe flag when probe in address-based alternative mode Arnaldo Carvalho de Melo
  ` (3 subsequent siblings)
  5 siblings, 0 replies; 26+ messages in thread
From: Arnaldo Carvalho de Melo @ 2015-04-13 22:14 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Namhyung Kim, David Ahern, Jiri Olsa, Joonsoo Kim,
	Minchan Kim, Peter Zijlstra, linux-mm, Arnaldo Carvalho de Melo

From: Namhyung Kim <namhyung@kernel.org>

The perf kmem command records and analyzes kernel memory allocation
only for SLAB objects. This patch implements a simple page allocator
analyzer using the kmem:mm_page_alloc and kmem:mm_page_free events.

It adds two new options, --slab and --page. The --slab option selects
SLAB allocator analysis, which is what perf kmem currently does. The
new --page option enables page allocator events and analyzes kernel
memory usage at page granularity. Currently, only the 'stat --alloc'
subcommand is implemented. If neither --slab nor --page is specified,
--slab is implied.

First run 'perf kmem record' to generate a suitable perf.data file:

  # perf kmem record --page sleep 5

Then run 'perf kmem stat' to postprocess the perf.data file:

  # perf kmem stat --page --alloc --line 10

  -------------------------------------------------------------------------------
   PFN              | Total alloc (KB) | Hits      | Order | Mig.type | GFP flags
  -------------------------------------------------------------------------------
            4045014 |               16 |         1 |     2 |  RECLAIM | 00285250
            4143980 |               16 |         1 |     2 |  RECLAIM | 00285250
            3938658 |               16 |         1 |     2 |  RECLAIM | 00285250
            4045400 |               16 |         1 |     2 |  RECLAIM | 00285250
            3568708 |               16 |         1 |     2 |  RECLAIM | 00285250
            3729824 |               16 |         1 |     2 |  RECLAIM | 00285250
            3657210 |               16 |         1 |     2 |  RECLAIM | 00285250
            4120750 |               16 |         1 |     2 |  RECLAIM | 00285250
            3678850 |               16 |         1 |     2 |  RECLAIM | 00285250
            3693874 |               16 |         1 |     2 |  RECLAIM | 00285250
   ...              | ...              | ...       | ...   | ...      | ...
  -------------------------------------------------------------------------------

  SUMMARY (page allocator)
  ========================
  Total allocation requests     :           44,260 [          177,256 KB ]
  Total free requests           :              117 [              468 KB ]

  Total alloc+freed requests    :               49 [              196 KB ]
  Total alloc-only requests     :           44,211 [          177,060 KB ]
  Total free-only requests      :               68 [              272 KB ]

  Total allocation failures     :                0 [                0 KB ]

  Order     Unmovable  Reclaimable      Movable     Reserved CMA/Isolated
  ----- ------------ ------------ ------------ ------------ ------------
      0           32            .       44,210            .            .
      1            .            .            .            .            .
      2            .           18            .            .            .
      3            .            .            .            .            .
      4            .            .            .            .            .
      5            .            .            .            .            .
      6            .            .            .            .            .
      7            .            .            .            .            .
      8            .            .            .            .            .
      9            .            .            .            .            .
     10            .            .            .            .            .
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/1428298576-9785-4-git-send-email-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> --- tools/perf/Documentation/perf-kmem.txt | 8 +- tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++-- 2 files changed, 491 insertions(+), 17 deletions(-) diff --git a/tools/perf/Documentation/perf-kmem.txt b/tools/perf/Documentation/perf-kmem.txt index 150253cc3c97..23219c65c16f 100644 --- a/tools/perf/Documentation/perf-kmem.txt +++ b/tools/perf/Documentation/perf-kmem.txt @@ -3,7 +3,7 @@ perf-kmem(1) NAME ---- -perf-kmem - Tool to trace/measure kernel memory(slab) properties +perf-kmem - Tool to trace/measure kernel memory properties SYNOPSIS -------- @@ -46,6 +46,12 @@ OPTIONS --raw-ip:: Print raw ip instead of symbol +--slab:: + Analyze SLAB allocator events. + +--page:: + Analyze page allocator events + SEE ALSO -------- linkperf:perf-record[1] diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c index 4ebf65c79434..63ea01349b6e 100644 --- a/tools/perf/builtin-kmem.c +++ b/tools/perf/builtin-kmem.c @@ -22,6 +22,11 @@ #include <linux/string.h> #include <locale.h> +static int kmem_slab; +static int kmem_page; + +static long kmem_page_size; + struct alloc_stat; typedef int (*sort_fn_t)(struct alloc_stat *, struct alloc_stat *); @@ -226,6 +231,244 @@ static int perf_evsel__process_free_event(struct perf_evsel *evsel, return 0; } +static u64 total_page_alloc_bytes; +static u64 total_page_free_bytes; +static u64 total_page_nomatch_bytes; +static u64 total_page_fail_bytes; +static unsigned long nr_page_allocs; +static unsigned long nr_page_frees; +static unsigned long nr_page_fails; +static unsigned long nr_page_nomatch; + +static bool use_pfn; + +#define MAX_MIGRATE_TYPES 6 +#define MAX_PAGE_ORDER 11 + +static int order_stats[MAX_PAGE_ORDER][MAX_MIGRATE_TYPES]; + +struct page_stat { + struct rb_node node; + u64 page; + int order; + unsigned gfp_flags; + unsigned migrate_type; + u64 alloc_bytes; + u64 free_bytes; + int nr_alloc; + int nr_free; +}; + +static struct rb_root page_tree; +static struct rb_root page_alloc_tree; +static struct rb_root page_alloc_sorted; + +static struct page_stat *search_page(unsigned long page, bool create) +{ + struct rb_node **node = &page_tree.rb_node; + struct rb_node *parent = NULL; + struct page_stat *data; + + while (*node) { + s64 cmp; + + parent = *node; + data = rb_entry(*node, struct page_stat, node); + + cmp = data->page - page; + if (cmp < 0) + node = &parent->rb_left; + else if (cmp > 0) + node = &parent->rb_right; + else + return data; + } + + if (!create) + return NULL; + + data = zalloc(sizeof(*data)); + if (data != NULL) { + data->page = page; + + rb_link_node(&data->node, parent, node); + rb_insert_color(&data->node, &page_tree); + } + + return data; +} + +static int page_stat_cmp(struct page_stat *a, struct page_stat *b) +{ + if (a->page > b->page) + return -1; + if (a->page < b->page) + return 1; + if (a->order > b->order) + return -1; + if (a->order < b->order) + return 1; + if (a->migrate_type > b->migrate_type) + return -1; + if (a->migrate_type < b->migrate_type) + return 1; + if (a->gfp_flags > b->gfp_flags) + return -1; + if (a->gfp_flags < b->gfp_flags) + return 
1; + return 0; +} + +static struct page_stat *search_page_alloc_stat(struct page_stat *stat, bool create) +{ + struct rb_node **node = &page_alloc_tree.rb_node; + struct rb_node *parent = NULL; + struct page_stat *data; + + while (*node) { + s64 cmp; + + parent = *node; + data = rb_entry(*node, struct page_stat, node); + + cmp = page_stat_cmp(data, stat); + if (cmp < 0) + node = &parent->rb_left; + else if (cmp > 0) + node = &parent->rb_right; + else + return data; + } + + if (!create) + return NULL; + + data = zalloc(sizeof(*data)); + if (data != NULL) { + data->page = stat->page; + data->order = stat->order; + data->gfp_flags = stat->gfp_flags; + data->migrate_type = stat->migrate_type; + + rb_link_node(&data->node, parent, node); + rb_insert_color(&data->node, &page_alloc_tree); + } + + return data; +} + +static bool valid_page(u64 pfn_or_page) +{ + if (use_pfn && pfn_or_page == -1UL) + return false; + if (!use_pfn && pfn_or_page == 0) + return false; + return true; +} + +static int perf_evsel__process_page_alloc_event(struct perf_evsel *evsel, + struct perf_sample *sample) +{ + u64 page; + unsigned int order = perf_evsel__intval(evsel, sample, "order"); + unsigned int gfp_flags = perf_evsel__intval(evsel, sample, "gfp_flags"); + unsigned int migrate_type = perf_evsel__intval(evsel, sample, + "migratetype"); + u64 bytes = kmem_page_size << order; + struct page_stat *stat; + struct page_stat this = { + .order = order, + .gfp_flags = gfp_flags, + .migrate_type = migrate_type, + }; + + if (use_pfn) + page = perf_evsel__intval(evsel, sample, "pfn"); + else + page = perf_evsel__intval(evsel, sample, "page"); + + nr_page_allocs++; + total_page_alloc_bytes += bytes; + + if (!valid_page(page)) { + nr_page_fails++; + total_page_fail_bytes += bytes; + + return 0; + } + + /* + * This is to find the current page (with correct gfp flags and + * migrate type) at free event. 
+ */ + stat = search_page(page, true); + if (stat == NULL) + return -ENOMEM; + + stat->order = order; + stat->gfp_flags = gfp_flags; + stat->migrate_type = migrate_type; + + this.page = page; + stat = search_page_alloc_stat(&this, true); + if (stat == NULL) + return -ENOMEM; + + stat->nr_alloc++; + stat->alloc_bytes += bytes; + + order_stats[order][migrate_type]++; + + return 0; +} + +static int perf_evsel__process_page_free_event(struct perf_evsel *evsel, + struct perf_sample *sample) +{ + u64 page; + unsigned int order = perf_evsel__intval(evsel, sample, "order"); + u64 bytes = kmem_page_size << order; + struct page_stat *stat; + struct page_stat this = { + .order = order, + }; + + if (use_pfn) + page = perf_evsel__intval(evsel, sample, "pfn"); + else + page = perf_evsel__intval(evsel, sample, "page"); + + nr_page_frees++; + total_page_free_bytes += bytes; + + stat = search_page(page, false); + if (stat == NULL) { + pr_debug2("missing free at page %"PRIx64" (order: %d)\n", + page, order); + + nr_page_nomatch++; + total_page_nomatch_bytes += bytes; + + return 0; + } + + this.page = page; + this.gfp_flags = stat->gfp_flags; + this.migrate_type = stat->migrate_type; + + rb_erase(&stat->node, &page_tree); + free(stat); + + stat = search_page_alloc_stat(&this, false); + if (stat == NULL) + return -ENOENT; + + stat->nr_free++; + stat->free_bytes += bytes; + + return 0; +} + typedef int (*tracepoint_handler)(struct perf_evsel *evsel, struct perf_sample *sample); @@ -270,8 +513,9 @@ static double fragmentation(unsigned long n_req, unsigned long n_alloc) return 100.0 - (100.0 * n_req / n_alloc); } -static void __print_result(struct rb_root *root, struct perf_session *session, - int n_lines, int is_caller) +static void __print_slab_result(struct rb_root *root, + struct perf_session *session, + int n_lines, int is_caller) { struct rb_node *next; struct machine *machine = &session->machines.host; @@ -323,9 +567,56 @@ static void __print_result(struct rb_root *root, struct perf_session *session, printf("%.105s\n", graph_dotted_line); } -static void print_summary(void) +static const char * const migrate_type_str[] = { + "UNMOVABL", + "RECLAIM", + "MOVABLE", + "RESERVED", + "CMA/ISLT", + "UNKNOWN", +}; + +static void __print_page_result(struct rb_root *root, + struct perf_session *session __maybe_unused, + int n_lines) +{ + struct rb_node *next = rb_first(root); + const char *format; + + printf("\n%.80s\n", graph_dotted_line); + printf(" %-16s | Total alloc (KB) | Hits | Order | Mig.type | GFP flags\n", + use_pfn ? "PFN" : "Page"); + printf("%.80s\n", graph_dotted_line); + + if (use_pfn) + format = " %16llu | %'16llu | %'9d | %5d | %8s | %08lx\n"; + else + format = " %016llx | %'16llu | %'9d | %5d | %8s | %08lx\n"; + + while (next && n_lines--) { + struct page_stat *data; + + data = rb_entry(next, struct page_stat, node); + + printf(format, (unsigned long long)data->page, + (unsigned long long)data->alloc_bytes / 1024, + data->nr_alloc, data->order, + migrate_type_str[data->migrate_type], + (unsigned long)data->gfp_flags); + + next = rb_next(next); + } + + if (n_lines == -1) + printf(" ... | ... | ... | ... | ... | ... 
\n"); + + printf("%.80s\n", graph_dotted_line); +} + +static void print_slab_summary(void) { - printf("\nSUMMARY\n=======\n"); + printf("\nSUMMARY (SLAB allocator)"); + printf("\n========================\n"); printf("Total bytes requested: %'lu\n", total_requested); printf("Total bytes allocated: %'lu\n", total_allocated); printf("Total bytes wasted on internal fragmentation: %'lu\n", @@ -335,13 +626,73 @@ static void print_summary(void) printf("Cross CPU allocations: %'lu/%'lu\n", nr_cross_allocs, nr_allocs); } -static void print_result(struct perf_session *session) +static void print_page_summary(void) +{ + int o, m; + u64 nr_alloc_freed = nr_page_frees - nr_page_nomatch; + u64 total_alloc_freed_bytes = total_page_free_bytes - total_page_nomatch_bytes; + + printf("\nSUMMARY (page allocator)"); + printf("\n========================\n"); + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total allocation requests", + nr_page_allocs, total_page_alloc_bytes / 1024); + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total free requests", + nr_page_frees, total_page_free_bytes / 1024); + printf("\n"); + + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total alloc+freed requests", + nr_alloc_freed, (total_alloc_freed_bytes) / 1024); + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total alloc-only requests", + nr_page_allocs - nr_alloc_freed, + (total_page_alloc_bytes - total_alloc_freed_bytes) / 1024); + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total free-only requests", + nr_page_nomatch, total_page_nomatch_bytes / 1024); + printf("\n"); + + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total allocation failures", + nr_page_fails, total_page_fail_bytes / 1024); + printf("\n"); + + printf("%5s %12s %12s %12s %12s %12s\n", "Order", "Unmovable", + "Reclaimable", "Movable", "Reserved", "CMA/Isolated"); + printf("%.5s %.12s %.12s %.12s %.12s %.12s\n", graph_dotted_line, + graph_dotted_line, graph_dotted_line, graph_dotted_line, + graph_dotted_line, graph_dotted_line); + + for (o = 0; o < MAX_PAGE_ORDER; o++) { + printf("%5d", o); + for (m = 0; m < MAX_MIGRATE_TYPES - 1; m++) { + if (order_stats[o][m]) + printf(" %'12d", order_stats[o][m]); + else + printf(" %12c", '.'); + } + printf("\n"); + } +} + +static void print_slab_result(struct perf_session *session) { if (caller_flag) - __print_result(&root_caller_sorted, session, caller_lines, 1); + __print_slab_result(&root_caller_sorted, session, caller_lines, 1); + if (alloc_flag) + __print_slab_result(&root_alloc_sorted, session, alloc_lines, 0); + print_slab_summary(); +} + +static void print_page_result(struct perf_session *session) +{ if (alloc_flag) - __print_result(&root_alloc_sorted, session, alloc_lines, 0); - print_summary(); + __print_page_result(&page_alloc_sorted, session, alloc_lines); + print_page_summary(); +} + +static void print_result(struct perf_session *session) +{ + if (kmem_slab) + print_slab_result(session); + if (kmem_page) + print_page_result(session); } struct sort_dimension { @@ -353,8 +704,8 @@ struct sort_dimension { static LIST_HEAD(caller_sort); static LIST_HEAD(alloc_sort); -static void sort_insert(struct rb_root *root, struct alloc_stat *data, - struct list_head *sort_list) +static void sort_slab_insert(struct rb_root *root, struct alloc_stat *data, + struct list_head *sort_list) { struct rb_node **new = &(root->rb_node); struct rb_node *parent = NULL; @@ -383,8 +734,8 @@ static void sort_insert(struct rb_root *root, struct alloc_stat *data, rb_insert_color(&data->node, root); } -static void __sort_result(struct 
rb_root *root, struct rb_root *root_sorted, - struct list_head *sort_list) +static void __sort_slab_result(struct rb_root *root, struct rb_root *root_sorted, + struct list_head *sort_list) { struct rb_node *node; struct alloc_stat *data; @@ -396,26 +747,79 @@ static void __sort_result(struct rb_root *root, struct rb_root *root_sorted, rb_erase(node, root); data = rb_entry(node, struct alloc_stat, node); - sort_insert(root_sorted, data, sort_list); + sort_slab_insert(root_sorted, data, sort_list); + } +} + +static void sort_page_insert(struct rb_root *root, struct page_stat *data) +{ + struct rb_node **new = &root->rb_node; + struct rb_node *parent = NULL; + + while (*new) { + struct page_stat *this; + int cmp = 0; + + this = rb_entry(*new, struct page_stat, node); + parent = *new; + + /* TODO: support more sort key */ + cmp = data->alloc_bytes - this->alloc_bytes; + + if (cmp > 0) + new = &parent->rb_left; + else + new = &parent->rb_right; + } + + rb_link_node(&data->node, parent, new); + rb_insert_color(&data->node, root); +} + +static void __sort_page_result(struct rb_root *root, struct rb_root *root_sorted) +{ + struct rb_node *node; + struct page_stat *data; + + for (;;) { + node = rb_first(root); + if (!node) + break; + + rb_erase(node, root); + data = rb_entry(node, struct page_stat, node); + sort_page_insert(root_sorted, data); } } static void sort_result(void) { - __sort_result(&root_alloc_stat, &root_alloc_sorted, &alloc_sort); - __sort_result(&root_caller_stat, &root_caller_sorted, &caller_sort); + if (kmem_slab) { + __sort_slab_result(&root_alloc_stat, &root_alloc_sorted, + &alloc_sort); + __sort_slab_result(&root_caller_stat, &root_caller_sorted, + &caller_sort); + } + if (kmem_page) { + __sort_page_result(&page_alloc_tree, &page_alloc_sorted); + } } static int __cmd_kmem(struct perf_session *session) { int err = -EINVAL; + struct perf_evsel *evsel; const struct perf_evsel_str_handler kmem_tracepoints[] = { + /* slab allocator */ { "kmem:kmalloc", perf_evsel__process_alloc_event, }, { "kmem:kmem_cache_alloc", perf_evsel__process_alloc_event, }, { "kmem:kmalloc_node", perf_evsel__process_alloc_node_event, }, { "kmem:kmem_cache_alloc_node", perf_evsel__process_alloc_node_event, }, { "kmem:kfree", perf_evsel__process_free_event, }, { "kmem:kmem_cache_free", perf_evsel__process_free_event, }, + /* page allocator */ + { "kmem:mm_page_alloc", perf_evsel__process_page_alloc_event, }, + { "kmem:mm_page_free", perf_evsel__process_page_free_event, }, }; if (!perf_session__has_traces(session, "kmem record")) @@ -426,10 +830,20 @@ static int __cmd_kmem(struct perf_session *session) goto out; } + evlist__for_each(session->evlist, evsel) { + if (!strcmp(perf_evsel__name(evsel), "kmem:mm_page_alloc") && + perf_evsel__field(evsel, "pfn")) { + use_pfn = true; + break; + } + } + setup_pager(); err = perf_session__process_events(session); - if (err != 0) + if (err != 0) { + pr_err("error during process events: %d\n", err); goto out; + } sort_result(); print_result(session); out: @@ -612,6 +1026,22 @@ static int parse_alloc_opt(const struct option *opt __maybe_unused, return 0; } +static int parse_slab_opt(const struct option *opt __maybe_unused, + const char *arg __maybe_unused, + int unset __maybe_unused) +{ + kmem_slab = (kmem_page + 1); + return 0; +} + +static int parse_page_opt(const struct option *opt __maybe_unused, + const char *arg __maybe_unused, + int unset __maybe_unused) +{ + kmem_page = (kmem_slab + 1); + return 0; +} + static int parse_line_opt(const struct option *opt 
__maybe_unused, const char *arg, int unset __maybe_unused) { @@ -634,6 +1064,8 @@ static int __cmd_record(int argc, const char **argv) { const char * const record_args[] = { "record", "-a", "-R", "-c", "1", + }; + const char * const slab_events[] = { "-e", "kmem:kmalloc", "-e", "kmem:kmalloc_node", "-e", "kmem:kfree", @@ -641,10 +1073,19 @@ static int __cmd_record(int argc, const char **argv) "-e", "kmem:kmem_cache_alloc_node", "-e", "kmem:kmem_cache_free", }; + const char * const page_events[] = { + "-e", "kmem:mm_page_alloc", + "-e", "kmem:mm_page_free", + }; unsigned int rec_argc, i, j; const char **rec_argv; rec_argc = ARRAY_SIZE(record_args) + argc - 1; + if (kmem_slab) + rec_argc += ARRAY_SIZE(slab_events); + if (kmem_page) + rec_argc += ARRAY_SIZE(page_events); + rec_argv = calloc(rec_argc + 1, sizeof(char *)); if (rec_argv == NULL) @@ -653,6 +1094,15 @@ static int __cmd_record(int argc, const char **argv) for (i = 0; i < ARRAY_SIZE(record_args); i++) rec_argv[i] = strdup(record_args[i]); + if (kmem_slab) { + for (j = 0; j < ARRAY_SIZE(slab_events); j++, i++) + rec_argv[i] = strdup(slab_events[j]); + } + if (kmem_page) { + for (j = 0; j < ARRAY_SIZE(page_events); j++, i++) + rec_argv[i] = strdup(page_events[j]); + } + for (j = 1; j < (unsigned int)argc; j++, i++) rec_argv[i] = argv[j]; @@ -679,6 +1129,10 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused) OPT_CALLBACK('l', "line", NULL, "num", "show n lines", parse_line_opt), OPT_BOOLEAN(0, "raw-ip", &raw_ip, "show raw ip instead of symbol"), OPT_BOOLEAN('f', "force", &file.force, "don't complain, do it"), + OPT_CALLBACK_NOOPT(0, "slab", NULL, NULL, "Analyze slab allocator", + parse_slab_opt), + OPT_CALLBACK_NOOPT(0, "page", NULL, NULL, "Analyze page allocator", + parse_page_opt), OPT_END() }; const char *const kmem_subcommands[] = { "record", "stat", NULL }; @@ -695,6 +1149,9 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused) if (!argc) usage_with_options(kmem_usage, kmem_options); + if (kmem_slab == 0 && kmem_page == 0) + kmem_slab = 1; /* for backward compatibility */ + if (!strncmp(argv[0], "rec", 3)) { symbol__init(NULL); return __cmd_record(argc, argv); @@ -706,6 +1163,17 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused) if (session == NULL) return -1; + if (kmem_page) { + struct perf_evsel *evsel = perf_evlist__first(session->evlist); + + if (evsel == NULL || evsel->tp_format == NULL) { + pr_err("invalid event found.. aborting\n"); + return -1; + } + + kmem_page_size = pevent_get_page_size(evsel->tp_format->pevent); + } + symbol__init(&session->header.env); if (!strcmp(argv[0], "stat")) { -- 1.9.3 ^ permalink raw reply related [flat|nested] 26+ messages in thread
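One detail in the merged version above deserves a note: before processing events, __cmd_kmem() probes the recorded format of kmem:mm_page_alloc for a "pfn" field and only then sets use_pfn, so a perf binary built after the tracepoint switched from recording a struct page pointer to recording a pfn can still analyze perf.data files made with the old format. A condensed sketch of that feature test, reusing the perf helpers the patch itself calls (it assumes it is compiled inside the perf tree, with evlist.h/evsel.h available):

        static bool evlist__has_pfn_field(struct perf_evlist *evlist)
        {
                struct perf_evsel *evsel;

                evlist__for_each(evlist, evsel) {
                        if (strcmp(perf_evsel__name(evsel), "kmem:mm_page_alloc"))
                                continue;
                        /* NULL means the recorded format predates the pfn field */
                        return perf_evsel__field(evsel, "pfn") != NULL;
                }
                return false;
        }

Testing the recorded data rather than the kernel version keeps old recordings readable by new tools, and new recordings readable by code that knows about both layouts.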
* [PATCH 3/5] perf probe: Set retprobe flag when probe in address-based alternative mode 2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo 2015-04-13 22:14 ` [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page Arnaldo Carvalho de Melo 2015-04-13 22:14 ` [PATCH 2/5] perf kmem: Analyze page allocator events also Arnaldo Carvalho de Melo @ 2015-04-13 22:15 ` Arnaldo Carvalho de Melo 2015-04-13 22:15 ` [PATCH 4/5] perf probe: Make --source avaiable when probe with lazy_line Arnaldo Carvalho de Melo ` (2 subsequent siblings) 5 siblings, 0 replies; 26+ messages in thread From: Arnaldo Carvalho de Melo @ 2015-04-13 22:15 UTC (permalink / raw) To: Ingo Molnar Cc: linux-kernel, He Kuang, Namhyung Kim, Peter Zijlstra, Wang Nan, Arnaldo Carvalho de Melo From: He Kuang <hekuang@huawei.com> When perf probe searched in a debuginfo file and failed, it tried with an alternative, in function get_alternative_probe_event(): memcpy(tmp, &pev->point, sizeof(*tmp)); memset(&pev->point, 0, sizeof(pev->point)); In this case, it drops the retprobe flag and forgets to set it back in find_alternative_probe_point(), so the problem occurs. Can be reproduced as following: $ perf probe -v -k vmlinux --add='sys_write%return' ... Added new event: Writing event: p:probe/sys_write _stext+1584952 probe:sys_write (on sys_write%return) $ cat /sys/kernel/debug/tracing/kprobe_events p:probe/sys_write _stext+1584952 After this patch: $ perf probe -v -k vmlinux --add='sys_write%return' Added new event: Writing event: r:probe/sys_write SyS_write+0 probe:sys_write (on sys_write%return) $ cat /sys/kernel/debug/tracing/kprobe_events r:probe/sys_write SyS_write Signed-off-by: He Kuang <hekuang@huawei.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1428925290-5623-1-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> --- tools/perf/util/probe-event.c | 1 + 1 file changed, 1 insertion(+) diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c index 30545ce2c712..5483d98236d3 100644 --- a/tools/perf/util/probe-event.c +++ b/tools/perf/util/probe-event.c @@ -332,6 +332,7 @@ static int find_alternative_probe_point(struct debuginfo *dinfo, else { result->offset += pp->offset; result->line += pp->line; + result->retprobe = pp->retprobe; ret = 0; } -- 1.9.3 ^ permalink raw reply related [flat|nested] 26+ messages in thread
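The shape of the bug, reduced to a hedged sketch (the field names are from perf's struct perf_probe_point, but the real control flow spans get_alternative_probe_event() and find_alternative_probe_point()):

        struct perf_probe_point saved;

        memcpy(&saved, &pev->point, sizeof(saved));  /* stash the user's probe point */
        memset(&pev->point, 0, sizeof(pev->point));  /* clear it for the re-search */
        /* ... look up the alternative symbol and fill in 'result' ... */
        result->offset += saved.offset;
        result->line += saved.line;
        result->retprobe = saved.retprobe;           /* the line this patch adds */

Without the last assignment, the re-searched event silently degrades from a return probe ('r:') to an entry probe ('p:'), which is exactly the difference visible in the kprobe_events output above.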
* [PATCH 4/5] perf probe: Make --source avaiable when probe with lazy_line 2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo ` (2 preceding siblings ...) 2015-04-13 22:15 ` [PATCH 3/5] perf probe: Set retprobe flag when probe in address-based alternative mode Arnaldo Carvalho de Melo @ 2015-04-13 22:15 ` Arnaldo Carvalho de Melo 2015-04-13 22:15 ` [PATCH 5/5] perf probe: Fix segfault when probe with lazy_line to file Arnaldo Carvalho de Melo 2015-04-13 22:33 ` [GIT PULL 0/5] perf/core improvements and fixes Masami Hiramatsu 5 siblings, 0 replies; 26+ messages in thread From: Arnaldo Carvalho de Melo @ 2015-04-13 22:15 UTC (permalink / raw) To: Ingo Molnar Cc: linux-kernel, He Kuang, Namhyung Kim, Peter Zijlstra, Wang Nan, Arnaldo Carvalho de Melo From: He Kuang <hekuang@huawei.com> Use get_real_path() to enable --source option when probe with lazy_line pattern. Before this patch: $ perf probe -s ./kernel_src/ -k ./vmlinux --add='fs/super.c;s->s_count=1;' Failed to open fs/super.c: No such file or directory Error: Failed to add events. After this patch: $ perf probe -s ./kernel_src/ -k ./vmlinux --add='fs/super.c;s->s_count=1;' Added new events: probe:_stext (on @fs/super.c) probe:_stext_1 (on @fs/super.c) ... Signed-off-by: He Kuang <hekuang@huawei.com> Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1428925290-5623-2-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> --- tools/perf/util/probe-event.c | 2 +- tools/perf/util/probe-event.h | 2 ++ tools/perf/util/probe-finder.c | 18 +++++++++++++++--- 3 files changed, 18 insertions(+), 4 deletions(-) diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c index 5483d98236d3..35ee51a8724f 100644 --- a/tools/perf/util/probe-event.c +++ b/tools/perf/util/probe-event.c @@ -661,7 +661,7 @@ static int try_to_find_probe_trace_events(struct perf_probe_event *pev, * a newly allocated path on success. * Return 0 if file was found and readable, -errno otherwise. 
*/ -static int get_real_path(const char *raw_path, const char *comp_dir, +int get_real_path(const char *raw_path, const char *comp_dir, char **new_path) { const char *prefix = symbol_conf.source_prefix; diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h index d6b783447be9..21809ea9b2b4 100644 --- a/tools/perf/util/probe-event.h +++ b/tools/perf/util/probe-event.h @@ -135,6 +135,8 @@ extern int show_available_vars(struct perf_probe_event *pevs, int npevs, struct strfilter *filter, bool externs); extern int show_available_funcs(const char *module, struct strfilter *filter, bool user); +extern int get_real_path(const char *raw_path, const char *comp_dir, + char **new_path); /* Maximum index number of event-name postfix */ #define MAX_EVENT_INDEX 1024 diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c index 7831e2d93949..431c12d299a2 100644 --- a/tools/perf/util/probe-finder.c +++ b/tools/perf/util/probe-finder.c @@ -791,11 +791,20 @@ static int find_lazy_match_lines(struct intlist *list, ssize_t len; int count = 0, linenum = 1; char sbuf[STRERR_BUFSIZE]; + char *realname = NULL; + int ret; - fp = fopen(fname, "r"); + ret = get_real_path(fname, NULL, &realname); + if (ret < 0) { + pr_warning("Failed to find source file %s.\n", fname); + return ret; + } + + fp = fopen(realname, "r"); if (!fp) { - pr_warning("Failed to open %s: %s\n", fname, + pr_warning("Failed to open %s: %s\n", realname, strerror_r(errno, sbuf, sizeof(sbuf))); + free(realname); return -errno; } @@ -817,7 +826,10 @@ static int find_lazy_match_lines(struct intlist *list, fclose(fp); if (count == 0) - pr_debug("No matched lines found in %s.\n", fname); + pr_debug("No matched lines found in %s.\n", realname); + + free(realname); + return count; } -- 1.9.3 ^ permalink raw reply related [flat|nested] 26+ messages in thread
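For readers following along, here is a minimal standalone sketch of the idea behind get_real_path(): resolve a path recorded in DWARF against the -s/--source prefix (or, failing that, the CU's compilation directory), retrying with leading path components stripped so that a partial source tree still matches. This is plain libc and deliberately simpler than perf's real implementation:

        #define _GNU_SOURCE             /* for asprintf() */
        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        static int resolve_source_path(const char *raw_path, const char *comp_dir,
                                       const char *source_prefix, char **new_path)
        {
                const char *prefix = source_prefix ? source_prefix : comp_dir;

                if (!prefix) {          /* no hint: try the path as recorded */
                        if (access(raw_path, R_OK) == 0) {
                                *new_path = strdup(raw_path);
                                return *new_path ? 0 : -ENOMEM;
                        }
                        return -errno;
                }

                while (raw_path && *raw_path) {
                        if (asprintf(new_path, "%s/%s", prefix, raw_path) < 0)
                                return -ENOMEM;
                        if (access(*new_path, R_OK) == 0)
                                return 0;       /* caller frees *new_path */
                        free(*new_path);
                        /* drop one leading component and retry */
                        raw_path = strchr(raw_path + 1, '/');
                        if (raw_path)
                                raw_path++;
                }
                return -ENOENT;
        }

With the example above, resolve_source_path("fs/super.c", NULL, "./kernel_src", &path) yields "./kernel_src/fs/super.c" as long as that file is readable.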
* [PATCH 5/5] perf probe: Fix segfault when probe with lazy_line to file 2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo ` (3 preceding siblings ...) 2015-04-13 22:15 ` [PATCH 4/5] perf probe: Make --source avaiable when probe with lazy_line Arnaldo Carvalho de Melo @ 2015-04-13 22:15 ` Arnaldo Carvalho de Melo 2015-04-13 22:33 ` [GIT PULL 0/5] perf/core improvements and fixes Masami Hiramatsu 5 siblings, 0 replies; 26+ messages in thread From: Arnaldo Carvalho de Melo @ 2015-04-13 22:15 UTC (permalink / raw) To: Ingo Molnar Cc: linux-kernel, He Kuang, Namhyung Kim, Peter Zijlstra, Wang Nan, Arnaldo Carvalho de Melo From: He Kuang <hekuang@huawei.com> The first argument passed to find_probe_point_lazy() should be CU die, which will be passed to die_walk_lines() when lazy_line matches. Currently, when we probe with lazy_line pattern to file without function name, NULL pointer is passed and causes a segment fault. Can be reproduced as following: $ perf probe -k vmlinux --add='fs/super.c;s->s_count=1;' [ 1958.984658] perf[1020]: segfault at 10 ip 00007fc6e10d8c71 sp 00007ffcbfaaf900 error 4 in libdw-0.161.so[7fc6e10ce000+34000] Segmentation fault After this patch: $ perf probe -k vmlinux --add='fs/super.c;s->s_count=1;' Added new event: probe:_stext (on @fs/super.c) You can now use it in all perf tools, such as: perf record -e probe:_stext -aR sleep 1 Signed-off-by: He Kuang <hekuang@huawei.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1428925290-5623-3-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> --- tools/perf/util/probe-finder.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c index 431c12d299a2..e91101b8f60f 100644 --- a/tools/perf/util/probe-finder.c +++ b/tools/perf/util/probe-finder.c @@ -1067,7 +1067,7 @@ static int debuginfo__find_probes(struct debuginfo *dbg, if (pp->function) ret = find_probe_point_by_func(pf); else if (pp->lazy_line) - ret = find_probe_point_lazy(NULL, pf); + ret = find_probe_point_lazy(&pf->cu_die, pf); else { pf->lno = pp->line; ret = find_probe_point_by_line(pf); -- 1.9.3 ^ permalink raw reply related [flat|nested] 26+ messages in thread
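Why a NULL first argument crashes inside libdw rather than in perf itself: DWARF line lookup is anchored at the compilation unit DIE, and die_walk_lines() bottoms out in libdw's line-table call, roughly like this (the libdw entry point is real; the wrapper is only a sketch):

        #include <elfutils/libdw.h>

        static int count_cu_lines(Dwarf_Die *cu_die)
        {
                Dwarf_Lines *lines;
                size_t nlines;

                /* libdw dereferences cu_die here, so a NULL argument is
                 * the segfault reported above */
                if (dwarf_getsrclines(cu_die, &lines, &nlines) != 0)
                        return -1;      /* no line table for this CU */

                return (int)nlines;
        }

Passing &pf->cu_die, which debuginfo__find_probes() has already resolved by the time the lazy_line branch runs, gives the lazy-pattern matcher a valid anchor.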
* Re: [GIT PULL 0/5] perf/core improvements and fixes 2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo ` (4 preceding siblings ...) 2015-04-13 22:15 ` [PATCH 5/5] perf probe: Fix segfault when probe with lazy_line to file Arnaldo Carvalho de Melo @ 2015-04-13 22:33 ` Masami Hiramatsu 2015-04-13 23:09 ` Arnaldo Carvalho de Melo 5 siblings, 1 reply; 26+ messages in thread From: Masami Hiramatsu @ 2015-04-13 22:33 UTC (permalink / raw) To: Arnaldo Carvalho de Melo Cc: Ingo Molnar, linux-kernel, David Ahern, He Kuang, Jiri Olsa, Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra, Steven Rostedt, Wang Nan, Arnaldo Carvalho de Melo Hi, Arnaldo, > perf probe: Make --source avaiable when probe with lazy_line No, could you pull Naohiro's patch? I'd like to move get_real_path to probe_finder.c Thank you, (2015/04/14 7:14), Arnaldo Carvalho de Melo wrote: > Hi Ingo, > > Please consider pulling, > > Best regards, > > - Arnaldo > > The following changes since commit 066450be419fa48007a9f29e19828f2a86198754: > > perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200) > > are available in the git repository at: > > git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo > > for you to fetch changes up to be8d5b1c6b468d10bd2928bbd1a5ca3fd2980402: > > perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 17:59:41 -0300) > > ---------------------------------------------------------------- > perf/core improvements and fixes: > > New features: > > - Analyze page allocator events also in 'perf kmem' (Namhyung Kim) > > User visible fixes: > > - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang) > > - lazy_line probe fixes in 'perf probe' (He Kuang) > > Infrastructure: > > - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim) > > Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> > > ---------------------------------------------------------------- > He Kuang (3): > perf probe: Set retprobe flag when probe in address-based alternative mode > perf probe: Make --source avaiable when probe with lazy_line > perf probe: Fix segfault when probe with lazy_line to file > > Namhyung Kim (2): > tracing, mm: Record pfn instead of pointer to struct page > perf kmem: Analyze page allocator events also > > include/trace/events/filemap.h | 8 +- > include/trace/events/kmem.h | 42 +-- > include/trace/events/vmscan.h | 8 +- > tools/perf/Documentation/perf-kmem.txt | 8 +- > tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++-- > tools/perf/util/probe-event.c | 3 +- > tools/perf/util/probe-event.h | 2 + > tools/perf/util/probe-finder.c | 20 +- > 8 files changed, 540 insertions(+), 51 deletions(-) > -- > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Please read the FAQ at http://www.tux.org/lkml/ > > -- Masami HIRAMATSU Linux Technology Research Center, System Productivity Research Dept. Center for Technology Innovation - Systems Engineering Hitachi, Ltd., Research & Development Group E-mail: masami.hiramatsu.pt@hitachi.com ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [GIT PULL 0/5] perf/core improvements and fixes 2015-04-13 22:33 ` [GIT PULL 0/5] perf/core improvements and fixes Masami Hiramatsu @ 2015-04-13 23:09 ` Arnaldo Carvalho de Melo 2015-04-13 23:19 ` Arnaldo Carvalho de Melo 0 siblings, 1 reply; 26+ messages in thread From: Arnaldo Carvalho de Melo @ 2015-04-13 23:09 UTC (permalink / raw) To: Masami Hiramatsu Cc: Ingo Molnar, linux-kernel, David Ahern, He Kuang, Jiri Olsa, Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra, Steven Rostedt, Wang Nan Em Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu escreveu: > Hi, Arnaldo, > > > perf probe: Make --source avaiable when probe with lazy_line > > No, could you pull Naohiro's patch? > I'd like to move get_real_path to probe_finder.c OOps, yeah, you asked for that... Ingo, please ignore this pull request for now, thanks, - Arnaldo > Thank you, > > (2015/04/14 7:14), Arnaldo Carvalho de Melo wrote: > > Hi Ingo, > > > > Please consider pulling, > > > > Best regards, > > > > - Arnaldo > > > > The following changes since commit 066450be419fa48007a9f29e19828f2a86198754: > > > > perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200) > > > > are available in the git repository at: > > > > git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo > > > > for you to fetch changes up to be8d5b1c6b468d10bd2928bbd1a5ca3fd2980402: > > > > perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 17:59:41 -0300) > > > > ---------------------------------------------------------------- > > perf/core improvements and fixes: > > > > New features: > > > > - Analyze page allocator events also in 'perf kmem' (Namhyung Kim) > > > > User visible fixes: > > > > - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang) > > > > - lazy_line probe fixes in 'perf probe' (He Kuang) > > > > Infrastructure: > > > > - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim) > > > > Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> > > > > ---------------------------------------------------------------- > > He Kuang (3): > > perf probe: Set retprobe flag when probe in address-based alternative mode > > perf probe: Make --source avaiable when probe with lazy_line > > perf probe: Fix segfault when probe with lazy_line to file > > > > Namhyung Kim (2): > > tracing, mm: Record pfn instead of pointer to struct page > > perf kmem: Analyze page allocator events also > > > > include/trace/events/filemap.h | 8 +- > > include/trace/events/kmem.h | 42 +-- > > include/trace/events/vmscan.h | 8 +- > > tools/perf/Documentation/perf-kmem.txt | 8 +- > > tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++-- > > tools/perf/util/probe-event.c | 3 +- > > tools/perf/util/probe-event.h | 2 + > > tools/perf/util/probe-finder.c | 20 +- > > 8 files changed, 540 insertions(+), 51 deletions(-) > > -- > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in > > the body of a message to majordomo@vger.kernel.org > > More majordomo info at http://vger.kernel.org/majordomo-info.html > > Please read the FAQ at http://www.tux.org/lkml/ > > > > > > > -- > Masami HIRAMATSU > Linux Technology Research Center, System Productivity Research Dept. > Center for Technology Innovation - Systems Engineering > Hitachi, Ltd., Research & Development Group > E-mail: masami.hiramatsu.pt@hitachi.com > ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [GIT PULL 0/5] perf/core improvements and fixes 2015-04-13 23:09 ` Arnaldo Carvalho de Melo @ 2015-04-13 23:19 ` Arnaldo Carvalho de Melo 2015-04-14 7:04 ` Masami Hiramatsu 2015-04-14 12:12 ` Ingo Molnar 0 siblings, 2 replies; 26+ messages in thread From: Arnaldo Carvalho de Melo @ 2015-04-13 23:19 UTC (permalink / raw) To: Masami Hiramatsu, Ingo Molnar Cc: linux-kernel, David Ahern, He Kuang, Jiri Olsa, Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra, Steven Rostedt, Wang Nan Em Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo escreveu: > Em Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu escreveu: > > Hi, Arnaldo, > > > > > perf probe: Make --source avaiable when probe with lazy_line > > > > No, could you pull Naohiro's patch? > > I'd like to move get_real_path to probe_finder.c > > OOps, yeah, you asked for that... Ingo, please ignore this pull request > for now, thanks, Ok, I did that and created a perf-core-for-mingo-2, Masami, please check that all is right, ok? - Arnaldo The following changes since commit 066450be419fa48007a9f29e19828f2a86198754: perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200) are available in the git repository at: git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2 for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690: perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300) ---------------------------------------------------------------- perf/core improvements and fixes: New features: - Analyze page allocator events also in 'perf kmem' (Namhyung Kim) User visible fixes: - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang) - lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang) Infrastructure: - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim) Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> ---------------------------------------------------------------- He Kuang (2): perf probe: Set retprobe flag when probe in address-based alternative mode perf probe: Fix segfault when probe with lazy_line to file Namhyung Kim (2): tracing, mm: Record pfn instead of pointer to struct page perf kmem: Analyze page allocator events also Naohiro Aota (1): perf probe: Find compilation directory path for lazy matching include/trace/events/filemap.h | 8 +- include/trace/events/kmem.h | 42 +-- include/trace/events/vmscan.h | 8 +- tools/perf/Documentation/perf-kmem.txt | 8 +- tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++-- tools/perf/util/probe-event.c | 60 +--- tools/perf/util/probe-finder.c | 73 ++++- tools/perf/util/probe-finder.h | 4 + 8 files changed, 596 insertions(+), 107 deletions(-) ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: Re: [GIT PULL 0/5] perf/core improvements and fixes 2015-04-13 23:19 ` Arnaldo Carvalho de Melo @ 2015-04-14 7:04 ` Masami Hiramatsu 2015-04-14 12:17 ` Arnaldo Carvalho de Melo 2015-04-14 12:12 ` Ingo Molnar 1 sibling, 1 reply; 26+ messages in thread From: Masami Hiramatsu @ 2015-04-14 7:04 UTC (permalink / raw) To: Arnaldo Carvalho de Melo Cc: Ingo Molnar, linux-kernel, David Ahern, He Kuang, Jiri Olsa, Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra, Steven Rostedt, Wang Nan (2015/04/14 8:19), Arnaldo Carvalho de Melo wrote: > Em Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo escreveu: >> Em Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu escreveu: >>> Hi, Arnaldo, >>> >>>> perf probe: Make --source avaiable when probe with lazy_line >>> >>> No, could you pull Naohiro's patch? >>> I'd like to move get_real_path to probe_finder.c >> >> OOps, yeah, you asked for that... Ingo, please ignore this pull request >> for now, thanks, > > Ok, I did that and created a perf-core-for-mingo-2, Masami, please check > that all is right, ok? OK, I've built and tested it :) Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Thank you! > > - Arnaldo > > The following changes since commit 066450be419fa48007a9f29e19828f2a86198754: > > perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200) > > are available in the git repository at: > > git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2 > > for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690: > > perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300) > > ---------------------------------------------------------------- > perf/core improvements and fixes: > > New features: > > - Analyze page allocator events also in 'perf kmem' (Namhyung Kim) > > User visible fixes: > > - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang) > > - lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang) > > Infrastructure: > > - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim) > > Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> > > ---------------------------------------------------------------- > He Kuang (2): > perf probe: Set retprobe flag when probe in address-based alternative mode > perf probe: Fix segfault when probe with lazy_line to file > > Namhyung Kim (2): > tracing, mm: Record pfn instead of pointer to struct page > perf kmem: Analyze page allocator events also > > Naohiro Aota (1): > perf probe: Find compilation directory path for lazy matching > > include/trace/events/filemap.h | 8 +- > include/trace/events/kmem.h | 42 +-- > include/trace/events/vmscan.h | 8 +- > tools/perf/Documentation/perf-kmem.txt | 8 +- > tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++-- > tools/perf/util/probe-event.c | 60 +--- > tools/perf/util/probe-finder.c | 73 ++++- > tools/perf/util/probe-finder.h | 4 + > 8 files changed, 596 insertions(+), 107 deletions(-) > -- > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Please read the FAQ at http://www.tux.org/lkml/ > -- Masami HIRAMATSU Linux Technology Research Center, System Productivity Research Dept. 
Center for Technology Innovation - Systems Engineering Hitachi, Ltd., Research & Development Group E-mail: masami.hiramatsu.pt@hitachi.com ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: Re: [GIT PULL 0/5] perf/core improvements and fixes 2015-04-14 7:04 ` Masami Hiramatsu @ 2015-04-14 12:17 ` Arnaldo Carvalho de Melo 0 siblings, 0 replies; 26+ messages in thread From: Arnaldo Carvalho de Melo @ 2015-04-14 12:17 UTC (permalink / raw) To: Masami Hiramatsu Cc: Ingo Molnar, linux-kernel, David Ahern, He Kuang, Jiri Olsa, Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra, Steven Rostedt, Wang Nan Em Tue, Apr 14, 2015 at 04:04:29PM +0900, Masami Hiramatsu escreveu: > (2015/04/14 8:19), Arnaldo Carvalho de Melo wrote: > > Em Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo escreveu: > >> Em Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu escreveu: > >>> Hi, Arnaldo, > >>> > >>>> perf probe: Make --source avaiable when probe with lazy_line > >>> > >>> No, could you pull Naohiro's patch? > >>> I'd like to move get_real_path to probe_finder.c > >> > >> OOps, yeah, you asked for that... Ingo, please ignore this pull request > >> for now, thanks, > > > > Ok, I did that and created a perf-core-for-mingo-2, Masami, please check > > that all is right, ok? > > OK, I've built and tested it :) > > Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> > Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Thanks, and sorry for the slip up in getting the right patch as we agreed in that discussion, Regards, - Arnaldo > Thank you! > > > > > - Arnaldo > > > > The following changes since commit 066450be419fa48007a9f29e19828f2a86198754: > > > > perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200) > > > > are available in the git repository at: > > > > git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2 > > > > for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690: > > > > perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300) > > > > ---------------------------------------------------------------- > > perf/core improvements and fixes: > > > > New features: > > > > - Analyze page allocator events also in 'perf kmem' (Namhyung Kim) > > > > User visible fixes: > > > > - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang) > > > > - lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang) > > > > Infrastructure: > > > > - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim) > > > > Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> > > > > ---------------------------------------------------------------- > > He Kuang (2): > > perf probe: Set retprobe flag when probe in address-based alternative mode > > perf probe: Fix segfault when probe with lazy_line to file > > > > Namhyung Kim (2): > > tracing, mm: Record pfn instead of pointer to struct page > > perf kmem: Analyze page allocator events also > > > > Naohiro Aota (1): > > perf probe: Find compilation directory path for lazy matching > > > > include/trace/events/filemap.h | 8 +- > > include/trace/events/kmem.h | 42 +-- > > include/trace/events/vmscan.h | 8 +- > > tools/perf/Documentation/perf-kmem.txt | 8 +- > > tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++-- > > tools/perf/util/probe-event.c | 60 +--- > > tools/perf/util/probe-finder.c | 73 ++++- > > tools/perf/util/probe-finder.h | 4 + > > 8 files changed, 596 insertions(+), 107 deletions(-) > > -- > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in > > the body of a message to majordomo@vger.kernel.org 
> > More majordomo info at http://vger.kernel.org/majordomo-info.html > > Please read the FAQ at http://www.tux.org/lkml/ > > > > > -- > Masami HIRAMATSU > Linux Technology Research Center, System Productivity Research Dept. > Center for Technology Innovation - Systems Engineering > Hitachi, Ltd., Research & Development Group > E-mail: masami.hiramatsu.pt@hitachi.com > ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [GIT PULL 0/5] perf/core improvements and fixes 2015-04-13 23:19 ` Arnaldo Carvalho de Melo 2015-04-14 7:04 ` Masami Hiramatsu @ 2015-04-14 12:12 ` Ingo Molnar 1 sibling, 0 replies; 26+ messages in thread From: Ingo Molnar @ 2015-04-14 12:12 UTC (permalink / raw) To: Arnaldo Carvalho de Melo Cc: Masami Hiramatsu, linux-kernel, David Ahern, He Kuang, Jiri Olsa, Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra, Steven Rostedt, Wang Nan * Arnaldo Carvalho de Melo <acme@kernel.org> wrote: > Em Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo escreveu: > > Em Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu escreveu: > > > Hi, Arnaldo, > > > > > > > perf probe: Make --source avaiable when probe with lazy_line > > > > > > No, could you pull Naohiro's patch? > > > I'd like to move get_real_path to probe_finder.c > > > > OOps, yeah, you asked for that... Ingo, please ignore this pull request > > for now, thanks, > > Ok, I did that and created a perf-core-for-mingo-2, Masami, please check > that all is right, ok? > > - Arnaldo > > The following changes since commit 066450be419fa48007a9f29e19828f2a86198754: > > perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200) > > are available in the git repository at: > > git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2 > > for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690: > > perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300) > > ---------------------------------------------------------------- > perf/core improvements and fixes: > > New features: > > - Analyze page allocator events also in 'perf kmem' (Namhyung Kim) > > User visible fixes: > > - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang) > > - lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang) > > Infrastructure: > > - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim) > > Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> > > ---------------------------------------------------------------- > He Kuang (2): > perf probe: Set retprobe flag when probe in address-based alternative mode > perf probe: Fix segfault when probe with lazy_line to file > > Namhyung Kim (2): > tracing, mm: Record pfn instead of pointer to struct page > perf kmem: Analyze page allocator events also > > Naohiro Aota (1): > perf probe: Find compilation directory path for lazy matching > > include/trace/events/filemap.h | 8 +- > include/trace/events/kmem.h | 42 +-- > include/trace/events/vmscan.h | 8 +- > tools/perf/Documentation/perf-kmem.txt | 8 +- > tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++-- > tools/perf/util/probe-event.c | 60 +--- > tools/perf/util/probe-finder.c | 73 ++++- > tools/perf/util/probe-finder.h | 4 + > 8 files changed, 596 insertions(+), 107 deletions(-) Pulled, thanks a lot Arnaldo! Ingo ^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCHSET 0/5] perf kmem: Implement page allocation analysis (v3) @ 2015-03-23 6:30 Namhyung Kim 2015-03-23 6:30 ` [PATCH 2/5] perf kmem: Analyze page allocator events also Namhyung Kim 0 siblings, 1 reply; 26+ messages in thread From: Namhyung Kim @ 2015-03-23 6:30 UTC (permalink / raw) To: Arnaldo Carvalho de Melo Cc: Ingo Molnar, Peter Zijlstra, Jiri Olsa, LKML, David Ahern, Minchan Kim, Joonsoo Kim Hello, Currently perf kmem command only analyzes SLAB memory allocation. And I'd like to introduce page allocation analysis also. Users can use --slab and/or --page option to select it. If none of these options are used, it does slab allocation analysis for backward compatibility. * changes in v3) - add live page statistics * changes in v2) - Use thousand grouping for big numbers - i.e. 12345 -> 12,345 (Ingo) - Improve output stat readability (Ingo) - Remove alloc size column as it can be calculated from hits and order Patch 1 is to support thousand grouping on stat output. Patch 2 implements basic support for page allocation analysis, patch 3 deals with the callsite and finally patch 4 implements sorting. In this patchset, I used two kmem events: kmem:mm_page_alloc and kmem_page_free for analysis as they can track almost all of memory allocation/free path AFAIK. However, unlike slab tracepoint events, those page allocation events don't provide callsite info directly. So I recorded callchains and extracted callsites like below: Normal page allocation callchains look like this: 360a7e __alloc_pages_nodemask 3a711c alloc_pages_current 357bc7 __page_cache_alloc <-- callsite 357cf6 pagecache_get_page 48b0a prepare_pages 494d3 __btrfs_buffered_write 49cdf btrfs_file_write_iter 3ceb6e new_sync_write 3cf447 vfs_write 3cff99 sys_write 7556e9 system_call f880 __write_nocancel 33eb9 cmd_record 4b38e cmd_kmem 7aa23 run_builtin 27a9a main 20800 __libc_start_main But first two are internal page allocation functions so it should be skipped. To determine such allocation functions, I used following regex: ^_?_?(alloc|get_free|get_zeroed)_pages? This gave me a following list of functions (you can see this with -v): alloc func: __get_free_pages alloc func: get_zeroed_page alloc func: alloc_pages_exact alloc func: __alloc_pages_direct_compact alloc func: __alloc_pages_nodemask alloc func: alloc_page_interleave alloc func: alloc_pages_current alloc func: alloc_pages_vma alloc func: alloc_page_buffers alloc func: alloc_pages_exact_nid After skipping those function, it got '__page_cache_alloc'. Other information such as allocation order, migration type and gfp flags are provided by tracepoint events. Basically the output will be sorted by total allocation bytes, but you can change it by using -s/--sort option. The following sort keys are added to support page analysis: page, order, mtype, gfp. Existing 'callsite', 'bytes' and 'hit' sort keys also can be used. 
An example follows: # perf kmem record --slab --page sleep 1 [ perf record: Woken up 0 times to write data ] [ perf record: Captured and wrote 49.277 MB perf.data (191027 samples) ] # perf kmem stat --page --caller -l 10 -s order,hit -------------------------------------------------------------------------------------------- Total alloc (KB) | Hits | Order | Migration type | GFP flags | Callsite -------------------------------------------------------------------------------------------- 64 | 4 | 2 | RECLAIMABLE | 00285250 | new_slab 50,144 | 12,536 | 0 | MOVABLE | 0102005a | __page_cache_alloc 52 | 13 | 0 | UNMOVABLE | 002084d0 | pte_alloc_one 40 | 10 | 0 | MOVABLE | 000280da | handle_mm_fault 28 | 7 | 0 | UNMOVABLE | 000000d0 | __pollwait 20 | 5 | 0 | MOVABLE | 000200da | do_wp_page 20 | 5 | 0 | MOVABLE | 000200da | do_cow_fault 16 | 4 | 0 | UNMOVABLE | 00000200 | __tlb_remove_page 16 | 4 | 0 | UNMOVABLE | 000084d0 | __pmd_alloc 8 | 2 | 0 | UNMOVABLE | 000084d0 | __pud_alloc ... | ... | ... | ... | ... | ... -------------------------------------------------------------------------------------------- SUMMARY (page allocator) ======================== Total allocation requests : 12,594 [ 50,420 KB ] Total free requests : 182 [ 728 KB ] Total alloc+freed requests : 115 [ 460 KB ] Total alloc-only requests : 12,479 [ 49,960 KB ] Total free-only requests : 67 [ 268 KB ] Total allocation failures : 0 [ 0 KB ] Order Unmovable Reclaimable Movable Reserved CMA/Isolated ----- ------------ ------------ ------------ ------------ ------------ 0 32 . 12,558 . . 1 . . . . . 2 . 4 . . . 3 . . . . . 4 . . . . . 5 . . . . . 6 . . . . . 7 . . . . . 8 . . . . . 9 . . . . . 10 . . . . . I have some idea how to improve it. But I'd also like to hear other idea, suggestion, feedback and so on. This is available at perf/kmem-page-v3 branch on my tree: git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git Thanks, Namhyung Namhyung Kim (5): perf kmem: Print big numbers using thousands' group perf kmem: Analyze page allocator events also perf kmem: Implement stat --page --caller perf kmem: Support sort keys on page analysis perf kmem: Add --live option for current allocation stat tools/perf/Documentation/perf-kmem.txt | 19 +- tools/perf/builtin-kmem.c | 1065 ++++++++++++++++++++++++++++++-- 2 files changed, 1022 insertions(+), 62 deletions(-) -- 2.3.3 ^ permalink raw reply [flat|nested] 26+ messages in thread
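To make the skipping heuristic concrete, here is a small self-contained illustration of the regex above using POSIX regcomp()/regexec(); the sample callchain entries are taken from the cover letter, but the helper itself is made up for the example:

        #include <regex.h>
        #include <stdbool.h>
        #include <stdio.h>

        static bool is_page_alloc_function(const char *name)
        {
                static regex_t re;
                static bool compiled;

                if (!compiled) {
                        /* the pattern quoted above */
                        if (regcomp(&re, "^_?_?(alloc|get_free|get_zeroed)_pages?",
                                    REG_EXTENDED | REG_NOSUB))
                                return false;
                        compiled = true;
                }
                return regexec(&re, name, 0, NULL, 0) == 0;
        }

        int main(void)
        {
                const char *chain[] = { "__alloc_pages_nodemask", "alloc_pages_current",
                                        "__page_cache_alloc", "pagecache_get_page" };

                for (int i = 0; i < 4; i++) {
                        if (!is_page_alloc_function(chain[i])) {
                                printf("callsite: %s\n", chain[i]);
                                break;
                        }
                }
                return 0;
        }

Walking the recorded callchain outward from the allocation and stopping at the first non-matching entry prints "callsite: __page_cache_alloc", matching the worked example in the cover letter.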
* [PATCH 2/5] perf kmem: Analyze page allocator events also 2015-03-23 6:30 [PATCHSET 0/5] perf kmem: Implement page allocation analysis (v3) Namhyung Kim @ 2015-03-23 6:30 ` Namhyung Kim 2015-03-23 17:32 ` Joonsoo Kim 0 siblings, 1 reply; 26+ messages in thread From: Namhyung Kim @ 2015-03-23 6:30 UTC (permalink / raw) To: Arnaldo Carvalho de Melo Cc: Ingo Molnar, Peter Zijlstra, Jiri Olsa, LKML, David Ahern, Minchan Kim, Joonsoo Kim The perf kmem command records and analyze kernel memory allocation only for SLAB objects. This patch implement a simple page allocator analyzer using kmem:mm_page_alloc and kmem:mm_page_free events. It adds two new options of --slab and --page. The --slab option is for analyzing SLAB allocator and that's what perf kmem currently does. The new --page option enables page allocator events and analyze kernel memory usage in page unit. Currently, 'stat --alloc' subcommand is implemented only. If none of these --slab nor --page is specified, --slab is implied. # perf kmem stat --page --alloc --line 10 ------------------------------------------------------------------------------------- Page | Total alloc (KB) | Hits | Order | Migration type | GFP flags ------------------------------------------------------------------------------------- ffffea0015e48e00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 ffffea0015e47400 | 16 | 1 | 2 | RECLAIMABLE | 00285250 ffffea001440f600 | 16 | 1 | 2 | RECLAIMABLE | 00285250 ffffea001440cc00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 ffffea00140c6300 | 16 | 1 | 2 | RECLAIMABLE | 00285250 ffffea00140c5c00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 ffffea00140c5000 | 16 | 1 | 2 | RECLAIMABLE | 00285250 ffffea00140c4f00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 ffffea00140c4e00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 ffffea00140c4d00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 ... | ... | ... | ... | ... | ... ------------------------------------------------------------------------------------- SUMMARY (page allocator) ======================== Total allocation requests : 44,260 [ 177,256 KB ] Total free requests : 117 [ 468 KB ] Total alloc+freed requests : 49 [ 196 KB ] Total alloc-only requests : 44,211 [ 177,060 KB ] Total free-only requests : 68 [ 272 KB ] Total allocation failures : 0 [ 0 KB ] Order Unmovable Reclaimable Movable Reserved CMA/Isolated ----- ------------ ------------ ------------ ------------ ------------ 0 32 . 44,210 . . 1 . . . . . 2 . 18 . . . 3 . . . . . 4 . . . . . 5 . . . . . 6 . . . . . 7 . . . . . 8 . . . . . 9 . . . . . 10 . . . . . Signed-off-by: Namhyung Kim <namhyung@kernel.org> --- tools/perf/Documentation/perf-kmem.txt | 8 +- tools/perf/builtin-kmem.c | 376 +++++++++++++++++++++++++++++++-- 2 files changed, 368 insertions(+), 16 deletions(-) diff --git a/tools/perf/Documentation/perf-kmem.txt b/tools/perf/Documentation/perf-kmem.txt index 150253cc3c97..23219c65c16f 100644 --- a/tools/perf/Documentation/perf-kmem.txt +++ b/tools/perf/Documentation/perf-kmem.txt @@ -3,7 +3,7 @@ perf-kmem(1) NAME ---- -perf-kmem - Tool to trace/measure kernel memory(slab) properties +perf-kmem - Tool to trace/measure kernel memory properties SYNOPSIS -------- @@ -46,6 +46,12 @@ OPTIONS --raw-ip:: Print raw ip instead of symbol +--slab:: + Analyze SLAB allocator events. 
+ +--page:: + Analyze page allocator events + SEE ALSO -------- linkperf:perf-record[1] diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c index 64d3623d45a0..76a527dc6ba1 100644 --- a/tools/perf/builtin-kmem.c +++ b/tools/perf/builtin-kmem.c @@ -22,6 +22,11 @@ #include <linux/string.h> #include <locale.h> +static int kmem_slab; +static int kmem_page; + +static long kmem_page_size; + struct alloc_stat; typedef int (*sort_fn_t)(struct alloc_stat *, struct alloc_stat *); @@ -226,6 +231,139 @@ static int perf_evsel__process_free_event(struct perf_evsel *evsel, return 0; } +static u64 total_page_alloc_bytes; +static u64 total_page_free_bytes; +static u64 total_page_nomatch_bytes; +static u64 total_page_fail_bytes; +static unsigned long nr_page_allocs; +static unsigned long nr_page_frees; +static unsigned long nr_page_fails; +static unsigned long nr_page_nomatch; + +#define MAX_MIGRATE_TYPES 6 +#define MAX_PAGE_ORDER 11 + +static int order_stats[MAX_PAGE_ORDER][MAX_MIGRATE_TYPES]; + +struct page_stat { + struct rb_node node; + u64 page; + int order; + unsigned gfp_flags; + unsigned migrate_type; + u64 alloc_bytes; + u64 free_bytes; + int nr_alloc; + int nr_free; +}; + +static struct rb_root page_tree; +static struct rb_root page_alloc_sorted; + +static struct page_stat *search_page_stat(unsigned long page, bool create) +{ + struct rb_node **node = &page_tree.rb_node; + struct rb_node *parent = NULL; + struct page_stat *data; + + while (*node) { + s64 cmp; + + parent = *node; + data = rb_entry(*node, struct page_stat, node); + + cmp = data->page - page; + if (cmp < 0) + node = &parent->rb_left; + else if (cmp > 0) + node = &parent->rb_right; + else + return data; + } + + if (!create) + return NULL; + + data = zalloc(sizeof(*data)); + if (data != NULL) { + data->page = page; + + rb_link_node(&data->node, parent, node); + rb_insert_color(&data->node, &page_tree); + } + + return data; +} + +static int perf_evsel__process_page_alloc_event(struct perf_evsel *evsel, + struct perf_sample *sample) +{ + u64 page = perf_evsel__intval(evsel, sample, "page"); + unsigned int order = perf_evsel__intval(evsel, sample, "order"); + unsigned int gfp_flags = perf_evsel__intval(evsel, sample, "gfp_flags"); + unsigned int migrate_type = perf_evsel__intval(evsel, sample, + "migratetype"); + u64 bytes = kmem_page_size << order; + struct page_stat *stat; + + if (page == 0) { + nr_page_fails++; + total_page_fail_bytes += bytes; + + return 0; + } + + /* + * XXX: We'd better to use PFN instead of page pointer to deal + * with things like partial freeing. But AFAIK there's no way + * to convert a pointer to struct page into PFN in userspace. 
+ */ + stat = search_page_stat(page, true); + if (stat == NULL) + return -1; + + stat->order = order; + stat->gfp_flags = gfp_flags; + stat->migrate_type = migrate_type; + + stat->nr_alloc++; + nr_page_allocs++; + stat->alloc_bytes += bytes; + total_page_alloc_bytes += bytes; + + order_stats[order][migrate_type]++; + + return 0; +} + +static int perf_evsel__process_page_free_event(struct perf_evsel *evsel, + struct perf_sample *sample) +{ + u64 page = perf_evsel__intval(evsel, sample, "page"); + unsigned int order = perf_evsel__intval(evsel, sample, "order"); + u64 bytes = kmem_page_size << order; + struct page_stat *stat; + + nr_page_frees++; + total_page_free_bytes += bytes; + + stat = search_page_stat(page, false); + if (stat == NULL) { + pr_debug2("missing free at page %"PRIx64" (order: %d)\n", + page, order); + + nr_page_nomatch++; + total_page_nomatch_bytes += bytes; + + return 0; + } + + stat->nr_free++; + stat->free_bytes += bytes; + + return 0; +} + typedef int (*tracepoint_handler)(struct perf_evsel *evsel, struct perf_sample *sample); @@ -270,8 +408,9 @@ static double fragmentation(unsigned long n_req, unsigned long n_alloc) return 100.0 - (100.0 * n_req / n_alloc); } -static void __print_result(struct rb_root *root, struct perf_session *session, - int n_lines, int is_caller) +static void __print_slab_result(struct rb_root *root, + struct perf_session *session, + int n_lines, int is_caller) { struct rb_node *next; struct machine *machine = &session->machines.host; @@ -323,9 +462,50 @@ static void __print_result(struct rb_root *root, struct perf_session *session, printf("%.105s\n", graph_dotted_line); } -static void print_summary(void) +static const char * const migrate_type_str[] = { + "UNMOVABLE", + "RECLAIMABLE", + "MOVABLE", + "RESERVED", + "CMA/ISOLATE", + "UNKNOWN", +}; + +static void __print_page_result(struct rb_root *root, + struct perf_session *session __maybe_unused, + int n_lines) +{ + struct rb_node *next = rb_first(root); + + printf("\n%.86s\n", graph_dotted_line); + printf(" Page | Total alloc (KB) | Hits | Order | Migration type | GFP flags\n"); + printf("%.86s\n", graph_dotted_line); + + while (next && n_lines--) { + struct page_stat *data; + + data = rb_entry(next, struct page_stat, node); + + printf(" %016llx | %'16llu | %'9d | %5d | %14s | %08lx\n", + (unsigned long long)data->page, + (unsigned long long)data->alloc_bytes / 1024, + data->nr_alloc, data->order, + migrate_type_str[data->migrate_type], + (unsigned long)data->gfp_flags); + + next = rb_next(next); + } + + if (n_lines == -1) + printf(" ... | ... | ... | ... | ... | ... 
\n"); + + printf("%.86s\n", graph_dotted_line); +} + +static void print_slab_summary(void) { - printf("\nSUMMARY\n=======\n"); + printf("\nSUMMARY (SLAB allocator)"); + printf("\n========================\n"); printf("Total bytes requested: %'lu\n", total_requested); printf("Total bytes allocated: %'lu\n", total_allocated); printf("Total bytes wasted on internal fragmentation: %'lu\n", @@ -335,13 +515,73 @@ static void print_summary(void) printf("Cross CPU allocations: %'lu/%'lu\n", nr_cross_allocs, nr_allocs); } -static void print_result(struct perf_session *session) +static void print_page_summary(void) +{ + int o, m; + u64 nr_alloc_freed = nr_page_frees - nr_page_nomatch; + u64 total_alloc_freed_bytes = total_page_free_bytes - total_page_nomatch_bytes; + + printf("\nSUMMARY (page allocator)"); + printf("\n========================\n"); + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total allocation requests", + nr_page_allocs, total_page_alloc_bytes / 1024); + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total free requests", + nr_page_frees, total_page_free_bytes / 1024); + printf("\n"); + + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total alloc+freed requests", + nr_alloc_freed, (total_alloc_freed_bytes) / 1024); + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total alloc-only requests", + nr_page_allocs - nr_alloc_freed, + (total_page_alloc_bytes - total_alloc_freed_bytes) / 1024); + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total free-only requests", + nr_page_nomatch, total_page_nomatch_bytes / 1024); + printf("\n"); + + printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total allocation failures", + nr_page_fails, total_page_fail_bytes / 1024); + printf("\n"); + + printf("%5s %12s %12s %12s %12s %12s\n", "Order", "Unmovable", + "Reclaimable", "Movable", "Reserved", "CMA/Isolated"); + printf("%.5s %.12s %.12s %.12s %.12s %.12s\n", graph_dotted_line, + graph_dotted_line, graph_dotted_line, graph_dotted_line, + graph_dotted_line, graph_dotted_line); + + for (o = 0; o < MAX_PAGE_ORDER; o++) { + printf("%5d", o); + for (m = 0; m < MAX_MIGRATE_TYPES - 1; m++) { + if (order_stats[o][m]) + printf(" %'12d", order_stats[o][m]); + else + printf(" %12c", '.'); + } + printf("\n"); + } +} + +static void print_slab_result(struct perf_session *session) { if (caller_flag) - __print_result(&root_caller_sorted, session, caller_lines, 1); + __print_slab_result(&root_caller_sorted, session, caller_lines, 1); if (alloc_flag) - __print_result(&root_alloc_sorted, session, alloc_lines, 0); - print_summary(); + __print_slab_result(&root_alloc_sorted, session, alloc_lines, 0); + print_slab_summary(); +} + +static void print_page_result(struct perf_session *session) +{ + if (alloc_flag) + __print_page_result(&page_alloc_sorted, session, alloc_lines); + print_page_summary(); +} + +static void print_result(struct perf_session *session) +{ + if (kmem_slab) + print_slab_result(session); + if (kmem_page) + print_page_result(session); } struct sort_dimension { @@ -353,8 +593,8 @@ struct sort_dimension { static LIST_HEAD(caller_sort); static LIST_HEAD(alloc_sort); -static void sort_insert(struct rb_root *root, struct alloc_stat *data, - struct list_head *sort_list) +static void sort_slab_insert(struct rb_root *root, struct alloc_stat *data, + struct list_head *sort_list) { struct rb_node **new = &(root->rb_node); struct rb_node *parent = NULL; @@ -383,8 +623,8 @@ static void sort_insert(struct rb_root *root, struct alloc_stat *data, rb_insert_color(&data->node, root); } -static void __sort_result(struct 
rb_root *root, struct rb_root *root_sorted, - struct list_head *sort_list) +static void __sort_slab_result(struct rb_root *root, struct rb_root *root_sorted, + struct list_head *sort_list) { struct rb_node *node; struct alloc_stat *data; @@ -396,26 +636,78 @@ static void __sort_result(struct rb_root *root, struct rb_root *root_sorted, rb_erase(node, root); data = rb_entry(node, struct alloc_stat, node); - sort_insert(root_sorted, data, sort_list); + sort_slab_insert(root_sorted, data, sort_list); + } +} + +static void sort_page_insert(struct rb_root *root, struct page_stat *data) +{ + struct rb_node **new = &root->rb_node; + struct rb_node *parent = NULL; + + while (*new) { + struct page_stat *this; + int cmp = 0; + + this = rb_entry(*new, struct page_stat, node); + parent = *new; + + /* TODO: support more sort key */ + cmp = data->alloc_bytes - this->alloc_bytes; + + if (cmp > 0) + new = &parent->rb_left; + else + new = &parent->rb_right; + } + + rb_link_node(&data->node, parent, new); + rb_insert_color(&data->node, root); +} + +static void __sort_page_result(struct rb_root *root, struct rb_root *root_sorted) +{ + struct rb_node *node; + struct page_stat *data; + + for (;;) { + node = rb_first(root); + if (!node) + break; + + rb_erase(node, root); + data = rb_entry(node, struct page_stat, node); + sort_page_insert(root_sorted, data); } } static void sort_result(void) { - __sort_result(&root_alloc_stat, &root_alloc_sorted, &alloc_sort); - __sort_result(&root_caller_stat, &root_caller_sorted, &caller_sort); + if (kmem_slab) { + __sort_slab_result(&root_alloc_stat, &root_alloc_sorted, + &alloc_sort); + __sort_slab_result(&root_caller_stat, &root_caller_sorted, + &caller_sort); + } + if (kmem_page) { + __sort_page_result(&page_tree, &page_alloc_sorted); + } } static int __cmd_kmem(struct perf_session *session) { int err = -EINVAL; const struct perf_evsel_str_handler kmem_tracepoints[] = { + /* slab allocator */ { "kmem:kmalloc", perf_evsel__process_alloc_event, }, { "kmem:kmem_cache_alloc", perf_evsel__process_alloc_event, }, { "kmem:kmalloc_node", perf_evsel__process_alloc_node_event, }, { "kmem:kmem_cache_alloc_node", perf_evsel__process_alloc_node_event, }, { "kmem:kfree", perf_evsel__process_free_event, }, { "kmem:kmem_cache_free", perf_evsel__process_free_event, }, + /* page allocator */ + { "kmem:mm_page_alloc", perf_evsel__process_page_alloc_event, }, + { "kmem:mm_page_free", perf_evsel__process_page_free_event, }, }; if (!perf_session__has_traces(session, "kmem record")) @@ -612,6 +904,22 @@ static int parse_alloc_opt(const struct option *opt __maybe_unused, return 0; } +static int parse_slab_opt(const struct option *opt __maybe_unused, + const char *arg __maybe_unused, + int unset __maybe_unused) +{ + kmem_slab = (kmem_page + 1); + return 0; +} + +static int parse_page_opt(const struct option *opt __maybe_unused, + const char *arg __maybe_unused, + int unset __maybe_unused) +{ + kmem_page = (kmem_slab + 1); + return 0; +} + static int parse_line_opt(const struct option *opt __maybe_unused, const char *arg, int unset __maybe_unused) { @@ -634,6 +942,8 @@ static int __cmd_record(int argc, const char **argv) { const char * const record_args[] = { "record", "-a", "-R", "-c", "1", + }; + const char * const slab_events[] = { "-e", "kmem:kmalloc", "-e", "kmem:kmalloc_node", "-e", "kmem:kfree", @@ -641,10 +951,19 @@ static int __cmd_record(int argc, const char **argv) "-e", "kmem:kmem_cache_alloc_node", "-e", "kmem:kmem_cache_free", }; + const char * const page_events[] = { + "-e", 
"kmem:mm_page_alloc", + "-e", "kmem:mm_page_free", + }; unsigned int rec_argc, i, j; const char **rec_argv; rec_argc = ARRAY_SIZE(record_args) + argc - 1; + if (kmem_slab) + rec_argc += ARRAY_SIZE(slab_events); + if (kmem_page) + rec_argc += ARRAY_SIZE(page_events); + rec_argv = calloc(rec_argc + 1, sizeof(char *)); if (rec_argv == NULL) @@ -653,6 +972,15 @@ static int __cmd_record(int argc, const char **argv) for (i = 0; i < ARRAY_SIZE(record_args); i++) rec_argv[i] = strdup(record_args[i]); + if (kmem_slab) { + for (j = 0; j < ARRAY_SIZE(slab_events); j++, i++) + rec_argv[i] = strdup(slab_events[j]); + } + if (kmem_page) { + for (j = 0; j < ARRAY_SIZE(page_events); j++, i++) + rec_argv[i] = strdup(page_events[j]); + } + for (j = 1; j < (unsigned int)argc; j++, i++) rec_argv[i] = argv[j]; @@ -675,6 +1003,10 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused) parse_sort_opt), OPT_CALLBACK('l', "line", NULL, "num", "show n lines", parse_line_opt), OPT_BOOLEAN(0, "raw-ip", &raw_ip, "show raw ip instead of symbol"), + OPT_CALLBACK_NOOPT(0, "slab", NULL, NULL, "Analyze slab allocator", + parse_slab_opt), + OPT_CALLBACK_NOOPT(0, "page", NULL, NULL, "Analyze page allocator", + parse_page_opt), OPT_END() }; const char *const kmem_subcommands[] = { "record", "stat", NULL }; @@ -695,6 +1027,9 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused) if (!argc) usage_with_options(kmem_usage, kmem_options); + if (kmem_slab == 0 && kmem_page == 0) + kmem_slab = 1; /* for backward compatibility */ + if (!strncmp(argv[0], "rec", 3)) { symbol__init(NULL); return __cmd_record(argc, argv); @@ -704,6 +1039,17 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused) if (session == NULL) return -1; + if (kmem_page) { + struct perf_evsel *evsel = perf_evlist__first(session->evlist); + + if (evsel == NULL || evsel->tp_format == NULL) { + pr_err("invalid event found.. aborting\n"); + return -1; + } + + kmem_page_size = pevent_get_page_size(evsel->tp_format->pevent); + } + symbol__init(&session->header.env); if (!strcmp(argv[0], "stat")) { -- 2.3.3 ^ permalink raw reply related [flat|nested] 26+ messages in thread
* Re: [PATCH 2/5] perf kmem: Analyze page allocator events also 2015-03-23 6:30 ` [PATCH 2/5] perf kmem: Analyze page allocator events also Namhyung Kim @ 2015-03-23 17:32 ` Joonsoo Kim 2015-03-24 0:18 ` Namhyung Kim 0 siblings, 1 reply; 26+ messages in thread From: Joonsoo Kim @ 2015-03-23 17:32 UTC (permalink / raw) To: Namhyung Kim Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, Jiri Olsa, LKML, David Ahern, Minchan Kim 2015-03-23 15:30 GMT+09:00 Namhyung Kim <namhyung@kernel.org>: > The perf kmem command records and analyze kernel memory allocation > only for SLAB objects. This patch implement a simple page allocator > analyzer using kmem:mm_page_alloc and kmem:mm_page_free events. > > It adds two new options of --slab and --page. The --slab option is > for analyzing SLAB allocator and that's what perf kmem currently does. > > The new --page option enables page allocator events and analyze kernel > memory usage in page unit. Currently, 'stat --alloc' subcommand is > implemented only. > > If none of these --slab nor --page is specified, --slab is implied. > > # perf kmem stat --page --alloc --line 10 > > ------------------------------------------------------------------------------------- > Page | Total alloc (KB) | Hits | Order | Migration type | GFP flags > ------------------------------------------------------------------------------------- > ffffea0015e48e00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 > ffffea0015e47400 | 16 | 1 | 2 | RECLAIMABLE | 00285250 > ffffea001440f600 | 16 | 1 | 2 | RECLAIMABLE | 00285250 > ffffea001440cc00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 > ffffea00140c6300 | 16 | 1 | 2 | RECLAIMABLE | 00285250 > ffffea00140c5c00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 > ffffea00140c5000 | 16 | 1 | 2 | RECLAIMABLE | 00285250 > ffffea00140c4f00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 > ffffea00140c4e00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 > ffffea00140c4d00 | 16 | 1 | 2 | RECLAIMABLE | 00285250 > ... | ... | ... | ... | ... | ... > ------------------------------------------------------------------------------------- The mm_page_alloc tracepoint prints out the pfn as well as the pointer to the struct page. How about printing the pfn rather than the pointer to the struct page? Thanks. ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 2/5] perf kmem: Analyze page allocator events also
  2015-03-23 17:32   ` Joonsoo Kim
@ 2015-03-24  0:18     ` Namhyung Kim
  2015-03-24  5:26       ` Joonsoo Kim
  0 siblings, 1 reply; 26+ messages in thread
From: Namhyung Kim @ 2015-03-24  0:18 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, Jiri Olsa,
	LKML, David Ahern, Minchan Kim

On Tue, Mar 24, 2015 at 02:32:17AM +0900, Joonsoo Kim wrote:
> 2015-03-23 15:30 GMT+09:00 Namhyung Kim <namhyung@kernel.org>:
> > [...]
>
> The mm_page_alloc tracepoint prints out the pfn as well as the pointer
> to struct page. How about printing the pfn rather than the pointer to
> struct page?

I'd really like to have the pfn rather than the struct page. But I
don't know how to convert a page pointer to a pfn in userspace.

The output of the tracepoint via the $debugfs/tracing/trace file is
generated on the kernel side, so it can easily derive the pfn from the
page pointer. But the tracepoint itself only saves the page pointer,
so we would need to convert and print it in userspace.

Yes, perf script (or libtraceevent) shows a pfn when printing those
events. But that value is bogus: userspace cannot determine the size
of struct page, so the pointer arithmetic in the open-coded
page_to_pfn() saved in the tracepoint's print_fmt degenerates to plain
integer arithmetic. Am I missing something?

Thanks,
Namhyung

^ permalink raw reply	[flat|nested] 26+ messages in thread
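To see why that arithmetic degenerates, consider a minimal standalone
sketch (illustrative; the stand-in struct layout is an assumption, since
the real size of struct page depends on the kernel config). With
CONFIG_SPARSEMEM_VMEMMAP, page_to_pfn() is pointer subtraction, which C
scales by sizeof(struct page), and that scale factor is exactly what a
userspace parser does not know:

	#include <stdio.h>

	/* Stand-in layout; the real struct page is config-dependent. */
	struct page { unsigned long flags; void *pad[6]; };

	static struct page vmemmap[16];	/* pretend page array base */

	int main(void)
	{
		struct page *page = &vmemmap[3];

		/* Kernel side: pointer subtraction, scaled by the
		 * element size, yields the frame number. */
		unsigned long pfn = page - vmemmap;		/* 3 */

		/* Userspace side: only raw integers are available, so
		 * the same expression becomes a byte difference. */
		unsigned long bogus =
			(unsigned long)page - (unsigned long)vmemmap;

		printf("pfn=%lu bogus=%lu (off by sizeof(struct page)=%zu)\n",
		       pfn, bogus, sizeof(struct page));
		return 0;
	}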
* Re: [PATCH 2/5] perf kmem: Analyze page allocator events also
  2015-03-24  0:18     ` Namhyung Kim
@ 2015-03-24  5:26       ` Joonsoo Kim
  2015-03-24  6:05         ` Namhyung Kim
  2015-03-24  7:08         ` Ingo Molnar
  0 siblings, 2 replies; 26+ messages in thread
From: Joonsoo Kim @ 2015-03-24  5:26 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, Jiri Olsa,
	LKML, David Ahern, Minchan Kim

2015-03-24 9:18 GMT+09:00 Namhyung Kim <namhyung@kernel.org>:
> On Tue, Mar 24, 2015 at 02:32:17AM +0900, Joonsoo Kim wrote:
>> [...]
>> The mm_page_alloc tracepoint prints out the pfn as well as the
>> pointer to struct page. How about printing the pfn rather than the
>> pointer to struct page?
>
> I'd really like to have the pfn rather than the struct page. But I
> don't know how to convert a page pointer to a pfn in userspace.
>
> The output of the tracepoint via the $debugfs/tracing/trace file is
> generated on the kernel side, so it can easily derive the pfn from the
> page pointer. But the tracepoint itself only saves the page pointer,
> so we would need to convert and print it in userspace.

Ah... I didn't realize that perf doesn't use the output of the
$debugfs/tracing/trace file. So perf just uses the raw trace buffer
directly? If the pfn is saved to the trace buffer, perf can print the
pfn rather than the pointer to struct page?

> Yes, perf script (or libtraceevent) shows a pfn when printing those
> events. But that value is bogus: userspace cannot determine the size
> of struct page, so the pointer arithmetic in the open-coded
> page_to_pfn() saved in the tracepoint's print_fmt degenerates to
> plain integer arithmetic.

How about the following change, making 'perf kmem' print the pfn?
If we store the pfn in the trace buffer, $debugfs/tracing/trace prints
as before and 'perf kmem' can also print the pfn.

Thanks.

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 4ad10ba..9dcfd0b 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -199,22 +199,22 @@ TRACE_EVENT(mm_page_alloc,
 	TP_ARGS(page, order, gfp_flags, migratetype),
 
 	TP_STRUCT__entry(
-		__field(	struct page *,	page		)
+		__field(	unsigned long,	pfn		)
 		__field(	unsigned int,	order		)
 		__field(	gfp_t,		gfp_flags	)
 		__field(	int,		migratetype	)
 	),
 
 	TP_fast_assign(
-		__entry->page		= page;
+		__entry->pfn		= page ? page_to_pfn(page) : -1;
 		__entry->order		= order;
 		__entry->gfp_flags	= gfp_flags;
 		__entry->migratetype	= migratetype;
 	),
 
 	TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s",
-		__entry->page,
-		__entry->page ? page_to_pfn(__entry->page) : 0,
+		__entry->pfn != -1 ? pfn_to_page(__entry->pfn) : NULL,
+		__entry->pfn != -1 ? __entry->pfn : 0,
 		__entry->order,
 		__entry->migratetype,
 		show_gfp_flags(__entry->gfp_flags))

^ permalink raw reply related	[flat|nested] 26+ messages in thread
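One detail of the proposal worth spelling out (an aside, not raised in
the thread): the pfn field is unsigned long, so the -1 stored for a
NULL page wraps to ULONG_MAX, and the '!= -1' tests in TP_printk work
because the -1 on the right converts the same way under C's usual
arithmetic conversions. A standalone illustration:

	#include <stdio.h>

	int main(void)
	{
		unsigned long pfn = -1;	/* wraps to ULONG_MAX */

		/* -1 converts to ULONG_MAX here too, so this prints 0
		 * (i.e. the sentinel is recognized as "no page"). */
		printf("valid=%d pfn=%lu\n", pfn != -1, pfn);
		return 0;
	}

The cost is that ULONG_MAX becomes unrepresentable as a real frame
number, which is safe since no page frame ever has that pfn.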
* Re: [PATCH 2/5] perf kmem: Analyze page allocator events also
  2015-03-24  5:26       ` Joonsoo Kim
@ 2015-03-24  6:05         ` Namhyung Kim
  0 siblings, 0 replies; 26+ messages in thread
From: Namhyung Kim @ 2015-03-24  6:05 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, Jiri Olsa,
	LKML, David Ahern, Minchan Kim

Hi Joonsoo,

On Tue, Mar 24, 2015 at 02:26:39PM +0900, Joonsoo Kim wrote:
> 2015-03-24 9:18 GMT+09:00 Namhyung Kim <namhyung@kernel.org>:
>> [...]
>
> Ah... I didn't realize that perf doesn't use the output of the
> $debugfs/tracing/trace file. So perf just uses the raw trace buffer
> directly? If the pfn is saved to the trace buffer, perf can print the
> pfn rather than the pointer to struct page?

Yes, perf uses the raw (binary) trace data. It would be nice if the
tracepoint saved the pfn directly!

> How about the following change, making 'perf kmem' print the pfn?
> If we store the pfn in the trace buffer, $debugfs/tracing/trace prints
> as before and 'perf kmem' can also print the pfn.

I'm very happy with this change. The textual output via
debugfs/tracefs will stay the same; the binary data will change, but
libtraceevent will handle it seamlessly.

Thanks,
Namhyung

> [...]

^ permalink raw reply	[flat|nested] 26+ messages in thread
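What "handled seamlessly" means in practice is that libtraceevent
resolves a field's size and offset by name from the format description
recorded alongside the trace data. A minimal sketch (the helper name is
hypothetical; it assumes the pevent_* API of tools/lib/traceevent from
that era, and the include path is illustrative):

	#include "event-parse.h"	/* tools/lib/traceevent */

	/*
	 * Look the field up by name; its size and offset come from the
	 * format recorded with the data, so nothing is hard-coded in
	 * the tool even when the kernel-side layout changes.
	 */
	static unsigned long long record_pfn(struct event_format *event,
					     struct pevent_record *record)
	{
		unsigned long long pfn = 0;

		/* err == 0: fail silently instead of writing to a
		 * trace_seq on lookup failure */
		if (pevent_get_field_val(NULL, event, "pfn", record,
					 &pfn, 0))
			return 0;	/* field absent or unreadable */

		return pfn;
	}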
* Re: [PATCH 2/5] perf kmem: Analyze page allocator events also
  2015-03-24  5:26       ` Joonsoo Kim
  2015-03-24  6:05         ` Namhyung Kim
@ 2015-03-24  7:08         ` Ingo Molnar
  2015-03-24 13:17           ` Namhyung Kim
  1 sibling, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2015-03-24  7:08 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Namhyung Kim, Arnaldo Carvalho de Melo, Peter Zijlstra, Jiri Olsa,
	LKML, David Ahern, Minchan Kim

* Joonsoo Kim <js1304@gmail.com> wrote:

> [...]
>
> How about the following change, making 'perf kmem' print the pfn?
> If we store the pfn in the trace buffer, $debugfs/tracing/trace prints
> as before and 'perf kmem' can also print the pfn.
>
> [the kmem.h diff quoted in the previous message]

Acked-by: Ingo Molnar <mingo@kernel.org>

It would be very nice to make all the other page-granular tracepoints
output the pfn as well (which identifies a physical page frame and can
be resolved to 'node' and other properties), not 'struct page *' (which
is a kernel resource with little meaning to user-space tooling).

I.e. the following tracepoints:

  triton:~/tip> git grep -E '__field.*struct page *' include/trace/
  include/trace/events/filemap.h: __field(struct page *, page)
  include/trace/events/kmem.h:    __field( struct page *, page )
  include/trace/events/kmem.h:    __field( struct page *, page )
  include/trace/events/kmem.h:    __field( struct page *, page )
  include/trace/events/kmem.h:    __field( struct page *, page )
  include/trace/events/kmem.h:    __field( struct page *, page )
  include/trace/events/pagemap.h: __field(struct page *, page )
  include/trace/events/pagemap.h: __field(struct page *, page )
  include/trace/events/vmscan.h:  __field(struct page *, page)

There's very little breakage I can imagine: these tracepoints have so
far traced pointers to 'struct page', which is a pretty opaque page
identifier to user-space, and they'll trace pfns in the future, which
still serve as page identifiers.

One thing is important: to do all these changes at once, to make sure
that the various page identifiers can be compared.

Also, we might keep the 'page' field name if anything relies on that -
but 'pfn' is even better.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread
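To make the "resolved to 'node'" point concrete, here is a hedged
userspace sketch of one way to map a pfn to a NUMA node. It is not code
from the thread: it assumes the memory-hotplug sysfs ABI
(/sys/devices/system/memory) is present on the running kernel, and the
function name is hypothetical:

	#include <dirent.h>
	#include <stdio.h>

	/*
	 * Each /sys/devices/system/memory/memoryN directory spans
	 * block_size_bytes of physical address space and contains a
	 * nodeM symlink naming its NUMA node. Returns -1 on failure.
	 */
	static int pfn_to_node(unsigned long pfn, unsigned long page_size)
	{
		unsigned long block_bytes = 0, block;
		char path[128];
		struct dirent *ent;
		int node = -1;
		DIR *dir;
		FILE *f;

		f = fopen("/sys/devices/system/memory/block_size_bytes", "r");
		if (!f)
			return -1;
		if (fscanf(f, "%lx", &block_bytes) != 1)
			block_bytes = 0;
		fclose(f);
		if (!block_bytes)
			return -1;

		block = pfn * page_size / block_bytes;
		snprintf(path, sizeof(path),
			 "/sys/devices/system/memory/memory%lu", block);

		dir = opendir(path);
		if (!dir)
			return -1;
		while ((ent = readdir(dir)) != NULL) {
			if (sscanf(ent->d_name, "node%d", &node) == 1)
				break;	/* found the nodeM symlink */
		}
		closedir(dir);
		return node;
	}

Nothing comparable is possible with a raw 'struct page *', which is the
substance of Ingo's argument above.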
* Re: [PATCH 2/5] perf kmem: Analyze page allocator events also
  2015-03-24  7:08         ` Ingo Molnar
@ 2015-03-24 13:17           ` Namhyung Kim
  0 siblings, 0 replies; 26+ messages in thread
From: Namhyung Kim @ 2015-03-24 13:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Joonsoo Kim, Arnaldo Carvalho de Melo, Peter Zijlstra, Jiri Olsa,
	LKML, David Ahern, Minchan Kim

Hi Ingo,

On Tue, Mar 24, 2015 at 08:08:03AM +0100, Ingo Molnar wrote:
> * Joonsoo Kim <js1304@gmail.com> wrote:
>> [...]
>
> Acked-by: Ingo Molnar <mingo@kernel.org>
>
> It would be very nice to make all the other page-granular tracepoints
> output the pfn as well (which identifies a physical page frame and can
> be resolved to 'node' and other properties), not 'struct page *'
> (which is a kernel resource with little meaning to user-space
> tooling).
>
> I.e. the following tracepoints:
>
>   triton:~/tip> git grep -E '__field.*struct page *' include/trace/
>   [...]

Okay, will do.

> There's very little breakage I can imagine: these tracepoints have so
> far traced pointers to 'struct page', which is a pretty opaque page
> identifier to user-space, and they'll trace pfns in the future, which
> still serve as page identifiers.

Agreed.

> One thing is important: to do all these changes at once, to make sure
> that the various page identifiers can be compared.

OK.

> Also, we might keep the 'page' field name if anything relies on that -
> but 'pfn' is even better.

Another option is to keep the page field and add a new pfn field. The
events in pagemap.h already do it this way. This would minimize the
possible breakage but increase the trace size somewhat.

Thanks,
Namhyung

^ permalink raw reply	[flat|nested] 26+ messages in thread
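The dual-field pattern Namhyung refers to looks roughly as follows.
This is a sketch with a hypothetical event name, modeled on (but not
copied from) the pagemap.h events of that era:

	#undef TRACE_SYSTEM
	#define TRACE_SYSTEM sketch

	#if !defined(_TRACE_SKETCH_H) || defined(TRACE_HEADER_MULTI_READ)
	#define _TRACE_SKETCH_H

	#include <linux/mm.h>
	#include <linux/tracepoint.h>

	TRACE_EVENT(mm_page_sketch,

		TP_PROTO(struct page *page),

		TP_ARGS(page),

		TP_STRUCT__entry(
			__field(struct page *,	page)
			__field(unsigned long,	pfn)
		),

		TP_fast_assign(
			__entry->page	= page;
			__entry->pfn	= page_to_pfn(page);
		),

		/* old consumers keep 'page'; new ones read 'pfn' */
		TP_printk("page=%p pfn=%lu", __entry->page, __entry->pfn)
	);

	#endif /* _TRACE_SKETCH_H */

	#include <trace/define_trace.h>

The trade-off is one extra unsigned long per record in exchange for not
breaking any consumer keyed on the 'page' field.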
end of thread, other threads:[~2017-09-01 11:15 UTC | newest]

Thread overview: 26+ messages
2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo
2015-04-13 22:14 ` [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page Arnaldo Carvalho de Melo
2017-07-31  7:43   ` Vlastimil Babka
2017-08-31 11:38     ` Vlastimil Babka
2017-08-31 13:43       ` Steven Rostedt
2017-08-31 14:31         ` Vlastimil Babka
2017-08-31 14:44           ` Steven Rostedt
2017-09-01  8:16             ` Vlastimil Babka
2017-09-01 11:15               ` Steven Rostedt
2015-04-13 22:14 ` [PATCH 2/5] perf kmem: Analyze page allocator events also Arnaldo Carvalho de Melo
2015-04-13 22:15 ` [PATCH 3/5] perf probe: Set retprobe flag when probe in address-based alternative mode Arnaldo Carvalho de Melo
2015-04-13 22:15 ` [PATCH 4/5] perf probe: Make --source avaiable when probe with lazy_line Arnaldo Carvalho de Melo
2015-04-13 22:15 ` [PATCH 5/5] perf probe: Fix segfault when probe with lazy_line to file Arnaldo Carvalho de Melo
2015-04-13 22:33 ` [GIT PULL 0/5] perf/core improvements and fixes Masami Hiramatsu
2015-04-13 23:09   ` Arnaldo Carvalho de Melo
2015-04-13 23:19     ` Arnaldo Carvalho de Melo
2015-04-14  7:04       ` Masami Hiramatsu
2015-04-14 12:17         ` Arnaldo Carvalho de Melo
2015-04-14 12:12 ` Ingo Molnar

-- strict thread matches above, loose matches on Subject: below --
2015-03-23  6:30 [PATCHSET 0/5] perf kmem: Implement page allocation analysis (v3) Namhyung Kim
2015-03-23  6:30 ` [PATCH 2/5] perf kmem: Analyze page allocator events also Namhyung Kim
2015-03-23 17:32   ` Joonsoo Kim
2015-03-24  0:18     ` Namhyung Kim
2015-03-24  5:26       ` Joonsoo Kim
2015-03-24  6:05         ` Namhyung Kim
2015-03-24  7:08         ` Ingo Molnar
2015-03-24 13:17           ` Namhyung Kim