* [LKP] [mm] b72fd1470c9: -41.7% perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
@ 2014-07-31 5:50 Aaron Lu
2014-07-31 8:48 ` Mel Gorman
0 siblings, 1 reply; 4+ messages in thread
From: Aaron Lu @ 2014-07-31 5:50 UTC (permalink / raw)
To: Mel Gorman; +Cc: Stephen Rothwell, LKML, lkp
[-- Attachment #1: Type: text/plain, Size: 1535 bytes --]
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit b72fd1470c9735f53485d089aa918dc327a86077 ("mm: rearrange zone fields into read-only, page alloc, statistics and page reclaim lines")
test case: lkp-st02/dd-write/5m-11HDD-JBOD-cfq-xfs-10dd
e28c951ff01a805 b72fd1470c9735f53485d089a
--------------- -------------------------
1.06 ~ 6% -41.7% 0.62 ~ 3% TOTAL perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
1.34 ~ 2% -19.8% 1.07 ~ 2% TOTAL perf-profile.cpu-cycles.__block_write_begin.xfs_vm_write_begin.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
1.19 ~ 5% -12.1% 1.05 ~ 4% TOTAL perf-profile.cpu-cycles.copy_from_user_atomic_iovec.iov_iter_copy_from_user_atomic.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
2.78 ~ 1% -16.3% 2.32 ~ 4% TOTAL perf-profile.cpu-cycles.__clear_user.read_zero.read_zero.vfs_read.sys_read
2.96e+09 ~ 4% -5.2% 2.806e+09 ~ 0% TOTAL perf-stat.cache-misses
3.86e+12 ~ 5% -5.2% 3.658e+12 ~ 1% TOTAL perf-stat.ref-cycles
Legend:
~XX% - stddev percent
[+-]XX% - change percent
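(Editor's note: the change percent column can be reproduced from the two per-commit means; a minimal sketch, using the rounded values printed in the table, so the result differs slightly from the reported -41.7%, which is computed from unrounded per-run data.)

```shell
# change percent between the base-commit mean ($1) and the patched mean ($2)
change() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%+.1f%%\n", (b - a) / a * 100 }'; }
change 1.06 0.62   # first table row, from the rounded column values
```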
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Aaron
[-- Attachment #2: reproduce --]
[-- Type: text/plain, Size: 6898 bytes --]
echo 1 > /sys/kernel/debug/tracing/events/writeback/balance_dirty_pages/enable
echo 1 > /sys/kernel/debug/tracing/events/writeback/bdi_dirty_ratelimit/enable
echo 1 > /sys/kernel/debug/tracing/events/writeback/global_dirty_state/enable
echo 1 > /sys/kernel/debug/tracing/events/writeback/writeback_single_inode/enable
for dev in sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl; do
	mkfs -t xfs /dev/${dev}1
done
for dev in sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl; do
	mount -t xfs -o nobarrier,inode64 /dev/${dev}1 /fs/${dev}1
done
for i in 1 2 3 4 5 6 7 8 9 10; do
	for dev in sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl; do
		dd if=/dev/zero of=/fs/${dev}1/zero-$i status=noxfer &
	done
done
sleep 298
killall -9 dd
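(Editor's note: as a sanity check, the test-case name 5m-11HDD-JBOD-cfq-xfs-10dd encodes the load the script generates — 11 JBOD disks with 10 concurrent dd writers each, run for roughly five minutes.)

```shell
# 11 disks x 10 dd streams per disk = total background writers started above
disks=11
streams_per_disk=10
writers=$((disks * streams_per_disk))
echo "$writers"
```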
* Re: [LKP] [mm] b72fd1470c9: -41.7% perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
2014-07-31 5:50 [LKP] [mm] b72fd1470c9: -41.7% perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page Aaron Lu
@ 2014-07-31 8:48 ` Mel Gorman
2014-07-31 9:01 ` Aaron Lu
0 siblings, 1 reply; 4+ messages in thread
From: Mel Gorman @ 2014-07-31 8:48 UTC (permalink / raw)
To: Aaron Lu; +Cc: Stephen Rothwell, LKML, lkp
On Thu, Jul 31, 2014 at 01:50:35PM +0800, Aaron Lu wrote:
> FYI, we noticed the below changes on
>
> git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> commit b72fd1470c9735f53485d089aa918dc327a86077 ("mm: rearrange zone fields into read-only, page alloc, statistics and page reclaim lines")
>
> test case: lkp-st02/dd-write/5m-11HDD-JBOD-cfq-xfs-10dd
>
> e28c951ff01a805 b72fd1470c9735f53485d089a
> --------------- -------------------------
> 1.06 ~ 6% -41.7% 0.62 ~ 3% TOTAL perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
> 1.34 ~ 2% -19.8% 1.07 ~ 2% TOTAL perf-profile.cpu-cycles.__block_write_begin.xfs_vm_write_begin.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
> 1.19 ~ 5% -12.1% 1.05 ~ 4% TOTAL perf-profile.cpu-cycles.copy_from_user_atomic_iovec.iov_iter_copy_from_user_atomic.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
> 2.78 ~ 1% -16.3% 2.32 ~ 4% TOTAL perf-profile.cpu-cycles.__clear_user.read_zero.read_zero.vfs_read.sys_read
> 2.96e+09 ~ 4% -5.2% 2.806e+09 ~ 0% TOTAL perf-stat.cache-misses
> 3.86e+12 ~ 5% -5.2% 3.658e+12 ~ 1% TOTAL perf-stat.ref-cycles
>
> Legend:
> ~XX% - stddev percent
> [+-]XX% - change percent
>
I'm not exactly sure what I'm reading here. I think it is reporting on cpu
cycles and cache misses used in various kernel functions. It's not clear what
the units are but it looks like percentages of overall cycles spent in the
reported functions. That may or may not be good depending on whether there
is a higher cost elsewhere pushing the percentages down but that detail
is not in the report. It looks like this is reporting that fewer clock
cycles are being spent and incurring fewer cache misses. What is the problem?
--
Mel Gorman
SUSE Labs
* Re: [LKP] [mm] b72fd1470c9: -41.7% perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
2014-07-31 8:48 ` Mel Gorman
@ 2014-07-31 9:01 ` Aaron Lu
2014-07-31 9:32 ` Mel Gorman
0 siblings, 1 reply; 4+ messages in thread
From: Aaron Lu @ 2014-07-31 9:01 UTC (permalink / raw)
To: Mel Gorman; +Cc: Stephen Rothwell, LKML, lkp
On Thu, Jul 31, 2014 at 09:48:32AM +0100, Mel Gorman wrote:
> On Thu, Jul 31, 2014 at 01:50:35PM +0800, Aaron Lu wrote:
> > FYI, we noticed the below changes on
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> > commit b72fd1470c9735f53485d089aa918dc327a86077 ("mm: rearrange zone fields into read-only, page alloc, statistics and page reclaim lines")
> >
> > test case: lkp-st02/dd-write/5m-11HDD-JBOD-cfq-xfs-10dd
> >
> > e28c951ff01a805 b72fd1470c9735f53485d089a
> > --------------- -------------------------
> > 1.06 ~ 6% -41.7% 0.62 ~ 3% TOTAL perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
> > 1.34 ~ 2% -19.8% 1.07 ~ 2% TOTAL perf-profile.cpu-cycles.__block_write_begin.xfs_vm_write_begin.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
> > 1.19 ~ 5% -12.1% 1.05 ~ 4% TOTAL perf-profile.cpu-cycles.copy_from_user_atomic_iovec.iov_iter_copy_from_user_atomic.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
> > 2.78 ~ 1% -16.3% 2.32 ~ 4% TOTAL perf-profile.cpu-cycles.__clear_user.read_zero.read_zero.vfs_read.sys_read
> > 2.96e+09 ~ 4% -5.2% 2.806e+09 ~ 0% TOTAL perf-stat.cache-misses
> > 3.86e+12 ~ 5% -5.2% 3.658e+12 ~ 1% TOTAL perf-stat.ref-cycles
> >
> > Legend:
> > ~XX% - stddev percent
> > [+-]XX% - change percent
> >
>
> I'm not exactly sure what I'm reading here. I think it is reporting on cpu
> cycles and cache misses used in various kernel functions. It's not clear what
> the units are but it looks like percentages of overall cycles spent in the
> reported functions. That may or may not be good depending on whether there
> is a higher cost elsewhere pushing the percentages down but that detail
> is not in the report. It looks like this is reporting that fewer clock
> cycles are being spent and incurring fewer cache misses. What is the problem?
LKP does not only report problems; it also reports commits that make
things better :-)
From the perf-stat.cache-misses numbers, I think your commit is doing
something good.
Thanks,
Aaron
* Re: [LKP] [mm] b72fd1470c9: -41.7% perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
2014-07-31 9:01 ` Aaron Lu
@ 2014-07-31 9:32 ` Mel Gorman
0 siblings, 0 replies; 4+ messages in thread
From: Mel Gorman @ 2014-07-31 9:32 UTC (permalink / raw)
To: Aaron Lu; +Cc: Stephen Rothwell, LKML, lkp
On Thu, Jul 31, 2014 at 05:01:30PM +0800, Aaron Lu wrote:
> On Thu, Jul 31, 2014 at 09:48:32AM +0100, Mel Gorman wrote:
> > On Thu, Jul 31, 2014 at 01:50:35PM +0800, Aaron Lu wrote:
> > > FYI, we noticed the below changes on
> > >
> > > git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> > > commit b72fd1470c9735f53485d089aa918dc327a86077 ("mm: rearrange zone fields into read-only, page alloc, statistics and page reclaim lines")
> > >
> > > test case: lkp-st02/dd-write/5m-11HDD-JBOD-cfq-xfs-10dd
> > >
> > > e28c951ff01a805 b72fd1470c9735f53485d089a
> > > --------------- -------------------------
> > > 1.06 ~ 6% -41.7% 0.62 ~ 3% TOTAL perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
> > > 1.34 ~ 2% -19.8% 1.07 ~ 2% TOTAL perf-profile.cpu-cycles.__block_write_begin.xfs_vm_write_begin.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
> > > 1.19 ~ 5% -12.1% 1.05 ~ 4% TOTAL perf-profile.cpu-cycles.copy_from_user_atomic_iovec.iov_iter_copy_from_user_atomic.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
> > > 2.78 ~ 1% -16.3% 2.32 ~ 4% TOTAL perf-profile.cpu-cycles.__clear_user.read_zero.read_zero.vfs_read.sys_read
> > > 2.96e+09 ~ 4% -5.2% 2.806e+09 ~ 0% TOTAL perf-stat.cache-misses
> > > 3.86e+12 ~ 5% -5.2% 3.658e+12 ~ 1% TOTAL perf-stat.ref-cycles
> > >
> > > Legend:
> > > ~XX% - stddev percent
> > > [+-]XX% - change percent
> > >
> >
> > I'm not exactly sure what I'm reading here. I think it is reporting on cpu
> > cycles and cache misses used in various kernel functions. It's not clear what
> > the units are but it looks like percentages of overall cycles spent in the
> > reported functions. That may or may not be good depending on whether there
> > is a higher cost elsewhere pushing the percentages down but that detail
> > is not in the report. It looks like this is reporting that fewer clock
> > cycles are being spent and incurring fewer cache misses. What is the problem?
>
> LKP does not only report problems; it also reports commits that make
> things better :-)
>
> From the perf-stat.cache-misses numbers, I think your commit is doing
> something good.
>
Hooray! Thanks for the good news :D
--
Mel Gorman
SUSE Labs