From: Jet Chen <jet.chen@intel.com>
To: Eivind Sarto <eivindsarto@gmail.com>
Cc: NeilBrown <neilb@suse.de>, LKML <linux-kernel@vger.kernel.org>,
lkp@01.org, Fengguang Wu <fengguang.wu@intel.com>
Subject: [raid5] cf170f3fa45: +4.8% vmstat.io.bo
Date: Sat, 07 Jun 2014 14:53:16 +0800
Message-ID: <5392B6DC.6040502@intel.com>
[-- Attachment #1: Type: text/plain, Size: 4105 bytes --]
Hi Eivind,

FYI, we noticed the below changes on

git://neil.brown.name/md for-next
commit cf170f3fa451350e431314e1a0a52014fda4b2d6 ("raid5: avoid release list until last reference of the stripe")

test case: lkp-st02/dd-write/11HDD-RAID5-cfq-xfs-10dd
8b32bf5e37328c0 cf170f3fa451350e431314e1a
--------------- -------------------------
486996 ~ 0% +4.8% 510428 ~ 0% TOTAL vmstat.io.bo
17643 ~ 1% -17.3% 14599 ~ 0% TOTAL vmstat.system.in
11633 ~ 4% -56.7% 5039 ~ 0% TOTAL vmstat.system.cs
109 ~ 1% +6.5% 116 ~ 1% TOTAL iostat.sdb.rrqm/s
109 ~ 2% +5.1% 114 ~ 1% TOTAL iostat.sdc.rrqm/s
110 ~ 2% +5.5% 117 ~ 0% TOTAL iostat.sdj.rrqm/s
12077 ~ 0% +4.8% 12660 ~ 0% TOTAL iostat.sde.wrqm/s
48775 ~ 0% +4.8% 51125 ~ 0% TOTAL iostat.sde.wkB/s
12077 ~ 0% +4.8% 12659 ~ 0% TOTAL iostat.sdb.wrqm/s
12076 ~ 0% +4.8% 12659 ~ 0% TOTAL iostat.sdd.wrqm/s
12077 ~ 0% +4.8% 12660 ~ 0% TOTAL iostat.sdf.wrqm/s
48775 ~ 0% +4.8% 51121 ~ 0% TOTAL iostat.sdb.wkB/s
12078 ~ 0% +4.8% 12659 ~ 0% TOTAL iostat.sdj.wrqm/s
12078 ~ 0% +4.8% 12660 ~ 0% TOTAL iostat.sdi.wrqm/s
12076 ~ 0% +4.8% 12658 ~ 0% TOTAL iostat.sdg.wrqm/s
48774 ~ 0% +4.8% 51122 ~ 0% TOTAL iostat.sdd.wkB/s
48776 ~ 0% +4.8% 51128 ~ 0% TOTAL iostat.sdf.wkB/s
48780 ~ 0% +4.8% 51121 ~ 0% TOTAL iostat.sdj.wkB/s
48779 ~ 0% +4.8% 51128 ~ 0% TOTAL iostat.sdi.wkB/s
48773 ~ 0% +4.8% 51119 ~ 0% TOTAL iostat.sdg.wkB/s
486971 ~ 0% +4.8% 510409 ~ 0% TOTAL iostat.md0.wkB/s
12076 ~ 0% +4.8% 12657 ~ 0% TOTAL iostat.sdc.wrqm/s
12077 ~ 0% +4.8% 12659 ~ 0% TOTAL iostat.sdh.wrqm/s
1910 ~ 0% +4.8% 2001 ~ 0% TOTAL iostat.md0.w/s
110 ~ 2% +6.5% 117 ~ 1% TOTAL iostat.sdk.rrqm/s
12077 ~ 0% +4.8% 12659 ~ 0% TOTAL iostat.sdk.wrqm/s
48772 ~ 0% +4.8% 51115 ~ 0% TOTAL iostat.sdc.wkB/s
48776 ~ 0% +4.8% 51121 ~ 0% TOTAL iostat.sdh.wkB/s
48777 ~ 0% +4.8% 51121 ~ 0% TOTAL iostat.sdk.wkB/s
109 ~ 2% +3.3% 113 ~ 1% TOTAL iostat.sde.rrqm/s
4.28e+09 ~ 0% -4.1% 4.104e+09 ~ 0% TOTAL perf-stat.cache-misses
8.654e+10 ~ 0% +4.7% 9.058e+10 ~ 0% TOTAL perf-stat.L1-dcache-store-misses
3.549e+09 ~ 1% +3.7% 3.682e+09 ~ 0% TOTAL perf-stat.L1-dcache-prefetches
6.764e+11 ~ 0% +3.7% 7.011e+11 ~ 0% TOTAL perf-stat.dTLB-stores
6.759e+11 ~ 0% +3.7% 7.011e+11 ~ 0% TOTAL perf-stat.L1-dcache-stores
4.731e+10 ~ 0% +3.6% 4.903e+10 ~ 0% TOTAL perf-stat.L1-dcache-load-misses
3.017e+12 ~ 0% +3.5% 3.121e+12 ~ 0% TOTAL perf-stat.instructions
1.118e+12 ~ 0% +3.3% 1.156e+12 ~ 0% TOTAL perf-stat.dTLB-loads
1.117e+12 ~ 0% +3.2% 1.152e+12 ~ 0% TOTAL perf-stat.L1-dcache-loads
3.022e+12 ~ 0% +3.2% 3.119e+12 ~ 0% TOTAL perf-stat.iTLB-loads
5.613e+11 ~ 0% +3.2% 5.794e+11 ~ 0% TOTAL perf-stat.branch-instructions
5.62e+11 ~ 0% +3.1% 5.793e+11 ~ 0% TOTAL perf-stat.branch-loads
1.343e+09 ~ 0% +2.6% 1.378e+09 ~ 0% TOTAL perf-stat.LLC-store-misses
2.073e+10 ~ 0% +2.9% 2.133e+10 ~ 1% TOTAL perf-stat.LLC-loads
4.854e+10 ~ 0% +1.6% 4.931e+10 ~ 0% TOTAL perf-stat.cache-references
1.167e+10 ~ 0% +1.4% 1.183e+10 ~ 0% TOTAL perf-stat.L1-icache-load-misses
7068624 ~ 4% -56.4% 3078966 ~ 0% TOTAL perf-stat.context-switches
2.214e+09 ~ 1% -7.8% 2.041e+09 ~ 1% TOTAL perf-stat.LLC-load-misses
131433 ~ 0% -18.9% 106597 ~ 1% TOTAL perf-stat.cpu-migrations
Legend:
	~XX%    - stddev percent
	[+-]XX% - change percent
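For reference, the change percent above is the relative delta between the two commits' mean values. A minimal sketch of the arithmetic, using the vmstat.io.bo row (the variable names are illustrative, not from the report tooling):

```python
# Relative change between the base commit (8b32bf5e37328c0) and the
# patched commit (cf170f3fa451350e431314e1a) for vmstat.io.bo.
base = 486996      # mean blocks out/s on 8b32bf5e37328c0
patched = 510428   # mean blocks out/s on cf170f3fa451350e431314e1a

change_pct = (patched - base) / base * 100
print(f"{change_pct:+.1f}%")  # matches the +4.8% reported above
```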
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Jet
[-- Attachment #2: reproduce --]
[-- Type: text/plain, Size: 1110 bytes --]
mdadm -q --create /dev/md0 --chunk=256 --level=raid5 --raid-devices=11 --force --assume-clean /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1
echo 1 > /sys/kernel/debug/tracing/events/writeback/balance_dirty_pages/enable
echo 1 > /sys/kernel/debug/tracing/events/writeback/bdi_dirty_ratelimit/enable
echo 1 > /sys/kernel/debug/tracing/events/writeback/global_dirty_state/enable
echo 1 > /sys/kernel/debug/tracing/events/writeback/writeback_single_inode/enable
mkfs -t xfs /dev/md0
mount -t xfs -o nobarrier,inode64 /dev/md0 /fs/md0
dd if=/dev/zero of=/fs/md0/zero-1 status=none &
dd if=/dev/zero of=/fs/md0/zero-2 status=none &
dd if=/dev/zero of=/fs/md0/zero-3 status=none &
dd if=/dev/zero of=/fs/md0/zero-4 status=none &
dd if=/dev/zero of=/fs/md0/zero-5 status=none &
dd if=/dev/zero of=/fs/md0/zero-6 status=none &
dd if=/dev/zero of=/fs/md0/zero-7 status=none &
dd if=/dev/zero of=/fs/md0/zero-8 status=none &
dd if=/dev/zero of=/fs/md0/zero-9 status=none &
dd if=/dev/zero of=/fs/md0/zero-10 status=none &
sleep 600
killall -9 dd