From: 刘正元 <liuzhengyuan@kylinos.com.cn>
To: linux-raid <linux-raid@vger.kernel.org>
Subject: Re: Poor written performance with RAID5 on ARM64
Date: Wed, 20 Apr 2016 12:16:59 +0800	[thread overview]
Message-ID: <tencent_07D46DDC1237D078275717E2@qq.com> (raw)

Thanks for the reply.
I have traced the iostat statistics under both kernel versions, as shown below.
As you can see, disk I/O is not the bottleneck, since %util does not reach 100%,
and a single CPU core stays at about 40% during the test.
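(For anyone wanting to reproduce a comparable snapshot, extended iostat
statistics can be sampled while the dd test is running, for example:

    iostat -x 2

The 2-second interval here is only an example.)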
----------------------------------------------for kernel 3.14.64----------------------------------------------------------------------
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    3.65    6.25    0.00   90.10

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdb               0.00  1549.01    0.00  220.30     0.00 112033.91  1017.12    14.23   63.48    0.00   63.48   4.11  90.59
sdc               0.00  1549.01    0.00  222.77     0.00 113301.24  1017.19    10.95   48.76    0.00   48.76   4.02  89.60
sdd               0.00  1549.01    0.00  222.28     0.00 113047.77  1017.18    16.18   72.32    0.00   72.32   4.16  92.57
sdf               0.00  1549.01    0.00  216.34     0.00 110006.19  1016.99    14.61   65.01    0.00   65.01   4.28  92.57
sde               0.00  1549.01    0.00  222.28     0.00 113047.77  1017.18    12.37   55.21    0.00   55.21   3.99  88.61
 --------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------for kernel 4.4.3--------------------------------------------------------------
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.03    0.00    2.37    5.43    0.00   92.18

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb              30.00  1209.50    1.50   70.00  1056.00 85441.50  2419.51    19.62  331.41  114.67  336.06   9.68  69.20
sdc              30.00  1209.50    1.50   69.00  1056.00 84161.50  2417.52    18.02  303.06  120.00  307.04   9.28  65.40
sdd              30.50  1211.00    1.50   70.00  1088.00 85505.50  2422.20    20.88  351.13  142.67  355.60  10.13  72.40
sde              30.50  1210.50    1.50   69.50  1088.00 84866.25  2421.25    18.53  311.77  117.33  315.97   9.55  67.80
sdf               0.00  1224.50    0.00   72.00     0.00 86850.25  2412.51    21.38  353.64    0.00  353.64  10.75  77.40
 --------------------------------------------------------------------------------------------------------------------------------------
With the bitmap disabled, %util reaches 100% and the write speed of a single
disk reaches 120 MB/s under kernel 4.4.3.  So I suspect the bitmap may be the
cause.
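(For an A/B comparison on the same array, the bitmap can also be toggled on an
existing device instead of recreating it; assuming the array is /dev/md5 as in
the dd test, something like:

    mdadm --grow /dev/md5 --bitmap=none        # remove the internal write-intent bitmap
    mdadm --grow /dev/md5 --bitmap=internal    # add it back

This is just one way to switch between the two configurations for testing.)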

                                                                                 Best wishes!
 
 
------------------ Original ------------------
From: "Shaohua Li";
Date: Tuesday, April 19, 2016, 9:59 PM
To: "刘正元";
Cc: "linux-raid";
Subject: Re: Poor written performance with RAID5 on ARM64
 
On Mon, Apr 18, 2016 at 01:43:04PM +0800, 刘正元 wrote:
> Hi everyone.  I recently upgraded the kernel from 3.14.x to 4.4.x on my ARM64
> server.  I created a RAID5 device with 8 disks on the server and ran a dd
> test like this: "dd if=/dev/zero of=/dev/md5 bs=64K count=400000".
> Before the upgrade it could reach 700 MB/s write speed, but only 500 MB/s
> after the upgrade.  When I disable the bitmap (i.e. pass "--bitmap=none" to
> "mdadm --create"), the speed reaches 800 MB/s with the 4.4 kernel.  I had a
> quick look at drivers/md/bitmap.c but found no answer.  I suspect the x86
> platform has the same problem.  So, what is the main difference between
> 3.14.x and 4.4.x in the md driver?  Where can I find the changelog or the
> commits for the driver?  Any answer would be appreciated.

Did you observe any changes in iostat? Could you post a blktrace from one of
the RAID disks?
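(For example, a per-disk trace of the dd run could be captured and decoded
with something like:

    blktrace -d /dev/sdb -w 30 -o md5_sdb
    blkparse -i md5_sdb > md5_sdb.txt

where /dev/sdb and the 30-second window are just placeholders.)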

Thanks,
Shaohua
