From: Linux Raid Study <linuxraid.study@gmail.com>
To: NeilBrown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org
Subject: Re: iostat with raid device...
Date: Fri, 8 Apr 2011 17:40:46 -0700	[thread overview]
Message-ID: <BANLkTin2KP-_vy8jbmV3JB-LHHJNOEVCKw@mail.gmail.com> (raw)
In-Reply-To: <20110409094629.2eae2d5b@notabene.brown>

Hi Neil,

This is raid5. I have mounted /dev/md0 at /mnt/raid and the file system is ext4.

The system is newly created. Steps (rough commands below):
mdadm to create the raid5 array
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/raid
export /mnt/raid to a remote PC using CIFS
copy a file from the PC to the mounted share
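
Roughly, the commands were along these lines (a sketch, not my exact
command history -- the partition list and the 1MB chunk flag are my
best recollection):

  # create the 4-drive raid5 array with a 1MB chunk (values assumed)
  mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=1024 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  mkfs.ext4 /dev/md0
  mkdir -p /mnt/raid
  mount /dev/md0 /mnt/raid
  # the share itself is exported via Samba (smb.conf), then the test
  # file is copied over from the PC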

An update....
I just ran the test again (without reformatting the device) and
noticed that all 4 HDDs incremented their blocks-written counters
equally. This implies that when the array was first configured, raid5
was doing its own work in the background (recovery)...
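
For what it's worth, the recovery state can be watched directly with
the standard tools (nothing here is specific to my setup):

  cat /proc/mdstat              # shows a progress line while the
                                # initial resync/recovery is running
  mdadm --detail /dev/md0       # "State :" shows e.g. "clean" vs
                                # "clean, resyncing"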

What I'm not sure of is: if the array is newly created, would raid
recovery happen? What else could explain the difference in the first
run of the IO benchmark?
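
Checking the arithmetic against the iostat output quoted below (my own
verification of Neil's ratios):

  md0 writes:     1002.42 blk/s
  member writes:  338.73 + 338.78 + 338.66 + 467.97 = 1484.14 blk/s
  1002.42 : 1484.14  is roughly  2 : 3

  Treating sdd's excess as recovery traffic and substituting sda's
  rate for it: 338.73 * 4 = 1354.92 blk/s, and 1002.42 : 1354.92 is
  roughly 3 : 4, which is what raid5 across 4 drives would give
  (3 data blocks written as 3 data + 1 parity).

  And the blk/t hint: md0 averages (443.79 + 1002.42) / 65.60, about
  22 blocks (11KB) per request, while sda averages
  (247.77 + 338.73) / 1.08, about 543 blocks (272KB) per request --
  the member devices see far fewer but much larger (merged) requests.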


Thanks.

On Fri, Apr 8, 2011 at 4:46 PM, NeilBrown <neilb@suse.de> wrote:
> On Fri, 8 Apr 2011 12:55:39 -0700 Linux Raid Study
> <linuxraid.study@gmail.com> wrote:
>
>> Hello,
>>
>> I have a raid device /dev/md0 based on 4 devices sd[abcd].
>
> Would this be raid0? raid1? raid5? raid6? raid10?
> It could make a difference.
>
>>
>> When I write 4GB to /dev/md0, I see following output from iostat...
>
> Are you writing directly to /dev/md0, or to a filesystem mounted
> from /dev/md0?  It might be easier to explain in the second case, but your
> text suggests the first case.
>
>>
>> Question:
>> Shouldn't the writes/sec be the same for all four drives? Why does
>> /dev/sdd always have a higher value for Blk_wrtn/s?
>> My stripe size is 1MB.
>>
>> thanks for any pointers...
>>
>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>            0.02    0.00    0.34    0.03    0.00   99.61
>>
>> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
>> sda               1.08       247.77       338.73   37478883   51237136
>> sda1              1.08       247.77       338.73   37478195   51237136
>> sdb               1.08       247.73       338.78   37472990   51245712
>> sdb1              1.08       247.73       338.78   37472302   51245712
>> sdc               1.10       247.82       338.66   37486670   51226640
>> sdc1              1.10       247.82       338.66   37485982   51226640
>> sdd               1.09       118.46       467.97   17918510   70786576
>> sdd1              1.09       118.45       467.97   17917822   70786576
>> md0              65.60       443.79      1002.42   67129812  151629440
>
> Doing the sums, for every 2 blocks written to md0 we see 3 blocks written to
> some underlying device.  That doesn't make much sense for a 4 drive array.
> If we assume that the extra writes to sdd were from some other source, then
> it is closer to a 3:4 ratio, which suggests raid5.
> So I'm guessing that the array is newly created and is recovering the data on
> sdd1 at the same time as you are doing the IO test.
> This would agree with the observation that sd[abc] see a lot more reads than
> sdd.
>
> I'll let you figure out the tps number.... do the math to find out the
> average blk/t number for each device.
>
> NeilBrown
