From: Marc Smith <msmith626@gmail.com>
To: Song Liu <song@kernel.org>
Cc: linux-raid <linux-raid@vger.kernel.org>
Subject: Re: MD Array 'stat' File - Sectors Read
Date: Wed, 10 Jun 2020 16:50:28 -0400
Message-ID: <CAH6h+hfV-JeoRvbEFVpyLTnGhtFLJbDuw_+V=fBU2FFdYFuMYA@mail.gmail.com>
In-Reply-To: <CAH6h+he2=_1hgBm3hJ4KAnqxHkPgFj3+q-pPTRHrro1vzxgg3w@mail.gmail.com>

On Sat, Apr 11, 2020 at 12:59 AM Marc Smith <msmith626@gmail.com> wrote:
>
> On Thu, Apr 9, 2020 at 3:11 AM Song Liu <song@kernel.org> wrote:
> >
> > On Mon, Mar 30, 2020 at 1:55 PM Marc Smith <msmith626@gmail.com> wrote:
> > >
> > > Hi,
> > >
> > > Apologies in advance, as I'm sure this question has been asked many
> > > times and there is a standard answer, but I can't seem to find it on
> > > forums or this mailing list.
> > >
> > > I've always observed this behavior using 'iostat': when looking at
> > > READ throughput numbers for the array, the value is about 4 times the
> > > real throughput. Knowing this, I typically look at the member devices
> > > (or at the application driving the I/O) to determine what throughput
> > > is actually being achieved.
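
For reference, the kind of invocation I'm looking at is roughly the one
below; the device names are just the ones from this box, and the exact
option set may vary with your sysstat version:

  # Watch the array and its members side by side: -x for extended
  # statistics, -m to report MB/s, sampled at a 1-second interval.
  iostat -xm md127 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl 1

The rMB/s column for md127 comes out at roughly 4x the sum of the
members' rMB/s columns, which is the symptom described above.
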
> > >
> > > Looking at the sectors read field in the 'stat' file for an MD array
> > > block device:
> > > # cat /sys/block/md127/stat && sleep 1 && cat /sys/block/md127/stat
> > > 93591416        0 55082801792        0       93        0        0        0        0        0        0        0        0        0        0
> > > 93608938        0 55092996456        0       93        0        0        0        0        0        0        0        0        0        0
> > >
> > > 55092996456 - 55082801792 = 10194664
> > > 10194664 * 512 = 5219667968
> > > 5219667968 / 1024 / 1024 = 4977
> > >
> > > This device definitely isn't doing 4,977 MiB/s. So now my curiosity is
> > > getting to me: is this just known/expected behavior for MD array
> > > block devices? The numbers for WRITE sectors are always accurate as far
> > > as I can tell. Or is something configured strangely on my systems?
> > >
> > > I'm using vanilla Linux 5.4.12.
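
For anyone who wants to reproduce the comparison against the member
devices, a rough sketch of the check is below. It assumes bash, that
field 3 of /sys/block/<dev>/stat is sectors read (512-byte units), and
that /sys/block/md127/slaves/ lists the member disks; treat it as an
illustration rather than a polished tool:

  md=md127
  # Field 3 of the stat file is "sectors read", in 512-byte units.
  sectors_read() { awk '{print $3}' "/sys/block/$1/stat"; }

  before_md=$(sectors_read "$md")
  before_members=0
  for m in /sys/block/"$md"/slaves/*; do
      before_members=$((before_members + $(sectors_read "$(basename "$m")")))
  done

  sleep 1

  after_md=$(sectors_read "$md")
  after_members=0
  for m in /sys/block/"$md"/slaves/*; do
      after_members=$((after_members + $(sectors_read "$(basename "$m")")))
  done

  # Convert 512-byte sectors to MiB/s over the 1-second window.
  echo "md device: $(( (after_md - before_md) * 512 / 1024 / 1024 )) MiB/s"
  echo "members:   $(( (after_members - before_members) * 512 / 1024 / 1024 )) MiB/s"

On this box the "md device" figure comes out around 4x the members'
total during reads, which matches the iostat observation above.
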
> >
> > Thanks for the report. Could you please share the output of
> >
> >    mdadm --detail /dev/md127
> >
>
> # mdadm --detail /dev/md127
> /dev/md127:
>            Version : 1.2
>      Creation Time : Tue Mar 17 17:23:00 2020
>         Raid Level : raid6
>         Array Size : 17580320640 (16765.90 GiB 18002.25 GB)
>      Used Dev Size : 1758032064 (1676.59 GiB 1800.22 GB)
>       Raid Devices : 12
>      Total Devices : 12
>        Persistence : Superblock is persistent
>
>        Update Time : Thu Apr  9 13:07:12 2020
>              State : clean
>     Active Devices : 12
>    Working Devices : 12
>     Failed Devices : 0
>      Spare Devices : 0
>
>             Layout : left-symmetric
>         Chunk Size : 64K
>
> Consistency Policy : resync
>
>               Name : node-126c4f-1:P2024_126c4f_01  (local to host
> node-126c4f-1)
>               UUID : ceccb91b:1e975007:3efb5a9d:eda08d04
>             Events : 79
>
>     Number   Major   Minor   RaidDevice State
>        0       8        0        0      active sync   /dev/sda
>        1       8       16        1      active sync   /dev/sdb
>        2       8       32        2      active sync   /dev/sdc
>        3       8       48        3      active sync   /dev/sdd
>        4       8       64        4      active sync   /dev/sde
>        5       8       80        5      active sync   /dev/sdf
>        6       8       96        6      active sync   /dev/sdg
>        7       8      112        7      active sync   /dev/sdh
>        8       8      128        8      active sync   /dev/sdi
>        9       8      144        9      active sync   /dev/sdj
>       10       8      160       10      active sync   /dev/sdk
>       11       8      176       11      active sync   /dev/sdl
>
>
> > and
> >
> >    cat /proc/mdstat
>
> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
> md127 : active raid6 sda[0] sdl[11] sdk[10] sdj[9] sdi[8] sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1]
>       17580320640 blocks super 1.2 level 6, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
>
> unused devices: <none>
>
>
> Thanks; please let me know if there is any more detail I can provide.

I was about to follow up on this issue, but then I noticed a couple of
recent patches being discussed, and it sounds like they will resolve
what I reported above:
https://marc.info/?l=linux-raid&m=159102814820539
https://marc.info/?l=linux-raid&m=159149103212326

I'll see how these play out and report back if needed.


Thanks,

Marc



>
> --Marc
>
>
> >
> > Song
