From: David T-G <davidtg-robot@justpickone.org>
To: Linux RAID list <linux-raid@vger.kernel.org>
Subject: Re: about linear and about RAID10
Date: Mon, 28 Nov 2022 14:46:30 +0000
Message-ID: <20221128144630.GN19721@jpo>
In-Reply-To: <CAAMCDee6cyM5Uw6DitWtBL3W8NbW7j0DZcUp8A2CXWZbYceXeA@mail.gmail.com>

Hi again, all --

...and then Roger Heflin said...
% You do not want to stripe 2 partitions on a single disk, you want that linear.
% 
...
% 
% do a dd if=/dev/mdXX of=/dev/null bs=1M count=100 iflag=direct  on one
% of the raid5s of the partitions and then on the raid1 device over
% them.  I would expect the raid device over them to be much slower, I
% am not sure how much but 5x-20x.

Note that we aren't talking about RAID5 here but simple RAID1; still, I
follow you.  Time for more testing.  I ran the same dd tests as on the
RAID5 setup

  jpo:~ # for D in 41 40 ; do for C in 128 256 512 ; do for S in 1M 4M 16M ; do CMD="dd if=/dev/md$D of=/dev/null bs=$S count=$C iflag=direct" ; echo "## $CMD" ; $CMD 2>&1 | egrep -v records ; done ; done ; done
  ## dd if=/dev/md41 of=/dev/null bs=1M count=128 iflag=direct
  134217728 bytes (134 MB, 128 MiB) copied, 0.710608 s, 189 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=4M count=128 iflag=direct
  536870912 bytes (537 MB, 512 MiB) copied, 2.7903 s, 192 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=16M count=128 iflag=direct
  2147483648 bytes (2.1 GB, 2.0 GiB) copied, 11.3205 s, 190 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=1M count=256 iflag=direct
  268435456 bytes (268 MB, 256 MiB) copied, 1.41372 s, 190 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=4M count=256 iflag=direct
  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.50616 s, 195 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=16M count=256 iflag=direct
  4294967296 bytes (4.3 GB, 4.0 GiB) copied, 22.7846 s, 189 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=1M count=512 iflag=direct
  536870912 bytes (537 MB, 512 MiB) copied, 3.02753 s, 177 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=4M count=512 iflag=direct
  2147483648 bytes (2.1 GB, 2.0 GiB) copied, 11.2099 s, 192 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=16M count=512 iflag=direct
  8589934592 bytes (8.6 GB, 8.0 GiB) copied, 45.5623 s, 189 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=1M count=128 iflag=direct
  134217728 bytes (134 MB, 128 MiB) copied, 1.19657 s, 112 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=4M count=128 iflag=direct
  536870912 bytes (537 MB, 512 MiB) copied, 4.32003 s, 124 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=16M count=128 iflag=direct
  2147483648 bytes (2.1 GB, 2.0 GiB) copied, 12.0615 s, 178 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=1M count=256 iflag=direct
  268435456 bytes (268 MB, 256 MiB) copied, 2.38074 s, 113 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=4M count=256 iflag=direct
  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.62803 s, 124 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=16M count=256 iflag=direct
  4294967296 bytes (4.3 GB, 4.0 GiB) copied, 25.2467 s, 170 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=1M count=512 iflag=direct
  536870912 bytes (537 MB, 512 MiB) copied, 5.13948 s, 104 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=4M count=512 iflag=direct
  2147483648 bytes (2.1 GB, 2.0 GiB) copied, 16.5954 s, 129 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=16M count=512 iflag=direct
  8589934592 bytes (8.6 GB, 8.0 GiB) copied, 55.5721 s, 155 MB/s

and did the math again (columns are block size, rows are count; each
cell is md41/md40 throughput in MB/s, with the ratio in parentheses)

          1M        4M       16M
      +---------+---------+---------+
  128 | 189/112 | 192/124 | 190/178 |
      | (1.68)  | (1.54)  | (1.06)  |
      +---------+---------+---------+
  256 | 190/113 | 195/124 | 189/170 |
      | (1.68)  | (1.57)  | (1.11)  |
      +---------+---------+---------+
  512 | 177/104 | 192/129 | 189/155 |
      | (1.70)  | (1.48)  | (1.21)  |
      +---------+---------+---------+
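
(If anyone wants to re-crunch it, here's a rough, untested sketch that
assumes the transcript above was saved to a hypothetical dd.log; it just
pulls the block size, count, and MB/s figure out of each run:)

  awk '/## dd/     { split($0, a, "bs=");    split(a[2], b, " "); bs  = b[1]
                     split($0, a, "count="); split(a[2], b, " "); cnt = b[1] }
       / copied, / { print bs, cnt, $(NF-1), $NF }' dd.log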

and ... that was NOT what I expected!  I wonder if it's stripe versus
linear again.  A straight mirror runs down the entire disk, so there's
no speedup; if you have to seek from one end to the other, the head
moves the whole way.  By mirroring the two halves, swapping them, and
then gluing them together, though, a read *should* only have to hit the
first half of either disk and thus be FASTER.  Maybe that only holds
for random rather than sequential reads; I dunno.  The difference was
nearly negligible at the large block sizes, but I see a roughly 40%
penalty at the small ones -- and this server leans much more toward
small files than large.  Bummer :-(
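
When I get a chance I may test that random-versus-sequential theory
directly; something like this untested sketch (assuming fio is
available) is what I have in mind:

  # random 4k reads against each array, read-only for safety
  for D in 41 40
  do fio --name=randread-md$D --filename=/dev/md$D --rw=randread \
         --bs=4k --direct=1 --ioengine=libaio --iodepth=16 \
         --runtime=30 --time_based --readonly
  done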

I don't currently have a spare device to plug in locally so that I can
back up the volume before destroying and rebuilding it as linear, so
that test will have to wait.  When I do get the chance, though, will
that help me get to the awesome goal of actually INCREASING performance
by including a RAID0 layer?
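
For my own notes, the rebuild would look something like this (an
untested sketch with placeholder partition names, not my real ones):

  # concatenate the two partitions end-to-end instead of striping them
  mdadm --create /dev/md40 --level=linear --raid-devices=2 \
    /dev/sdX2 /dev/sdY2

  # or a two-device RAID10 with the "far" (f2) layout, which is
  # supposed to read like a stripe
  mdadm --create /dev/md40 --level=10 --layout=f2 --raid-devices=2 \
    /dev/sdX2 /dev/sdY2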


Thanks again & HAND

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt

