From: Marc MERLIN <marc@merlins.org>
To: Paolo Valente <paolo.valente@linaro.org>
Cc: linux-block <linux-block@vger.kernel.org>, linux-raid@vger.kernel.org
Subject: Re: 5.1.21 Dell 2950 terrible swraid5 I/O performance with swraid on top of Perc 5/i raid0/jbod
Date: Mon, 19 Aug 2019 09:40:54 -0700	[thread overview]
Message-ID: <20190819164053.GF5431@merlins.org> (raw)
In-Reply-To: <5DCAD3D8-07B6-4A5D-A3C1-A1DF4055C5BD@linaro.org>

On Mon, Aug 19, 2019 at 11:18:13AM +0200, Paolo Valente wrote:
> Solving this kind of problem is one of the goals of the BFQ I/O scheduler [1].
> Have you tried?  If you want to, then start by switching to BFQ in both the
> physical and the virtual block devices in your stack.
 
I sure was not aware of it, thank you for pointing it out.

> Thanks,
> Paolo
> 
> [1] https://algo.ing.unimo.it/people/paolo/BFQ/

I did the following (see below), and while the swraid is rebuilding I'm
still getting terrible overall throughput:
newmagic:~# hdparm -t /dev/md2
/dev/md2:
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
  Timing buffered disk reads:   2 MB in  5.76 seconds = 355.42 kB/sec

I think things hang a bit less, which I suppose is good, but the system is
still unusable overall.
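
As an aside, two related checks worth noting here (untested sketches; the
block size, count, and the 10000 KB/s cap below are arbitrary choices, not
from the original test):

  # Sequential read with O_DIRECT, bypassing the page cache,
  # as a cross-check of the hdparm number
  dd if=/dev/md2 of=/dev/null bs=1M count=256 iflag=direct

  # Temporarily cap the md resync speed (in KB/s) so foreground
  # I/O gets a chance while the array rebuilds
  echo 10000 > /proc/sys/dev/raid/speed_limit_max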

 
newmagic:~# modprobe bfq
newmagic:~# for i in /sys/block/*/queue/scheduler; do echo $i; echo bfq > $i; cat $i; done
/sys/block/bcache0/queue/scheduler
none
/sys/block/md0/queue/scheduler
none
/sys/block/md1/queue/scheduler
none
/sys/block/md2/queue/scheduler
none
/sys/block/md3/queue/scheduler
none
/sys/block/sda/queue/scheduler
[bfq] none
/sys/block/sdb/queue/scheduler
[bfq] none
/sys/block/sdc/queue/scheduler
[bfq] none
/sys/block/sdd/queue/scheduler
[bfq] none
/sys/block/sde/queue/scheduler
[bfq] none
/sys/block/sdf/queue/scheduler
[bfq] none
/sys/block/sdg/queue/scheduler
[bfq] none
/sys/block/sdh/queue/scheduler
[bfq] none
/sys/block/sdi/queue/scheduler
[bfq] none
/sys/block/sr0/queue/scheduler
[bfq] none
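
FWIW, the md* and bcache0 devices showing "none" is expected, if I
understand correctly: they are bio-based stacked devices with no
request-queue scheduler, so only the underlying sd* disks actually took
bfq. To keep that setting across reboots, a udev rule along these lines
should work (a sketch, untested on this box; the filename is arbitrary):

  # /etc/udev/rules.d/60-ioscheduler.rules
  # Select bfq for the SCSI disks backing the arrays
  ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="bfq"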


Thanks,
Marc
-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
                                      .... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/  

Thread overview: 11+ messages
2019-08-19  7:08 5.1.21 Dell 2950 terrible swraid5 I/O performance with swraid on top of Perc 5/i raid0/jbod Marc MERLIN
2019-08-19  9:18 ` Paolo Valente
2019-08-19 12:02   ` Paolo Valente
2019-08-19 16:40   ` Marc MERLIN [this message]
2019-08-19 17:05     ` Paolo Valente
2019-08-19 17:26       ` Marc MERLIN
2019-08-19 11:42 ` o1bigtenor
2019-08-19 16:24   ` Marc MERLIN
2019-08-20  5:49   ` Marc MERLIN
2019-08-19 18:37 ` Roman Mamedov
2019-08-19 19:16   ` Marc MERLIN
