public inbox for linux-kernel@vger.kernel.org
From: FD Cami <francois.cami@free.fr>
To: Zan Lynx <zlynx@acm.org>
Cc: Prakash Punnoor <prakash@punnoor.de>,
	Jan Engelhardt <jengelh@computergmbh.de>,
	Lukas Hejtmanek <xhejtman@ics.muni.cz>,
	linux-kernel@vger.kernel.org, megaraidlinux@lsi.com
Subject: Re: Disk schedulers
Date: Fri, 15 Feb 2008 22:32:24 +0100	[thread overview]
Message-ID: <20080215223224.50102e0a@olorin> (raw)
In-Reply-To: <1203095486.6663.12.camel@localhost>

On Fri, 15 Feb 2008 10:11:26 -0700
Zan Lynx <zlynx@acm.org> wrote:

> 
> On Fri, 2008-02-15 at 15:57 +0100, Prakash Punnoor wrote:
> > On the day of Friday 15 February 2008 Jan Engelhardt hast written:
> > > On Feb 14 2008 17:21, Lukas Hejtmanek wrote:
> > > >Hello,
> > > >
> > > >whom should I blame about disk schedulers?
> > >
> > > Also consider
> > > - DMA (e.g. only UDMA2 selected)
> > > - aging disk
> > 
> > Nope, I also reported this problem _years_ ago, but till now much hasn't 
> > changed. Large writes lead to read starvation.
> 
> Yes, I see this often myself.  It's like the disk IO queue (I set mine
> to 1024) fills up, and pdflush and friends can stuff write requests into
> it much more quickly than any other programs can provide read requests.
> 
> CFQ and ionice work very well up until iostat shows average IO queuing
> above 1024 (where I set the queue number).

I can confirm that as well.

This is easily reproducible with, for example, dd if=/dev/zero of=somefile bs=2048.
After a short while, reading from the disk takes an awfully long time,
even if the dd process is ionice'd.
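A bounded sketch of that reproduction (a count= is added so the dd terminates, and the output goes to a temp file rather than a real volume, so this is safe to run anywhere; the original report streams indefinitely to the RAID device):

```shell
# Bounded variant of the dd reproduction from the report.
TMP=$(mktemp)
# Idle-class ionice as in the report; fall back to plain dd if ionice
# is unavailable (it only has an effect under the CFQ scheduler anyway).
ionice -c3 dd if=/dev/zero of="$TMP" bs=2048 count=1024 2>/dev/null \
    || dd if=/dev/zero of="$TMP" bs=2048 count=1024 2>/dev/null
SIZE=$(wc -c < "$TMP")      # 2048 * 1024 = 2097152 bytes
echo "wrote $SIZE bytes"
rm -f "$TMP"
```

To reproduce the actual starvation, point of= at a file on the affected volume, drop count=, and time a concurrent read from the same (or, per the report below, even a sibling) device.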

Worse, other drives attached to the same controller become
unresponsive as well.
I use a Dell Perc 5/i (megaraid_sas) with:
* 2 SAS 15000 RPM drives, RAID1 => sda
* 4 SAS 15000 RPM drives, RAID5 => sdb
* 2 SATA 7200 RPM drives, RAID1 => sdc
Using dd or mkfs on sdb or sdc makes sda unresponsive as well.
Is this expected?
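One quick check is whether the three volumes actually share the same elevator and queue-depth settings. A small sketch using the standard sysfs paths (device names will of course differ per machine):

```shell
# Print the active scheduler and request queue depth for each block
# device, e.g. to compare sda/sdb/sdc behind the same Perc 5/i.
CNT=0
for q in /sys/block/*/queue/scheduler; do
    [ -r "$q" ] || continue     # glob may not match (e.g. in a chroot)
    dev=${q#/sys/block/}
    dev=${dev%%/*}
    # The scheduler in [brackets] is the active one.
    printf '%s: %s (nr_requests=%s)\n' "$dev" "$(cat "$q")" \
        "$(cat "/sys/block/$dev/queue/nr_requests" 2>/dev/null)"
    CNT=$((CNT + 1))
done
echo "checked $CNT device(s)"
```

Per-device tuning is possible by writing to the same files, e.g. echo cfq > /sys/block/sdb/queue/scheduler.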

Cheers

Francois


Thread overview: 19+ messages
2008-02-14 16:21 Disk schedulers Lukas Hejtmanek
2008-02-15  0:02 ` Tejun Heo
2008-02-15 10:09   ` Lukas Hejtmanek
2008-02-15 14:42 ` Jan Engelhardt
2008-02-15 14:57   ` Prakash Punnoor
2008-02-15 17:11     ` Zan Lynx
2008-02-15 21:32       ` FD Cami [this message]
2008-02-16 16:13       ` Lukas Hejtmanek
2008-02-20 17:04       ` Zdenek Kabelac
2008-02-15 15:59   ` Lukas Hejtmanek
2008-02-15 16:22     ` Jeffrey E. Hundstad
2008-02-15 17:36     ` Roger Heflin
2008-02-15 17:24 ` Paulo Marques
2008-02-16 16:15   ` Lukas Hejtmanek
2008-02-16 17:20 ` Pavel Machek
2008-02-20 18:48   ` Lukas Hejtmanek
2008-02-21 23:50     ` Giuliano Pochini
2008-02-17 19:38 ` Linda Walsh
2008-02-28 17:14   ` Bill Davidsen
