public inbox for linux-kernel@vger.kernel.org
* RE: Complete I/O starvation with 3ware raid on 2.6
@ 2003-09-25 18:19 Adam Radford
  2003-09-28 22:48 ` Aaron Lehmann
  0 siblings, 1 reply; 13+ messages in thread
From: Adam Radford @ 2003-09-25 18:19 UTC (permalink / raw)
  To: 'Nick Piggin', Aaron Lehmann; +Cc: Andrew Morton, linux-kernel

You should set CONFIG_3W_XXXX_CMD_PER_LUN in your .config to 16 or 32.

-Adam
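[Editorial note: Adam's tip could look like this in practice. A minimal sketch operating on a throwaway sample file rather than a real kernel .config; the option name and the suggested value 32 come from the mail above, but the stock value 256 is illustrative.]

```shell
# Create a sample .config line with an illustrative default,
# then rewrite it to one of the values Adam suggests (16 or 32).
printf 'CONFIG_3W_XXXX_CMD_PER_LUN=256\n' > config.sample
sed 's/^\(CONFIG_3W_XXXX_CMD_PER_LUN\)=.*/\1=32/' config.sample > config.new
cat config.new
```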

-----Original Message-----
From: Nick Piggin [mailto:piggin@cyberone.com.au]
Sent: Thursday, September 25, 2003 3:29 AM
To: Aaron Lehmann
Cc: Andrew Morton; linux-kernel@vger.kernel.org
Subject: Re: Complete I/O starvation with 3ware raid on 2.6




Aaron Lehmann wrote:

>On Thu, Sep 25, 2003 at 07:13:32PM +1000, Nick Piggin wrote:
>
>>But the load average will be 11 because there are processes stuck in the
>>kernel somewhere in D state. Have a look for them. They might be things
>>like pdflush, kswapd, scsi_*, etc.
>>
>
>They're pdflush and kjournald. I don't have sysrq support compiled in
>at the moment.
>

OK, in that case it would be good if you could get a couple of sysrq-T
snapshots and post them to the list.
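[Editorial note: a hedged sketch of capturing such snapshots. The trigger needs root and CONFIG_MAGIC_SYSRQ, so those lines are shown as comments; the sample trace lines below are fabricated stand-ins, not real output.]

```shell
# Dumping all task states to the kernel log (run as root):
#   echo t > /proc/sysrq-trigger
#   dmesg > sysrq-t.1
# Filtering a saved dump for tasks stuck in uninterruptible (D) sleep;
# these sample lines are stand-ins for real sysrq-T output:
printf 'pdflush   D c0123456\nkjournald D c0123457\nbash      S c0123458\n' > sysrq-t.sample
grep ' D ' sysrq-t.sample
```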

>
>I've noticed the problem does not occur when the raid can absorb data
>faster than the other drive can throw data at it. My naive mind is
>pretty sure that this is just an issue of way too much being queued
>

Although your system (usr, lib, bin etc) is on the IDE disk, right?
And that is only doing reads?

How does your system behave if you do just the read side (i.e. copying
to /dev/null), or just the write side (copying from /dev/zero)?
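[Editorial note: a sketch of splitting the copy into its two halves with dd. The device and path names in the comments are examples, not Aaron's actual setup; the executable part demonstrates the same pattern against a plain file.]

```shell
# Read side only (example device name):
#   dd if=/dev/hda of=/dev/null bs=1M
# Write side only (example mount point):
#   dd if=/dev/zero of=/raid/testfile bs=1M count=1000
# Self-contained demonstration of the same pattern:
dd if=/dev/zero of=zeros.bin bs=1M count=4 2>/dev/null   # write side
dd if=zeros.bin of=/dev/null bs=1M 2>/dev/null           # read side
wc -c < zeros.bin
```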

>
>for writing. If someone could tell me how to control this parameter,
>I'd definitely give it a try [tomorrow]. All I've found on my own is
>#define TW_Q_LENGTH 256 in 3w-xxxx.h, and I'm not sure if this is the
>right thing to change or safe to change.
>

That looks like it, try it at 4.
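[Editorial note: Nick's suggestion could be sketched like this, editing a sample copy of the header rather than the real driver source. The define name and its stock value of 256 come from Aaron's mail; the real file lives in the kernel source tree as 3w-xxxx.h.]

```shell
# Lower the driver queue-depth define, then rebuild the module/kernel.
printf '#define TW_Q_LENGTH 256\n' > 3w-xxxx.h.sample
sed 's/^#define TW_Q_LENGTH .*/#define TW_Q_LENGTH 4/' 3w-xxxx.h.sample > 3w-xxxx.h.new
cat 3w-xxxx.h.new
```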

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

* Complete I/O starvation with 3ware raid on 2.6
@ 2003-09-25  7:12 Aaron Lehmann
  2003-09-25  7:43 ` Andrew Morton
  0 siblings, 1 reply; 13+ messages in thread
From: Aaron Lehmann @ 2003-09-25  7:12 UTC (permalink / raw)
  To: linux-kernel

I'm running bkcvs HEAD on a newly installed system, and started
copying files over to my RAID 5 from older IDE disks. When I copy
these files, the system becomes unusable. Specifically, any disk
access on the 3ware array, no matter how simple (even starting 'vi' on
a file), takes minutes to complete, or never completes at all.
Suspending the copying process doesn't help much either: the LEDs on
the card keep blinking for about 30 seconds after the suspension. This
happens whether the IDE drive is using DMA or not. It seems that some
kind of insane queueing is going on. Are there parameters worth
playing with? Should I try the deadline I/O scheduler?
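[Editorial note: on 2.6 kernels of this era the deadline scheduler could be selected at boot with the "elevator=deadline" kernel parameter. An illustrative boot-loader fragment; the kernel image name and root device are hypothetical.]

```shell
# Illustrative grub config fragment (not a real installation):
#   title  Linux 2.6 (deadline I/O scheduler)
#   kernel /boot/vmlinuz-2.6.0-test5 root=/dev/hda1 elevator=deadline
```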



Thread overview: 13+ messages
2003-09-25 18:19 Complete I/O starvation with 3ware raid on 2.6 Adam Radford
2003-09-28 22:48 ` Aaron Lehmann
  -- strict thread matches above, loose matches on Subject: below --
2003-09-25  7:12 Aaron Lehmann
2003-09-25  7:43 ` Andrew Morton
2003-09-25  7:50   ` Aaron Lehmann
2003-09-25  8:02     ` Nick Piggin
2003-09-25  7:58   ` Aaron Lehmann
2003-09-25  8:10     ` Andrew Morton
2003-09-25  8:31       ` Aaron Lehmann
2003-09-25  9:13         ` Nick Piggin
2003-09-25 10:15           ` Aaron Lehmann
2003-09-25 10:25             ` Jens Axboe
2003-09-25 10:29             ` Nick Piggin
