From: Bill Davidsen
Subject: Re: Need to remove failed disk from RAID5 array
Date: Mon, 23 Jul 2012 00:14:08 -0400
Message-ID: <500CCF90.5030002@tmr.com>
References: <50071C0A.8080307@tmr.com> <20120719091611.22e16100@natsu> <500818D5.4080208@tmr.com> <20120720070801.498902ba@notabene.brown>
To: Alex, Linux RAID

Alex wrote:
> Hi,
>
>>> That's a good argument for not using "whole disk" array members; a partition
>>> can be started at a good offset and may perform better. As for the speed,
>>> since it is reconstructing the array data (hope the other drives are okay),
>>> every block written requires three blocks read and a reconstruct in CPU and
>>> memory. You can use "blockdev" to increase readahead, and set the devices to
>>> use the deadline scheduler; that _may_ improve things somewhat, but you have
>>> to read three blocks to write one, so it's not going to be fast.
>>>
>>
>> Read-ahead has absolutely no effect in this context.
>>
>> Read-ahead is a function of the page cache. When filling the page cache,
>> read-ahead suggests how much more to read than has been asked for.
>>
>> resync/recovery does not use the page cache, so the readahead setting is
>> irrelevant.
>>
>> IO scheduler choice may make a difference.
>
> It's already set to cfq. I assume that would be preferred over deadline?
>
> I set it on the actual disk devices. Should I set it on the md0/1 devices
> as well? It is currently 'none'.
>
> /sys/devices/virtual/block/md0/queue/scheduler

For what it's worth, my experience has been that deadline works better for
writes to arrays. In arrays with only a few drives, sometimes markedly better.

-- 
Bill Davidsen
"We have more to fear from the bungling of the incompetent than from the
machinations of the wicked." - from Slashdot
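
[For anyone following the thread, a minimal sketch of the scheduler switch
being discussed, run as root from the shell. The device names (sda, md0) are
illustrative; adjust them for your own system.]

  # List the available schedulers; the active one is shown in brackets.
  cat /sys/block/sda/queue/scheduler
  #   noop deadline [cfq]

  # Switch a component disk to deadline for the duration of the resync.
  echo deadline > /sys/block/sda/queue/scheduler

  # The md device itself has no elevator (its scheduler reads 'none'), so the
  # setting only matters on the underlying disks, not on md0/md1.
  cat /sys/devices/virtual/block/md0/queue/scheduler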