* IBM xSeries stops responding during RAID1 reconstruction
From: Niccolo Rigacci @ 2006-06-14  8:53 UTC
  To: linux-raid

Hi all,

I have a new IBM xSeries 206m with two SATA drives. I installed 
Debian Testing (Etch) on it and configured software RAID as shown:

Personalities : [raid1]
md1 : active raid1 sdb5[1] sda5[0]
      1951744 blocks [2/2] [UU]

md2 : active raid1 sdb6[1] sda6[0]
      2931712 blocks [2/2] [UU]

md3 : active raid1 sdb7[1] sda7[0]
      39061952 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      5855552 blocks [2/2] [UU]

I am experiencing this problem: whenever a volume is reconstructing 
(syncing), the system stops responding. The machine is alive, 
because it responds to ping, and the console is responsive, but I 
cannot get past the login prompt. It seems that all disk activity 
is delayed and blocked.

When the sync is complete, the machine starts to respond 
perfectly again.

Any hints on how to start debugging?


Nothing strange appears in the log file:

Jun 13 18:48:04 paros kernel: md: bind<sdb1>
Jun 13 18:48:04 paros kernel: RAID1 conf printout:
Jun 13 18:48:04 paros kernel:  --- wd:1 rd:2
Jun 13 18:48:04 paros kernel:  disk 0, wo:0, o:1, dev:sda1
Jun 13 18:48:04 paros kernel:  disk 1, wo:1, o:1, dev:sdb1
Jun 13 18:48:04 paros kernel: md: syncing RAID array md0
Jun 13 18:48:04 paros kernel: md: minimum _guaranteed_ 
    reconstruction speed: 1000 KB/sec/disc.
Jun 13 18:48:04 paros kernel: md: using maximum available idle IO 
    bandwidth (but not more than 200000 KB/sec) for 
    reconstruction.
Jun 13 18:48:04 paros kernel: md: using 128k window, over a total 
    of 5855552 blocks.
Jun 13 18:57:30 paros kernel: md: md0: sync done.
Jun 13 18:57:30 paros kernel: RAID1 conf printout:
Jun 13 18:57:30 paros kernel:  --- wd:2 rd:2
Jun 13 18:57:30 paros kernel:  disk 0, wo:0, o:1, dev:sda1
Jun 13 18:57:30 paros kernel:  disk 1, wo:0, o:1, dev:sdb1


These are the IDE controllers as identified by lspci:

0000:00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 
             Family) IDE Controller (rev 01)
0000:00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 
             Family) Serial ATA Storage Controllers cc=IDE
             (rev 01)


And these are the SATA drives detected by the kernel:

ata1: SATA max UDMA/133 cmd 0x30C8 ctl 0x30BE bmdma 0x3090
      irq 233
ata2: SATA max UDMA/133 cmd 0x30C0 ctl 0x30BA bmdma 0x3098
      irq 233
ata1: dev 1 cfg 49:2f00 82:346b 83:7fe9 84:4773 85:3469 86:3c01 
      87:4763 88:207f
ata1: dev 1 ATA-7, max UDMA/133, 156312576 sectors: LBA48
ata1: dev 1 configured for UDMA/133
scsi0 : ata_piix
ata2: dev 1 cfg 49:2f00 82:346b 83:7fe9 84:4773 85:3469 86:3c01 
      87:4763 88:207f
ata2: dev 1 ATA-7, max UDMA/133, 156312576 sectors: LBA48
ata2: dev 1 configured for UDMA/133
scsi1 : ata_piix
  Vendor: ATA       Model: HDS728080PLA380   Rev: PF2O
  Type:   Direct-Access                    ANSI SCSI revision: 05
  Vendor: ATA       Model: HDS728080PLA380   Rev: PF2O
  Type:   Direct-Access                    ANSI SCSI revision: 05
SCSI device sda: 156312576 512-byte hdwr sectors (80032 MB)
SCSI device sda: drive cache: write back
SCSI device sda: 156312576 512-byte hdwr sectors (80032 MB)
SCSI device sda: drive cache: write back
 sda: sda1 sda2 < sda5 sda6 sda7 >
sd 0:0:1:0: Attached scsi disk sda
SCSI device sdb: 156312576 512-byte hdwr sectors (80032 MB)
SCSI device sdb: drive cache: write back
SCSI device sdb: 156312576 512-byte hdwr sectors (80032 MB)
SCSI device sdb: drive cache: write back
 sdb: sdb1 sdb2 < sdb5 sdb6 sdb7 >
sd 1:0:1:0: Attached scsi disk sdb

-- 
Niccolo Rigacci
Firenze - Italy

Iraq, peace mission: 38355 dead - www.iraqbodycount.net

* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Bill Cizek @ 2006-06-14 15:46 UTC
  To: Niccolo Rigacci; +Cc: linux-raid

Niccolo Rigacci wrote:

>Hi to all,
>
>I have a new IBM xSeries 206m with two SATA drives. I installed 
>Debian Testing (Etch) on it and configured software RAID as shown:
>
>Personalities : [raid1]
>md1 : active raid1 sdb5[1] sda5[0]
>      1951744 blocks [2/2] [UU]
>
>md2 : active raid1 sdb6[1] sda6[0]
>      2931712 blocks [2/2] [UU]
>
>md3 : active raid1 sdb7[1] sda7[0]
>      39061952 blocks [2/2] [UU]
>
>md0 : active raid1 sdb1[1] sda1[0]
>      5855552 blocks [2/2] [UU]
>
>I am experiencing this problem: whenever a volume is reconstructing 
>(syncing), the system stops responding. The machine is alive, 
>because it responds to ping, and the console is responsive, but I 
>cannot get past the login prompt. It seems that all disk activity 
>is delayed and blocked.
>
>When the sync is complete, the machine starts to respond 
>perfectly again.
>
>Any hints on how to start debugging?
>  
>

I ran into a similar problem using kernel 2.6.16.14 on an ASUS 
motherboard: when I mirrored two SATA drives it seemed to block all 
other disk I/O until the sync was complete.

My symptoms were the same: all consoles were non-responsive, and 
when I tried to log in it just sat there until the sync was 
complete.

I was able to work around this by lowering 
/proc/sys/dev/raid/speed_limit_max to a value below my disk 
throughput (~50 MB/s) as follows:

$ echo 45000 > /proc/sys/dev/raid/speed_limit_max

That kept my system usable but didn't address the underlying problem 
of the RAID resync not being appropriately throttled. I ended up 
configuring my system differently, so this became a moot point for 
me.
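
For reference, the current limits can be read back the same way 
(values are in KB/s; 200000 and 1000 are the defaults that your own 
kernel log reports):

$ cat /proc/sys/dev/raid/speed_limit_max
200000
$ cat /proc/sys/dev/raid/speed_limit_min
1000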

Hope this helps,
Bill





* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Niccolo Rigacci @ 2006-06-15  9:12 UTC
  To: Bill Cizek; +Cc: linux-raid

On Wed, Jun 14, 2006 at 10:46:09AM -0500, Bill Cizek wrote:
> Niccolo Rigacci wrote:
> 
> >When the sync is complete, the machine starts to respond 
> >perfectly again.
> >
> I was able to work around this by lowering 
> /proc/sys/dev/raid/speed_limit_max to a value below my disk 
> throughput (~50 MB/s) as follows:
> 
> $ echo 45000 > /proc/sys/dev/raid/speed_limit_max

Thanks!

This hack seems to solve my problem too. So apparently the RAID 
subsystem does not detect a proper speed at which to throttle the 
sync.

Can you please send me some details of your system?

- SATA chipset (or motherboard model)?
- Disks make/model?
- Do you have the config file of the kernel that you were running
  (look at the /boot/config-<version> file)?

I wonder if kernel preemption is to blame, or if the burst speed 
of the disks can fool the throttle calculation.

-- 
Niccolo Rigacci
Firenze - Italy

Iraq, peace mission: 38355 dead - www.iraqbodycount.net

* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Neil Brown @ 2006-06-15 10:13 UTC
  To: Niccolo Rigacci; +Cc: Bill Cizek, linux-raid

On Thursday June 15, niccolo@rigacci.org wrote:
> On Wed, Jun 14, 2006 at 10:46:09AM -0500, Bill Cizek wrote:
> > Niccolo Rigacci wrote:
> > 
> > >When the sync is complete, the machine starts to respond 
> > >perfectly again.
> > >
> > I was able to work around this by lowering 
> > /proc/sys/dev/raid/speed_limit_max to a value below my disk 
> > throughput (~50 MB/s) as follows:
> > 
> > $ echo 45000 > /proc/sys/dev/raid/speed_limit_max
> 
> Thanks!
> 
> This hack seems to solve my problem too. So it seems that the 
> RAID subsystem does not detect a proper speed to throttle the 
> sync.

The RAID subsystem doesn't try to detect a 'proper' speed.
When there is nothing else happening, it just drives the disks as fast
as they will go.
If this is causing a lockup, then there is something else wrong: 
no single process should be able to clog up the whole system just 
by writing constantly to the disks.

Maybe you could get the result of 
  alt-sysrq-P
or even
  alt-sysrq-T
while the system seems to hang.
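
If the magic SysRq key is disabled, it can be enabled at runtime, 
and the same dumps can be triggered without the keyboard (a sketch, 
assuming sysrq support is compiled into your kernel):

  echo 1 > /proc/sys/kernel/sysrq
  echo p > /proc/sysrq-trigger     # like alt-sysrq-P
  echo t > /proc/sysrq-trigger     # like alt-sysrq-T

The output lands in the kernel log (dmesg or the console).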

NeilBrown

* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Niccolo Rigacci @ 2006-06-17 10:01 UTC
  To: Neil Brown, linux-raid

On Thursday 15 June 2006 12:13, you wrote:
> If this is causing a lockup, then there is something else wrong, just
> as any single process should not - by writing constantly to disks - be
> able to clog up the whole system.
>
> Maybe if you could get the result of
>   alt-sysrq-P

I tried some kernel changes, enabling HyperThreading on the (single) 
P4 processor and setting CONFIG_PREEMPT_VOLUNTARY=y, but with no 
success.

During the lockup, Alt-SysRq-P constantly says:

  EIP is at mwait_idle+0x1a/0x2e

Alt-SysRq-T shows - among other processes - the syncing MD thread 
and the locked-up bash; these are the hand-copied call traces:

md3_resync
  device_barrier
  default_wake_function
  sync_request
  __generic_unplug_device
  md_do_sync
  schedule
  md_thread
  md_thread
  kthread
  kthread
  kernel_thread_helper

bash
  io_schedule
  sync_buffer
  sync_buffer
  __wait_on_bit_lock
  sync_buffer
  out_of_line_wait_on_bit_lock
  wake_bit_function
  __lock_buffer
  do_get_write_access
  __ext3_get_inode_loc
  journal_get_write_access
  ext3_reserve_inode_write
  ext3_mark_inode_dirty
  ext3_dirty_inode
  __mark_inode_dirty
  update_atime
  vfs_readdir
  sys_getdents64
  filldir64
  syscall_call


This is also the output of top, which runs regularly during the lockup:

top - 11:40:41 up 7 min,  2 users,  load average: 8.70, 4.92, 2.04
Tasks:  70 total,   1 running,  69 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.2% us,  0.7% sy,  0.0% ni, 98.7% id,  0.0% wa,  0.0% hi,  0.5% si
Mem:    906212k total,    58620k used,   847592k free,     3420k buffers
Swap:  1951736k total,        0k used,  1951736k free,    23848k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  829 root      10  -5     0    0    0 S    1  0.0   0:01.70 md3_raid1
 2823 root      10  -5     0    0    0 D    1  0.0   0:01.62 md3_resync
    1 root      16   0  1956  656  560 S    0  0.1   0:00.52 init
    2 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
    3 root      34  19     0    0    0 S    0  0.0   0:00.00 ksoftirqd/0
    4 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
    5 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
    6 root      34  19     0    0    0 S    0  0.0   0:00.00 ksoftirqd/1
    7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
    8 root      10  -5     0    0    0 S    0  0.0   0:00.01 events/0
    9 root      10  -5     0    0    0 S    0  0.0   0:00.01 events/1
   10 root      10  -5     0    0    0 S    0  0.0   0:00.00 khelper
   11 root      10  -5     0    0    0 S    0  0.0   0:00.00 kthread
   14 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/0
   15 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/1
   16 root      11  -5     0    0    0 S    0  0.0   0:00.00 kacpid
  152 root      20   0     0    0    0 S    0  0.0   0:00.00 pdflush
  153 root      15   0     0    0    0 D    0  0.0   0:00.00 pdflush
  154 root      17   0     0    0    0 S    0  0.0   0:00.00 kswapd0
  155 root      11  -5     0    0    0 S    0  0.0   0:00.00 aio/0
  156 root      11  -5     0    0    0 S    0  0.0   0:00.00 aio/1
  755 root      10  -5     0    0    0 S    0  0.0   0:00.00 kseriod
  796 root      10  -5     0    0    0 S    0  0.0   0:00.00 ata/0
  797 root      11  -5     0    0    0 S    0  0.0   0:00.00 ata/1
  799 root      11  -5     0    0    0 S    0  0.0   0:00.00 scsi_eh_0
  800 root      11  -5     0    0    0 S    0  0.0   0:00.00 scsi_eh_1
  825 root      15   0     0    0    0 S    0  0.0   0:00.00 kirqd
  831 root      10  -5     0    0    0 D    0  0.0   0:00.00 md2_raid1
  833 root      10  -5     0    0    0 S    0  0.0   0:00.00 md1_raid1
  834 root      10  -5     0    0    0 D    0  0.0   0:00.00 md0_raid1
  835 root      15   0     0    0    0 D    0  0.0   0:00.00 kjournald
  932 root      18  -4  2192  584  368 S    0  0.1   0:00.19 udevd
 1698 root      10  -5     0    0    0 S    0  0.0   0:00.00 khubd
 2031 root      22   0     0    0    0 S    0  0.0   0:00.00 kjournald
 2032 root      15   0     0    0    0 D    0  0.0   0:00.00 kjournald
 2142 daemon    16   0  1708  364  272 S    0  0.0   0:00.00 portmap
 2464 root      16   0  2588  932  796 S    0  0.1   0:00.01 syslogd

-- 
Niccolo Rigacci
Firenze - Italy

War against Iraq? Not in my name!

* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Gabor Gombas @ 2006-06-19 15:05 UTC
  To: Bill Cizek; +Cc: Niccolo Rigacci, linux-raid

On Wed, Jun 14, 2006 at 10:46:09AM -0500, Bill Cizek wrote:

> I was able to work around this by lowering 
> /proc/sys/dev/raid/speed_limit_max to a value below my disk 
> throughput (~50 MB/s) as follows:

IMHO a much better fix is to use the cfq I/O scheduler during the
rebuild. The default anticipatory scheduler gives horrible latencies
and can make the machine appear 'locked up' under heavy I/O load,
such as a RAID reconstruction or heavy database usage.

The price of cfq is lower throughput (higher RAID rebuild time) than
with the anticipatory I/O scheduler.
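
You can check which scheduler a block device is currently using; the 
active one is shown in brackets (sample output, assuming all four 
schedulers are compiled in):

$ cat /sys/block/sda/queue/scheduler
noop [anticipatory] deadline cfq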

Gabor

-- 
     ---------------------------------------------------------
     MTA SZTAKI Computer and Automation Research Institute
                Hungarian Academy of Sciences
     ---------------------------------------------------------

* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Niccolo Rigacci @ 2006-06-20 13:08 UTC
  To: linux-raid; +Cc: Bill Cizek, Gabor Gombas

On Mon, Jun 19, 2006 at 05:05:56PM +0200, Gabor Gombas wrote:
> 
> IMHO a much better fix is to use the cfq I/O scheduler during the
> rebuild.

Yes, changing the default I/O scheduler to DEFAULT_CFQ solves the 
problem very well: I get over 40 MB/s resync speed with no lock-ups 
at all!

Thank you very much; I think we can turn this into a new FAQ entry.

Do you know if it is possible to switch the scheduler at runtime?

-- 
Niccolo Rigacci
Firenze - Italy

Iraq, peace mission: 38475 dead - www.iraqbodycount.net

* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Gabor Gombas @ 2006-06-20 13:27 UTC
  To: Niccolo Rigacci; +Cc: linux-raid, Bill Cizek

On Tue, Jun 20, 2006 at 03:08:59PM +0200, Niccolo Rigacci wrote:

> Do you know if it is possible to switch the scheduler at runtime?

echo cfq > /sys/block/<disk>/queue/scheduler
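
or, to cover every disk at once (a rough sketch, assuming all the 
array members are sd* devices):

for q in /sys/block/sd*/queue/scheduler; do echo cfq > $q; done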

Gabor

-- 
     ---------------------------------------------------------
     MTA SZTAKI Computer and Automation Research Institute
                Hungarian Academy of Sciences,
     Laboratory of Parallel and Distributed Systems
     Address   : H-1132 Budapest Victor Hugo u. 18-22. Hungary
     Phone/Fax : +36 1 329-78-64 (secretary)
     W3        : http://www.lpds.sztaki.hu
     ---------------------------------------------------------

* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Mr. James W. Laferriere @ 2006-06-20 15:00 UTC
  To: Gabor Gombas; +Cc: linux-raid maillist

Hello Gabor,

On Tue, 20 Jun 2006, Gabor Gombas wrote:
> On Tue, Jun 20, 2006 at 03:08:59PM +0200, Niccolo Rigacci wrote:
>> Do you know if it is possible to switch the scheduler at runtime?
> echo cfq > /sys/block/<disk>/queue/scheduler

At least one can do an ls of the /sys/block area and then do an
automated echo cfq down the tree. Does anyone know of a method to
set a default scheduler? Scanning down a list or manually
maintaining a list seems like a bug waiting to happen. Tia, JimL
-- 
+----------------------------------------------------------------------+
| James   W.   Laferriere |   System    Techniques   | Give me VMS     |
| Network        Engineer | 3600 14th Ave SE #20-103 |  Give me Linux  |
| babydr@baby-dragons.com |  Olympia ,  WA.   98501  |   only  on  AXP |
+----------------------------------------------------------------------+

* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Niccolo Rigacci @ 2006-06-20 15:45 UTC
  To: Mr. James W. Laferriere; +Cc: linux-raid maillist

> At least one can do an ls of the /sys/block area and then do an
> automated echo cfq down the tree. Does anyone know of a method to
> set a default scheduler?

Maybe I didn't understand the question...

You decide which schedulers are available at kernel compile time; 
also at compile time, you decide which is the default I/O 
scheduler.
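
For a 2.6.16-era kernel the relevant options look like this (a 
sketch from a sample .config; the exact names may vary between 
versions):

CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
CONFIG_DEFAULT_CFQ=y
CONFIG_DEFAULT_IOSCHED="cfq"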

-- 
Niccolo Rigacci
Firenze - Italy

Iraq, peace mission: 38475 dead - www.iraqbodycount.net

* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Gabor Gombas @ 2006-06-20 16:38 UTC
  To: Mr. James W. Laferriere; +Cc: linux-raid maillist

On Tue, Jun 20, 2006 at 08:00:13AM -0700, Mr. James W. Laferriere wrote:

> At least one can do an ls of the /sys/block area and then do an
> automated echo cfq down the tree. Does anyone know of a method to
> set a default scheduler?

RTFM: Documentation/kernel-parameters.txt in the kernel source.
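
In short: the parameter is "elevator=". For example, appending

  elevator=cfq

to the kernel command line in grub or lilo selects cfq at boot.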

Gabor

-- 
     ---------------------------------------------------------
     MTA SZTAKI Computer and Automation Research Institute
                Hungarian Academy of Sciences
     ---------------------------------------------------------

* Re: IBM xSeries stops responding during RAID1 reconstruction
From: Bill Davidsen @ 2006-06-26 14:52 UTC
  To: Mr. James W. Laferriere; +Cc: Gabor Gombas, linux-raid maillist

Mr. James W. Laferriere wrote:

>     Hello Gabor,
>
> On Tue, 20 Jun 2006, Gabor Gombas wrote:
>
>> On Tue, Jun 20, 2006 at 03:08:59PM +0200, Niccolo Rigacci wrote:
>>
>>> Do you know if it is possible to switch the scheduler at runtime?
>>
>> echo cfq > /sys/block/<disk>/queue/scheduler
>
>     At least one can do an ls of the /sys/block area and then do an
>     automated echo cfq down the tree. Does anyone know of a method
>     to set a default scheduler? Scanning down a list or manually
>     maintaining a list seems like a bug waiting to happen. Tia, JimL

Thought I posted this... it can be set in the kernel build or in the 
boot parameters from grub/lilo.

Second thought: set it to cfq by default, then at the END of 
rc.local, if no arrays are rebuilding, change it to something else 
if you like.
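
Something like this at the end of rc.local, as an untested sketch 
(it assumes all array members are sd* devices and greps /proc/mdstat 
for a running resync):

# switch away from cfq only if no array is resyncing
if ! grep -q resync /proc/mdstat; then
    for q in /sys/block/sd*/queue/scheduler; do
        echo anticipatory > "$q"
    done
fi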

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979

