linux-kernel.vger.kernel.org archive mirror
* bio too big - in nested raid setup
@ 2010-01-24 18:49 "Ing. Daniel Rozsnyó"
  2010-01-25 15:25 ` Marti Raudsepp
  0 siblings, 1 reply; 9+ messages in thread
From: "Ing. Daniel Rozsnyó" @ 2010-01-24 18:49 UTC (permalink / raw)
  To: linux-kernel

Hello,
   I am having trouble with a nested RAID setup - when one array is added
to the other, "bio too big device md0" messages start appearing:

bio too big device md0 (144 > 8)
bio too big device md0 (248 > 8)
bio too big device md0 (32 > 8)

   From internet searches I have found no solution or report of an error
like mine, just a note that data corruption can occur when this happens.
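   For the record, the two numbers in each message are sizes in 512-byte
sectors, so the "8" corresponds to the 4 KB max_sectors_kb shown further
below. A quick sketch of the arithmetic (assuming the usual 512-byte
sector size):

```python
# Sizes in "bio too big device md0 (144 > 8)" are 512-byte sectors.
SECTOR_SIZE = 512

def kb_to_sectors(max_sectors_kb: int) -> int:
    """Convert a queue limit in KB to the equivalent sector count."""
    return max_sectors_kb * 1024 // SECTOR_SIZE

print(kb_to_sectors(4))              # limit after the --add -> 8 sectors
print(kb_to_sectors(127))            # original limit -> 254 sectors
print(144 * SECTOR_SIZE // 1024)     # a rejected 144-sector bio is 72 KB
```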

Description:

   My setup is the following - one 2TB and four 500GB drives. The goal
is to mirror the 2TB drive against a linear array of the four smaller
drives.

   So.. the state without the error above is this:

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active linear sdb1[0] sde1[3] sdd1[2] sdc1[1]
       1953535988 blocks super 1.1 0k rounding

md0 : active raid1 sda2[0]
       1953447680 blocks [2/1] [U_]
       bitmap: 233/233 pages [932KB], 4096KB chunk

unused devices: <none>

   With these block request sizes:

# cat /sys/block/md{0,1}/queue/max_{,hw_}sectors_kb
127
127
127
127
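   (In case the four values look ambiguous: bash expands the braces
device-first, so they read md0 max, md0 max_hw, md1 max, md1 max_hw. A
quick way to confirm the order, needing bash but no md devices:)

```shell
# Bash brace expansion reproduces the order the four values are printed in:
echo /sys/block/md{0,1}/queue/max_{,hw_}sectors_kb
# -> /sys/block/md0/queue/max_sectors_kb /sys/block/md0/queue/max_hw_sectors_kb
#    /sys/block/md1/queue/max_sectors_kb /sys/block/md1/queue/max_hw_sectors_kb
```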

   Now I add the four-drive array to the mirror - and the system starts
showing the bio error on any significant disk activity (probably writes
only). The reboot/shutdown process is full of these errors.

   The step that messes up the system (ignore the "re-added" - the same
thing happened the very first time I constructed the four-drive array an
hour ago):

# mdadm /dev/md0 --add /dev/md1
mdadm: re-added /dev/md1

# cat /sys/block/md{0,1}/queue/max_{,hw_}sectors_kb
4
4
127
127

The dmesg is just showing this:

md: bind<md1>
RAID1 conf printout:
  --- wd:1 rd:2
  disk 0, wo:0, o:1, dev:sda2
  disk 1, wo:1, o:1, dev:md1
md: recovery of RAID array md0
md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 1953447680 blocks.


   And as soon as a write occurs on the array:

bio too big device md0 (40 > 8)

   Removing md1 from md0 does not help the situation; I need to reboot
the machine.

   The md0 array carries LVM, with root, swap, portage, distfiles and
home logical volumes inside it.

   My system is:

# uname -a
Linux desktop 2.6.32-gentoo-r1 #2 SMP PREEMPT Sun Jan 24 12:06:13 CET 2010 i686 Intel(R) Xeon(R) CPU X3220 @ 2.40GHz GenuineIntel GNU/Linux


Thanks for any help,

Daniel




Thread overview: 9+ messages
2010-01-24 18:49 bio too big - in nested raid setup "Ing. Daniel Rozsnyó"
2010-01-25 15:25 ` Marti Raudsepp
2010-01-25 18:27   ` Milan Broz
2010-01-28  2:28     ` Neil Brown
2010-01-28  9:24       ` "Ing. Daniel Rozsnyó"
2010-01-28 10:50         ` Neil Brown
2010-01-28 12:07           ` Boaz Harrosh
2010-01-28 22:14             ` Neil Brown
2010-01-31 15:42               ` Boaz Harrosh
