* Help Please! mdadm hangs when using nbd or gnbd
From: Brian Kelly @ 2006-02-23 0:25 UTC (permalink / raw)
To: linux-raid
Hail to the Great Linux RAID Gurus! I humbly seek any assistance you
can offer.
I am building a couple of 20 TB logical volumes from six storage nodes
each offering two 8TB raw storage devices built with Broadcom RAIDCore
BC4852 SATA cards. Each storage node (called leadstor1-6) needs to
publish its two raw devices with iSCSI, nbd or gnbd over a gigabit
network which the head node (leadstor) combines into a RAID 5 volume
using mdadm.
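As a sanity check on the target size (an editorial sketch, not part of the original setup; it uses the ~3.9 TB per-export size that nbd-client negotiates in the demonstration below), a 6-column RAID 5 keeps n-1 columns of data:

```shell
# Usable size of a 6-column RAID 5 built from the ~4 TB exported devices.
# Sizes are in KB, as nbd-client reports them.
members=6
member_kb=3899484160
usable_kb=$(( (members - 1) * member_kb ))
echo "usable: ${usable_kb} KB (~$(( usable_kb / 1000000000 )) TB)"
```

which lands at roughly the 20 TB per volume described above.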
My problem is that when using nbd or gnbd, the initial build of the
array on the head node quickly halts, as if a deadlock has occurred. I
see this with both RAID 1 and RAID 5 configurations, regardless of the
size of the devices the storage nodes publish. Here's a demonstration
with two 4 TB drives being mirrored using nbd:
*** Begin Demonstration ***
[root@leadstor nbd-2.8.3]# uname -a
Linux leadstor.unidata.ucar.edu 2.6.15-1.1831_FC4smp #1 SMP Tue Feb 7
13:51:52 EST 2006 x86_64 x86_64 x86_64 GNU/Linux
>>> I start by preparing the system for nbd and md devices
[root@leadstor ~]# modprobe nbd
[root@leadstor ~]# cd /dev
[root@leadstor dev]# ./MAKEDEV nb
[root@leadstor dev]# ./MAKEDEV md
>>> I then mount two 4TB volumes from leadstor5 and leadstor6
[root@leadstor dev]# cd /opt/nbd-2.8.3
[root@leadstor nbd-2.8.3]# ./nbd-client leadstor5 2002 /dev/nb5
Negotiation: ..size = 3899484160KB
bs=1024, sz=3899484160
[root@leadstor nbd-2.8.3]# ./nbd-client leadstor6 2002 /dev/nb6
Negotiation: ..size = 3899484160KB
bs=1024, sz=3899484160
>>> I confirm the volumes are mounted properly
[root@leadstor nbd-2.8.3]# fdisk -l /dev/nb5
Disk /dev/nb5: 3993.0 GB, 3993071779840 bytes
255 heads, 63 sectors/track, 485463 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/nb5 doesn't contain a valid partition table
[root@leadstor nbd-2.8.3]# fdisk -l /dev/nb6
Disk /dev/nb6: 3993.0 GB, 3993071779840 bytes
255 heads, 63 sectors/track, 485463 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/nb6 doesn't contain a valid partition table
>>> I prepare the drives to be used in mdadm
[root@leadstor nbd-2.8.3]# mdadm -V
mdadm - v1.12.0 - 14 June 2005
[root@leadstor nbd-2.8.3]# mdadm --zero-superblock /dev/nb5
[root@leadstor nbd-2.8.3]# mdadm --zero-superblock /dev/nb6
>>> I create a device to mirror the two volumes
[root@leadstor nbd-2.8.3]# mdadm --create /dev/md2 -l 1 -n 2 /dev/nb5
/dev/nb6
mdadm: array /dev/md2 started.
>>> And watch the progress in /proc/mdstat
[root@leadstor nbd-2.8.3]# date
Wed Feb 22 16:18:55 MST 2006
[root@leadstor nbd-2.8.3]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 nbd6[1] nbd5[0]
3899484096 blocks [2/2] [UU]
[>....................] resync = 0.0% (1408/3899484096)
finish=389948.2min speed=156K/sec
md1 : active raid1 sdb3[1] sda3[0]
78188288 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
128384 blocks [2/2] [UU]
unused devices: <none>
>>> But no more has been done a minute later
[root@leadstor nbd-2.8.3]# date
Wed Feb 22 16:19:49 MST 2006
[root@leadstor nbd-2.8.3]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 nbd6[1] nbd5[0]
3899484096 blocks [2/2] [UU]
[>....................] resync = 0.0% (1408/3899484096)
finish=2599655.1min speed=23K/sec
md1 : active raid1 sdb3[1] sda3[0]
78188288 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
128384 blocks [2/2] [UU]
unused devices: <none>
>>> And later still, no more of the resync has been done
[root@leadstor nbd-2.8.3]# date
Wed Feb 22 16:20:38 MST 2006
[root@leadstor nbd-2.8.3]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 nbd6[1] nbd5[0]
3899484096 blocks [2/2] [UU]
[>....................] resync = 0.0% (1408/3899484096)
finish=4679379.2min speed=13K/sec
md1 : active raid1 sdb3[1] sda3[0]
78188288 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
128384 blocks [2/2] [UU]
unused devices: <none>
>>> At this point, the resync is stuck and the system is idle. I have
left it overnight, but it progresses no further. 100% of the time this
test stops at 1408 on the rebuild. With other configurations the number
changes (for example, it was 1280 for a 6-column RAID 5), but it always
halts at the same spot.
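The decaying speed readings are themselves consistent with a fully stalled sync: if no new blocks complete, any done/elapsed average shrinks toward zero and the finish estimate balloons. A rough sketch (md really averages over a sliding window of marks, so this is not its exact algorithm; the elapsed values below are back-solved from the reported speeds, purely to show the shape of the decay):

```shell
# With the resync frozen at 1408 KB done, a done/elapsed average decays
# toward zero as time passes, so the finish estimate grows without bound.
done_kb=1408
for elapsed_s in 9 61 108 900; do
    echo "after ${elapsed_s}s: ~$(( done_kb / elapsed_s )) K/sec"
done
```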
>>> Nothing is logged in the system files
[root@leadstor nbd-2.8.3]# tail -15 /var/log/messages
Feb 22 15:48:35 leadstor kernel: parport: PnPBIOS parport detected.
Feb 22 15:48:35 leadstor kernel: parport0: PC-style at 0x378, irq 7 [PCSPP]
Feb 22 15:48:35 leadstor kernel: lp0: using parport0 (interrupt-driven).
Feb 22 15:48:35 leadstor kernel: lp0: console ready
Feb 22 15:48:37 leadstor fstab-sync[2585]: removed all generated mount
points
Feb 22 16:01:00 leadstor sshd(pam_unix)[3000]: session opened for user
root by root(uid=0)
Feb 22 16:06:10 leadstor kernel: nbd: registered device at major 43
Feb 22 16:07:43 leadstor sshd(pam_unix)[3199]: session opened for user
root by root(uid=0)
Feb 22 16:18:51 leadstor kernel: md: bind<nbd5>
Feb 22 16:18:51 leadstor kernel: md: bind<nbd6>
Feb 22 16:18:51 leadstor kernel: raid1: raid set md2 active with 2 out
of 2 mirrors
Feb 22 16:18:51 leadstor kernel: md: syncing RAID array md2
Feb 22 16:18:51 leadstor kernel: md: minimum _guaranteed_ reconstruction
speed: 1000 KB/sec/disc.
Feb 22 16:18:51 leadstor kernel: md: using maximum available idle IO
bandwidth (but not more than 200000 KB/sec) for reconstruction.
Feb 22 16:18:51 leadstor kernel: md: using 128k window, over a total of
3899484096 blocks.
>>> And one last check of the rebuild
[root@leadstor nbd-2.8.3]# date
Wed Feb 22 16:33:50 MST 2006
[root@leadstor nbd-2.8.3]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 nbd6[1] nbd5[0]
3899484096 blocks [2/2] [UU]
[>....................] resync = 0.0% (1408/3899484096)
finish=38994826.8min speed=1K/sec
md1 : active raid1 sdb3[1] sda3[0]
78188288 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
128384 blocks [2/2] [UU]
unused devices: <none>
>>> Now if I try to abort the build, the command also hangs
[root@leadstor nbd-2.8.3]# mdadm --misc --stop /dev/md2
* never returns*
>>> But I can connect with another shell and poke around
Last login: Wed Feb 22 16:07:43 2006 from robin.unidata.ucar.edu
[root@leadstor ~]# ps -eaf | grep md
root 578 9 0 15:48 ? 00:00:00 [md1_raid1]
root 579 9 0 15:48 ? 00:00:00 [md0_raid1]
root 2181 1 0 15:48 ? 00:00:00 mdadm --monitor --scan
-f --pid-file=/var/run/mdadm/mdadm.pid
root 2258 1 0 15:48 ? 00:00:00 [krfcommd]
root 3298 9 0 16:18 ? 00:00:00 [md2_raid1]
root 3299 9 0 16:18 ? 00:00:00 [md2_resync]
root 3384 3049 0 16:35 pts/1 00:00:00 mdadm --misc --stop /dev/md2
root 3426 3399 0 16:37 pts/2 00:00:00 grep md
>>> But all the md2 processes are wedged and cannot be killed
[root@leadstor ~]# kill -9 3298 3299 3384
[root@leadstor ~]# ps -eaf | grep md
root 578 9 0 15:48 ? 00:00:00 [md1_raid1]
root 579 9 0 15:48 ? 00:00:00 [md0_raid1]
root 2181 1 0 15:48 ? 00:00:00 mdadm --monitor --scan
-f --pid-file=/var/run/mdadm/mdadm.pid
root 2258 1 0 15:48 ? 00:00:00 [krfcommd]
root 3298 9 0 16:18 ? 00:00:00 [md2_raid1]
root 3299 9 0 16:18 ? 00:00:00 [md2_resync]
root 3384 3049 0 16:35 pts/1 00:00:00 mdadm --misc --stop /dev/md2
root 3431 3399 0 16:38 pts/2 00:00:00 grep md
>>> So, to get rid of these processes I reboot the system and have to
power down the box since the shutdown process stops when unloading
iptables or md
>>> The head node runs Fedora Core 4 with the latest 2.6.15 SMP kernel,
since it was mentioned that some deadlock issues were fixed there. It
has two 1600 MHz Opteron CPUs and 2 GB of RAM. The storage nodes run
FC4 with 2.6.14, but with a single CPU and 1 GB of RAM. All systems use
nbd-2.8.3, and the problem is the same when using gnbd from Red Hat's
GFS cluster software. The systems interconnect over a dedicated gigabit
copper network.
*** End Demonstration ***
This problem exists with both uniprocessor and SMP kernels. When I
repeat the procedure on one of the uniprocessor systems, the resync gets
further but still hangs. Here's where it hung on leadstor1:
[root@leadstor1 nbd-2.8.3]# uname -a
Linux leadstor1.unidata.ucar.edu 2.6.14-1.1653_FC4 #1 Tue Dec 13
21:34:16 EST 2005 x86_64 x86_64 x86_64 GNU/Linux
[root@leadstor1 nbd-2.8.3]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 nbd6[1] nbd5[0]
3899484096 blocks [2/2] [UU]
[>....................] resync = 0.0% (1409024/3899484096)
finish=1936.4min speed=33548K/sec
md1 : active raid1 sdb3[1] sda3[0]
78188288 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
128384 blocks [2/2] [UU]
unused devices: <none>
In addition to nbd and gnbd, I have used iSCSI to mount the storage
nodes' volumes. With iet-0.4.12b and open-iscsi-1.0-485, mdadm worked
well. I'm trying other solutions because the head node would always
crash before getting through a rebuild, which I suspect is a problem
with open-iscsi, the hardware, or both. I was also hoping mdadm would
handle failures better when using native block devices.
I've spent the last few days trying different combinations to pinpoint
the problem, but configuration seems to make no difference. Any FC4
system trying to RAID 1 or RAID 5 nbd volumes of any size from any
system(s) will hang. However, an array built without nbd works fine.
So I would like to get this nbd/mdadm configuration working, but I am
uncertain where to look next. Ideally I would determine where the hang
is happening, but my code and kernel debugging skills are limited.
Would anyone have suggestions for good tests to run or where else I
should look?
My thanks in advance and my apologies if I'm missing something blatantly
obvious.
Brian
* Re: Help Please! mdadm hangs when using nbd or gnbd
From: JaniD++ @ 2006-02-23 20:36 UTC (permalink / raw)
To: Brian Kelly; +Cc: linux-raid
----- Original Message -----
From: "Brian Kelly" <bkelly@unidata.ucar.edu>
To: <linux-raid@vger.kernel.org>
Sent: Thursday, February 23, 2006 1:25 AM
Subject: Help Please! mdadm hangs when using nbd or gnbd
> [...]
Hello,
I use a similar system, and I have some ideas:
The general nbd deadlock is fixed in the 2.6.16 series!
Is the head node an x86_64 system, or 32-bit?
Please try the setup with 1.99 TB nbd devices and let me know whether
it works.
(I use my system like this: nbd-server 1230 /dev/md0 2097000 )
Check these when the sync has stopped:
1.
ps fax | grep nbd-client
2.
dd if=/dev/nbX of=/dev/null bs=1M count=1 (or more)
And check the dmesg messages after the dd!
3. Make sure no network packets are being lost.
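The checks above can be sketched as a small script. This is only an illustration — `probe` is a made-up helper name, and /dev/zero stands in for /dev/nbX so the snippet runs anywhere; point it at the real nbd device in practice:

```shell
# Check #2: read 1 MiB from a device and capture dd's report; follow up
# with dmesg to see whether the read provoked any kernel messages.
probe() {
    dd if="$1" of=/dev/null bs=1M count=1 2>&1
}
probe /dev/zero        # a healthy read reports "1+0 records in"
```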
Cheers,
Janos
* Re: Help Please! mdadm hangs when using nbd or gnbd
From: Brian Kelly @ 2006-02-24 23:48 UTC (permalink / raw)
To: linux-raid; +Cc: JaniD++
Hi JaniD++, thank you very much for your reply.
I downloaded, patched, compiled and installed the 2.6.16-rc4 kernel on
the system where the mdadm rebuilds quickly stop. It made no
difference; the rebuild stopped in the same place.
Yes, this server is a dual-processor x86_64 system with two AMD Opteron
242 CPUs. The mdadm rebuild hangs with both the standard and SMP
kernels.
I have duplicated the problem with devices totalling less than 2 TB,
but I have not tried passing a size argument to nbd-server. Let me do
that now:
Last login: Fri Feb 24 13:49:38 2006 from leadstor.unidata.ucar.edu
[root@leadstor1 ~]# uname -a
Linux leadstor1.unidata.ucar.edu 2.6.14-1.1653_FC4 #1 Tue Dec 13
21:34:16 EST 2005 x86_64 x86_64 x86_64 GNU/Linux
[root@leadstor1 ~]# modprobe nbd
[root@leadstor1 ~]# cd /dev
[root@leadstor1 dev]# ./MAKEDEV md
[root@leadstor1 dev]# ./MAKEDEV nb
[root@leadstor1 dev]# cd /opt/nbd-2.8.3
[root@leadstor1 nbd-2.8.3]# ./nbd-client leadstor5 2002 /dev/nb5
Negotiation: ..size = 2047KB
bs=1024, sz=2047
[root@leadstor1 nbd-2.8.3]# ./nbd-client leadstor6 2002 /dev/nb6
Negotiation: ..size = 2047KB
bs=1024, sz=2047
[root@leadstor1 nbd-2.8.3]# fdisk -l /dev/nb5
Disk /dev/nb5: 2 MB, 2096128 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/nb5 doesn't contain a valid partition table
[root@leadstor1 nbd-2.8.3]# fdisk -l /dev/nb6
Disk /dev/nb6: 2 MB, 2096128 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/nb6 doesn't contain a valid partition table
>>> Hmm, 2 MB drives? The nbd-server argument must be in bytes. Well,
let's see if it builds.
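The arithmetic backs up that guess — treating JaniD's 2097000 argument as bytes reproduces the negotiated size exactly (a sketch; the units of nbd-server's size argument may differ between versions):

```shell
# 2097000 interpreted as bytes, reported by nbd-client in whole KB:
size_arg=2097000
echo "$(( size_arg / 1024 )) KB"    # integer floor of 2047.85, matching "size = 2047KB"
```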
[root@leadstor1 nbd-2.8.3]# mdadm --create /dev/md2 -l 1 -n 2 /dev/nb5
/dev/nb6
mdadm: array /dev/md2 started.
[root@leadstor1 nbd-2.8.3]# date
Fri Feb 24 14:31:54 MST 2006
[root@leadstor1 nbd-2.8.3]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 nbd6[1] nbd5[0]
1920 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
78188288 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
128384 blocks [2/2] [UU]
unused devices: <none>
>>> Okay, that worked, but all the resyncs go a short way before
quitting. Let me expand the drives to 2GB and try again.
[root@leadstor1 nbd-2.8.3]# ./nbd-client leadstor5 2002 /dev/nb5
Negotiation: ..size = 2047851KB
bs=1024, sz=2047851
[root@leadstor1 nbd-2.8.3]# ./nbd-client leadstor6 2002 /dev/nb6
Negotiation: ..size = 2047851KB
bs=1024, sz=2047851
[root@leadstor1 nbd-2.8.3]# mdadm --create /dev/md2 -l 1 -n 2 /dev/nb5
/dev/nb6
mdadm: array /dev/md2 started.
[root@leadstor1 nbd-2.8.3]# date
Fri Feb 24 14:50:10 MST 2006
[root@leadstor1 nbd-2.8.3]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 nbd6[1] nbd5[0]
2047744 blocks [2/2] [UU]
[==>..................] resync = 13.1% (268480/2047744)
finish=0.4min speed=67120K/sec
md1 : active raid1 sdb3[1] sda3[0]
78188288 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
128384 blocks [2/2] [UU]
unused devices: <none>
[root@leadstor1 nbd-2.8.3]# date
Fri Feb 24 14:50:24 MST 2006
[root@leadstor1 nbd-2.8.3]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 nbd6[1] nbd5[0]
2047744 blocks [2/2] [UU]
[=========>...........] resync = 46.8% (958656/2047744)
finish=0.2min speed=63910K/sec
md1 : active raid1 sdb3[1] sda3[0]
78188288 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
128384 blocks [2/2] [UU]
unused devices: <none>
[root@leadstor1 nbd-2.8.3]# date
Fri Feb 24 14:50:45 MST 2006
[root@leadstor1 nbd-2.8.3]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 nbd6[1] nbd5[0]
2047744 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
78188288 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
128384 blocks [2/2] [UU]
unused devices: <none>
>>> Hmm, that worked too. Okay, let me do more testing...
Okay, I'm back. A 2,000,000 MB device stops syncing immediately, but a
1,000,000 MB device gets 10% of the way before it hangs. I don't have
time for more testing tonight, but this does appear to be a size issue
after all. What confuses me is that smaller volumes seem to have the
same problem, yet work for a longer period before failing. More testing
might show whether it happens at the same point each time or varies
like a race condition. I could also see if I can duplicate the problem
with dd, as you suggested.
So it seems nbd may not be able to handle large block devices (LBDs).
Can anyone confirm whether this is the case? Would any other tests help
pinpoint the cause of my problem?
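For reference, the classic boundary that large-block-device support removes can be computed directly: with 32-bit counts of 512-byte sectors, a block device tops out at 2^32 sectors, i.e. 2 TiB, and the devices in the failing runs are well past it. This is a sketch of the arithmetic only, not a diagnosis:

```shell
# The classic 2 TiB limit: 2^32 sectors of 512 bytes, expressed in KB to
# match nbd-client's units.
limit_kb=$(( (1 << 32) * 512 / 1024 ))
dev_kb=3899484160                      # per-device size from the demonstration
echo "limit: ${limit_kb} KB, device: ${dev_kb} KB"
if [ "$dev_kb" -gt "$limit_kb" ]; then
    echo "device exceeds 32-bit sector addressing"
fi
```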
Thanks again for any help.
Brian
JaniD++ wrote:
> [...]