Linux software RAID maintainers and developers,

Two months ago I wrote to the Linux kernel mailing list regarding a recurring condition expressed as "BUG: soft lockup - CPU#3 stuck for 61s!". I initially battled this problem in both Fedora 10 and Fedora 11. Rafael J. Wysocki suggested that I update the kernel (to 2.6.31-rc4 or later) and see if the problem resurfaced. I then used kernel 2.6.31-0.94.rc4.fc12.x86_64 and found that the problem still continued, but notably only when the md data-check process was run. You can read the last post to the LKML thread (with links to the entire thread) here:

http://lkml.org/lkml/2009/8/6/387

The md data-check is run in Fedora 11 by /etc/cron.weekly/raid-check, which is a little shell script that looks like this:

------------------------------------------------------
#!/bin/bash

# Trigger a "check" pass on every active md array that supports it.
for dev in `grep "^md.*: active" /proc/mdstat | cut -f 1 -d ' '`; do
        [ -f /sys/block/$dev/md/sync_action ] && \
                echo "check" > /sys/block/$dev/md/sync_action
done
------------------------------------------------------

I have disabled this weekly data-check, and since doing so I have not encountered any soft lockup (or any other problem, for that matter).

For reference, you can see in this thread that I am not the only one to have this problem:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/212684

It may be important to note that I am running a 4-disk RAID1 array on 500GB Western Digital SATA drives. I've attached:

- the output of 'lspci -vv' as "lspci.out";
- the output of 'hdparm -I /dev/sda' as "hdparm.out" (/dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd are all identical drive types);
- the output of 'for i in 0 1 2; do mdadm --detail /dev/md$i; done' as "mdadm.out".

Please let me know if this is a configuration problem, a kernel bug, or something else, and how to fix it so that I can safely re-enable the md data-check.

Thanks,
Lee
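
P.S. In case it helps anyone reproduce the array-selection step without touching a live system: the grep/cut pipeline in raid-check just extracts the names of active arrays from /proc/mdstat. Here is a standalone sketch of that same pipeline run against an inline sample of mdstat output (the sample text is made up for illustration; real device lists will differ):

```shell
#!/bin/bash
# Sketch: extract active md array names exactly as raid-check does,
# but from a hypothetical inline copy of /proc/mdstat so it can be
# run safely anywhere.
mdstat_sample='Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]
      255936 blocks [4/4] [UUUU]
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
      1959808 blocks [4/4] [UUUU]
unused devices: <none>'

# Same pipeline as the cron script, minus the write to sync_action.
echo "$mdstat_sample" | grep "^md.*: active" | cut -f 1 -d ' '
```

On the sample above this prints "md0" and "md1", one per line; the cron script then writes "check" into /sys/block/$dev/md/sync_action for each name.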