As part of HP's Open Source and Linux Organization's Performance and Scalability Group, I've noticed what looks to be a regression in U320 SCSI performance coming down the pike.

Background: These measurements were performed on Red Hat RHEL4 update 2 (2.6.9-based) and a "generic" 2.6.14-based kernel. I used an HP RX4640 with 4 dual-U320 LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 08) adapters, each with two buses containing 5 72GB/15k U320 drives (for a total of 40 drives). The runs were done in single-user mode (to minimize effects from other tasks), and results are reported in megabytes per second. I ran tests for various block-sized IOs (1KB, 2KB, 4KB, ... 256KB per IO).

While trying to gauge other things, I ran across this set of strange phenomena on single-disk IOs (meaning: tests run one at a time on an individual disk):

Read averages:

BS(KB)   RHEL4U2    2.6.14   %-diff
------   -------   -------   ------
     1      23.8      12.3   -48.4%
     2      46.6      21.3   -54.2%
     4      87.5      44.0   -49.7%
     8      89.9      51.8   -42.4%
    16      89.9      74.2   -17.4%
    32      89.9      89.9     0.0%
    64      89.9      89.9     0.0%
   128      89.9      89.9     0.0%
   256      89.9      89.8    -0.1%

For reads, we see a huge average drop for block sizes below 32KB.

Write averages:

BS(KB)   RHEL4U2    2.6.14   %-diff
------   -------   -------   ------
     1       3.6       3.6    -0.2%
     2       8.2       8.1    -1.3%
     4      19.7      19.0    -3.2%
     8      52.6      49.7    -5.5%
    16      84.8      78.3    -7.6%
    32      84.7      78.8    -7.0%
    64      84.8      79.9    -5.8%
   128      84.6      81.8    -3.3%
   256      84.7      82.0    -3.1%

For writes, we see a smaller but across-the-board decrease in average performance.
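(For reference, the %-diff column above is presumably just the relative change of the new kernel's throughput against the old one's; this little sketch is my own arithmetic, not the original harness, and the helper name pct_diff is made up:

```python
def pct_diff(old_mbps, new_mbps):
    """Percent change of new vs. old throughput (negative = regression)."""
    return (new_mbps - old_mbps) / old_mbps * 100.0

# 1KB reads: 23.8 MB/s on RHEL4U2 vs. 12.3 MB/s on 2.6.14
print(round(pct_diff(23.8, 12.3), 1))
```

This works out to -48.3 rather than the table's -48.4%, which suggests the published percentages were computed from unrounded raw throughput values.)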
The real kicker, though, came out when I looked at individual disk results.

Going back to the read cases: here are a couple of (representative) drives - all the data is added as an attachment. I see some drives with monstrously large performance reductions, like:

(0,1)        1      2      4      8     16    32    64   128   256
RHEL4U2   23.8   46.8   87.8   89.6   89.5  89.6  89.5  89.5  89.5
2.6.14     3.9    7.8   15.6   31.2   62.5  89.6  89.5  89.5  89.6
Differ  -83.6% -83.3% -82.2% -65.2% -30.2%  0.0%  0.0%  0.0%  0.0%

((0,1) means target 1 on bus 0, and the columns represent different transfer sizes, 1K up to 256K.)

I also see some that show little if any difference:

(6,4)        1      2      4      8     16    32    64   128   256
RHEL4U2   23.9   46.9   88.5   90.0   89.9  89.9  89.9  90.0  89.9
2.6.14.2  24.1   47.0   88.5   90.0   89.9  90.0  89.9  89.9  89.9
Differ    0.8%   0.2%   0.0%   0.0%   0.0%  0.1%  0.0% -0.1%  0.0%

(Which tends to explain why the average drop is "only" about 50%, as listed in the first table.)

And there are other drives which show a mix of unchanged and changed results:

(3,1)        1      2      4      8     16    32    64   128   256
RHEL4U2   23.8   46.1   86.6   91.2   91.2  91.2  91.2  91.2  91.2
2.6.14.2  23.9   46.3   66.9   31.2   91.2  91.2  91.2  91.2  91.2
Differ    0.4%   0.4% -22.7% -65.8%   0.0%  0.0%  0.0%  0.0%  0.0%

(3,2)        1      2      4      8     16    32    64   128   256
RHEL4U2   23.6   46.0   86.1   89.8   89.7  89.8  89.8  89.8  89.8
2.6.14.2  23.8   46.2   15.6   89.8   62.5  89.8  89.8  89.8  89.3
Differ    0.8%   0.4% -81.9%   0.0% -30.3%  0.0%  0.0%  0.0% -0.6%

One thing I will point out: while I haven't looked to see whether the same drives consistently show similar issues, I have done multiple runs with the same characteristics noted in the averages - meaning, these are not single-run peculiarities in the general sense.

[Next I am going to see if I can match drives from run-to-run to see whether the same ones exhibit "bad" read regressions, or whether that moves around...
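(One way to read the (0,1) numbers, my own arithmetic, not part of the measurements: on 2.6.14 the throughput doubles exactly with each doubling of block size up to 16KB, which is what you'd expect if the drive is pinned at a fixed IO rate rather than a fixed bandwidth. A quick sanity check:

```python
# 2.6.14 read throughput for drive (0,1), MB/s, at 1..16 KB per IO
mbps = {1: 3.9, 2: 7.8, 4: 15.6, 8: 31.2, 16: 62.5}

# Implied IOs per second at each block size (taking 1 MB as 1024 KB)
iops = {bs: mb * 1024.0 / bs for bs, mb in mbps.items()}
for bs, rate in sorted(iops.items()):
    print(f"{bs:2d}KB: ~{rate:.0f} IO/s")
```

Every size works out to roughly 4000 IO/s, i.e. about 250us per IO, which would be consistent with each small request paying a fixed per-IO cost rather than being merged or read ahead into larger transfers.)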
I am also going to check out runs on straight 2.6.9 (instead of using Red Hat's kernel...)]

For the write case, the drives exhibited similar differences, but on a much smaller scale - here are some examples:

(0,4)        1      2      4      8     16    32    64   128   256
RHEL4U2    3.6    8.2   19.6   54.7   86.2  85.9  86.0  85.9  85.9
2.6.14.2   3.5    8.1   19.0   52.0   79.5  80.0  80.9  83.1  83.4
Differ   -2.8%  -1.2%  -3.1%  -4.9%  -7.8% -6.9% -5.9% -3.3% -2.9%

(0,5)        1      2      4      8     16    32    64   128   256
RHEL4U2    3.6    8.1   19.5   53.5   86.8  86.7  86.7  86.3  86.7
2.6.14.2   3.6    8.1   19.0   51.1   79.8  80.5  81.3  83.0  83.9
Differ    0.0%   0.0%  -2.6%  -4.5%  -8.1% -7.2% -6.2% -3.8% -3.2%

(1,1)        1      2      4      8     16    32    64   128   256
RHEL4U2    3.6    8.2   19.6   55.0   86.7  87.3  87.1  87.0  86.8
2.6.14.2   3.5    8.1   19.0   52.0   80.2  80.9  82.7  84.0  84.0
Differ   -2.8%  -1.2%  -3.1%  -5.5%  -7.5% -7.3% -5.1% -3.4% -3.2%

(1,2)        1      2      4      8     16    32    64   128   256
RHEL4U2    3.6    8.1   19.5   53.6   86.2  86.1  85.9  86.1  86.5
2.6.14.2   3.6    8.1   19.0   51.1   78.9  80.0  82.0  83.1  83.3
Differ    0.0%   0.0%  -2.6%  -4.7%  -8.5% -7.1% -4.5% -3.5% -3.7%

So, the natural question - has anybody else noted such things? Any ideas as to why?

Thanks,
Alan D. Brunelle