From: "erich" <erich@areca.com.tw>
To: "Chris Caputo" <ccaputo@alt.net>
Cc: "(廣安科技)安可O" <billion.wu@areca.com.tw>,
"Al Viro" <viro@ftp.linux.org.uk>,
"Andrew Morton" <akpm@osdl.org>,
"Randy.Dunlap" <rdunlap@xenotime.net>,
"Matti Aarnio" <matti.aarnio@zmailer.org>,
linux-kernel@vger.kernel.org, dax@gurulabs.com, ccaputo@alt.net
Subject: Re: new Areca driver in 2.6.16-rc6-mm2 appears to be broken
Date: Thu, 30 Mar 2006 16:54:22 +0800
Message-ID: <007701c653d7$8b8ee670$b100a8c0@erich2003>
In-Reply-To: <Pine.LNX.4.64.0603212310070.20655@nacho.alt.net>
Dear Chris Caputo,

Could you tell me which file system was on the volume you tested with
bonnie++? Does this issue appear only with a particular driver update?

I have now done more than two weeks of continuous testing on three machines
with the bonnie++ and iometer benchmark utilities. I have not seen this
phenomenon on ext3 or reiserfs file systems, but I can reproduce the message
immediately by copying a large (900MB) file from an ARECA RAID volume
formatted with ext2.

The ext2 file system seems to have this bug only after Linux kernel version
2.6.3; the same operation on 2.6.3 itself works fine. I am now researching
the file system kernel source, and I hope you can help me clarify what is
happening.

Best Regards,
Erich Chen
----- Original Message -----
From: "Chris Caputo" <ccaputo@alt.net>
To: "Erich Chen" <erich@areca.com.tw>
Sent: Wednesday, March 22, 2006 7:45 AM
Subject: new Areca driver in 2.6.16-rc6-mm2 appears to be broken
> Erich,
>
> The new Areca driver in 2.6.16-rc6-mm2 is broken as far as I can tell.
>
> I applied the Areca driver in Linux 2.6.16-rc6-mm2 to a 2.6.15.6 as
> follows:
>
> cd /usr/src
> wget
> http://www.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.16-rc6/2.6.16-rc6-mm2/broken-out/areca-raid-linux-scsi-driver.patch
> wget
> http://www.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.16-rc6/2.6.16-rc6-mm2/broken-out/areca-raid-linux-scsi-driver-update4.patch
> cd /usr/src/linux
> cat ../areca-raid-linux-scsi-driver.patch | patch -p1
> cat ../areca-raid-linux-scsi-driver-update4.patch | patch -p1
>
> After compiling a new 2.6.15.6 kernel with the driver I was able to boot
> and things appeared fine until I ran a bonnie++ test. During the test the
> following started spewing endlessly and the system was unusable:
>
> ...
> attempt to access beyond end of device
> sdb1: rw=0, want=134744080, limit=128002016
> attempt to access beyond end of device
> sdb1: rw=0, want=134744080, limit=128002016
> attempt to access beyond end of device
> sdb1: rw=0, want=134744080, limit=128002016
> attempt to access beyond end of device
> sdb1: rw=0, want=134744080, limit=128002016
> attempt to access beyond end of device
> sdb1: rw=0, want=134744080, limit=128002016
> attempt to access beyond end of device
> sdb1: rw=0, want=134744080, limit=128002016
> attempt to access beyond end of device
> sdb1: rw=0, want=134744080, limit=128002016
> ...
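
[Editor's note: as a sanity check on the numbers in the log, treating "want"
and "limit" as counts of 512-byte sectors (and taking rw=0 to mean a read),
the requested sector lies roughly 3 GiB past the end of sdb1. A minimal
shell sketch of the arithmetic:]

```shell
# "want" and "limit" from the log are 512-byte sector counts.
want=134744080    # sector the kernel tried to access
limit=128002016   # size of sdb1 in sectors

echo "partition size : $(( limit * 512 / 1024 / 1024 / 1024 )) GiB"   # ~61 GiB
echo "requested off. : $(( want  * 512 / 1024 / 1024 / 1024 )) GiB"   # ~64 GiB
echo "overshoot      : $(( (want - limit) * 512 / 1024 / 1024 )) MiB"
printf 'want in hex    : 0x%X\n' "$want"
```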
>
> When only "areca-raid-linux-scsi-driver.patch" is used, the system works
> fine for bonnie++ tests and general usage.
>
> The system is as follows:
>
> AMD Opteron 280 dual-core 2.4ghz. Revision E6, 256KB L1, 2048KB L2.
> SuperMicro H8DAE (rev 1.11)
> 8 gigabytes (4 * 2GB PC3200/DDR400 REG ECC)
> Areca Tekram ARC-1160ML SATAII 16-port multi-lane with 256 megs RAM
> Areca ARC-6120 Battery Backup Module
> Four (4) Seagate ST3250823AS 250GB
> Twelve (12) Western Digital WD2500JD 250GB
>
> Info on the RAID config is below.
>
> Please let me know how I can assist.
>
> Thank you,
> Chris Caputo
>
> ---
>
> # areca rsf info
> Num Name Disks TotalCap FreeCap DiskChannels State
> ===============================================================================
> 1 Raid Set # 00 14 3500.0GB 0.0GB FG3456789ABCDE Normal
> ===============================================================================
> GuiErrMsg<0x00>: Success.
>
> # areca vsf info
> # Name Raid# Level Capacity Ch/Id/Lun State
> ===============================================================================
> 1 ARC-1160-VOL#00 1 Raid0+1 19.0GB 00/00/00 Normal
> 2 ARC-1160-VOL#01 1 Raid0+1 1731.0GB 00/01/00 Normal
> ===============================================================================
> GuiErrMsg<0x00>: Success.
>
> # areca disk info
> Ch ModelName Serial# FirmRev Capacity State
> ===============================================================================
> 1  ST3250823AS      4ND1JKDW         3.03      250.1GB  HotSpare
> 2  ST3250823AS      4ND1HEKE         3.03      250.1GB  HotSpare
> 3  ST3250823AS      4ND1DEFN         3.03      250.1GB  RaidSet Member(1)
> 4  ST3250823AS      4ND1E37B         3.03      250.1GB  RaidSet Member(1)
> 5  WDC WD2500JD-00  WD-WMAEH1416638  02.05D02  250.1GB  RaidSet Member(1)
> 6  WDC WD2500JD-00  WD-WMAEH1415477  02.05D02  250.1GB  RaidSet Member(1)
> 7  WDC WD2500JD-00  WD-WMAEH1408943  02.05D02  250.1GB  RaidSet Member(1)
> 8  WDC WD2500JD-00  WD-WMAEH1428940  02.05D02  250.1GB  RaidSet Member(1)
> 9  WDC WD2500JD-00  WD-WMAEH1416508  02.05D02  250.1GB  RaidSet Member(1)
> 10 WDC WD2500JD-00  WD-WMAEH1416317  02.05D02  250.1GB  RaidSet Member(1)
> 11 WDC WD2500JD-00  WD-WMAEH1596552  02.05D02  250.1GB  RaidSet Member(1)
> 12 WDC WD2500SD-01  WD-WMAL71480351  08.02D08  250.1GB  RaidSet Member(1)
> 13 WDC WD2500JD-00  WD-WMAEH1408933  02.05D02  250.1GB  RaidSet Member(1)
> 14 WDC WD2500JD-00  WD-WMAEH1398573  02.05D02  250.1GB  RaidSet Member(1)
> 15 WDC WD2500JD-00  WD-WMAEH1409027  02.05D02  250.1GB  RaidSet Member(1)
> 16 WDC WD2500SD-01  WD-WMAL72085064  08.02D08  250.1GB  RaidSet Member(1)
> ===============================================================================
> GuiErrMsg<0x00>: Success.