* Re: PERC5 - MegaRaid-SAS problems..
       [not found] <Pine.LNX.4.56.0610121246400.11816@lion.drogon.net>
@ 2006-10-17 10:44 ` Gordon Henderson
  2006-10-17 14:34   ` Patrick_Boyd
  2006-10-17 17:58   ` Andrew Moise
  0 siblings, 2 replies; 8+ messages in thread

From: Gordon Henderson @ 2006-10-17 10:44 UTC (permalink / raw)
  To: linux-poweredge, linux-raid

For anyone who cares about my saga so far ;-) ...

I got physical access to the unit this morning, set up the drives as 15
RAID-0 logical drives and booted up Linux, which then attached all the
drives in the usual way, so I can see all 15 of them. The down-side is
that I can't use any sort of SMART monitoring over the bus (as pointed
out by someone else).

Anyway, it's currently in a RAID-1 configuration (which I used for some
initial soak tests) and seems to be just fine:

Filesystem            Size  Used Avail Use% Mounted on
/dev/md9              6.8T   33M  6.7T   1% /mnt

And it's more than fast enough too, although it'll be a lot slower when I
rebuild it in RAID-6 mode.

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bertha           2G           266336  99 203934  61           421850  40 872.9   1

md9 : active raid0 sdo[14] sdn[13] sdm[12] sdl[11] sdk[10] sdj[9] sdi[8] sdh[7]
      sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
      7317748800 blocks 64k chunks

Cheers,

Gordon
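Gordon's mail doesn't show the commands used to build the array; a minimal
sketch of creating this kind of md device over the fifteen controller-exported
single-disk logical drives might look like the following, where the device
names, md number and 64k chunk size come from the /proc/mdstat output above
and everything else is an assumption rather than his actual command history:

    # Sketch only: an assumed reconstruction, not Gordon's actual commands.
    # Each physical disk is exported by the PERC5 as its own single-disk
    # RAID-0 logical drive, so Linux sees them as /dev/sda .. /dev/sdo.

    # Striped (RAID-0) array across all fifteen drives, 64k chunks,
    # matching the mdstat line above:
    mdadm --create /dev/md9 --level=0 --chunk=64 --raid-devices=15 /dev/sd[a-o]

    # The same drives built as RAID-6 instead (two drives' worth of parity),
    # which is what the array is later rebuilt as:
    mdadm --create /dev/md9 --level=6 --chunk=64 --raid-devices=15 /dev/sd[a-o]

    # Watch the initial resync progress:
    cat /proc/mdstat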
* RE: PERC5 - MegaRaid-SAS problems..
  2006-10-17 10:44 ` PERC5 - MegaRaid-SAS problems Gordon Henderson
@ 2006-10-17 14:34   ` Patrick_Boyd
  2006-10-17 17:58   ` Andrew Moise
  1 sibling, 0 replies; 8+ messages in thread

From: Patrick_Boyd @ 2006-10-17 14:34 UTC (permalink / raw)
  To: gordon, linux-poweredge, linux-raid

You can use either LSI or Dell utilities to monitor the SMART status of
the disks.

Patrick Boyd
Dell Storage Software Engineer
(512) 728-3182

-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Gordon Henderson
Sent: Tuesday, October 17, 2006 5:45 AM
To: linux-poweredge-Lists; linux-raid@vger.kernel.org
Subject: Re: PERC5 - MegaRaid-SAS problems..

For anyone who cares about my saga so far ;-) ...

I got physical access to the unit this morning, set up the drives as 15
RAID-0 logical drives and booted up Linux, which then attached all the
drives in the usual way, so I can see all 15 of them. The down-side is
that I can't use any sort of SMART monitoring over the bus (as pointed
out by someone else).

Anyway, it's currently in a RAID-1 configuration (which I used for some
initial soak tests) and seems to be just fine:

Filesystem            Size  Used Avail Use% Mounted on
/dev/md9              6.8T   33M  6.7T   1% /mnt

And it's more than fast enough too, although it'll be a lot slower when I
rebuild it in RAID-6 mode.

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bertha           2G           266336  99 203934  61           421850  40 872.9   1

md9 : active raid0 sdo[14] sdn[13] sdm[12] sdl[11] sdk[10] sdj[9] sdi[8] sdh[7]
      sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
      7317748800 blocks 64k chunks

Cheers,

Gordon
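Patrick doesn't name specific tools. Purely as an illustration of the kind
of thing he means, and with the caveat that tool availability and option
support are assumptions (smartmontools only gained its megaraid pass-through
in releases newer than were common at the time), drive health behind a
MegaRAID-family controller is typically checked along these lines:

    # Illustrative only: which utilities exist on a given box depends on
    # the LSI/Dell packages installed, so treat the details as assumptions.

    # LSI MegaCli: list the physical drives with their media-error and
    # predictive-failure counters as seen by the controller:
    MegaCli -PDList -aALL

    # Recent smartmontools can tunnel SMART queries through the megaraid
    # driver; N is the controller's device ID for the disk in question:
    smartctl -a -d megaraid,N /dev/sda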
* Re: PERC5 - MegaRaid-SAS problems..
  2006-10-17 10:44 ` PERC5 - MegaRaid-SAS problems Gordon Henderson
  2006-10-17 14:34   ` Patrick_Boyd
@ 2006-10-17 17:58   ` Andrew Moise
  2006-10-17 18:13     ` Greg Dickie
  2006-10-17 20:22     ` Gordon Henderson
  1 sibling, 2 replies; 8+ messages in thread

From: Andrew Moise @ 2006-10-17 17:58 UTC (permalink / raw)
  To: Gordon Henderson; +Cc: linux-poweredge, linux-raid

On 10/17/06, Gordon Henderson <gordon@drogon.net> wrote:
> Anyway, it's currently in a RAID-1 configuration (which I used for some
> initial soak tests) and seems to be just fine:
>
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/md9              6.8T   33M  6.7T   1% /mnt

Incidentally, what filesystem are you using? We may be setting up a
similar situation soon, and I'm considering ext3 and XFS. ext3 apparently
has a low max fs size, though, and I've heard bad things about XFS's
behavior on unclean shutdowns...
* Re: PERC5 - MegaRaid-SAS problems..
  2006-10-17 17:58 ` Andrew Moise
@ 2006-10-17 18:13   ` Greg Dickie
  2006-10-17 20:24     ` Gordon Henderson
  1 sibling, 1 reply; 8+ messages in thread

From: Greg Dickie @ 2006-10-17 18:13 UTC (permalink / raw)
  To: Andrew Moise; +Cc: Gordon Henderson, linux-raid, linux-poweredge

Never lost an XFS filesystem completely. Can't say the same about ext3.

Just my 2 cents,
Greg

On Tue, 2006-10-17 at 13:58 -0400, Andrew Moise wrote:
> On 10/17/06, Gordon Henderson <gordon@drogon.net> wrote:
> > Anyway, it's currently in a RAID-1 configuration (which I used for some
> > initial soak tests) and seems to be just fine:
> >
> > Filesystem            Size  Used Avail Use% Mounted on
> > /dev/md9              6.8T   33M  6.7T   1% /mnt
>
> Incidentally, what filesystem are you using? We may be setting up a
> similar situation soon, and I'm considering ext3 and XFS. ext3 apparently
> has a low max fs size, though, and I've heard bad things about XFS's
> behavior on unclean shutdowns...

--
Greg Dickie
just a guy
Maximum Throughput
* Re: PERC5 - MegaRaid-SAS problems..
  2006-10-17 18:13 ` Greg Dickie
@ 2006-10-17 20:24   ` Gordon Henderson
  0 siblings, 0 replies; 8+ messages in thread

From: Gordon Henderson @ 2006-10-17 20:24 UTC (permalink / raw)
  To: Greg Dickie; +Cc: linux-raid, linux-poweredge

On Tue, 17 Oct 2006, Greg Dickie wrote:

> Never lost an XFS filesystem completely. Can't say the same about ext3.

Whereas I have exactly the reverse experience... Never lost an ext2/3
filesystem, but I had a few XFS ones trashed when I played with it a
couple of years ago...

My 2 euros,

Gordon
* Re: PERC5 - MegaRaid-SAS problems..
  2006-10-17 17:58 ` Andrew Moise
  2006-10-17 18:13   ` Greg Dickie
@ 2006-10-17 20:22   ` Gordon Henderson
  2006-10-17 22:19     ` Andrew Moise
  1 sibling, 1 reply; 8+ messages in thread

From: Gordon Henderson @ 2006-10-17 20:22 UTC (permalink / raw)
  To: Andrew Moise; +Cc: linux-poweredge, linux-raid

On Tue, 17 Oct 2006, Andrew Moise wrote:

> On 10/17/06, Gordon Henderson <gordon@drogon.net> wrote:
> > Anyway, it's currently in a RAID-1 configuration (which I used for some
> > initial soak tests) and seems to be just fine:
> >
> > Filesystem            Size  Used Avail Use% Mounted on
> > /dev/md9              6.8T   33M  6.7T   1% /mnt
>
> Incidentally, what filesystem are you using? We may be setting up a
> similar situation soon, and I'm considering ext3 and XFS. ext3 apparently
> has a low max fs size, though, and I've heard bad things about XFS's
> behavior on unclean shutdowns...

I'm using ext3, and it's now RAID-6, so a little smaller:

Filesystem            Size  Used Avail Use% Mounted on
/dev/md9              5.9T   33M  5.7T   1% /archive

And slower:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bertha           2G           186290  96 111105  75           262779  71 602.0   2

I've looked at, and tried, XFS (and JFS) and keep coming back to ext3,
which I have on many, many servers and have never had an issue with it.
I have had problems with XFS, but that was about 2 years ago, so things
might have improved since then.

This is the biggest server I've set up so far; it beats my previous record
of 3TB (which I have on 2 identical servers halfway round the world from
each other, and they've been doing well for several months now).

After your comment about ext3 max FS size, I had a bit of an oo-er moment,
so I had to do a bit of looking to make sure I was OK, and it seems that
16TB is the current limit, so I'm OK there for a while, at least...
(/usr/src/linux/Documentation/filesystems/ext2.txt)

Gordon
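For reference, the 16TB figure follows from ext2/ext3's 32-bit block
numbers: with 4KB blocks, 2^32 * 4KB = 16TB. A minimal sketch of creating
an ext3 filesystem on an array like this, where the stride value is an
assumption derived from the 64k chunk and a 4k block size rather than
anything Gordon states, would be:

    # Sketch only: parameters are assumptions, not Gordon's actual mkfs line.
    # stride = RAID chunk size / filesystem block size = 64KB / 4KB = 16
    mkfs.ext3 -b 4096 -E stride=16 /dev/md9

    # Mount it at the path shown in the df output above:
    mount /dev/md9 /archive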
* Re: PERC5 - MegaRaid-SAS problems..
  2006-10-17 20:22 ` Gordon Henderson
@ 2006-10-17 22:19   ` Andrew Moise
  2006-10-28 21:46     ` Chris Allen
  0 siblings, 1 reply; 8+ messages in thread

From: Andrew Moise @ 2006-10-17 22:19 UTC (permalink / raw)
  To: Gordon Henderson; +Cc: linux-poweredge, linux-raid

On 10/17/06, Gordon Henderson <gordon@drogon.net> wrote:
> I have had problems with XFS, but that was about 2 years ago, so things
> might have improved since then.

Well, filling some random files with zeroes because of an unclean shutdown
is still defined as "correct" behavior in XFS. That hasn't changed, as far
as I've heard. Why they haven't implemented something akin to ext3's
"data=ordered" is a mystery to me.

I guess I'm begging to start a filesystem flamewar at this point :-).

> After your comment about ext3 max FS size, I had a bit of an oo-er moment,
> so I had to do a bit of looking to make sure I was OK, and it seems that
> 16TB is the current limit, so I'm OK there for a while, at least...
> (/usr/src/linux/Documentation/filesystems/ext2.txt)

Aah, great news. Different sources differ on what the maximum size is, so
I was pessimistically assuming it might be as low as 2TB. Thanks.
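The data=ordered mode Andrew mentions forces file data to reach the disk
before the metadata that references it is committed to the journal, which
is why ext3 doesn't leave zero-filled files behind after a crash. It is
ext3's default, but it can be spelled out explicitly; a small illustrative
fstab entry (the device and mount point are carried over from the messages
above, the rest is assumed):

    # Illustrative /etc/fstab line: data=ordered is ext3's default
    # journalling mode, written out explicitly here for clarity.
    /dev/md9   /archive   ext3   defaults,data=ordered   0  2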
* Re: PERC5 - MegaRaid-SAS problems..
  2006-10-17 22:19 ` Andrew Moise
@ 2006-10-28 21:46   ` Chris Allen
  0 siblings, 0 replies; 8+ messages in thread

From: Chris Allen @ 2006-10-28 21:46 UTC (permalink / raw)
  To: linux-raid

Andrew Moise wrote:
> On 10/17/06, Gordon Henderson <gordon@drogon.net> wrote:
>> I have had problems with XFS, but that was about 2 years ago, so things
>> might have improved since then.
>
> Well, filling some random files with zeroes because of an unclean
> shutdown is still defined as "correct" behavior in XFS. That hasn't
> changed, as far as I've heard. Why they haven't implemented something
> akin to ext3's "data=ordered" is a mystery to me.
>
> I guess I'm begging to start a filesystem flamewar at this point :-).

I can consistently crash XFS over md (kernel panic) using a simple forking
perl script that copies large numbers of files about. The response from
the XFS mailing list was a shrug and a "well, don't do that then...". The
problem is, this is exactly what the clients of my machines *will* do. I
can't crash ext3 in this way, no matter what I do.

>> After your comment about ext3 max FS size, I had a bit of an oo-er moment,
>> so I had to do a bit of looking to make sure I was OK, and it seems that
>> 16TB is the current limit, so I'm OK there for a while, at least...
>> (/usr/src/linux/Documentation/filesystems/ext2.txt)
>
> Aah, great news. Different sources differ on what the maximum size is,
> so I was pessimistically assuming it might be as low as 2TB. Thanks.

The maximum in mainstream kernels is 8TB; -mm kernels will support 16TB.
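Chris's perl script isn't included in the thread. Purely as an illustration
of the general kind of parallel copy workload he describes (the paths,
worker count and the use of shell rather than perl are all assumptions for
the sketch), something along these lines keeps a filesystem busy with many
concurrent large copies:

    #!/bin/sh
    # Rough sketch of a parallel file-copy stress load (not Chris's script).
    # Several workers repeatedly copy a large source tree into their own
    # directories and then delete the copies, all running at the same time.
    SRC=/archive/testdata        # assumed: a directory full of large files
    DST=/archive/stress
    WORKERS=8

    mkdir -p "$DST"
    i=1
    while [ "$i" -le "$WORKERS" ]; do
        ( while true; do
              cp -a "$SRC" "$DST/copy.$i" && rm -rf "$DST/copy.$i"
          done ) &
        i=$((i + 1))
    done
    wait    # runs until interrupted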