linux-admin.vger.kernel.org archive mirror
* HW RAID configuration - best practise question(s)
@ 2005-02-07 15:06 Dermot Paikkos
  2005-02-07 15:52 ` Jens Knoell
  2005-02-07 16:04 ` Scott Taylor
  0 siblings, 2 replies; 5+ messages in thread
From: Dermot Paikkos @ 2005-02-07 15:06 UTC (permalink / raw)
  To: linux-admin

Hi admins,

I'm about to buy a custom-built server and wanted to know how to 
configure the RAID. This will be the first time I have configured 
a RAID array from scratch and I am a little unsure about how to 
lay out the system.

The server will primarily be used as a file server, uses a 3ware 
8-port SATA card and will serve about 50 users. There will 
initially be 4-5 disks installed. I would like one as a hot-spare 
but that isn't written in stone at this time.

My own thoughts were to keep the root file system outside of the 
RAID. This would allow me to do a complete OS re-install without 
affecting the RAID. There are a couple of reasons why an OS 
re-install might be desirable: I may change my distro (currently 
Slackware) and I may choose to try x86-64 instead of i386 (AMD 
Opteron). These are just my thoughts, and there may be good 
reasons for not configuring the system this way. If you have any, 
please let me know.

What about swap? How should you configure the swap space on such 
a system? Can swap space be assigned to a RAIDed file system? 
Should it?

Is there anything else I should be thinking about, or any 
documents I should read (I tried tldp.org but nothing struck me)? 
Any thoughts?
Thanx.
Dp.



~~
Dermot Paikkos * dermot@sciencephoto.com
Network Administrator @ Science Photo Library
Phone: 0207 432 1100 * Fax: 0207 286 8668



* Re: HW RAID configuration - best practise question(s)
  2005-02-07 15:06 HW RAID configuration - best practise question(s) Dermot Paikkos
@ 2005-02-07 15:52 ` Jens Knoell
  2005-02-07 16:04 ` Scott Taylor
  1 sibling, 0 replies; 5+ messages in thread
From: Jens Knoell @ 2005-02-07 15:52 UTC (permalink / raw)
  To: dermot; +Cc: linux-admin

Good morning :)

Dermot Paikkos wrote:

>Hi admins,
>
>I'm about to buy a custom-built server and wanted to know how to 
>configure the RAID. This will be the first time I have configured 
>a RAID array from scratch and I am a little unsure about how to 
>lay out the system.
>
>The server will primarily be used as a file server, uses a 3ware 
>8-port SATA card and will serve about 50 users. There will 
>initially be 4-5 disks installed. I would like one as a hot-spare 
>but that isn't written in stone at this time.
>  
>
Well, for starters there are a bunch of different RAID options 
available. I'd go with RAID 5 if the controller supports it. To Linux 
it'll just appear as one physical disk anyway, i.e. you do not 
necessarily need to worry about any RAID config issues in Linux itself.
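
For example, once the driver is loaded you can see the array as a 
plain disk (assuming it shows up as /dev/sda; the device name 
depends on your setup):

  fdisk -l /dev/sda     # one "disk" spanning the whole array
  cat /proc/scsi/scsi   # the unit appears as a single SCSI device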

A hot-spare is nice to have, but IMO not immediately necessary. I 
tend to keep a brand-new (tested OK) disk on hand, but not plugged 
in. It all depends on how closely the system is monitored. In my 
experience it's rare for two disks to fail in quick succession, 
and RAID 5 can run with one failed disk just fine for a limited 
time.

>My own thoughts were to keep the root file system outside of the 
>RAID. This would allow me to do a complete OS re-install without 
>affecting the RAID. There are a couple of reasons why an OS 
>re-install might be desirable: I may change my distro (currently 
>Slackware) and I may choose to try x86-64 instead of i386 (AMD 
>Opteron). These are just my thoughts, and there may be good 
>reasons for not configuring the system this way. If you have any, 
>please let me know.
>  
>
Personally I wouldn't bother splitting one disk off the RAID 
array. You really don't gain anything, unless you have a good 
reason to run the disk in a non-RAID environment - e.g. if you 
preconfigure the disk in another machine and then move it to the 
server.

>What about swap? How should you configure the swap space on such 
>a system? Can swap space be assigned to a RAIDed file system? 
>Should it? 
>  
>
Yes, it most certainly can with hardware RAID. For performance 
reasons it should be on the RAID too, IMO.
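
A minimal sketch (assuming the array is /dev/sda and you have set 
a partition, say /dev/sda2, aside for swap):

  mkswap /dev/sda2
  swapon /dev/sda2
  # and in /etc/fstab so it comes back after a reboot:
  /dev/sda2   swap   swap   defaults   0 0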

>Is there anything else I should be thinking about, or any 
>documents I should read (I tried tldp.org but nothing struck me)? 
>Any thoughts?
>Thanx.
>Dp.
>
The only thing you need to look out for is the matching kernel 
module, so that you can boot from that controller. Other than 
that... nothing I can think of. I just slapped Slackware on an IBM 
eServer with ServeRAID, and it worked perfectly on the first try. 
I've simulated disk failures too... everything went rather 
smoothly.
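
For what it's worth, a quick way to check the driver side (a 
sketch; assumes an 8000-series Escalade, which uses the 3w-xxxx 
module - the 9000 series uses 3w-9xxx instead):

  modprobe 3w-xxxx        # load the 3ware driver
  dmesg | grep -i 3ware   # controller and unit should show up
  # to boot from the array, build the module into the kernel
  # or include it in your initrd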

J.


* Re: HW RAID configuration - best practise question(s)
  2005-02-07 15:06 HW RAID configuration - best practise question(s) Dermot Paikkos
  2005-02-07 15:52 ` Jens Knoell
@ 2005-02-07 16:04 ` Scott Taylor
  2005-02-08  8:49   ` urgrue
  1 sibling, 1 reply; 5+ messages in thread
From: Scott Taylor @ 2005-02-07 16:04 UTC (permalink / raw)
  To: dermot; +Cc: linux-admin


Dermot Paikkos said:
> Hi admins,
>
> I'm about to buy a custom-built server and wanted to know how to
> configure the RAID. This will be the first time I have configured
> a RAID array from scratch and I am a little unsure about how to
> lay out the system.
>
> The server will primarily be used as a file server, uses a 3ware
> 8-port SATA card and will serve about 50 users. There will
> initially be 4-5 disks installed. I would like one as a hot-spare
> but that isn't written in stone at this time.

SATA is for game computers and high-end workstations.  Use SCSI 
for servers, and hardware RAID, not software RAID.  IBM has 15K 
RPM SCSI drives now, and with Ultra160 wide channels the data flow 
just screams.

Four drives, with RAID 5 over three of them and one hot-spare, is 
a very efficient configuration.

> My own thoughts were to keep the root file system outside of the
> RAID.

That is not necessary.  Your hardware RAID arrays will look like 
individual drives to your software; treat them as such when you 
partition them.  Myself, I like to make one array for the OS 
(/boot, /, and /usr), then one for /home (and/or /opt) or wherever 
you will be storing the majority of user files or shares.  Last, a 
small array for /etc, /tmp and /var, which I like to have 
available when reconfiguring a new OS install, but it isn't 
required.
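
A sketch of that layout on the 3ware card (unit and device names 
are illustrative; yours may differ):

  u0 -> /dev/sda   /boot, /, /usr
  u1 -> /dev/sdb   /home (and/or /opt)
  u2 -> /dev/sdc   /etc, /tmp, /var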

> What about swap? How should you configure the swap space on such
> a system? Can swap space be assigned to a RAIDed file system?
> Should it?

Why not?  If the drive holding your swap goes down, wouldn't you 
like swap to be as safe as the rest of the server?

> Is there anything else I should be thinking about, or any
> documents I should read (I tried tldp.org but nothing struck me)?
> Any thoughts?

What I learned the hard way with HW RAID: make sure the controller 
is the only device on its channel.  I once had a tape drive on the 
same cable, and when I went to change it, all the drives went 
off-line.  That is not a good thing.

Anytime you read "Warning! This action may cause data loss", read 
it as "make sure you have two good backups before doing this, 
because you _ARE_ going to need one".

Test it!  Load up an OS, copy some large pictures to it, large 
documents, and some third-party software or something else you can 
test.  Pull out a drive, then check the pics, docs, and software 
to make sure they still work while the drive is off-line, while 
the array is being rebuilt, and once the rebuild has finished.  
Test as much as you can with your new RAID before you trust it to 
a live, production server.  If you insist on playing about with 
SW RAID, break it and make sure you can still reboot, LOL.
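
One simple way to check data integrity across a rebuild (a sketch; 
the test path is made up):

  # fingerprint the test data before pulling a drive
  find /home/test -type f | xargs md5sum > /tmp/sums
  # re-check while degraded, during the rebuild and after it:
  md5sum -c /tmp/sums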

Never remove more than one drive at a time, ever, even with the 
thing shut off.  If you take one drive off-line, for any reason, 
the array will need to rebuild onto the replacement before you can 
take another off-line.  Once you have it running, stable and 
solid, don't mess with it.

--
Scott


* Re: HW RAID configuration - best practise question(s)
  2005-02-07 16:04 ` Scott Taylor
@ 2005-02-08  8:49   ` urgrue
  2005-02-08 10:01     ` Dermot Paikkos
  0 siblings, 1 reply; 5+ messages in thread
From: urgrue @ 2005-02-08  8:49 UTC (permalink / raw)
  To: linux-admin

> SATA is for game computers and high-end workstations.  Use SCSI 
> for servers, and hardware RAID, not software RAID.  IBM has 15K 
> RPM SCSI drives now, and with Ultra160 wide channels the data 
> flow just screams.

I don't quite agree. SATA is excellent and significantly more 
affordable than SCSI. I would not recommend plain PATA IDE for 
anything, SATA for almost everything, and SCSI only for very 
high-end situations where money is not a concern. For the vast 
majority of RAID scenarios I would recommend SATA.
I'm very wary of software RAID, although I have used it in a few 
scenarios and it does do the job. But if nothing else, it's much 
easier on the Linux side if it's a hardware solution, as Linux 
will just see it as a single disk.
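
For contrast, the software route means driving the array yourself 
with mdadm, e.g. (a sketch, assuming one partition per disk, three 
active disks and one spare):

  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  cat /proc/mdstat    # watch the initial sync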

> Four drives, with RAID 5 over three of them and one hot-spare, 
> is a very efficient configuration.

Yes, it is. One thing to keep in mind, though, is to make sure you 
have a good system set up to alert you when a drive fails. I had 
one RAID array that, due to configuration errors, was unable to 
get its alarm mail through when a drive failed. Eventually a 
second drive failed, at which point we noticed it. Personally I go 
for RAID 10, just to be on the safe side. Drives are so cheap 
these days that I prefer to pay a little extra and gain that 
little bit of extra safety...
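
For software RAID, mdadm has monitoring built in; for a 3ware card 
even a crude periodic check of the CLI output will do (a sketch - 
tw_cli syntax varies with controller and CLI version):

  # software RAID: mail root on any array event
  mdadm --monitor --mail=root --scan --daemonise
  # hardware RAID (3ware): run from cron every hour or so
  tw_cli /c0 show | grep -qi degraded \
      && echo "array degraded" | mail -s "RAID alert" root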

> > My own thoughts were to keep the root file system outside of the
> > RAID.
> 
> That is not necessary.  Your hardware RAID arrays will look like
> individual drives to your software; treat them as such when you
> partition them.

No, it's not necessary. Personally, however, I do prefer to keep 
the OS on its own disk. It makes it much easier to fix OS software 
problems: you can keep an extra copy of the OS disk ready, so that 
in case of software failure you can swap the backup straight in 
and be up and running in minutes, instead of going through the 
rather more complicated process of restoring an OS onto an 
existing RAID array.
It also makes patches and upgrades much easier: apply them to the 
backup disk, swap, see if everything is OK, and just swap back if 
not.
Everything is one step more complicated if the OS is on the RAID 
array.
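
Cloning the OS disk for that kind of swap is simple enough (a 
sketch; assumes the OS disk is /dev/hda, the standby disk is 
/dev/hdb, and that you run it from a quiesced system or a rescue 
disk):

  dd if=/dev/hda of=/dev/hdb bs=1M
  fdisk -l /dev/hdb    # sanity-check the copy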

> Why not?  If the drive holding your swap goes down, wouldn't you
> like swap to be as safe as the rest of the server?

I keep swap, along with everything else OS-related, on the OS 
disk. The RAID array I use just for data.
All in all it's a matter of preference and depends on how you set 
up your own systems; I wouldn't say there is any one correct 
answer.

> Test it!  Load up an OS, copy some large pictures to it, large
> documents, and some third-party software or something else you
> can test.  Pull out a drive, then check the pics, docs, and
> software to make sure they still work while the drive is
> off-line, while the array is being rebuilt, and once the rebuild
> has finished.  Test as much as you can with your new RAID before
> you trust it to a live, production server.  If you insist on
> playing about with SW RAID, break it and make sure you can still
> reboot, LOL.

I can't agree more. After you install a RAID, TEST it every way 
you can. It's a nightmare to realize you've lost all your data 
because of some misconfiguration, or because something didn't work 
the way you thought it was supposed to. 

urgrue


* Re: HW RAID configuration - best practise question(s)
  2005-02-08  8:49   ` urgrue
@ 2005-02-08 10:01     ` Dermot Paikkos
  0 siblings, 0 replies; 5+ messages in thread
From: Dermot Paikkos @ 2005-02-08 10:01 UTC (permalink / raw)
  To: linux-admin

I would agree; SATA is more than just jumped-up IDE, but SCSI is 
the preferred choice if you can get the cash. I am going to get my 
supplier to put a SCSI option on paper and I will try to make a 
case for getting SCSI. In a business my size, it may well be that 
this server has to do a stint as failover for the DB server, and 
in that scenario SCSI would be best.

I do like the idea of having a separate OS disk. Upgrading the OS 
can be a real pain, and I thought that if I kept the OS on its own 
disk I could destroy it and start again without too much trouble. 
I also like the idea of having a spare disk on hand in case of 
trouble, etc. I guess another layout would be a mirrored pair 
(RAID 1) for the OS volume, with the rest given over to data 
(RAID 5).
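
That mixed layout is easy to express on an 8-port card (a sketch; 
unit numbers and device names are illustrative):

  u0   RAID 1   2 disks                 -> /dev/sda  (OS + swap)
  u1   RAID 5   3 disks + 1 hot-spare   -> /dev/sdb  (data)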



~~
Dermot Paikkos * dermot@sciencephoto.com
Network Administrator @ Science Photo Library
Phone: 0207 432 1100 * Fax: 0207 286 8668


