From: David Brown <david.brown@hesbynett.no>
To: linux-raid@vger.kernel.org
Subject: Re: Direct disk access on IBM Server
Date: Thu, 21 Apr 2011 13:19:49 +0200	[thread overview]
Message-ID: <iop3sl$6rl$1@dough.gmane.org> (raw)
In-Reply-To: <4DAFAE49.1020802@hardwarefreak.com>

On 21/04/11 06:10, Stan Hoeppner wrote:
> David Brown put forth on 4/20/2011 6:24 AM:
>
>> For this particular server, I have 4 disks.
>
> Seems like a lot of brain activity going on here for such a small array.
> ;)
>

I prefer to do my thinking and learning before committing too much - 
it's always annoying to have everything installed and /almost/ perfect, 
and then think "if only I'd set up the disks a little differently"!

And since it's my first hardware raid card (I don't count fakeraid on 
desktop motherboards), I have been learning a fair bit here.

>> First off, when I ran "lspci" on a system rescue cd, the card was
>> identified as a "LSI Megaraid SAS 2108".  But running "lspci" on CentOS
>> (with an older kernel), it was identified as a "MegaRAID SAS 9260".
>
> This is simply differences in kernels/drivers' device ID tables.
> Nothing to worry about AFAIK.
>

Those were my thoughts too.  I get the impression that the "SAS 2108" is 
the RAID ASIC, while the "SAS 9260" is the name of the card itself.  The 
latter turned out to be more helpful in identifying the card on LSI's 
website.
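
For anyone else trying to match up the chip and the card, something like 
this seems to be enough - the grep pattern and the sample output are just 
illustrative (from memory), not authoritative:

   # Show the RAID controller as each kernel identifies it
   lspci | grep -i raid

   # Show the numeric PCI vendor:device IDs as well - these are what
   # the driver's device table actually matches on, whatever name
   # ends up being printed
   lspci -nn | grep -i raid

   # On the rescue cd this gives something like:
   #   01:00.0 RAID bus controller: LSI Logic / Symbios Logic
   #           MegaRAID SAS 2108 [Liberator] (rev 05)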

>> I don't think there will be significant performance differences,
>> especially not for the number of drives I am using.
>
> Correct assumption.
>
>> I have one question about the hardware raid that I don't know about.  I
>> will have filesystems (some ext4, some xfs) on top of LVM on top of the
>> raid.  With md raid, the filesystem knows about the layout, so xfs
>> arranges its allocation groups to fit with the stripes of the raid. Will
>> this automatic detection work as well with hardware raid?
>
> See:
>
> Very important info for virtual machines:
> http://xfs.org/index.php/XFS_FAQ#Q:_Which_settings_are_best_with_virtualization_like_VMware.2C_XEN.2C_qemu.3F
>
> Hardware RAID write cache, data safety info
> http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F
>
> Hardware controller settings:
> http://xfs.org/index.php/XFS_FAQ#Q._Which_settings_does_my_RAID_controller_need_.3F
>
> Calculate correct mkfs.xfs parameters:
> http://xfs.org/index.php/XFS_FAQ#Q:_How_to_calculate_the_correct_sunit.2Cswidth_values_for_optimal_performance
>
> General XFS tuning advice:
> http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E
>

I guess I should have looked at the FAQ before asking - after all, 
that's what the FAQ is for.  Many thanks for the links.
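
For the record, here is roughly what I expect to end up with, following 
the FAQ's sunit/swidth advice.  As I read it, the automatic detection 
that works with md raid doesn't happen through a hardware RAID volume 
(plus LVM), so the values have to be given by hand.  The numbers below 
are assumptions for my setup - a 4-disk RAID5 with a 64 KB strip per 
disk gives 3 data disks per stripe - and the volume name is just a 
placeholder:

   # 4-disk hardware RAID5, 64 KB strip size per disk => 3 data disks.
   # su = stripe unit per disk, sw = number of data disks in a stripe.
   mkfs.xfs -d su=64k,sw=3 /dev/vg0/somevolume

   # Afterwards, xfs_info reports the resulting sunit/swidth so they
   # can be double-checked against what the controller actually uses.
   xfs_info /dev/vg0/somevolume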

>> Anyway, now it's time to play a little with MegaCli and see how I get
>> on.  It seems to have options to put drives in "JBOD" mode - maybe that
>> would give me direct access to the disk like a normal SATA drive?
>
> IIRC, using JBOD mode for all the drives will disable the hardware
> cache, and many/most/all other advanced features of the controller,
> turning the RAID card literally into a plain SAS/SATA HBA.  I believe
> this is why Dave chose the RAID0 per drive option.  Check your docs to
> confirm.
>

My original thought was that plain old SATA is what I'm used to, and I 
know how to work with it for md raid, hot plugging, etc.  So JBOD was 
what I was looking for.
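
(For completeness: from what I can see, JBOD is an adapter property that 
can be queried and toggled with MegaCli, roughly as below - but whether 
the option is exposed at all depends on the firmware, and the exact 
spelling seems to vary between MegaCli versions, so this is only what I 
gathered from the docs, not something I've verified:

   # Ask the adapter whether JBOD mode is available/enabled
   MegaCli -AdpGetProp EnableJBOD -aALL

   # Enable JBOD mode, firmware permitting; unconfigured-good drives
   # should then be passed to the OS as plain disks
   MegaCli -AdpSetProp EnableJBOD 1 -aALL
)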

However, having gathered a fair amount of information and done some 
testing, I am leaning heavily towards using the hardware raid card for 
hardware raid.  As you say, I've done a fair amount of thinking for a 
small array - I like to know what my options are and their pros and 
cons.  Having established that, the actual /implementation/ will be 
whatever gives me the functionality I need with the least effort (now 
and for future maintenance) - and it looks like hardware raid5 wins 
here.
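
Just to make the plan concrete, this is roughly the MegaCli incantation 
I expect to use.  The enclosure:slot numbers (252:0 etc.) are 
placeholders for whatever -PDList actually reports, the 64 KB strip size 
is my assumption from above, and write-back only makes sense with a 
working BBU - so double-check against the docs for the installed MegaCli 
version rather than trusting my transcription:

   # List the physical drives to find their Enclosure:Slot addresses
   MegaCli -PDList -aALL | grep -E 'Enclosure Device ID|Slot Number'

   # Create a RAID5 logical drive over the four disks:
   #   WB        = write-back cache (relies on the BBU)
   #   RA        = read-ahead
   #   Direct    = direct I/O (don't cache reads in controller RAM)
   #   -strpsz64 = 64 KB strip size per disk
   MegaCli -CfgLdAdd -r5 [252:0,252:1,252:2,252:3] WB RA Direct -strpsz64 -a0

   # Check the logical drive and its cache policy afterwards
   MegaCli -LDInfo -Lall -aALL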

> In parting, carefully read about filesystem data consistency issues WRT
> virtual machine environments.  It may prove more important for you than
> any filesystem tuning.
>

Yes, I am aware of such issues - I have read about them before (and they 
are relevant for the VirtualBox systems I use on desktops).  However, on 
the server I use OpenVZ, which is a "lightweight" virtualisation - more 
like a glorified chroot than full virtualisation.  The host handles the 
filesystems - the guests just see a restricted part of the filesystem, 
rather than virtual drives.  So all data consistency issues are simple 
host issues.  I still need to make sure I understand about barriers, 
raid card caches, etc. (reading the XFS FAQ), but at least there are no 
special problems with virtual disks.
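
As a note to myself from the FAQ reading so far: if the controller's 
write-back cache is battery-backed, XFS barriers can apparently be 
turned off on those filesystems; otherwise the default (barriers on) 
should stay.  Something like this, with the device and mount point as 
placeholders:

   # Controller has a working BBU and write-back cache enabled:
   # barriers can be disabled for a noticeable performance gain
   mount -o nobarrier /dev/vg0/data /srv/data

   # No BBU (or cache set to write-through): keep the default,
   # i.e. barriers enabled - no special mount option needed
   mount /dev/vg0/data /srv/data

   # The kernel logs a warning if barriers are requested but the
   # device can't honour them - worth checking after mounting
   dmesg | grep -i barrier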

Thanks,

David


Thread overview: 23+ messages
2011-04-19 13:21 Direct disk access on IBM Server David Brown
2011-04-19 13:25 ` Mathias Burén
2011-04-19 14:04   ` David Brown
2011-04-19 14:07     ` Mathias Burén
2011-04-19 15:12       ` David Brown
2011-04-19 15:41         ` Mathias Burén
2011-04-20  8:08           ` David Brown
2011-04-19 20:08 ` Stan Hoeppner
2011-04-20 11:24   ` David Brown
2011-04-20 11:40     ` Rudy Zijlstra
2011-04-20 12:21       ` David Brown
2011-04-21  6:24         ` Stan Hoeppner
2011-04-21 11:36           ` David Brown
2011-04-23 14:05             ` Majed B.
2011-04-23 14:42               ` David Brown
2011-04-24 12:48             ` Drew
2011-04-24 20:00               ` David Brown
2011-04-24 20:25                 ` Rudy Zijlstra
2011-04-25  9:42                   ` David Brown
2011-04-21  3:50     ` Ryan Wagoner
2011-04-21 11:00       ` David Brown
2011-04-21  4:10     ` Stan Hoeppner
2011-04-21 11:19       ` David Brown [this message]
