* sunit/swidth for HP P4500 Lefthand Networks storage arrays
@ 2012-01-12 4:15 Stan Hoeppner
From: Stan Hoeppner @ 2012-01-12 4:15 UTC (permalink / raw)
To: xfs
The Linux host will be Debian Squeeze with a very recent BPO kernel, but
running as a guest inside a VMware cluster. The storage platform is a
dozen clustered HP P4500/Lefthand Networks "self load balancing" iSCSI
arrays. The application is Dovecot IMAP using the mdbox mailbox storage
format, which is basically a hybrid mbox/maildir+dedup format. Files
will be 32-64MB or so, each containing multiple emails, with multiple
files per mailbox directory. Mail indexes are internal to the files.
What is the best mkfs.xfs configuration for this scenario?
I'm guessing it would be best to use mostly, if not entirely, the
defaults, given the way the Lefthand special sauce redirects iSCSI
packets on the fly to any storage node depending on load.
What about mount options?
Should I use barriers with the P4500s or disable them?
TTBOMK the internal PCIe RAID controllers have BBWC, but the ~6GB of RAM
on the P4500 mobos isn't battery backed, relying instead on the typical
external UPS. In this setup, from a physical hardware standpoint, iSCSI
packets will be making at least two Ethernet switch hops between the ESX
nodes and the P4500s, with redundant links between everything, if that's
a factor at all.
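To make that concrete, the two mount variants I'm weighing would be
something like this (device and mountpoint hypothetical):

  # barriers enabled (the default on kernels of this vintage):
  mount -t xfs /dev/sdb /srv/mail

  # barriers explicitly disabled:
  mount -t xfs -o nobarrier /dev/sdb /srv/mail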
--
Stan
* Re: sunit/swidth for HP P4500 Lefthand Networks storage arrays
@ 2012-01-12 7:30 Michael Monnerie
From: Michael Monnerie @ 2012-01-12 7:30 UTC (permalink / raw)
To: stan; +Cc: xfs
On Thursday, 12 January 2012, Stan Hoeppner wrote:
> What is the best mkfs.xfs configuration for this scenario?
>
> I'm guessing it would be best to use mostly, if not entirely, the
> defaults, given the way the Lefthand special sauce redirects iSCSI
> packets on the fly to any storage node depending on load.
I'd use the defaults. We recently switched to NetApp storage, and even
with all its special features we use the defaults there too.
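If you want to see what geometry the defaults actually gave you,
xfs_info will show the sunit/swidth after the fact (mountpoint
hypothetical):

  xfs_info /srv/mail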
> What about mount options?
>
> Should I use barriers with the P4500s or disable them?
> TTBOMK the internal PCIe RAID controllers have BBWC, but the ~6GB of
> RAM on the P4500 mobos isn't battery backed, relying instead on the
> typical external UPS. In this setup, from a physical hardware
> standpoint, iSCSI packets will be making at least two Ethernet switch
> hops between the ESX nodes and the P4500s, with redundant links
> between everything, if that's a factor at all.
Turn off barriers, I'd say. We use the NetApp over NFS (as VMware
datastores) and turned them off there; I assume that's correct as well.
As I understand them, barriers help avoid losing blocks that the
storage has already received, so it shouldn't matter how it's
connected, because the packets must already have arrived there. Can
someone confirm?
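For what it's worth, a minimal sketch of the corresponding fstab line
with barriers off (device and mountpoint hypothetical):

  /dev/sdb  /srv/mail  xfs  nobarrier  0  0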
--
with kind regards,
Michael Monnerie, Ing. BSc
it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531