public inbox for linux-xfs@vger.kernel.org
From: Stan Hoeppner <stan@hardwarefreak.com>
To: xfs@oss.sgi.com
Subject: Re: XFS journaling position
Date: Thu, 28 Oct 2010 18:33:08 -0500	[thread overview]
Message-ID: <4CCA0834.8040703@hardwarefreak.com> (raw)
In-Reply-To: <201010281144.39307@zmi.at>

Michael Monnerie put forth on 10/28/2010 4:44 AM:
> On Wednesday, 27 October 2010, Robert Brockway wrote:
>> Similarly, virtual hosts have little chance of trying to establish
>> the physical nature of the device holding their filesystems.
> 
> Yes, performance optimizations will be fun in the near future.  VMs, thin 
> provisioning, NetApp's WAFL, LVM, funny disk layouts, all can do things 
> completely differently from our "old school" thinking.  I wonder when 
> there's gonna be an I/O scheduler that just elevates the I/O from a VM 
> to the real host, so that the host itself can optimize and align.  After 
> all, a VM has no idea of the storage.  That's why you can already 
> choose "noop" as the scheduler in a VM.  I guess there will be a 
> "virtualized" scheduler someday, but we will see.

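(For reference, the guest-side scheduler choice mentioned above is just
a sysfs knob.  A minimal sketch, assuming a virtual disk named vda; the
device name is a placeholder, and newer kernels spell the no-op
elevator "none" rather than "noop":)

```shell
# Show the available schedulers; the active one is printed in brackets.
cat /sys/block/vda/queue/scheduler
# Select the no-op elevator so the host/array does the reordering instead.
echo noop > /sys/block/vda/queue/scheduler
```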
I don't see how any of this is really that different from where we
already are with advanced storage systems and bare metal host OSes.
We're already virtualized WRT basic SAN arrays and maybe even some PCIe
RAID cards if they allow carving a RAID set into LUNs.

Take, for example, a small FC/iSCSI SAN array controller box with 16 x 1TB
SATA drives.  We initialize it using the serial console, web GUI, or
other management tool into a single RAID 6 array with 14TB of usable
space using a 256KB stripe size.  We then carve this 14TB into 10 LUNs
of 1.4TB each, and unmask each LUN to the FC WWN of a bare metal host
running Linux.  Let's assume the array controller starts at the outside
edge of each disk and works its way toward the inner cylinders when
creating each LUN, which seems like a logical way for a vendor to
implement this.  We now have 10 LUNs, each with progressively less
performance than the one before it due to its location on the platters.

Now, on each host we format the 1.4TB LUN with XFS.  In this
configuration, given that the LUNs are spread all across the platters,
from outside to inside cylinder, is it really going to matter where each
AG or the log is located, from a performance standpoint?

The only parameters we actually know for sure here are the stripe width
(14 data disks) and the stripe unit size (256KB).  We have no knowledge
of the real layout of the cylinders when we run mkfs.xfs.
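(Given only those two numbers, it is still worth handing them to
mkfs.xfs explicitly rather than hoping autodetection survives the
layers in between.  A minimal sketch under the geometry above, with
/dev/sdX as a placeholder for the LUN:)

```shell
# The geometry we do know: 256KB stripe unit, 14 data disks
# (16 drives in RAID 6 leaves 14 carrying data per stripe).
SU_KB=256
SW_DISKS=14
echo "full stripe = $((SU_KB * SW_DISKS)) KB"   # prints: full stripe = 3584 KB
# Pass it to mkfs.xfs: su is the per-disk stripe unit, sw the data-disk count.
#   mkfs.xfs -d su=${SU_KB}k,sw=${SW_DISKS} /dev/sdX
```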

So as we move to a totally virtualized guest OS, we then lose the stripe
width and stripe size information.  How much performance does this
really cost us WRT XFS filesystem layout?  And considering these are VM
guests, which are by design meant for consolidation, not necessarily
performance, are we really losing anything at all, when looking at the
big picture?  How many folks are running their critical core business
databases in virtual machine guests?  How about core email systems?
Other performance/business critical applications?

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 10+ messages
2010-10-25  5:26 XFS journaling position klonatos
2010-10-25 16:10 ` Geoffrey Wehrman
2010-10-26 22:27   ` Michael Monnerie
2010-10-26 23:28     ` Dave Chinner
2010-10-26 23:59       ` Michael Monnerie
2010-10-27 14:27     ` Robert Brockway
2010-10-27 14:32       ` Robert Brockway
2010-10-28  9:44         ` Michael Monnerie
2010-10-28 23:33           ` Stan Hoeppner [this message]
2010-10-29  7:58             ` Michael Monnerie
