public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Geoffrey Wehrman <gwehrman@sgi.com>
Cc: Christoph Hellwig <hch@infradead.org>, bpm@sgi.com, xfs@oss.sgi.com
Subject: Re: Issues with delalloc->real extent allocation
Date: Mon, 24 Jan 2011 10:26:37 +1100	[thread overview]
Message-ID: <20110123232637.GB16267@dastard> (raw)
In-Reply-To: <20110121144152.GD10729@sgi.com>

On Fri, Jan 21, 2011 at 08:41:52AM -0600, Geoffrey Wehrman wrote:
> On Fri, Jan 21, 2011 at 01:51:40PM +1100, Dave Chinner wrote:
> | Realistically, for every disadvantage or advantage we can enumerate
> | for specific workloads, I think one of us will be able to come up
> | with a counter example that shows the opposite of the original
> | point. I don't think this sort of argument is particularly
> | productive. :/
> 
> Sorry, I wasn't trying to be argumentative.  Rather I was just documenting
> what I saw as potential issues.  I'm not arguing against your proposed
> change.  If you don't find my sharing of observations productive, I'm
> happy to keep my thoughts to myself in the future.

Ah, that's not what I meant, Geoffrey. Reading it back, I probably
should have said "direction of discussion" rather than "sort of
argument" to make it more obvious I was trying not to get stuck with
us going round and round trying to demonstrate the pros and cons of
different approaches on a workload-by-workload basis.

Basically, all I was trying to do was move the discussion past a
potential sticking point - I definitely value the input and insight
you provide, and I'll try to write more clearly to hopefully avoid
such misunderstandings in future discussions.

> | Instead, I look at it from the point of view that a 64k IO is only a
> | little slower than a 4k IO, so such a change would not make much difference
> | to performance. And given that terabytes of storage capacity is
> | _cheap_ these days (and getting cheaper all the time), the extra
> | space of using 64k instead of 4k for sparse blocks isn't a big deal.
> | 
> | When I combine that with my experience from SGI where we always
> | recommended using filesystems block size == page size for best IO
> | performance on HPC setups, there's a fair argument that using page
> | size extents for small sparse writes isn't a problem we really need
> | to care about.
> | 
> | I'd prefer to design for where we expect storage to be in the next
> | few years e.g. 10TB spindles. Minimising space usage is not a big
> | priority when we consider that in 2-3 years 100TB of storage will
> | cost less than $5000 (it's about $15-20k right now).  Even on
> | desktops we're going to have more capacity than we know what to do
> | with, so trading off storage space for lower memory overhead, lower
> | metadata IO overhead and lower potential fragmentation seems like
> | the right way to move forward to me.
> | 
> | Does that seem like a reasonable position to take, or are there
> | other factors that you think I should be considering?
> 
> Keep in mind that storage of the future may not be on spindles, and
> fragmentation may not be an issue.  Even so, with SSDs 64K I/O is very
> reasonable, as most flash memory implements at minimum a 64K page.  I'm
> fully in favor of your proposal to require page sized I/O.
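The 64k-versus-4k space tradeoff discussed above can be put in rough numbers. A minimal sketch (the workload size is an assumption, chosen purely for illustration):

```python
# Worst-case space cost of rounding every isolated 4k sparse write
# up to a 64k allocation (illustrative numbers, not from the thread).
block = 4 * 1024            # 4k filesystem block
extent = 64 * 1024          # 64k minimum allocation

worst_case_blowup = extent // block   # 16x if every write is isolated
sparse_writes = 1_000_000             # assumed pathological workload
extra_gib = sparse_writes * (extent - block) / 2**30

print(worst_case_blowup)    # 16
print(round(extra_gib, 1))  # ~57.2 GiB of padding
```

Even this pathological case costs only tens of GiB of padding, which is consistent with the argument that the space overhead is minor at multi-terabyte capacities.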

With flash memory there is the potential that we don't even need to
care. The trend is towards on-device compression (e.g. Sandforce
controllers already do this) to reduce write amplification to values
lower than one. Hence a 4k write surrounded by 60k of zeros is
unlikely to be a major issue as it will compress really well.... :)
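The compression point can be demonstrated directly. A quick sketch using zlib as a stand-in for an on-device compressor (the actual Sandforce algorithm is proprietary, so this only illustrates the principle):

```python
# A 64k extent holding 4k of incompressible data padded with 60k of
# zeros: the zero padding all but disappears under compression.
import os
import zlib

extent = os.urandom(4 * 1024) + bytes(60 * 1024)
compressed = zlib.compress(extent, level=6)

# The compressed size is dominated by the 4k of random data; the 60k
# of zeros add only a handful of bytes of run-length overhead.
print(len(extent))                      # 65536
print(len(compressed) < 5 * 1024)       # True
```

The 64k extent compresses to little more than the 4k of real data it contains, so the padded writes cost almost nothing on the device.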

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

