linux-fsdevel.vger.kernel.org archive mirror
From: Dave Chinner <david@fromorbit.com>
To: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: xfs@oss.sgi.com, linux-fsdevel@vger.kernel.org,
	linux-scsi <linux-scsi@vger.kernel.org>
Subject: Re: [ANNOUNCE] xfs: Supporting Host Aware SMR Drives
Date: Tue, 17 Mar 2015 07:32:07 +1100	[thread overview]
Message-ID: <20150316203207.GD28557@dastard> (raw)
In-Reply-To: <1426519733.4000.11.camel@HansenPartnership.com>

On Mon, Mar 16, 2015 at 11:28:53AM -0400, James Bottomley wrote:
> [cc to linux-scsi added since this seems relevant]
> On Mon, 2015-03-16 at 17:00 +1100, Dave Chinner wrote:
> > Hi Folks,
> > 
> > As I told many people at Vault last week, I wrote a document
> > outlining how we should modify the on-disk structures of XFS to
> > support host aware SMR drives on the (long) plane flights to Boston.
> > 
> > TL;DR: not a lot of change to the XFS kernel code is required, no
> > specific SMR awareness is needed by the kernel code.  Only
> > relatively minor tweaks to the on-disk format will be needed and
> > most of the userspace changes are relatively straight forward, too.
> > 
> > The source for that document can be found in this git tree here:
> > 
> > git://git.kernel.org/pub/scm/fs/xfs/xfs-documentation
> > 
> > in the file design/xfs-smr-structure.asciidoc. Alternatively,
> > pull it straight from cgit:
> > 
> > https://git.kernel.org/cgit/fs/xfs/xfs-documentation.git/tree/design/xfs-smr-structure.asciidoc
> > 
> > Or there is a pdf version built from the current TOT on the xfs.org
> > wiki here:
> > 
> > http://xfs.org/index.php/Host_Aware_SMR_architecture
> > 
> > Happy reading!
> 
> I don't think it would have caused too much heartache to post the entire
> doc to the list, but anyway
> 
> The first is a meta question: What happened to the idea of separating
> the fs block allocator from filesystems?  It looks like a lot of the
> updates could be duplicated into other filesystems, so it might be a
> very opportune time to think about this.

Which requires a complete rework of the fs/block layer. That's the
long term goal, but we aren't going to be there for a few years yet.
Just look at how long it's taken for copy offload (which is trivial
compared to allocation offload) to be implemented....

> > === RAID on SMR....
> > 
> > How does RAID work with SMR, and exactly what does that look like to
> > the filesystem?
> > 
> > How does libzbc work with RAID given it is implemented through the scsi ioctl
> > interface?
> 
> Probably need to cc dm-devel here.  However, I think we're all agreed
> this is RAID across multiple devices, rather than within a single
> device?  In which case we just need a way of ensuring identical zoning
> on the raided devices and what you get is either a standard zone (for
> mirror) or a larger zone (for hamming etc).

Any sort of RAID is a bloody hard problem, hence the fact that I'm
designing a solution for a filesystem on top of an entire bare
drive. I'm not trying to solve every use case in the world, just the
one where the drive manufacturers think SMR will be mostly used: the
back end of "never delete" distributed storage environments....

We can't wait for years for infrastructure layers to catch up in the
brave new world of shipping SMR drives. We may not like them, but we
have to make stuff work. I'm not trying to solve every problem - I'm
just trying to address the biggest use case I see for SMR devices
and it just so happens that XFS is already used pervasively in that
same use case, mostly within the same "no raid, fs per entire
device" constraints as I've documented for this proposal...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 12+ messages
2015-03-16  6:00 [ANNOUNCE] xfs: Supporting Host Aware SMR Drives Dave Chinner
2015-03-16 15:28 ` James Bottomley
2015-03-16 18:23   ` Adrian Palmer
2015-03-16 19:06     ` James Bottomley
2015-03-16 20:20       ` Dave Chinner
2015-03-16 22:48         ` Cyril Guyot
2015-03-16 20:32   ` Dave Chinner [this message]
2015-03-17  1:12     ` Alireza Haghdoost
2015-03-17  6:06       ` Dave Chinner
2015-03-17 13:25 ` Brian Foster
2015-03-17 21:28   ` Dave Chinner
2015-03-21 14:48     ` Brian Foster
