From: Dave Chinner <david@fromorbit.com>
To: Alireza Haghdoost <haghdoost@gmail.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>,
	Linux Filesystem Development List <linux-fsdevel@vger.kernel.org>,
	linux-scsi <linux-scsi@vger.kernel.org>,
	xfs@oss.sgi.com
Subject: Re: [ANNOUNCE] xfs: Supporting Host Aware SMR Drives
Date: Tue, 17 Mar 2015 17:06:01 +1100
Message-ID: <20150317060601.GA10105@dastard>
In-Reply-To: <CAB-428kqj2KChEGmjqSXLhM4TMM-dsgK7LzkQXxgp5En4jBVRg@mail.gmail.com>

On Mon, Mar 16, 2015 at 08:12:16PM -0500, Alireza Haghdoost wrote:
> On Mon, Mar 16, 2015 at 3:32 PM, Dave Chinner <david@fromorbit.com> wrote:
> > On Mon, Mar 16, 2015 at 11:28:53AM -0400, James Bottomley wrote:
> >> Probably need to cc dm-devel here.  However, I think we're all agreed
> >> this is RAID across multiple devices, rather than within a single
> >> device?  In which case we just need a way of ensuring identical zoning
> >> on the raided devices, and what you get is either a standard zone (for
> >> mirror) or a larger zone (for Hamming codes, etc.).
> >
> > Any sort of RAID is a bloody hard problem, hence the fact that I'm
> > designing a solution for a filesystem on top of an entire bare
> > drive. I'm not trying to solve every use case in the world, just the
> > one where the drive manufacturers think SMR will be mostly used: the
> > back end of "never delete" distributed storage environments....
> > We can't wait for years for infrastructure layers to catch up in the
> > brave new world of shipping SMR drives. We may not like them, but we
> > have to make stuff work. I'm not trying to solve every problem - I'm
> > just trying to address the biggest use case I see for SMR devices
> > and it just so happens that XFS is already used pervasively in that
> > same use case, mostly within the same "no raid, fs per entire
> > device" constraints as I've documented for this proposal...
> 
> I am confused about what kind of application you are referring to for
> this "back end, no raid, fs per entire device" model. Are you going to
> rely on the application to do replication for disk failure protection?

Exactly. Think distributed storage such as Ceph and gluster where
the data redundancy and failure recovery algorithms are in layers
*above* the local filesystem, not in the storage below the fs.  The
"no raid, fs per device" model is already a very common back end
storage configuration for such deployments.
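
For a concrete picture of that split, here is a toy sketch of
replication handled above independent per-device filesystems. The
mount points, replica count, and hash placement below are all
hypothetical, chosen only to illustrate the layering; this is not how
Ceph or gluster actually place data.

    # Toy sketch: redundancy lives in the storage layer *above* the
    # local filesystems. Each device holds one plain fs, no RAID.
    import hashlib
    import os

    # Hypothetical mount points: one filesystem per whole device.
    OSD_MOUNTS = ["/mnt/osd0", "/mnt/osd1", "/mnt/osd2"]
    REPLICAS = 2   # each object survives any single disk failure

    def store(key: bytes, data: bytes) -> None:
        """Write `data` to REPLICAS distinct device-backed filesystems."""
        digest = hashlib.sha1(key).hexdigest()
        start = int(digest, 16) % len(OSD_MOUNTS)
        for i in range(REPLICAS):
            mount = OSD_MOUNTS[(start + i) % len(OSD_MOUNTS)]
            path = os.path.join(mount, digest)
            with open(path, "wb") as f:
                f.write(data)           # local fs sees ordinary file I/O
                os.fsync(f.fileno())    # durability per replica

Each local filesystem only ever sees ordinary file writes; recovering
from a failed disk means re-replicating objects from surviving copies
elsewhere in the cluster, not rebuilding a device in place.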

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
