From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dave Chinner
Subject: Re: [ANNOUNCE] xfs: Supporting Host Aware SMR Drives
Date: Tue, 17 Mar 2015 17:06:01 +1100
Message-ID: <20150317060601.GA10105@dastard>
References: <20150316060020.GB28557@dastard> <1426519733.4000.11.camel@HansenPartnership.com> <20150316203207.GD28557@dastard>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: James Bottomley, Linux Filesystem Development List, linux-scsi, xfs@oss.sgi.com
To: Alireza Haghdoost
Content-Disposition: inline
Errors-To: xfs-bounces@oss.sgi.com
Sender: xfs-bounces@oss.sgi.com
List-Id: linux-fsdevel.vger.kernel.org

On Mon, Mar 16, 2015 at 08:12:16PM -0500, Alireza Haghdoost wrote:
> On Mon, Mar 16, 2015 at 3:32 PM, Dave Chinner wrote:
> > On Mon, Mar 16, 2015 at 11:28:53AM -0400, James Bottomley wrote:
> >> Probably need to cc dm-devel here. However, I think we're all agreed
> >> this is RAID across multiple devices, rather than within a single
> >> device? In which case we just need a way of ensuring identical zoning
> >> on the raided devices and what you get is either a standard zone (for
> >> mirror) or a larger zone (for hamming etc).
> >
> > Any sort of RAID is a bloody hard problem, hence the fact that I'm
> > designing a solution for a filesystem on top of an entire bare
> > drive. I'm not trying to solve every use case in the world, just the
> > one where the drive manufacturers think SMR will be mostly used: the
> > back end of "never delete" distributed storage environments....
> > We can't wait for years for infrastructure layers to catch up in the
> > brave new world of shipping SMR drives. We may not like them, but we
> > have to make stuff work.
> > I'm not trying to solve every problem - I'm
> > just trying to address the biggest use case I see for SMR devices,
> > and it just so happens that XFS is already used pervasively in that
> > same use case, mostly within the same "no raid, fs per entire
> > device" constraints as I've documented for this proposal...
>
> I am confused about what kind of application you are referring to for
> this "back end, no raid, fs per entire device". Are you going to rely
> on the application to do replication for disk failure protection?

Exactly. Think distributed storage such as Ceph and gluster, where the
data redundancy and failure recovery algorithms are in layers *above*
the local filesystem, not in the storage below the fs. The "no raid,
fs per device" model is already a very common back end storage
configuration for such deployments.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs