From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dave Chinner
Subject: Re: question: should io_is_direct really return true for DAX inodes?
Date: Fri, 30 Oct 2015 08:09:45 +1100
Message-ID: <20151029210945.GA10656@dastard>
To: Dan Williams
Cc: Jeff Moyer, linux-nvdimm, linux-fsdevel, Dave Chinner
In-Reply-To: (Dan Williams's message)
List-ID: linux-fsdevel

On Fri, Oct 30, 2015 at 04:37:36AM +0900, Dan Williams wrote:
> [ reply-all re-send, sorry for the duplicate Jeff. ]
>
> On Thu, Oct 29, 2015 at 11:32 PM, Jeff Moyer wrote:
> > Hi,
> >
> > I'm concerned that applications that used to run out of page cache
> > will experience a performance degradation when being forced into
> > doing I/O directly to the backing store. What do others think?
>
> I would think this is only a problem in the case where the media is
> orders of magnitude slower than page cache. That isn't the case with
> pmem.

If you're really concerned, I'm addressing this on XFS by making DAX
per-inode selectable (i.e. the mount option needs to die). In that
case, users can have the best of both worlds: files marked as DAX use
DAX/direct IO, while files that aren't marked can use the page cache
and suffer the lower performance that all that page allocation, dirty
tracking and writeback via memcpy entails.

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com