From: "Verma, Vishal L"
To: "Williams, Dan J", "hch@infradead.org"
CC: "linux-kernel@vger.kernel.org", "linux-block@vger.kernel.org",
	"xfs@oss.sgi.com", "linux-nvdimm@ml01.01.org", "linux-mm@kvack.org",
	"viro@zeniv.linux.org.uk", "axboe@fb.com", "akpm@linux-foundation.org",
	"linux-fsdevel@vger.kernel.org", "linux-ext4@vger.kernel.org",
	"david@fromorbit.com", "jack@suse.cz", "matthew@wil.cx"
Subject: Re: [PATCH v4 5/7] fs: prioritize and separate direct_io from dax_io
Date: Thu, 5 May 2016 21:45:07 +0000
Message-ID: <1462484695.29294.7.camel@intel.com>
References: <1461878218-3844-1-git-send-email-vishal.l.verma@intel.com>
	<1461878218-3844-6-git-send-email-vishal.l.verma@intel.com>
	<5727753F.6090104@plexistor.com>
	<20160505142433.GA4557@infradead.org>
	<20160505152230.GA3994@infradead.org>
In-Reply-To: <20160505152230.GA3994@infradead.org>

On Thu, 2016-05-05 at 08:22 -0700, Christoph Hellwig wrote:
> On Thu, May 05, 2016 at 08:15:32AM -0700, Dan Williams wrote:
> > > Agreed - making O_DIRECT less direct than not having it is plain
> > > stupid, and I somehow missed this initially.
> >
> > Of course I disagree because, like Dave argues in the msync case, we
> > should do the correct thing first and make it fast later, but also
> > like Dave this arguing in circles is getting tiresome.
>
> We should do the right thing first, and make it fast later.  But this
> proposal is not getting it right - it still does not handle errors
> for the fast path, but magically makes it work for direct I/O by
> in general using a less optimal path for O_DIRECT.  It's getting the
> worst of all choices.
>
> As far as I can tell the only sensible option is to:
>
>  - always try dax-like I/O first
>  - have a custom get_user_pages + rw_bytes fallback that handles bad
>    blocks when hitting EIO

I'm not sure I completely understand how this will work - can you
explain a bit? Would we have to export rw_bytes up to layers above the
pmem driver? Where does get_user_pages come in?

>
> And then we need to sort out the concurrent write synchronization.
> Again there I think we absolutely have to obey Posix for the !O_DIRECT
> case and can avoid it for O_DIRECT, similar to the existing non-DAX
> semantics.  If we want any special additional semantics we _will_ need
> a special O_DAX flag.
>
> _______________________________________________
> Linux-nvdimm mailing list
> Linux-nvdimm@lists.01.org
> https://lists.01.org/mailman/listinfo/linux-nvdimm
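
[Illustrative sketch of the "dax first, get_user_pages + rw_bytes on -EIO"
fallback discussed above. This is not code from the patch series:
dax_fast_copy() and badblocks_rw_bytes() are hypothetical placeholders for
the normal dax_io copy loop and for a driver routine that goes through an
error-aware rw_bytes path, and get_user_pages_fast() is shown with its
2016-era signature. The point of the GUP step is only to give the driver
stable references to the user buffer so the transfer no longer depends on a
direct CPU load/store to the poisoned pmem address.]

#include <linux/fs.h>
#include <linux/uio.h>
#include <linux/mm.h>
#include <linux/slab.h>

static ssize_t dax_rw_with_fallback(struct kiocb *iocb, struct iov_iter *iter,
				    unsigned long user_addr, int nr_pages)
{
	struct page **pages;
	ssize_t ret;
	int i, pinned;

	/* 1) Always try the dax-like (direct load/store) path first. */
	ret = dax_fast_copy(iocb, iter);		/* hypothetical */
	if (ret != -EIO)
		return ret;

	/*
	 * 2) Media error: pin the user buffer and let the driver move the
	 *    data via its rw_bytes routine, which can clear bad blocks
	 *    instead of faulting on a plain memcpy.
	 */
	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	pinned = get_user_pages_fast(user_addr, nr_pages,
				     iov_iter_rw(iter) == READ, pages);
	if (pinned <= 0) {
		kfree(pages);
		return pinned ? pinned : -EFAULT;
	}

	ret = badblocks_rw_bytes(iocb, pages, pinned);	/* hypothetical */

	for (i = 0; i < pinned; i++)
		put_page(pages[i]);
	kfree(pages);
	return ret;
}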