From: "Verma, Vishal L"
To: "hch@infradead.org"
Cc: "linux-kernel@vger.kernel.org", "linux-block@vger.kernel.org",
	"xfs@oss.sgi.com", "linux-nvdimm@ml01.01.org", "linux-mm@kvack.org",
	"viro@zeniv.linux.org.uk", "Williams, Dan J", "axboe@fb.com",
	"akpm@linux-foundation.org", "linux-fsdevel@vger.kernel.org",
	"linux-ext4@vger.kernel.org", "david@fromorbit.com", "jack@suse.cz",
	"matthew@wil.cx"
Subject: Re: [PATCH v4 5/7] fs: prioritize and separate direct_io from dax_io
Date: Sun, 8 May 2016 18:42:37 +0000
Message-ID: <1462732956.3006.4.camel@intel.com>
In-Reply-To: <20160508090115.GE15458@infradead.org>
References: <1461878218-3844-1-git-send-email-vishal.l.verma@intel.com>
	<1461878218-3844-6-git-send-email-vishal.l.verma@intel.com>
	<5727753F.6090104@plexistor.com>
	<20160505142433.GA4557@infradead.org>
	<20160505152230.GA3994@infradead.org>
	<1462484695.29294.7.camel@intel.com>
	<20160508090115.GE15458@infradead.org>

On Sun, 2016-05-08 at 02:01 -0700, hch@infradead.org wrote:
> On Thu, May 05, 2016 at 09:45:07PM +0000, Verma, Vishal L wrote:
> > I'm not sure I completely understand how this will work? Can you
> > explain a bit? Would we have to export rw_bytes up to layers above
> > the pmem driver? Where does get_user_pages come in?
>
> A DAX filesystem can directly use the nvdimm layer the same way the
> btt does, what's the problem?

The BTT does rw_bytes through an internal-to-libnvdimm mechanism, but
rw_bytes isn't currently exported to the filesystem. To do that, we'd
have to add an rw_bytes method to block_device_operations, or something
along those lines (a rough, hypothetical sketch of what I mean is at
the bottom of this mail).

Another problem is that rw_bytes currently doesn't do any error
clearing either. We store badblocks at sector granularity, and as Dan
said earlier, that hides the clear-error alignment requirements so
upper layers don't have to be aware of them. To make rw_bytes clear
sub-sector errors, we'd have to change the granularity of badblocks and
make upper layers aware of the clearing alignment requirements. Using a
block-write semantic for clearing hides all of this away.

> Re get_user_pages my idea was to simply use that to lock down the
> user pages so that we can call rw_bytes on it.  How else would you do
> it?  Do a kmalloc, copy_from_user and then another memcpy?
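
To make the block_device_operations idea above a bit more concrete,
here is a purely hypothetical sketch -- nothing like this exists in any
tree today, and the method name, signature, and wrapper are made up for
illustration only:

	struct block_device_operations {
		/* ... existing methods (open, release, ioctl, ...) ... */

		/*
		 * Hypothetical: byte-granularity I/O routed through the
		 * driver, similar to what the BTT gets from rw_bytes
		 * internally within libnvdimm.
		 */
		int (*rw_bytes)(struct block_device *bdev,
				resource_size_t offset, void *buf,
				size_t size, int rw);
	};

	/* hypothetical helper a filesystem could call */
	static inline int bdev_rw_bytes(struct block_device *bdev,
			resource_size_t offset, void *buf,
			size_t size, int rw)
	{
		const struct block_device_operations *ops = bdev->bd_disk->fops;

		if (!ops->rw_bytes)
			return -EOPNOTSUPP;
		return ops->rw_bytes(bdev, offset, buf, size, rw);
	}

A DAX filesystem could then fall back to something like bdev_rw_bytes()
for a poisoned sub-sector range instead of failing the write -- but as
noted above, that still leaves the badblocks granularity and clear-error
alignment questions open.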