From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1750966AbcEHJBS (ORCPT );
	Sun, 8 May 2016 05:01:18 -0400
Received: from bombadil.infradead.org ([198.137.202.9]:45350 "EHLO
	bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750722AbcEHJBQ (ORCPT );
	Sun, 8 May 2016 05:01:16 -0400
Date: Sun, 8 May 2016 02:01:15 -0700
From: "hch@infradead.org"
To: "Verma, Vishal L"
Cc: "Williams, Dan J", "hch@infradead.org",
	"linux-kernel@vger.kernel.org", "linux-block@vger.kernel.org",
	"xfs@oss.sgi.com", "linux-nvdimm@ml01.01.org",
	"linux-mm@kvack.org", "viro@zeniv.linux.org.uk", "axboe@fb.com",
	"akpm@linux-foundation.org", "linux-fsdevel@vger.kernel.org",
	"linux-ext4@vger.kernel.org", "david@fromorbit.com",
	"jack@suse.cz", "matthew@wil.cx"
Subject: Re: [PATCH v4 5/7] fs: prioritize and separate direct_io from dax_io
Message-ID: <20160508090115.GE15458@infradead.org>
References: <1461878218-3844-1-git-send-email-vishal.l.verma@intel.com>
 <1461878218-3844-6-git-send-email-vishal.l.verma@intel.com>
 <5727753F.6090104@plexistor.com>
 <20160505142433.GA4557@infradead.org>
 <20160505152230.GA3994@infradead.org>
 <1462484695.29294.7.camel@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1462484695.29294.7.camel@intel.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-SRS-Rewrite: SMTP reverse-path rewritten from  by bombadil.infradead.org.
	See http://www.infradead.org/rpr.html
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, May 05, 2016 at 09:45:07PM +0000, Verma, Vishal L wrote:
> I'm not sure I completely understand how this will work? Can you explain
> a bit? Would we have to export rw_bytes up to layers above the pmem
> driver? Where does get_user_pages come in?

A DAX filesystem can directly use the nvdimm layer the same way btt does,
so what's the problem?

Regarding get_user_pages, my idea was simply to use it to lock down the
user pages so that we can call rw_bytes on them.  How else would you do
it?  Do a kmalloc, copy_from_user, and then another memcpy?
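
Something like the sketch below is roughly what I have in mind
(completely untested and hypothetical - the function name is made up,
and the exact get_user_pages_fast / nvdimm_{read,write}_bytes
signatures may not match what's in the tree right now); the point is
just the shape: pin the user pages, then hand the kernel mapping of
each page straight to rw_bytes, with no bounce buffer anywhere:

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/nd.h>

static int dax_rw_bytes_user(struct nd_namespace_common *ndns,
		resource_size_t dev_off, unsigned long uaddr,
		size_t len, int rw)
{
	struct page *page;
	void *kaddr;
	size_t page_off, chunk;
	int ret;

	while (len) {
		page_off = uaddr & ~PAGE_MASK;
		chunk = min_t(size_t, len, PAGE_SIZE - page_off);

		/*
		 * Pin a single user page; we write into it when the
		 * device access is a READ.
		 */
		ret = get_user_pages_fast(uaddr, 1, rw == READ, &page);
		if (ret < 1)
			return ret < 0 ? ret : -EFAULT;

		/* do the media I/O directly on the pinned page */
		kaddr = kmap(page);
		if (rw == READ)
			ret = nvdimm_read_bytes(ndns, dev_off,
					kaddr + page_off, chunk);
		else
			ret = nvdimm_write_bytes(ndns, dev_off,
					kaddr + page_off, chunk);
		kunmap(page);

		if (rw == READ)
			set_page_dirty_lock(page);
		put_page(page);

		if (ret)
			return ret;

		uaddr += chunk;
		dev_off += chunk;
		len -= chunk;
	}
	return 0;
}

The real thing would obviously want to pin a batch of pages at a time
instead of going one page per call, but that's the general idea.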