From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from bombadil.infradead.org ([65.50.211.133]:39711 "EHLO
	bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751210AbdGZRDn (ORCPT );
	Wed, 26 Jul 2017 13:03:43 -0400
Date: Wed, 26 Jul 2017 10:03:40 -0700
From: Christoph Hellwig 
To: Dan Williams 
Cc: Ross Zwisler , OGAWA Hirofumi , Johannes Thumshirn ,
	"Kani, Toshimitsu" , "linux-nvdimm@lists.01.org" ,
	"linux-fsdevel@vger.kernel.org" 
Subject: Re: FIle copy to FAT FS on NVDIMM hits BUG_ON at fs/buffer.c:3305!
Message-ID: <20170726170340.GA8576@infradead.org>
References: <1501018096.2042.70.camel@hpe.com>
	<20170725222247.GA26391@linux.intel.com>
	<20170726082159.GE4039@linux-x5ow.site>
	<87d18neemb.fsf@devron>
	<20170726142348.GA4130@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID: 

On Wed, Jul 26, 2017 at 09:08:00AM -0700, Dan Williams wrote:
> Another question, does ->rw_page() really buy us that much with the
> pmem driver? If applications want to enjoy the lowest latency access
> they can just use DAX. There's now only 4 drivers that use rw_page
> since nvme dropped its usage and I'd be inclined to just rip it out.

nvme never supported rw_page (there was a patch for it, but it
fortunately never got merged).  rw_page is a massive pain in the ass
and the method should go away.  For make_request drivers that actually
operate synchronously (e.g. the ramdisk) it's not much of a benefit,
and even for normally asynchronous drivers like nvme the block layer
polling interface is much more suitable.
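
For reference, the hook in question is the optional ->rw_page() method
in struct block_device_operations.  A rough, simplified paraphrase of
the ~v4.12 interface and the bdev_read_page() wrapper that callers
like mpage_readpage() and the swap code go through (not code from this
thread; the real wrapper also takes a queue reference via
blk_queue_enter()):

	/* include/linux/blkdev.h (~v4.12): the optional synchronous
	 * single-page hook under discussion. */
	struct block_device_operations {
		...
		int (*rw_page)(struct block_device *, sector_t,
			       struct page *, bool is_write);
		...
	};

	/* Simplified sketch of the fs/block_dev.c wrapper: when the
	 * driver does not implement ->rw_page(), the caller gets
	 * -EOPNOTSUPP and falls back to a normal bio submission, so
	 * the method is purely an optimization. */
	int bdev_read_page(struct block_device *bdev, sector_t sector,
			   struct page *page)
	{
		const struct block_device_operations *ops =
			bdev->bd_disk->fops;

		if (!ops->rw_page || bdev_get_integrity(bdev))
			return -EOPNOTSUPP;
		return ops->rw_page(bdev, sector + get_start_sect(bdev),
				    page, false);
	}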
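
And the alternative is the block layer polling interface.  A minimal,
illustrative sketch (my paraphrase against the ~v4.12 API, where the
entry point is blk_mq_poll(); the function name and error handling
here are simplified) of a driver-independent synchronous page read
that spins on completion instead of sleeping:

	static void polled_read_end_io(struct bio *bio)
	{
		/* signal the submitter; it is polling, not sleeping */
		WRITE_ONCE(*(bool *)bio->bi_private, true);
	}

	/* Submit one READ bio and busy-poll the queue until it
	 * completes.  Real code (see __blkdev_direct_IO()) falls back
	 * to io_schedule() when blk_mq_poll() returns false, e.g. on
	 * non-blk-mq queues. */
	static int polled_read_page(struct block_device *bdev,
				    sector_t sector, struct page *page)
	{
		struct bio bio;
		struct bio_vec bvec;
		bool done = false;
		blk_qc_t cookie;

		bio_init(&bio, &bvec, 1);
		bio.bi_bdev = bdev;
		bio.bi_iter.bi_sector = sector;
		bio.bi_end_io = polled_read_end_io;
		bio.bi_private = &done;
		bio_set_op_attrs(&bio, REQ_OP_READ, 0);
		bio_add_page(&bio, page, PAGE_SIZE, 0);

		cookie = submit_bio(&bio);
		while (!READ_ONCE(done))
			blk_mq_poll(bdev_get_queue(bdev), cookie);

		return bio.bi_error;
	}

This gets the low-latency completion rw_page was after without
maintaining a second, driver-specific I/O path.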