From: Christoph Hellwig
To: Dan Williams
Cc: Vivek Goyal, Christoph Hellwig, Vishal Verma, Dave Jiang,
    Alasdair Kergon, Mike Snitzer, Ira Weiny, Heiko Carstens,
    Vasily Gorbik, Christian Borntraeger, Stefan Hajnoczi,
    Miklos Szeredi, Matthew Wilcox, device-mapper development,
    Linux NVDIMM, linux-s390, linux-fsdevel,
    virtualization@lists.linux-foundation.org
Subject: Re: [PATCH 4/5] dax: remove the copy_from_iter and copy_to_iter methods
Date: Mon, 13 Dec 2021 09:23:18 +0100
Message-ID: <20211213082318.GB21462@lst.de>
References: <20211209063828.18944-1-hch@lst.de> <20211209063828.18944-5-hch@lst.de>

On Sun, Dec 12, 2021 at 06:44:26AM -0800, Dan Williams wrote:
> On Fri, Dec 10, 2021 at 6:17 AM Vivek Goyal wrote:
> > Going forward, I am wondering whether virtiofs should use the
> > flushcache version as well.  What if the host filesystem is using
> > DAX and mapping persistent memory pfns directly into the qemu
> > address space?  I have never tested that.
> >
> > Right now we are relying on applications to do fsync/msync on
> > virtiofs for data persistence.
>
> This sounds like it would need coordination with a paravirtualized
> driver that can indicate whether the host side is pmem or not, like
> the virtio_pmem driver.  However, if the guest sends any fsync/msync
> you would still need to explicitly cache flush any dirty page,
> because you can't necessarily trust that the guest did that already.

Do we?  The application can't really know what backend it is on, so
it sounds like the current virtiofs implementation doesn't really do
that, does it?
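
For reference, the flushcache distinction being discussed, as a
minimal sketch.  The helper and its backed_by_pmem flag are
illustrative only, not from the patch; the two iov_iter copy variants
are the real kernel interfaces:

#include <linux/uio.h>

/*
 * Illustrative sketch: the semantic difference between the plain and
 * the flushcache copy variants.  copy_for_dax() and backed_by_pmem
 * are hypothetical names invented for this example.
 */
static size_t copy_for_dax(void *dst, size_t bytes, struct iov_iter *i,
			   bool backed_by_pmem)
{
	if (backed_by_pmem)
		/*
		 * Flush-on-copy (non-temporal stores where available):
		 * the data is durable on real pmem as soon as the copy
		 * returns, with no separate cache flush needed at
		 * fsync/msync time.
		 */
		return copy_from_iter_flushcache(dst, bytes, i);

	/*
	 * Plain copy: the data may still sit dirty in the CPU cache,
	 * so a later explicit flush (e.g. triggered by fsync) has to
	 * make it durable.
	 */
	return copy_from_iter(dst, bytes, i);
}

virtio_pmem effectively takes the second path: the guest issues an
explicit flush request to the host, which syncs the backing file,
rather than relying on flush-on-copy in the guest.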