From: Xiao Guangrong
Date: Thu, 23 Nov 2017 12:05:45 +0800
Subject: Re: [Qemu-devel] KVM "fake DAX" flushing interface - discussion
To: Rik van Riel, Dan Williams
Cc: Pankaj Gupta, Jan Kara, Stefan Hajnoczi, Stefan Hajnoczi, kvm-devel, Qemu Developers, "linux-nvdimm@lists.01.org", ross zwisler, Paolo Bonzini, Kevin Wolf, Nitesh Narayan Lal, Haozhong Zhang, Ross Zwisler

On 11/22/2017 02:19 AM, Rik van Riel wrote:
> We can go with the "best" interface for what
> could be a relatively slow flush (fsync on a
> file on ssd/disk on the host), which requires
> that the flushing task wait on completion
> asynchronously.

I'd like to clarify the "wait on completion asynchronously" interface and KVM async page fault a bit more.

The current design of async page fault only works on RAM rather than MMIO, i.e., if the page fault is caused by accessing the device memory of an emulated device, it needs to go to userspace (QEMU), which emulates the operation in the vCPU's thread.

As I mentioned before, the memory region used for the vNVDIMM flush interface should be MMIO, and considering its support on other hypervisors, we had better build this async mechanism into the flush interface design itself rather than depend on KVM async page fault.