From: Xiao Guangrong
Message-ID: <0a26793f-86f7-29e7-f61b-dc4c1ef08c8e@gmail.com>
Date: Tue, 31 Oct 2017 15:13:44 +0800
Subject: Re: [Qemu-devel] KVM "fake DAX" flushing interface - discussion
To: Dan Williams, Rik van Riel
Cc: Pankaj Gupta, Jan Kara, Stefan Hajnoczi, Stefan Hajnoczi, kvm-devel,
    Qemu Developers, linux-nvdimm@lists.01.org, ross zwisler, Paolo Bonzini,
    Kevin Wolf, Nitesh Narayan Lal, Haozhong Zhang, Ross Zwisler

On 07/27/2017 08:54 AM, Dan Williams wrote:
>> At that point, would it make sense to expose these special
>> virtio-pmem areas to the guest in a slightly different way,
>> so the regions that need virtio flushing are not bound by
>> the regular driver, and the regular driver can continue to
>> work for memory regions that are backed by actual pmem in
>> the host?
>
> Hmm, yes that could be feasible especially if it uses the ACPI NFIT
> mechanism. It would basically involve defining a new SPA (System
> Physical Address) range GUID type, and then teaching libnvdimm to
> treat that as a new pmem device type.

I would prefer a new flush mechanism over introducing a new memory type
to the NFIT. In such a mechanism we could define request queues,
completion queues and any other features needed to make it
virtualization friendly.

That would be much simpler.
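
To make the idea concrete, here is a minimal sketch (in C) of what the
guest-visible request/completion records for such a flush queue could look
like. All names, types and values below are hypothetical and not taken from
any existing spec; they only illustrate the "request queue + completion
queue" shape of the proposal.

/*
 * Hypothetical layout of a paravirtual flush request and its completion.
 * Nothing here is defined by an existing specification; it is only a
 * sketch of the request/completion queue idea discussed above.
 */
#include <stdint.h>

#define VPMEM_REQ_FLUSH 1        /* ask the host to flush backing storage */

struct vpmem_request {
        uint32_t type;           /* VPMEM_REQ_FLUSH; leaves room for more ops */
        uint32_t reserved;
        uint64_t addr;           /* start of the range to flush               */
        uint64_t len;            /* length of the range; 0 = whole device     */
};

struct vpmem_completion {
        uint32_t status;         /* 0 on success, errno-style value on error  */
        uint32_t reserved;
};

The guest would place a vpmem_request on the request queue (a virtqueue
would be a natural fit), the host would fsync()/msync() the file backing
the region and then post a vpmem_completion, so no NVDIMM flush-hint
emulation is needed on the host side.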