From: Haozhong Zhang
Subject: Re: [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu
Date: Wed, 20 Jan 2016 22:42:12 +0800
Message-ID: <20160120144212.GD11445@hz-desktop.sh.intel.com>
To: Stefano Stabellini
Cc: Kevin Tian, Wei Liu, Ian Campbell, Andrew Cooper, Ian Jackson,
 xen-devel@lists.xen.org, Jan Beulich, Jun Nakajima, Xiao Guangrong,
 Keir Fraser

On 01/20/16 14:29, Stefano Stabellini wrote:
> On Wed, 20 Jan 2016, Andrew Cooper wrote:
> > On 20/01/16 10:36, Xiao Guangrong wrote:
> > >
> > > Hi,
> > >
> > > On 01/20/2016 06:15 PM, Haozhong Zhang wrote:
> > >
> > >> CCing QEMU vNVDIMM maintainer: Xiao Guangrong
> > >>
> > >>> Conceptually, an NVDIMM is just like a fast SSD which is linearly
> > >>> mapped into memory. I am still on the dom0 side of this fence.
> > >>>
> > >>> The real question is whether it is possible to take an NVDIMM,
> > >>> split it in half, give each half to two different guests (with
> > >>> appropriate NFIT tables), and have that be sufficient for the
> > >>> guests to just work.
> > >>
> > >> Yes, one NVDIMM device can be split into multiple parts and
> > >> assigned to different guests, and QEMU is responsible for
> > >> maintaining a virtual NFIT table for each part.
> > >>
> > >>> Either way, it needs to be a toolstack policy decision as to how
> > >>> to split the resource.
> > >
> > > Currently, we are using the NVDIMM as a block device with a
> > > DAX-based filesystem created on it in Linux, so that file accesses
> > > reach the NVDIMM device directly.
> > >
> > > In KVM, if the NVDIMM device needs to be shared by different VMs,
> > > we can create multiple files on the DAX-based filesystem and assign
> > > one file to each VM. In the future, we can enable namespaces
> > > (partition-like) for PMEM and assign a namespace to each VM (the
> > > current Linux driver uses the whole PMEM as a single namespace).
> > >
> > > I think it is not easy to let the Xen hypervisor recognize NVDIMM
> > > devices and manage NVDIMM resources itself.
> > >
> > > Thanks!
> >
> > The more I see about this, the more sure I am that we want to keep it
> > as a block device managed by dom0.
> >
> > In the case of the DAX-based filesystem, I presume files are not
> > necessarily contiguous. I also presume that this is worked around by
> > permuting the mapping of the virtual NVDIMM such that it appears as a
> > contiguous block of addresses to the guest?
> >
> > Today in Xen, QEMU already has the ability to create mappings in the
> > guest's address space, e.g. to map PCI device BARs. I don't see a
> > conceptual difference here, although the security/permission model
> > certainly is more complicated.
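
Right. For comparison, on the KVM side the plan is that a file on the
DAX filesystem backs each guest's vNVDIMM. With Guangrong's QEMU
vNVDIMM patch series the invocation would look roughly like the
following (the exact options may still change before the series is
merged, and the paths and sizes here are only examples):

  qemu-system-x86_64 -machine pc,nvdimm=on \
      -m 2G,slots=2,maxmem=8G \
      -object memory-backend-file,id=mem1,share=on,mem-path=/mnt/dax/guest1.img,size=1G \
      -device nvdimm,id=nv1,memdev=mem1

QEMU mmap()s the backing file into its own address space and builds the
virtual NFIT describing that region to the guest, so the file does not
need to be contiguous on the physical NVDIMM; only the guest-physical
view of it is contiguous.
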
> I imagine that mmap'ing these /dev/pmemXX devices requires root
> privileges, does it not?

Yes, unless we assign non-root access permissions to /dev/pmemXX (but
this is not the default behavior of the Linux kernel so far).

> I wouldn't encourage the introduction of anything else that requires
> root privileges in QEMU. With QEMU running as non-root by default in
> 4.7, the feature will not be available unless users explicitly ask to
> run QEMU as root (which they really shouldn't).

Yes, I'll include those privileged operations in the design document.

Haozhong
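
P.S. To make the privileged part concrete, here is a minimal sketch of
the userspace mapping operation we are talking about (the device path
and mapping length are only examples). On a default install the open()
below is the step that fails for a non-root QEMU, since the kernel does
not make /dev/pmemXX accessible to unprivileged users:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      /* /dev/pmem0 is not accessible to unprivileged processes by
       * default, so this open() needs root (or relaxed permissions
       * on the device node). */
      int fd = open("/dev/pmem0", O_RDWR);
      if (fd < 0) {
          perror("open /dev/pmem0");
          return 1;
      }

      /* Map 1 GB of the device; MAP_SHARED so that stores reach the
       * NVDIMM rather than a private copy. */
      size_t len = 1UL << 30;
      void *va = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                      fd, 0);
      if (va == MAP_FAILED) {
          perror("mmap");
          close(fd);
          return 1;
      }

      /* ... QEMU would hand this range to the guest mapping code ... */

      munmap(va, len);
      close(fd);
      return 0;
  }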