From: Haozhong Zhang
Subject: Re: [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu
Date: Wed, 27 Jan 2016 10:23:14 +0800
Message-ID: <20160127022314.GA13489@hz-desktop.sh.intel.com>
References: <56A095E3.5060507@linux.intel.com> <56A0AA8A02000078000C977D@prv-mh.provo.novell.com> <56A0A09A.2050101@linux.intel.com> <56A0C02A02000078000C9823@prv-mh.provo.novell.com> <20160121140103.GB6362@hz-desktop.sh.intel.com> <56A0FEA102000078000C9A44@prv-mh.provo.novell.com> <56A7785802000078000CB0CD@prv-mh.provo.novell.com> <20160126153000.GA6293@hz-desktop.sh.intel.com> <56A7A57A02000078000CB2B7@prv-mh.provo.novell.com>
In-Reply-To: <56A7A57A02000078000CB2B7@prv-mh.provo.novell.com>
To: Jan Beulich
Cc: Kevin Tian, Wei Liu, Ian Campbell, Stefano Stabellini, George Dunlap, Andrew Cooper, Ian Jackson, "xen-devel@lists.xen.org", Jun Nakajima, Xiao Guangrong, Keir Fraser
List-Id: xen-devel@lists.xenproject.org

On 01/26/16 08:57, Jan Beulich wrote:
> >>> On 26.01.16 at 16:30, wrote:
> > On 01/26/16 05:44, Jan Beulich wrote:
> >> Interesting. This isn't the usage model I have been thinking about
> >> so far. Having just gone back to the original 0/4 mail, I'm afraid
> >> we're really left guessing, and you guessed differently than I did.
> >> My understanding of the intentions of PMEM so far was that this
> >> is a high-capacity, slower than DRAM but much faster than e.g.
> >> swapping to disk alternative to normal RAM. I.e. the persistent
> >> aspect of it wouldn't matter at all in this case (other than for PBLK,
> >> obviously).
> >
> > Of course, pmem could be used in the way you describe because of its
> > 'ram' aspect, but I think the more meaningful usage comes from its
> > persistent aspect. For example, a journaling file system could keep
> > its log in pmem rather than in normal RAM, so that if a power failure
> > happens before the in-memory log has been completely written to disk,
> > there is still a chance to recover it from pmem on the next boot
> > rather than losing it entirely.
>
> Well, that leaves open how that file system would find its log
> after reboot, or how that log is protected from clobbering by
> another OS booted in between.
>

That would depend on the concrete design of the particular OS or
application; the above is just one example of how the persistent aspect
could be used.
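To make the idea a bit more concrete: once the log area is mapped,
making a record durable is essentially a store followed by cache-line
flushes and a fence. A minimal sketch (illustration only, not code from
this series; real code would prefer CLWB/CLFLUSHOPT where available,
e.g. via libpmem):

#include <stdint.h>
#include <string.h>
#include <emmintrin.h>          /* _mm_clflush(), _mm_sfence() */

#define CACHELINE 64

/* Copy a log record into an already-mapped pmem buffer and flush it
 * out of the CPU caches so that it reaches the persistence domain. */
static void pmem_store_record(void *pmem_dst, const void *rec, size_t len)
{
    uintptr_t p = (uintptr_t)pmem_dst & ~(uintptr_t)(CACHELINE - 1);
    uintptr_t end = (uintptr_t)pmem_dst + len;

    memcpy(pmem_dst, rec, len);

    for ( ; p < end; p += CACHELINE )
        _mm_clflush((const void *)p);

    _mm_sfence();   /* order the flushes before a later "record valid" flag */
}

When exactly the flushed data actually becomes durable is of course
platform-dependent.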

> >> However, thinking through your usage model I have problems
> >> seeing it work in a reasonable way even with virtualization left
> >> aside: To my knowledge there's no established protocol on how
> >> multiple parties (different versions of the same OS, or even
> >> completely different OSes) would arbitrate using such memory
> >> ranges. And even for a single OS it is, other than for disks (and
> >> hence PBLK), not immediately clear how it would communicate
> >> from one boot to another what information got stored where,
> >> or how it would react to some or all of this storage having
> >> disappeared (just like a disk which got removed, which - unless
> >> it held the boot partition - would normally have pretty little
> >> effect on the OS coming back up).
> >
> > The label storage area is a persistent area on an NVDIMM that can be
> > used to store partition information. It is not part of pmem (i.e. the
> > part that is mapped into the system address space); instead, it can
> > only be accessed through the NVDIMM _DSM method [1]. What is stored
> > there and how it is interpreted are left to software. One option is
> > to follow the NVDIMM Namespace Specification [2] and store an array
> > of labels, each describing the start address (relative to the base of
> > pmem) and the size of one partition, which is called a namespace. On
> > Linux, each namespace is exposed as a /dev/pmemXX device.
>
> According to what I've just read in one of the documents Konrad
> pointed us to, there can be just one PMEM label per DIMM. Unless
> I misread of course...
>

My mistake: there can indeed be only one PMEM label per DIMM.

Haozhong
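P.S. For anyone who hasn't looked at [2]: a "label" in that sense is a
small fixed-size record kept in the label storage area. Conceptually it
looks something like the sketch below (field names and sizes are my
paraphrase for illustration, not the exact layout from the spec):

#include <stdint.h>

/* Rough, illustrative shape of a namespace label as described by the
 * NVDIMM Namespace Specification [2]; not the exact on-media layout. */
struct namespace_label_sketch {
    uint8_t  uuid[16];     /* identifies the namespace this label belongs to */
    char     name[64];     /* optional human-readable name */
    uint32_t flags;
    uint64_t dpa;          /* start address within the DIMM's pmem range */
    uint64_t rawsize;      /* size of the described range */
    uint32_t slot;         /* index of this label's slot in the storage area */
    /* ... checksum and further fields omitted ... */
};

A pmem namespace is then simply the region such a label describes, and,
as you point out, there can be only one PMEM label per DIMM.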