From: Haozhong Zhang <haozhong.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Xiao Guangrong <guangrong.xiao@linux.intel.com>,
	Keir Fraser <keir@xen.org>
Subject: Re: [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu
Date: Tue, 26 Jan 2016 23:30:00 +0800	[thread overview]
Message-ID: <20160126153000.GA6293@hz-desktop.sh.intel.com> (raw)
In-Reply-To: <56A7785802000078000CB0CD@prv-mh.provo.novell.com>

On 01/26/16 05:44, Jan Beulich wrote:
> >>> On 26.01.16 at 12:44, <George.Dunlap@eu.citrix.com> wrote:
> > On Thu, Jan 21, 2016 at 2:52 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>>>> On 21.01.16 at 15:01, <haozhong.zhang@intel.com> wrote:
> >>> On 01/21/16 03:25, Jan Beulich wrote:
> >>>> >>> On 21.01.16 at 10:10, <guangrong.xiao@linux.intel.com> wrote:
> >>>> > c) hypervisor should manage the PMEM resource pool and partition it among
> >>>> >     multiple VMs.
> >>>>
> >>>> Yes.
> >>>>
> >>>
> >>> But I still do not quite understand this part: why must pmem resource
> >>> management and partitioning be done in the hypervisor?
> >>
> >> Because that's where memory management belongs. And PMEM,
> >> other than PBLK, is just another form of RAM.
> > 
> > I haven't looked more deeply into the details of this, but this
> > argument doesn't seem right to me.
> > 
> > Normal RAM in Xen is what might be called "fungible" -- at boot, all
> > RAM is zeroed, and it basically doesn't matter at all what RAM is
> > given to what guest.  (There are restrictions of course: lowmem for
> > DMA, contiguous superpages, &c; but within those groups, it doesn't
> > matter *which* bit of lowmem you get, as long as you get enough to do
> > your job.)  If you reboot your guest or hand RAM back to the
> > hypervisor, you assume that everything in it will disappear.  When you
> > ask for RAM, you can request some parameters that it will have
> > (lowmem, on a specific node, &c), but you can't request a specific
> > page that you had before.
> > 
> > This is not the case for PMEM.  The whole point of PMEM (correct me if
> > I'm wrong) is to be used for long-term storage that survives over
> > reboot.  It matters very much that a guest be given the same PRAM
> > after the host is rebooted that it was given before.  It doesn't make
> > any sense to manage it the way Xen currently manages RAM (i.e., that
> > you request a page and get whatever Xen happens to give you).
> 
> Interesting. This isn't the usage model I have been thinking about
> so far. Having just gone back to the original 0/4 mail, I'm afraid
> we're really left guessing, and you guessed differently than I did.
> My understanding of the intentions of PMEM so far was that this
> is a high-capacity, slower than DRAM but much faster than e.g.
> swapping to disk alternative to normal RAM. I.e. the persistent
> aspect of it wouldn't matter at all in this case (other than for PBLK,
> obviously).
>

Of course, pmem could be used in the way you describe because of its
'RAM' aspect. But I think the more meaningful usage comes from its
persistence. For example, a journaling file system could store its
logs in pmem rather than in normal RAM, so that if a power failure
happens before those in-memory logs are completely written to disk,
there is still a chance to recover them from pmem on the next boot
(rather than abandoning all of them).

(I'm still writing the design doc, which will include more details of
the underlying hardware and of the software interface of NVDIMM
exposed by current Linux.)

> However, thinking through your usage model I have problems
> seeing it work in a reasonable way even with virtualization left
> aside: To my knowledge there's no established protocol on how
> multiple parties (different versions of the same OS, or even
> completely different OSes) would arbitrate using such memory
> ranges. And even for a single OS it is, other than for disks (and
> hence PBLK), not immediately clear how it would communicate
> from one boot to another what information got stored where,
> or how it would react to some or all of this storage having
> disappeared (just like a disk which got removed, which - unless
> it held the boot partition - would normally have pretty little
> effect on the OS coming back up).
>

The label storage area is a persistent area on an NVDIMM that can be
used to store partition information. It is not included in pmem (the
part that is mapped into the system address space); instead, it can
only be accessed through the NVDIMM _DSM method [1]. However, what
contents are stored there and how they are interpreted are left to
software. One way is to follow the NVDIMM Namespace Specification [2]
and store an array of labels that describe the start address (from
base 0 of pmem) and the size of each partition, which is called a
namespace. On Linux, each namespace is exposed as a /dev/pmemXX
device.

In virtualization, the (virtual) label storage area of a vNVDIMM and
the corresponding _DSM method are emulated by QEMU. The virtual label
storage area is not written to the host one; instead, we can reserve
an area on pmem for the virtual one.

Besides namespaces, we can also create DAX file systems on pmem and
use files as partitions.

Haozhong

> > So if Xen is going to use PMEM, it will have to invent an entirely new
> > interface for guests, and it will have to keep track of those
> > resources across host reboots.  In other words, it will have to
> > duplicate all the work that Linux already does.  What do we gain from
> > that duplication?  Why not just leverage what's already implemented in
> > dom0?
> 
> Indeed if my guessing on the intentions was wrong, then the
> picture completely changes (also for the points you've made
> further down).
> 
> Jan
> 

Thread overview: 88+ messages
2015-12-29 11:31 [PATCH 0/4] add support for vNVDIMM Haozhong Zhang
2015-12-29 11:31 ` [PATCH 1/4] x86/hvm: allow guest to use clflushopt and clwb Haozhong Zhang
2015-12-29 15:46   ` Andrew Cooper
2015-12-30  1:35     ` Haozhong Zhang
2015-12-30  2:16       ` Haozhong Zhang
2015-12-30 10:33         ` Andrew Cooper
2015-12-29 11:31 ` [PATCH 2/4] x86/hvm: add support for pcommit instruction Haozhong Zhang
2015-12-29 11:31 ` [PATCH 3/4] tools/xl: add a new xl configuration 'nvdimm' Haozhong Zhang
2016-01-04 11:16   ` Wei Liu
2016-01-06 12:40   ` Jan Beulich
2016-01-06 15:28     ` Haozhong Zhang
2015-12-29 11:31 ` [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu Haozhong Zhang
2016-01-15 17:10   ` Jan Beulich
2016-01-18  0:52     ` Haozhong Zhang
2016-01-18  8:46       ` Jan Beulich
2016-01-19 11:37         ` Wei Liu
2016-01-19 11:46           ` Jan Beulich
2016-01-20  5:14             ` Tian, Kevin
2016-01-20  5:58               ` Zhang, Haozhong
2016-01-20  5:31         ` Haozhong Zhang
2016-01-20  8:46           ` Jan Beulich
2016-01-20  8:58             ` Andrew Cooper
2016-01-20 10:15               ` Haozhong Zhang
2016-01-20 10:36                 ` Xiao Guangrong
2016-01-20 13:16                   ` Andrew Cooper
2016-01-20 14:29                     ` Stefano Stabellini
2016-01-20 14:42                       ` Haozhong Zhang
2016-01-20 14:45                       ` Andrew Cooper
2016-01-20 14:53                         ` Haozhong Zhang
2016-01-20 15:13                           ` Konrad Rzeszutek Wilk
2016-01-20 15:29                             ` Haozhong Zhang
2016-01-20 15:41                               ` Konrad Rzeszutek Wilk
2016-01-20 15:54                                 ` Haozhong Zhang
2016-01-21  3:35                                 ` Bob Liu
2016-01-20 15:05                         ` Stefano Stabellini
2016-01-20 18:14                           ` Andrew Cooper
2016-01-20 14:38                     ` Haozhong Zhang
2016-01-20 11:04             ` Haozhong Zhang
2016-01-20 11:20               ` Jan Beulich
2016-01-20 15:29                 ` Xiao Guangrong
2016-01-20 15:47                   ` Konrad Rzeszutek Wilk
2016-01-20 16:25                     ` Xiao Guangrong
2016-01-20 16:47                       ` Konrad Rzeszutek Wilk
2016-01-20 16:55                         ` Xiao Guangrong
2016-01-20 17:18                           ` Konrad Rzeszutek Wilk
2016-01-20 17:23                             ` Xiao Guangrong
2016-01-20 17:48                               ` Konrad Rzeszutek Wilk
2016-01-21  3:12                             ` Haozhong Zhang
2016-01-20 17:07                   ` Jan Beulich
2016-01-20 17:17                     ` Xiao Guangrong
2016-01-21  8:18                       ` Jan Beulich
2016-01-21  8:25                         ` Xiao Guangrong
2016-01-21  8:53                           ` Jan Beulich
2016-01-21  9:10                             ` Xiao Guangrong
2016-01-21  9:29                               ` Andrew Cooper
2016-01-21 10:26                                 ` Jan Beulich
2016-01-21 10:25                               ` Jan Beulich
2016-01-21 14:01                                 ` Haozhong Zhang
2016-01-21 14:52                                   ` Jan Beulich
2016-01-22  2:43                                     ` Haozhong Zhang
2016-01-26 11:44                                     ` George Dunlap
2016-01-26 12:44                                       ` Jan Beulich
2016-01-26 12:54                                         ` Juergen Gross
2016-01-26 14:44                                           ` Konrad Rzeszutek Wilk
2016-01-26 15:37                                             ` Jan Beulich
2016-01-26 15:57                                               ` Haozhong Zhang
2016-01-26 16:34                                                 ` Jan Beulich
2016-01-26 19:32                                                   ` Konrad Rzeszutek Wilk
2016-01-27  7:22                                                     ` Haozhong Zhang
2016-01-27 10:16                                                     ` Jan Beulich
2016-01-27 14:50                                                       ` Konrad Rzeszutek Wilk
2016-01-27 10:55                                                   ` George Dunlap
2016-01-26 13:58                                         ` George Dunlap
2016-01-26 14:46                                           ` Konrad Rzeszutek Wilk
2016-01-26 15:30                                         ` Haozhong Zhang [this message]
2016-01-26 15:33                                           ` Haozhong Zhang
2016-01-26 15:57                                           ` Jan Beulich
2016-01-27  2:23                                             ` Haozhong Zhang
2016-01-20 15:07               ` Konrad Rzeszutek Wilk
2016-01-06 15:37 ` [PATCH 0/4] add support for vNVDIMM Ian Campbell
2016-01-06 15:47   ` Haozhong Zhang
2016-01-20  3:28 ` Tian, Kevin
2016-01-20 12:43   ` Stefano Stabellini
2016-01-20 14:26     ` Zhang, Haozhong
2016-01-20 14:35       ` Stefano Stabellini
2016-01-20 14:47         ` Zhang, Haozhong
2016-01-20 14:54           ` Andrew Cooper
2016-01-20 15:59             ` Haozhong Zhang
