From: Haozhong Zhang <haozhong.zhang@intel.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xensource.com,
	Xiao Guangrong <guangrong.xiao@linux.intel.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Anthony.Perard@citrix.com, Igor Mammedov <imammedo@redhat.com>,
	Richard Henderson <rth@twiddle.net>
Subject: Re: [PATCH 0/2] add vNVDIMM support for Xen
Date: Tue, 5 Jan 2016 09:33:37 +0800
Message-ID: <20160105013337.GF3619@hz-desktop.sh.intel.com>
In-Reply-To: <alpine.DEB.2.02.1601041551140.27577@kaball.uk.xensource.com>

On 01/04/16 15:56, Stefano Stabellini wrote:
> On Tue, 29 Dec 2015, Haozhong Zhang wrote:
> > This patch series extends the current vNVDIMM implementation to provide
> > vNVDIMM to HVM domains when QEMU is used as the device model of Xen.
> > 
> > This patch series is based on upstream QEMU rather than qemu-xen,
> > because vNVDIMM support has not yet been merged into qemu-xen (as of
> > commit f165e58).
> > 
> > * The following two problems, which prevent Xen from directly using
> >   the current implementation, are solved by this patch series.
> > 
> >  (1) The current way of allocating memory for a pc-dimm based vNVDIMM
> >      through a file-backed memory backend is not compatible with
> >      Xen. Patch 1 adds a new pc-nvdimm device that manages the
> >      vNVDIMM's memory by itself.
> >
> >      A pc-nvdimm device can only be used with Xen and is specified by
> >      parameters like
> >             -device pc-nvdimm,file=/dev/pmem0,size=MBYTES
> >
> >  (2) Xen uses its hvmloader rather than QEMU to build guest ACPI
> >      tables. In order to reuse as much code as possible, Patch 2 calls
> >      the existing QEMU code to build guest ACPI tables for pc-nvdimm
> >      devices and passes them to hvmloader.
> 
> I don't think that is acceptable: it would introduce a VM build-time
> dependency between QEMU and hvmloader, which I think is undesirable.
>
Guess I should not have said "calls ... QEMU code". In fact, QEMU copies
some ACPI tables into the guest and writes their location and size to
xenstore so that hvmloader can find and load those tables. Because
hvmloader uses a well-known set of xenstore keys to find the ACPI
tables, I think it does not tightly depend on QEMU (those keys and ACPI
tables could also be prepared by other device models or by the Xen
toolstack).
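
To make this concrete, the handshake could look something like the
following (the key names and values below are made up for illustration
only; they are not what the patches actually use):

            # Device model side: after copying the tables into guest
            # memory, publish their guest-physical address and size.
            xenstore-write /local/domain/$domid/hvmloader/acpi-tables/address 0xfc000000
            xenstore-write /local/domain/$domid/hvmloader/acpi-tables/length 0x2000

            # hvmloader side: read the keys back, then load the tables
            # from that guest-physical address.
            xenstore-read /local/domain/$domid/hvmloader/acpi-tables/address

(The xenstore CLI is shown only for illustration; QEMU and hvmloader
would of course talk to xenstore programmatically rather than via the
command-line tools.)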

> Please note that Anthony is working on a way to pass ACPI tables from
> the toolstack to hvmloader: 
> 
> http://marc.info/?l=xen-devel&m=144587582606159
> 
> I would build this work on top of his series.
>
I'll take a look at that work.

Thanks,
Haozhong

> 
> > * Test
> >  (1) A patched Xen is used for testing. The Xen patch series is sent
> >      separately under the title "[PATCH 0/4] add support for vNVDIMM".
> >
> >  (2) Prepare a memory backend file:
> >             dd if=/dev/zero of=/tmp/nvm0 bs=1G count=10
> > 
> >  (3) Add the following line to an HVM domain's xl.cfg configuration
> >      file:
> >             nvdimm = [ 'file=/tmp/nvm0,size=10240' ]
> > 
> >  (4) Launch an HVM domain from the above xl.cfg.
> > 
> >  (5) If the guest Linux kernel is 4.2 or newer and the kernel modules
> >      libnvdimm, nfit, nd_btt and nd_pmem are loaded, you will see the
> >      whole NVDIMM device exposed as a single namespace, and /dev/pmem0
> >      will appear (a quick check is sketched below).
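> >
> >      A quick way to verify this inside the guest (illustrative
> >      commands; "modprobe -a" assumes those drivers were built as
> >      modules for the running kernel):
> >             modprobe -a libnvdimm nfit nd_btt nd_pmem
> >             ls -l /dev/pmem0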
> > 
> > 
> > 
> > Haozhong Zhang (2):
> >   pc-nvdimm: implement pc-nvdimm device abstract
> >   pc-nvdimm acpi: build ACPI tables for pc-nvdimm devices
> > 
> >  hw/acpi/nvdimm.c           |   5 +-
> >  hw/i386/pc.c               |   6 +-
> >  hw/mem/Makefile.objs       |   1 +
> >  hw/mem/pc-nvdimm.c         | 239 +++++++++++++++++++++++++++++++++++++++++++++
> >  include/hw/mem/pc-nvdimm.h |  49 ++++++++++
> >  include/hw/xen/xen.h       |   2 +
> >  xen-hvm.c                  |  73 ++++++++++++++
> >  7 files changed, 373 insertions(+), 2 deletions(-)
> >  create mode 100644 hw/mem/pc-nvdimm.c
> >  create mode 100644 include/hw/mem/pc-nvdimm.h
> > 
> > -- 
> > 2.4.8
> > 

Thread overview: 15+ messages
2015-12-29 11:28 [PATCH 0/2] add vNVDIMM support for Xen Haozhong Zhang
2015-12-29 11:28 ` [PATCH 1/2] pc-nvdimm: implement pc-nvdimm device abstract Haozhong Zhang
2015-12-29 11:28 ` [PATCH 2/2] pc-nvdimm acpi: build ACPI tables for pc-nvdimm devices Haozhong Zhang
2016-01-04 16:01   ` Stefano Stabellini
2016-01-04 21:10     ` Konrad Rzeszutek Wilk
2016-01-05 11:00       ` Stefano Stabellini
2016-01-05 14:01         ` Haozhong Zhang
2016-01-06 14:50           ` Konrad Rzeszutek Wilk
2016-01-06 15:24             ` Haozhong Zhang
2016-01-05  2:14     ` Haozhong Zhang
2015-12-29 15:11 ` [PATCH 0/2] add vNVDIMM support for Xen Xiao Guangrong
2016-01-04 15:57   ` Stefano Stabellini
2016-01-05  1:22     ` Haozhong Zhang
2016-01-04 15:56 ` Stefano Stabellini
2016-01-05  1:33   ` Haozhong Zhang [this message]
