From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: linux-nvdimm@ml01.01.org, xen-devel@lists.xenproject.org,
Juergen Gross <jgross@suse.com>,
Xiao Guangrong <guangrong.xiao@linux.intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
Arnd Bergmann <arnd@arndb.de>,
Johannes Thumshirn <jthumshirn@suse.de>,
linux-kernel@vger.kernel.org,
Stefano Stabellini <stefano@aporeto.com>,
David Vrabel <david.vrabel@citrix.com>,
Ross Zwisler <ross.zwisler@linux.intel.com>,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
Dan Williams <dan.j.williams@intel.com>
Subject: Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen
Date: Tue, 11 Oct 2016 14:48:04 -0400
Message-ID: <20161011184804.GD23193@localhost.localdomain>
In-Reply-To: <de62aa59-37e0-b01f-1617-6fc8f6fb3620@citrix.com>
On Tue, Oct 11, 2016 at 07:37:09PM +0100, Andrew Cooper wrote:
> On 11/10/16 06:52, Haozhong Zhang wrote:
> > On 10/10/16 17:43, Andrew Cooper wrote:
> >> On 10/10/16 01:35, Haozhong Zhang wrote:
> >>> Overview
> >>> ========
> >>> This RFC kernel patch series, along with the corresponding patch series
> >>> for Xen, QEMU and ndctl, implements Xen vNVDIMM, which maps host
> >>> NVDIMM devices into a Xen HVM domU as vNVDIMM devices.
> >>>
> >>> The Xen hypervisor does not include an NVDIMM driver, so it needs
> >>> assistance from the driver in the Dom0 Linux kernel to manage NVDIMM
> >>> devices. We currently only support NVDIMM devices in pmem mode.
> >>>
> >>> Design and Implementation
> >>> =========================
> >>> The complete design can be found at
> >>> https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg01921.html.
> >>>
> >>> All patch series can be found at
> >>> Xen: https://github.com/hzzhan9/xen.git nvdimm-rfc-v1
> >>> QEMU: https://github.com/hzzhan9/qemu.git xen-nvdimm-rfc-v1
> >>> Linux kernel: https://github.com/hzzhan9/nvdimm.git xen-nvdimm-rfc-v1
> >>> ndctl: https://github.com/hzzhan9/ndctl.git pfn-xen-rfc-v1
> >>>
> >>> The Xen hypervisor needs assistance from the Dom0 Linux kernel for the following tasks:
> >>> 1) Reserve an area on NVDIMM devices for Xen hypervisor to place
> >>> memory management data structures, i.e. frame table and M2P table.
> >>> 2) Report SPA ranges of NVDIMM devices and the reserved area to Xen
> >>> hypervisor.
> >> Please can we take a step back here before diving down a rabbit hole.
> >>
> >>
> >> How do pblk/pmem regions appear in the E820 map at boot? At the very
> >> least, I would expect a large reserved region.
> > The ACPI specification does not require them to appear in the E820 map,
> > though it defines E820 type 7 for persistent memory.
>
> Ok, so we might get some E820 type-7 ranges, or some holes.
>
> >
> >> Is the MFN information (SPA in your terminology, so far as I can tell)
> >> available in any static ACPI tables, or is it only available as a
> >> result of executing AML methods?
> >>
> > For NVDIMM devices already plugged in at power-on, their MFN information
> > can be obtained from the NFIT table. However, MFN information for
> > hotplugged NVDIMM devices must be obtained via the AML _FIT method, so
> > point 2) is needed.
>
> How does NVDIMM hotplug compare to RAM hotplug? Are the hotplug regions
> described at boot and marked as initially not present, or do you only
> know the hotplugged SPA at the point that it is hotplugged?
The latter. You have no idea of the size until you get an ACPI hotplug
notification. The notification carries the updated NFIT data (via the _FIT
method), so based on that you can populate the machine.
>
> I certainly agree that there needs to be a propagation of the hotplug
> notification from OSPM to Xen, which will involve some glue in the Xen
> subsystem in Linux, but I would expect that this would be similar to the
> existing plain RAM hotplug mechanism.
I am actually not sure how the ACPI RAM hotplug mechanism is supposed to work
in practice. I thought the regions are marked as reserved in the E820 and the
hotplugged RAM then slots nicely in there.