From: Haozhong Zhang <haozhong.zhang@intel.com>
To: qemu-devel@nongnu.org, xen-devel@lists.xen.org
Cc: Haozhong Zhang <haozhong.zhang@intel.com>,
Xiao Guangrong <guangrong.xiao@linux.intel.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Igor Mammedov <imammedo@redhat.com>,
Konrad Rzeszutek Wilk <konrad@darnok.org>,
Dan Williams <dan.j.williams@intel.com>
Subject: [RFC QEMU PATCH v2 06/10] nvdimm acpi: build and copy NVDIMM namespace devices to guest on Xen
Date: Mon, 20 Mar 2017 08:12:45 +0800 [thread overview]
Message-ID: <20170320001249.25521-7-haozhong.zhang@intel.com> (raw)
In-Reply-To: <20170320001249.25521-1-haozhong.zhang@intel.com>
Build and copy NVDIMM namespace devices to the guest when QEMU runs as
the Xen device model. Only the body of each AML device is built and
copied; Xen hvmloader completes the remaining parts of the namespace
devices and puts them in an SSDT.
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
---
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
hw/acpi/nvdimm.c | 55 ++++++++++++++++++++++++++++++++++++++-----------------
1 file changed, 38 insertions(+), 17 deletions(-)
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 2509561729..1e077eca25 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -1222,22 +1222,8 @@ static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
}
}
-static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
- BIOSLinker *linker, GArray *dsm_dma_arrea,
- uint32_t ram_slots)
+static void nvdimm_build_devices(Aml *dev, uint32_t ram_slots)
{
- Aml *ssdt, *sb_scope, *dev;
- int mem_addr_offset, nvdimm_ssdt;
-
- acpi_add_table(table_offsets, table_data);
-
- ssdt = init_aml_allocator();
- acpi_data_push(ssdt->buf, sizeof(AcpiTableHeader));
-
- sb_scope = aml_scope("\\_SB");
-
- dev = aml_device("NVDR");
-
/*
* ACPI 6.0: 9.20 NVDIMM Devices:
*
@@ -1258,6 +1244,25 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
nvdimm_build_fit(dev);
nvdimm_build_nvdimm_devices(dev, ram_slots);
+}
+
+static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
+ BIOSLinker *linker, GArray *dsm_dma_arrea,
+ uint32_t ram_slots)
+{
+ Aml *ssdt, *sb_scope, *dev;
+ int mem_addr_offset, nvdimm_ssdt;
+
+ acpi_add_table(table_offsets, table_data);
+
+ ssdt = init_aml_allocator();
+ acpi_data_push(ssdt->buf, sizeof(AcpiTableHeader));
+
+ sb_scope = aml_scope("\\_SB");
+
+ dev = aml_device("NVDR");
+
+ nvdimm_build_devices(dev, ram_slots);
aml_append(sb_scope, dev);
aml_append(ssdt, sb_scope);
@@ -1281,6 +1286,18 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
free_aml_allocator();
}
+static void nvdimm_build_xen_nvdimm_devices(uint32_t ram_slots)
+{
+ Aml *dev = init_aml_allocator();
+
+ nvdimm_build_devices(dev, ram_slots);
+ build_append_named_dword(dev->buf, NVDIMM_ACPI_MEM_ADDR);
+ xen_acpi_copy_to_guest("NVDR", dev->buf->data, dev->buf->len,
+ XEN_DM_ACPI_BLOB_TYPE_NSDEV);
+
+ free_aml_allocator();
+}
+
void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
BIOSLinker *linker, AcpiNVDIMMState *state,
uint32_t ram_slots)
@@ -1292,8 +1309,12 @@ void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
return;
}
- nvdimm_build_ssdt(table_offsets, table_data, linker, state->dsm_mem,
- ram_slots);
+ if (!xen_enabled()) {
+ nvdimm_build_ssdt(table_offsets, table_data, linker, state->dsm_mem,
+ ram_slots);
+ } else {
+ nvdimm_build_xen_nvdimm_devices(ram_slots);
+ }
device_list = nvdimm_get_device_list();
/* no NVDIMM device is plugged. */
--
2.12.0