* [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support
@ 2016-11-03 18:36 Xiao Guangrong
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: prebuild nvdimm devices for available slots Xiao Guangrong
` (5 more replies)
0 siblings, 6 replies; 16+ messages in thread
From: Xiao Guangrong @ 2016-11-03 18:36 UTC (permalink / raw)
To: pbonzini, imammedo
Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams, kvm,
qemu-devel, Xiao Guangrong
Hi Michael,
This patchset can replace the patches from [PULL 36/47] to [PULL 39/47]
in your pull request:
[PULL 36/47] nvdimm acpi: prebuild nvdimm devices for available slots
[PULL 37/47] nvdimm acpi: introduce fit buffer
[PULL 38/47] nvdimm acpi: introduce _FIT
[PULL 39/47] pc: memhp: enable nvdimm device hotplug
Thanks for your patience, and thanks to Igor and Stefan for their review.
Changelog in v5:
1) make the definition of DSM return values a separate patch
2) drop the code stopping nvdimm hot remove in pc_dimm_unplug()
3) drop the length field in the output of Read_FIT
4) doc fixes & improvements
5) better names for some functions
6) code cleanup
Changelog in v4:
1) drop fit lock and post_hotplug_cb
2) move nvdimm hotplug code to hw/acpi/nvdimm.c
3) introduce a length field to indicate the FIT size
4) nvdimm acpi cleanup
5) doc typo fixes
Changelog in v3:
1) use a dedicated interrupt for nvdimm device hotplug
2) stop nvdimm device hot unplug
3) reserve a UUID and a handle for QEMU internal use
4) redesign the FIT buffer to avoid OSPM reading incomplete FIT info
5) bug fixes and cleanups
Changelog in v2:
Fixed signed integer overflow pointed out by Stefan Hajnoczi
This patchset enables nvdimm hotplug support; it works the same way as
pc-dimm hotplug. For example, a new nvdimm device can be hot-plugged as
follows:
object_add memory-backend-file,id=mem3,size=10G,mem-path=/home/eric/nvdimm3
device_add nvdimm,id=nvdimm3,memdev=mem3
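(For comparison - an illustration only, not part of this patchset - the
same backend and device can be cold-plugged on the command line,
assuming the machine was started with NVDIMM support and spare memory
slots, e.g. -machine pc,nvdimm -m 8G,slots=4,maxmem=32G:)
    -object memory-backend-file,id=mem3,size=10G,mem-path=/home/eric/nvdimm3
    -device nvdimm,id=nvdimm3,memdev=mem3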
Xiao Guangrong (5):
nvdimm acpi: prebuild nvdimm devices for available slots
nvdimm acpi: introduce fit buffer
nvdimm acpi: define DSM return codes
nvdimm acpi: introduce _FIT method
pc: memhp: enable nvdimm device hotplug
default-configs/mips-softmmu-common.mak | 1 +
docs/specs/acpi_nvdimm.txt | 66 ++++++-
hw/acpi/ich9.c | 8 +-
hw/acpi/nvdimm.c | 321 +++++++++++++++++++++++++++-----
hw/acpi/piix4.c | 7 +-
hw/i386/acpi-build.c | 9 +-
hw/i386/pc.c | 10 +
hw/mem/nvdimm.c | 4 -
include/hw/acpi/acpi_dev_interface.h | 1 +
include/hw/mem/nvdimm.h | 23 ++-
10 files changed, 390 insertions(+), 60 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 16+ messages in thread
* [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: prebuild nvdimm devices for available slots
2016-11-03 18:36 [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
@ 2016-11-03 18:36 ` Xiao Guangrong
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce fit buffer Xiao Guangrong
` (4 subsequent siblings)
5 siblings, 0 replies; 16+ messages in thread
From: Xiao Guangrong @ 2016-11-03 18:36 UTC (permalink / raw)
To: pbonzini, imammedo
Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams, kvm,
qemu-devel, Xiao Guangrong
For each NVDIMM present or intended to be supported by the platform,
platform firmware also exposes an ACPI Namespace Device under the root
device.
So this patch builds nvdimm ACPI devices for all memory slots to
support vNVDIMM hotplug.
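(For reference, a minimal sketch of the slot-to-handle mapping this
relies on - assumed here from the helper used below in
hw/acpi/nvdimm.c; handle 0 stays reserved for the root device:)
    static uint32_t nvdimm_slot_to_handle(int slot)
    {
        /* handle 0 is reserved for the NVDIMM root device */
        return slot + 1;
    }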
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
hw/acpi/nvdimm.c | 33 ++++++++++++++++++++-------------
hw/i386/acpi-build.c | 2 +-
include/hw/mem/nvdimm.h | 3 ++-
3 files changed, 23 insertions(+), 15 deletions(-)
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index bb896c9..58bc5c6 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -961,12 +961,11 @@ static void nvdimm_build_device_dsm(Aml *dev, uint32_t handle)
aml_append(dev, method);
}
-static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
+static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
{
- for (; device_list; device_list = device_list->next) {
- DeviceState *dev = device_list->data;
- int slot = object_property_get_int(OBJECT(dev), PC_DIMM_SLOT_PROP,
- NULL);
+ uint32_t slot;
+
+ for (slot = 0; slot < ram_slots; slot++) {
uint32_t handle = nvdimm_slot_to_handle(slot);
Aml *nvdimm_dev;
@@ -987,9 +986,9 @@ static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
}
}
-static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
- GArray *table_data, BIOSLinker *linker,
- GArray *dsm_dma_arrea)
+static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
+ BIOSLinker *linker, GArray *dsm_dma_arrea,
+ uint32_t ram_slots)
{
Aml *ssdt, *sb_scope, *dev;
int mem_addr_offset, nvdimm_ssdt;
@@ -1021,7 +1020,7 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
/* 0 is reserved for root device. */
nvdimm_build_device_dsm(dev, 0);
- nvdimm_build_nvdimm_devices(device_list, dev);
+ nvdimm_build_nvdimm_devices(dev, ram_slots);
aml_append(sb_scope, dev);
aml_append(ssdt, sb_scope);
@@ -1046,17 +1045,25 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
}
void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
- BIOSLinker *linker, GArray *dsm_dma_arrea)
+ BIOSLinker *linker, GArray *dsm_dma_arrea,
+ uint32_t ram_slots)
{
GSList *device_list;
- /* no NVDIMM device is plugged. */
+ /* no nvdimm device can be plugged. */
+ if (!ram_slots) {
+ return;
+ }
+
+ nvdimm_build_ssdt(table_offsets, table_data, linker, dsm_dma_arrea,
+ ram_slots);
+
device_list = nvdimm_get_plugged_device_list();
+ /* no NVDIMM device is plugged. */
if (!device_list) {
return;
}
+
nvdimm_build_nfit(device_list, table_offsets, table_data, linker);
- nvdimm_build_ssdt(device_list, table_offsets, table_data, linker,
- dsm_dma_arrea);
g_slist_free(device_list);
}
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index e999654..6ae4769 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2767,7 +2767,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
}
if (pcms->acpi_nvdimm_state.is_enabled) {
nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
- pcms->acpi_nvdimm_state.dsm_mem);
+ pcms->acpi_nvdimm_state.dsm_mem, machine->ram_slots);
}
/* Add tables supplied by user (if any) */
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 1cfe9e0..63a2b20 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -112,5 +112,6 @@ typedef struct AcpiNVDIMMState AcpiNVDIMMState;
void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
FWCfgState *fw_cfg, Object *owner);
void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
- BIOSLinker *linker, GArray *dsm_dma_arrea);
+ BIOSLinker *linker, GArray *dsm_dma_arrea,
+ uint32_t ram_slots);
#endif
--
1.8.3.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce fit buffer
2016-11-03 18:36 [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: prebuild nvdimm devices for available slots Xiao Guangrong
@ 2016-11-03 18:36 ` Xiao Guangrong
2016-11-04 10:08 ` Igor Mammedov
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: define DSM return codes Xiao Guangrong
` (3 subsequent siblings)
5 siblings, 1 reply; 16+ messages in thread
From: Xiao Guangrong @ 2016-11-03 18:36 UTC (permalink / raw)
To: pbonzini, imammedo
Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams, kvm,
qemu-devel, Xiao Guangrong
The buffer is used to save the FIT info for all the present nvdimm
devices and is updated whenever an nvdimm device is plugged or
unplugged. In a later patch, it will be used to construct the NVDIMM
ACPI _FIT method, which reflects the present nvdimm devices after
nvdimm hotplug.
As the FIT buffer cannot be completely mapped into guest address space,
OSPM will exit to QEMU multiple times; however, there is a race
condition - the FIT may be changed during these multiple exits - so we
mark @dirty whenever the buffer is updated.
@dirty is cleared the first time OSPM reads the FIT buffer; if dirty is
detected during a later access, OSPM will restart the read from the
beginning.
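A minimal, self-contained C sketch of the @dirty handshake described
above (an illustration with simplified names, not the patch's code; the
0x100 "FIT changed" status is defined later in the series):
    #include <stdbool.h>
    #include <stdint.h>
    typedef struct {
        bool dirty; /* set by QEMU whenever the FIT is rebuilt */
    } FitBufSketch;
    /* Returns 0 on success, 0x100 if the FIT changed mid-read. */
    static uint32_t fit_read_status(FitBufSketch *buf, uint32_t offset)
    {
        if (offset == 0) {
            buf->dirty = false; /* first chunk: start a new read cycle */
        } else if (buf->dirty) {
            return 0x100;       /* OSPM must restart from offset 0 */
        }
        return 0;               /* success: next chunk can be copied */
    }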
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
hw/acpi/nvdimm.c | 59 +++++++++++++++++++++++++++++++------------------
hw/i386/acpi-build.c | 2 +-
hw/i386/pc.c | 4 ++++
include/hw/mem/nvdimm.h | 21 +++++++++++++++++-
4 files changed, 62 insertions(+), 24 deletions(-)
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 58bc5c6..73db3a7 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -33,35 +33,30 @@
#include "hw/nvram/fw_cfg.h"
#include "hw/mem/nvdimm.h"
-static int nvdimm_plugged_device_list(Object *obj, void *opaque)
+static int nvdimm_device_list(Object *obj, void *opaque)
{
GSList **list = opaque;
if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
- DeviceState *dev = DEVICE(obj);
-
- if (dev->realized) { /* only realized NVDIMMs matter */
- *list = g_slist_append(*list, DEVICE(obj));
- }
+ *list = g_slist_append(*list, DEVICE(obj));
}
- object_child_foreach(obj, nvdimm_plugged_device_list, opaque);
+ object_child_foreach(obj, nvdimm_device_list, opaque);
return 0;
}
/*
- * inquire plugged NVDIMM devices and link them into the list which is
+ * inquire NVDIMM devices and link them into the list which is
* returned to the caller.
*
* Note: it is the caller's responsibility to free the list to avoid
* memory leak.
*/
-static GSList *nvdimm_get_plugged_device_list(void)
+static GSList *nvdimm_get_device_list(void)
{
GSList *list = NULL;
- object_child_foreach(qdev_get_machine(), nvdimm_plugged_device_list,
- &list);
+ object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list);
return list;
}
@@ -219,7 +214,7 @@ static uint32_t nvdimm_slot_to_dcr_index(int slot)
static NVDIMMDevice *nvdimm_get_device_by_handle(uint32_t handle)
{
NVDIMMDevice *nvdimm = NULL;
- GSList *list, *device_list = nvdimm_get_plugged_device_list();
+ GSList *list, *device_list = nvdimm_get_device_list();
for (list = device_list; list; list = list->next) {
NVDIMMDevice *nvd = list->data;
@@ -348,8 +343,9 @@ static void nvdimm_build_structure_dcr(GArray *structures, DeviceState *dev)
(DSM) in DSM Spec Rev1.*/);
}
-static GArray *nvdimm_build_device_structure(GSList *device_list)
+static GArray *nvdimm_build_device_structure(void)
{
+ GSList *device_list = nvdimm_get_device_list();
GArray *structures = g_array_new(false, true /* clear */, 1);
for (; device_list; device_list = device_list->next) {
@@ -367,14 +363,32 @@ static GArray *nvdimm_build_device_structure(GSList *device_list)
/* build NVDIMM Control Region Structure. */
nvdimm_build_structure_dcr(structures, dev);
}
+ g_slist_free(device_list);
return structures;
}
-static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
+static void nvdimm_init_fit_buffer(NvdimmFitBuffer *fit_buf)
+{
+ fit_buf->fit = g_array_new(false, true /* clear */, 1);
+}
+
+static void nvdimm_build_fit_buffer(NvdimmFitBuffer *fit_buf)
+{
+ g_array_free(fit_buf->fit, true);
+ fit_buf->fit = nvdimm_build_device_structure();
+ fit_buf->dirty = true;
+}
+
+void nvdimm_plug(AcpiNVDIMMState *state)
+{
+ nvdimm_build_fit_buffer(&state->fit_buf);
+}
+
+static void nvdimm_build_nfit(AcpiNVDIMMState *state, GArray *table_offsets,
GArray *table_data, BIOSLinker *linker)
{
- GArray *structures = nvdimm_build_device_structure(device_list);
+ NvdimmFitBuffer *fit_buf = &state->fit_buf;
unsigned int header;
acpi_add_table(table_offsets, table_data);
@@ -383,12 +397,11 @@ static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
header = table_data->len;
acpi_data_push(table_data, sizeof(NvdimmNfitHeader));
/* NVDIMM device structures. */
- g_array_append_vals(table_data, structures->data, structures->len);
+ g_array_append_vals(table_data, fit_buf->fit->data, fit_buf->fit->len);
build_header(linker, table_data,
(void *)(table_data->data + header), "NFIT",
- sizeof(NvdimmNfitHeader) + structures->len, 1, NULL, NULL);
- g_array_free(structures, true);
+ sizeof(NvdimmNfitHeader) + fit_buf->fit->len, 1, NULL, NULL);
}
struct NvdimmDsmIn {
@@ -771,6 +784,8 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
acpi_data_push(state->dsm_mem, sizeof(NvdimmDsmIn));
fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem->data,
state->dsm_mem->len);
+
+ nvdimm_init_fit_buffer(&state->fit_buf);
}
#define NVDIMM_COMMON_DSM "NCAL"
@@ -1045,7 +1060,7 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
}
void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
- BIOSLinker *linker, GArray *dsm_dma_arrea,
+ BIOSLinker *linker, AcpiNVDIMMState *state,
uint32_t ram_slots)
{
GSList *device_list;
@@ -1055,15 +1070,15 @@ void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
return;
}
- nvdimm_build_ssdt(table_offsets, table_data, linker, dsm_dma_arrea,
+ nvdimm_build_ssdt(table_offsets, table_data, linker, state->dsm_mem,
ram_slots);
- device_list = nvdimm_get_plugged_device_list();
+ device_list = nvdimm_get_device_list();
/* no NVDIMM device is plugged. */
if (!device_list) {
return;
}
- nvdimm_build_nfit(device_list, table_offsets, table_data, linker);
+ nvdimm_build_nfit(state, table_offsets, table_data, linker);
g_slist_free(device_list);
}
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 6ae4769..bc49958 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2767,7 +2767,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
}
if (pcms->acpi_nvdimm_state.is_enabled) {
nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
- pcms->acpi_nvdimm_state.dsm_mem, machine->ram_slots);
+ &pcms->acpi_nvdimm_state, machine->ram_slots);
}
/* Add tables supplied by user (if any) */
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 93ff49c..a4e1509 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1700,6 +1700,10 @@ static void pc_dimm_plug(HotplugHandler *hotplug_dev,
goto out;
}
+ if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
+ nvdimm_plug(&pcms->acpi_nvdimm_state);
+ }
+
hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
hhc->plug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &error_abort);
out:
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 63a2b20..e704e55 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -98,12 +98,30 @@ typedef struct NVDIMMClass NVDIMMClass;
#define NVDIMM_ACPI_IO_BASE 0x0a18
#define NVDIMM_ACPI_IO_LEN 4
+/*
+ * NvdimmFitBuffer:
+ * @fit: FIT structures for present NVDIMMs. It is updated after
+ * the NVDIMM device is plugged or unplugged.
+ * @dirty: It is set whenever the buffer is updated so that the
+ * NVDIMM ACPI _FIT method is prevented from reading incomplete or
+ * obsolete FIT info when a FIT update happens during multiple RFIT
+ * calls.
+ */
+struct NvdimmFitBuffer {
+ GArray *fit;
+ bool dirty;
+};
+typedef struct NvdimmFitBuffer NvdimmFitBuffer;
+
struct AcpiNVDIMMState {
/* detect if NVDIMM support is enabled. */
bool is_enabled;
/* the data of the fw_cfg file NVDIMM_DSM_MEM_FILE. */
GArray *dsm_mem;
+
+ NvdimmFitBuffer fit_buf;
+
/* the IO region used by OSPM to transfer control to QEMU. */
MemoryRegion io_mr;
};
@@ -112,6 +130,7 @@ typedef struct AcpiNVDIMMState AcpiNVDIMMState;
void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
FWCfgState *fw_cfg, Object *owner);
void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
- BIOSLinker *linker, GArray *dsm_dma_arrea,
+ BIOSLinker *linker, AcpiNVDIMMState *state,
uint32_t ram_slots);
+void nvdimm_plug(AcpiNVDIMMState *state);
#endif
--
1.8.3.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: define DSM return codes
2016-11-03 18:36 [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: prebuild nvdimm devices for available slots Xiao Guangrong
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce fit buffer Xiao Guangrong
@ 2016-11-03 18:36 ` Xiao Guangrong
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: introduce _FIT method Xiao Guangrong
` (2 subsequent siblings)
5 siblings, 0 replies; 16+ messages in thread
From: Xiao Guangrong @ 2016-11-03 18:36 UTC (permalink / raw)
To: pbonzini, imammedo
Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams, kvm,
qemu-devel, Xiao Guangrong
and use these named codes to replace the open-coded values
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
hw/acpi/nvdimm.c | 30 ++++++++++++++++++------------
1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 73db3a7..d9d1ef7 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -498,6 +498,11 @@ nvdimm_dsm_no_payload(uint32_t func_ret_status, hwaddr dsm_mem_addr)
cpu_physical_memory_write(dsm_mem_addr, &out, sizeof(out));
}
+#define NVDIMM_DSM_RET_STATUS_SUCCESS 0 /* Success */
+#define NVDIMM_DSM_RET_STATUS_UNSUPPORT 1 /* Not Supported */
+#define NVDIMM_DSM_RET_STATUS_NOMEMDEV 2 /* Non-Existing Memory Device */
+#define NVDIMM_DSM_RET_STATUS_INVALID 3 /* Invalid Input Parameters */
+
static void nvdimm_dsm_root(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
{
/*
@@ -511,7 +516,7 @@ static void nvdimm_dsm_root(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
}
/* No function except function 0 is supported yet. */
- nvdimm_dsm_no_payload(1 /* Not Supported */, dsm_mem_addr);
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
}
/*
@@ -557,7 +562,7 @@ static void nvdimm_dsm_label_size(NVDIMMDevice *nvdimm, hwaddr dsm_mem_addr)
nvdimm_debug("label_size %#x, max_xfer %#x.\n", label_size, mxfer);
- label_size_out.func_ret_status = cpu_to_le32(0 /* Success */);
+ label_size_out.func_ret_status = cpu_to_le32(NVDIMM_DSM_RET_STATUS_SUCCESS);
label_size_out.label_size = cpu_to_le32(label_size);
label_size_out.max_xfer = cpu_to_le32(mxfer);
@@ -568,7 +573,7 @@ static void nvdimm_dsm_label_size(NVDIMMDevice *nvdimm, hwaddr dsm_mem_addr)
static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
uint32_t offset, uint32_t length)
{
- uint32_t ret = 3 /* Invalid Input Parameters */;
+ uint32_t ret = NVDIMM_DSM_RET_STATUS_INVALID;
if (offset + length < offset) {
nvdimm_debug("offset %#x + length %#x is overflow.\n", offset,
@@ -588,7 +593,7 @@ static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
return ret;
}
- return 0 /* Success */;
+ return NVDIMM_DSM_RET_STATUS_SUCCESS;
}
/*
@@ -612,7 +617,7 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
status = nvdimm_rw_label_data_check(nvdimm, get_label_data->offset,
get_label_data->length);
- if (status != 0 /* Success */) {
+ if (status != NVDIMM_DSM_RET_STATUS_SUCCESS) {
nvdimm_dsm_no_payload(status, dsm_mem_addr);
return;
}
@@ -622,7 +627,8 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
get_label_data_out = g_malloc(size);
get_label_data_out->len = cpu_to_le32(size);
- get_label_data_out->func_ret_status = cpu_to_le32(0 /* Success */);
+ get_label_data_out->func_ret_status =
+ cpu_to_le32(NVDIMM_DSM_RET_STATUS_SUCCESS);
nvc->read_label_data(nvdimm, get_label_data_out->out_buf,
get_label_data->length, get_label_data->offset);
@@ -650,7 +656,7 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
status = nvdimm_rw_label_data_check(nvdimm, set_label_data->offset,
set_label_data->length);
- if (status != 0 /* Success */) {
+ if (status != NVDIMM_DSM_RET_STATUS_SUCCESS) {
nvdimm_dsm_no_payload(status, dsm_mem_addr);
return;
}
@@ -660,7 +666,7 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
nvc->write_label_data(nvdimm, set_label_data->in_buf,
set_label_data->length, set_label_data->offset);
- nvdimm_dsm_no_payload(0 /* Success */, dsm_mem_addr);
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_SUCCESS, dsm_mem_addr);
}
static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
@@ -684,7 +690,7 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
}
if (!nvdimm) {
- nvdimm_dsm_no_payload(2 /* Non-Existing Memory Device */,
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_NOMEMDEV,
dsm_mem_addr);
return;
}
@@ -711,7 +717,7 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
break;
}
- nvdimm_dsm_no_payload(1 /* Not Supported */, dsm_mem_addr);
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
}
static uint64_t
@@ -747,7 +753,7 @@ nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
if (in->revision != 0x1 /* Currently we only support DSM Spec Rev1. */) {
nvdimm_debug("Revision %#x is not supported, expect %#x.\n",
in->revision, 0x1);
- nvdimm_dsm_no_payload(1 /* Not Supported */, dsm_mem_addr);
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
goto exit;
}
@@ -913,7 +919,7 @@ static void nvdimm_build_common_dsm(Aml *dev)
aml_append(unsupport, ifctx);
/* No function is supported yet. */
- byte_list[0] = 1 /* Not Supported */;
+ byte_list[0] = NVDIMM_DSM_RET_STATUS_UNSUPPORT;
aml_append(unsupport, aml_return(aml_buffer(1, byte_list)));
aml_append(method, unsupport);
--
1.8.3.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: introduce _FIT method
2016-11-03 18:36 [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
` (2 preceding siblings ...)
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: define DSM return codes Xiao Guangrong
@ 2016-11-03 18:36 ` Xiao Guangrong
2016-11-04 10:24 ` Igor Mammedov
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 5/5] pc: memhp: enable nvdimm device hotplug Xiao Guangrong
2016-11-04 3:50 ` [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
5 siblings, 1 reply; 16+ messages in thread
From: Xiao Guangrong @ 2016-11-03 18:36 UTC (permalink / raw)
To: pbonzini, imammedo
Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams, kvm,
qemu-devel, Xiao Guangrong
_FIT is required for hotplug support: the guest inquires the updated
device info from it when a hotplug event is received.
As the FIT buffer is not completely mapped into guest address space, a
Read_FIT method is introduced to read the NFIT structures blob from
QEMU in chunks; the chunks are concatenated before _FIT returns.
Refer to docs/specs/acpi_nvdimm.txt for the detailed design.
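In C-like sketch form, the read loop that the _FIT AML below implements
looks roughly like this (read_chunk() is an illustrative stand-in for
the Read_FIT _DSM call; fit must be large enough for the whole blob):
    #include <stdint.h>
    #include <string.h>
    #define FIT_CHANGED 0x100
    /* stand-in: copies the next chunk to dst, sets *len, returns status */
    extern uint32_t read_chunk(uint32_t offset, uint8_t *dst, uint32_t *len);
    static uint32_t read_whole_fit(uint8_t *fit, uint32_t *fit_len)
    {
        uint32_t offset = 0, len, status;
        uint8_t chunk[4096];
        for (;;) {
            status = read_chunk(offset, chunk, &len);
            if (status == FIT_CHANGED) {
                offset = 0;     /* FIT updated mid-read: restart */
                continue;
            }
            if (status != 0) {
                return status;  /* propagate other errors */
            }
            if (len == 0) {     /* zero length: the whole FIT is read */
                *fit_len = offset;
                return 0;
            }
            memcpy(fit + offset, chunk, len);
            offset += len;
        }
    }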
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
docs/specs/acpi_nvdimm.txt | 61 ++++++++++++--
hw/acpi/nvdimm.c | 198 ++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 252 insertions(+), 7 deletions(-)
diff --git a/docs/specs/acpi_nvdimm.txt b/docs/specs/acpi_nvdimm.txt
index 0fdd251..63cb63f 100644
--- a/docs/specs/acpi_nvdimm.txt
+++ b/docs/specs/acpi_nvdimm.txt
@@ -65,8 +65,8 @@ _FIT(Firmware Interface Table)
The detailed definition of the structure can be found at ACPI 6.0: 5.2.25
NVDIMM Firmware Interface Table (NFIT).
-QEMU NVDIMM Implemention
-========================
+QEMU NVDIMM Implementation
+==========================
QEMU uses 4 bytes IO Port starting from 0x0a18 and a RAM-based memory page
for NVDIMM ACPI.
@@ -82,6 +82,16 @@ Memory:
ACPI writes _DSM Input Data (based on the offset in the page):
[0x0 - 0x3]: 4 bytes, NVDIMM Device Handle, 0 is reserved for NVDIMM
Root device.
+
+ The handle is a completely QEMU-internal thing; values in the
+ range [1, 0xFFFF] indicate an nvdimm device. Other values are
+ reserved for other purposes.
+
+ Reserved handles:
+ 0 is reserved for the nvdimm root device, named NVDR.
+ 0x10000 is reserved for the QEMU-internal DSM function called
+ on the root device.
+
[0x4 - 0x7]: 4 bytes, Revision ID, that is the Arg1 of _DSM method.
[0x8 - 0xB]: 4 bytes. Function Index, that is the Arg2 of _DSM method.
[0xC - 0xFFF]: 4084 bytes, the Arg3 of _DSM method.
@@ -127,6 +137,47 @@ _DSM process diagram:
| result from the page | | |
+--------------------------+ +--------------+
- _FIT implementation
- -------------------
- TODO (will fill it when nvdimm hotplug is introduced)
+QEMU internal use only _DSM function
+------------------------------------
+1) Read FIT
+ The _FIT method uses the _DSM method to fetch the NFIT structures
+ blob from QEMU in 1-page-sized increments, which are then
+ concatenated and returned as the _FIT method result.
+
+ Input parameters:
+ Arg0 – UUID {set to 648B9CF2-CDA1-4312-8AD9-49C4AF32BD62}
+ Arg1 – Revision ID (set to 1)
+ Arg2 - Function Index, 0x1
+ Arg3 - A package containing a buffer whose layout is as follows:
+
+ +----------+--------+--------+-------------------------------------------+
+ | Field | Length | Offset | Description |
+ +----------+--------+--------+-------------------------------------------+
+ | offset | 4 | 0 | offset in QEMU's NFIT structures blob to |
+ | | | | read from |
+ +----------+--------+--------+-------------------------------------------+
+
+ Output layout in the dsm memory page:
+ +----------+--------+--------+-------------------------------------------+
+ | Field | Length | Offset | Description |
+ +----------+--------+--------+-------------------------------------------+
+ | length | 4 | 0 | length of entire returned data |
+ | | | | (including the header) |
+ +----------+--------+--------+-------------------------------------------+
+ | | | | return status codes |
+ | | | | 0x0 - success |
+ | | | | 0x100 - error caused by NFIT update while |
+ | status | 4 | 4 | read by _FIT wasn't completed, other |
+ | | | | codes follow Chapter 3 in DSM Spec Rev1 |
+ +----------+--------+--------+-------------------------------------------+
+ | fit data | Varies | 8 | contains FIT data, this field is present |
+ | | | | if status field is 0; |
+ +----------+--------+--------+-------------------------------------------+
+
+ The FIT offset is maintained by the OSPM itself; the current offset
+ plus the size of the fit data returned by the function is the next
+ offset the OSPM should read. When all FIT data has been read out,
+ zero length is returned.
+
+ If it returns status code 0x100, the OSPM should restart reading the
+ FIT (read from offset 0 again).
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index d9d1ef7..f82ae4a 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -478,6 +478,22 @@ typedef struct NvdimmFuncSetLabelDataIn NvdimmFuncSetLabelDataIn;
QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncSetLabelDataIn) +
offsetof(NvdimmDsmIn, arg3) > 4096);
+struct NvdimmFuncReadFITIn {
+ uint32_t offset; /* the offset into FIT buffer. */
+} QEMU_PACKED;
+typedef struct NvdimmFuncReadFITIn NvdimmFuncReadFITIn;
+QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITIn) +
+ offsetof(NvdimmDsmIn, arg3) > 4096);
+
+struct NvdimmFuncReadFITOut {
+ /* the size of buffer filled by QEMU. */
+ uint32_t len;
+ uint32_t func_ret_status; /* return status code. */
+ uint8_t fit[0]; /* the FIT data. */
+} QEMU_PACKED;
+typedef struct NvdimmFuncReadFITOut NvdimmFuncReadFITOut;
+QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITOut) > 4096);
+
static void
nvdimm_dsm_function0(uint32_t supported_func, hwaddr dsm_mem_addr)
{
@@ -502,6 +518,73 @@ nvdimm_dsm_no_payload(uint32_t func_ret_status, hwaddr dsm_mem_addr)
#define NVDIMM_DSM_RET_STATUS_UNSUPPORT 1 /* Not Supported */
#define NVDIMM_DSM_RET_STATUS_NOMEMDEV 2 /* Non-Existing Memory Device */
#define NVDIMM_DSM_RET_STATUS_INVALID 3 /* Invalid Input Parameters */
+#define NVDIMM_DSM_RET_STATUS_FIT_CHANGED 0x100 /* FIT Changed */
+
+#define NVDIMM_QEMU_RSVD_HANDLE_ROOT 0x10000
+
+/* Read FIT data, defined in docs/specs/acpi_nvdimm.txt. */
+static void nvdimm_dsm_func_read_fit(AcpiNVDIMMState *state, NvdimmDsmIn *in,
+ hwaddr dsm_mem_addr)
+{
+ NvdimmFitBuffer *fit_buf = &state->fit_buf;
+ NvdimmFuncReadFITIn *read_fit;
+ NvdimmFuncReadFITOut *read_fit_out;
+ GArray *fit;
+ uint32_t read_len = 0, func_ret_status, offset;
+ int size;
+
+ read_fit = (NvdimmFuncReadFITIn *)in->arg3;
+ offset = le32_to_cpu(read_fit->offset);
+
+ fit = fit_buf->fit;
+
+ nvdimm_debug("Read FIT: offset %#x FIT size %#x Dirty %s.\n",
+ offset, fit->len, fit_buf->dirty ? "Yes" : "No");
+
+ /* It is the first time to read FIT. */
+ if (!offset) {
+ fit_buf->dirty = false;
+ } else if (fit_buf->dirty) { /* FIT has been changed during RFIT. */
+ func_ret_status = NVDIMM_DSM_RET_STATUS_FIT_CHANGED;
+ goto exit;
+ }
+
+ if (offset > fit->len) {
+ func_ret_status = NVDIMM_DSM_RET_STATUS_INVALID;
+ goto exit;
+ }
+
+ func_ret_status = NVDIMM_DSM_RET_STATUS_SUCCESS;
+ read_len = MIN(fit->len - offset, 4096 - sizeof(NvdimmFuncReadFITOut));
+
+exit:
+ size = sizeof(NvdimmFuncReadFITOut) + read_len;
+ read_fit_out = g_malloc(size);
+
+ read_fit_out->len = cpu_to_le32(size);
+ read_fit_out->func_ret_status = cpu_to_le32(func_ret_status);
+ memcpy(read_fit_out->fit, fit->data + offset, read_len);
+
+ cpu_physical_memory_write(dsm_mem_addr, read_fit_out, size);
+
+ g_free(read_fit_out);
+}
+
+static void
+nvdimm_dsm_handle_reserved_root_method(AcpiNVDIMMState *state,
+ NvdimmDsmIn *in, hwaddr dsm_mem_addr)
+{
+ switch (in->function) {
+ case 0x0:
+ nvdimm_dsm_function0(0x1 | 1 << 1 /* Read FIT */, dsm_mem_addr);
+ return;
+ case 0x1 /* Read FIT */:
+ nvdimm_dsm_func_read_fit(state, in, dsm_mem_addr);
+ return;
+ }
+
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
+}
static void nvdimm_dsm_root(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
{
@@ -730,6 +813,7 @@ nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
static void
nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
{
+ AcpiNVDIMMState *state = opaque;
NvdimmDsmIn *in;
hwaddr dsm_mem_addr = val;
@@ -757,6 +841,11 @@ nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
goto exit;
}
+ if (in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
+ nvdimm_dsm_handle_reserved_root_method(state, in, dsm_mem_addr);
+ goto exit;
+ }
+
/* Handle 0 is reserved for NVDIMM Root Device. */
if (!in->handle) {
nvdimm_dsm_root(in, dsm_mem_addr);
@@ -809,9 +898,13 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
#define NVDIMM_DSM_OUT_BUF_SIZE "RLEN"
#define NVDIMM_DSM_OUT_BUF "ODAT"
+#define NVDIMM_DSM_RFIT_STATUS "RSTA"
+
+#define NVDIMM_QEMU_RSVD_UUID "648B9CF2-CDA1-4312-8AD9-49C4AF32BD62"
+
static void nvdimm_build_common_dsm(Aml *dev)
{
- Aml *method, *ifctx, *function, *handle, *uuid, *dsm_mem;
+ Aml *method, *ifctx, *function, *handle, *uuid, *dsm_mem, *elsectx2;
Aml *elsectx, *unsupport, *unpatched, *expected_uuid, *uuid_invalid;
Aml *pckg, *pckg_index, *pckg_buf, *field, *dsm_out_buf, *dsm_out_buf_size;
uint8_t byte_list[1];
@@ -900,9 +993,15 @@ static void nvdimm_build_common_dsm(Aml *dev)
/* UUID for NVDIMM Root Device */, expected_uuid));
aml_append(method, ifctx);
elsectx = aml_else();
- aml_append(elsectx, aml_store(
+ ifctx = aml_if(aml_equal(handle, aml_int(NVDIMM_QEMU_RSVD_HANDLE_ROOT)));
+ aml_append(ifctx, aml_store(aml_touuid(NVDIMM_QEMU_RSVD_UUID
+ /* UUID for QEMU internal use */), expected_uuid));
+ aml_append(elsectx, ifctx);
+ elsectx2 = aml_else();
+ aml_append(elsectx2, aml_store(
aml_touuid("4309AC30-0D11-11E4-9191-0800200C9A66")
/* UUID for NVDIMM Devices */, expected_uuid));
+ aml_append(elsectx, elsectx2);
aml_append(method, elsectx);
uuid_invalid = aml_lnot(aml_equal(uuid, expected_uuid));
@@ -982,6 +1081,100 @@ static void nvdimm_build_device_dsm(Aml *dev, uint32_t handle)
aml_append(dev, method);
}
+static void nvdimm_build_fit_method(Aml *dev)
+{
+ Aml *method, *pkg, *buf, *buf_size, *offset, *call_result;
+ Aml *whilectx, *ifcond, *ifctx, *elsectx, *fit;
+
+ buf = aml_local(0);
+ buf_size = aml_local(1);
+ fit = aml_local(2);
+
+ aml_append(dev, aml_name_decl(NVDIMM_DSM_RFIT_STATUS, aml_int(0)));
+
+ /* build helper function, RFIT. */
+ method = aml_method("RFIT", 1, AML_SERIALIZED);
+ aml_append(method, aml_name_decl("OFST", aml_int(0)));
+
+ /* prepare input package. */
+ pkg = aml_package(1);
+ aml_append(method, aml_store(aml_arg(0), aml_name("OFST")));
+ aml_append(pkg, aml_name("OFST"));
+
+ /* call Read_FIT function. */
+ call_result = aml_call5(NVDIMM_COMMON_DSM,
+ aml_touuid(NVDIMM_QEMU_RSVD_UUID),
+ aml_int(1) /* Revision 1 */,
+ aml_int(0x1) /* Read FIT */,
+ pkg, aml_int(NVDIMM_QEMU_RSVD_HANDLE_ROOT));
+ aml_append(method, aml_store(call_result, buf));
+
+ /* handle _DSM result. */
+ aml_append(method, aml_create_dword_field(buf,
+ aml_int(0) /* offset at byte 0 */, "STAU"));
+
+ aml_append(method, aml_store(aml_name("STAU"),
+ aml_name(NVDIMM_DSM_RFIT_STATUS)));
+
+ /* if something is wrong during _DSM. */
+ ifcond = aml_equal(aml_int(0 /* Success */), aml_name("STAU"));
+ ifctx = aml_if(aml_lnot(ifcond));
+ aml_append(ifctx, aml_return(aml_buffer(0, NULL)));
+ aml_append(method, ifctx);
+
+ aml_append(method, aml_store(aml_sizeof(buf), buf_size));
+ aml_append(method, aml_subtract(buf_size,
+ aml_int(4) /* the size of "STAU" */, buf_size));
+
+ /* if we read the end of fit. */
+ ifctx = aml_if(aml_equal(buf_size, aml_int(0)));
+ aml_append(ifctx, aml_return(aml_buffer(0, NULL)));
+ aml_append(method, ifctx);
+
+ aml_append(method, aml_create_field(buf,
+ aml_int(4 * BITS_PER_BYTE), /* offset at byte 4.*/
+ aml_shiftleft(buf_size, aml_int(3)), "BUFF"));
+ aml_append(method, aml_return(aml_name("BUFF")));
+ aml_append(dev, method);
+
+ /* build _FIT. */
+ method = aml_method("_FIT", 0, AML_SERIALIZED);
+ offset = aml_local(3);
+
+ aml_append(method, aml_store(aml_buffer(0, NULL), fit));
+ aml_append(method, aml_store(aml_int(0), offset));
+
+ whilectx = aml_while(aml_int(1));
+ aml_append(whilectx, aml_store(aml_call1("RFIT", offset), buf));
+ aml_append(whilectx, aml_store(aml_sizeof(buf), buf_size));
+
+ /*
+ * if fit buffer was changed during RFIT, read from the beginning
+ * again.
+ */
+ ifctx = aml_if(aml_equal(aml_name(NVDIMM_DSM_RFIT_STATUS),
+ aml_int(NVDIMM_DSM_RET_STATUS_FIT_CHANGED)));
+ aml_append(ifctx, aml_store(aml_buffer(0, NULL), fit));
+ aml_append(ifctx, aml_store(aml_int(0), offset));
+ aml_append(whilectx, ifctx);
+
+ elsectx = aml_else();
+
+ /* finish fit read if no data is read out. */
+ ifctx = aml_if(aml_equal(buf_size, aml_int(0)));
+ aml_append(ifctx, aml_return(fit));
+ aml_append(elsectx, ifctx);
+
+ /* update the offset. */
+ aml_append(elsectx, aml_add(offset, buf_size, offset));
+ /* append the data we read out to the fit buffer. */
+ aml_append(elsectx, aml_concatenate(fit, buf, fit));
+ aml_append(whilectx, elsectx);
+ aml_append(method, whilectx);
+
+ aml_append(dev, method);
+}
+
static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
{
uint32_t slot;
@@ -1040,6 +1233,7 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
/* 0 is reserved for root device. */
nvdimm_build_device_dsm(dev, 0);
+ nvdimm_build_fit_method(dev);
nvdimm_build_nvdimm_devices(dev, ram_slots);
--
1.8.3.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [Qemu-devel] [PATCH v5 5/5] pc: memhp: enable nvdimm device hotplug
2016-11-03 18:36 [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
` (3 preceding siblings ...)
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: introduce _FIT method Xiao Guangrong
@ 2016-11-03 18:36 ` Xiao Guangrong
2016-11-04 3:50 ` [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
5 siblings, 0 replies; 16+ messages in thread
From: Xiao Guangrong @ 2016-11-03 18:36 UTC (permalink / raw)
To: pbonzini, imammedo
Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams, kvm,
qemu-devel, Xiao Guangrong
_GPE.E04 is dedicated to nvdimm device hotplug.
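For reference, the _E04 handler added below is expected to disassemble
to roughly the following ASL (illustrative output derived from the
aml_* calls in the hunk):
    Method (_E04, 0, NotSerialized)
    {
        Notify (\_SB.NVDR, 0x80) // 0x80: device check
    }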
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
default-configs/mips-softmmu-common.mak | 1 +
docs/specs/acpi_nvdimm.txt | 5 +++++
hw/acpi/ich9.c | 8 ++++++--
hw/acpi/nvdimm.c | 7 +++++++
hw/acpi/piix4.c | 7 ++++++-
hw/i386/acpi-build.c | 7 +++++++
hw/i386/pc.c | 6 ++++++
hw/mem/nvdimm.c | 4 ----
include/hw/acpi/acpi_dev_interface.h | 1 +
include/hw/mem/nvdimm.h | 1 +
10 files changed, 40 insertions(+), 7 deletions(-)
diff --git a/default-configs/mips-softmmu-common.mak b/default-configs/mips-softmmu-common.mak
index 0394514..f0676f5 100644
--- a/default-configs/mips-softmmu-common.mak
+++ b/default-configs/mips-softmmu-common.mak
@@ -17,6 +17,7 @@ CONFIG_FDC=y
CONFIG_ACPI=y
CONFIG_ACPI_X86=y
CONFIG_ACPI_MEMORY_HOTPLUG=y
+CONFIG_ACPI_NVDIMM=y
CONFIG_ACPI_CPU_HOTPLUG=y
CONFIG_APM=y
CONFIG_I8257=y
diff --git a/docs/specs/acpi_nvdimm.txt b/docs/specs/acpi_nvdimm.txt
index 63cb63f..9ab6d9a 100644
--- a/docs/specs/acpi_nvdimm.txt
+++ b/docs/specs/acpi_nvdimm.txt
@@ -137,6 +137,11 @@ _DSM process diagram:
| result from the page | | |
+--------------------------+ +--------------+
+NVDIMM hotplug
+--------------
+The ACPI BIOS GPE.4 handler is dedicated to notifying the OS about
+nvdimm device hot-add events.
+
QEMU internal use only _DSM function
------------------------------------
1) Read FIT
diff --git a/hw/acpi/ich9.c b/hw/acpi/ich9.c
index e5a3c18..830c475 100644
--- a/hw/acpi/ich9.c
+++ b/hw/acpi/ich9.c
@@ -490,8 +490,12 @@ void ich9_pm_device_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev,
if (lpc->pm.acpi_memory_hotplug.is_enabled &&
object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
- acpi_memory_plug_cb(hotplug_dev, &lpc->pm.acpi_memory_hotplug,
- dev, errp);
+ if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
+ nvdimm_acpi_plug_cb(hotplug_dev, dev);
+ } else {
+ acpi_memory_plug_cb(hotplug_dev, &lpc->pm.acpi_memory_hotplug,
+ dev, errp);
+ }
} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
if (lpc->pm.cpu_hotplug_legacy) {
legacy_acpi_cpu_plug_cb(hotplug_dev, &lpc->pm.gpe_cpu, dev, errp);
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index f82ae4a..3026e3e 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -868,6 +868,13 @@ static const MemoryRegionOps nvdimm_dsm_ops = {
},
};
+void nvdimm_acpi_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev)
+{
+ if (dev->hotplugged) {
+ acpi_send_event(DEVICE(hotplug_dev), ACPI_NVDIMM_HOTPLUG_STATUS);
+ }
+}
+
void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
FWCfgState *fw_cfg, Object *owner)
{
diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
index 2adc246..17d36bd 100644
--- a/hw/acpi/piix4.c
+++ b/hw/acpi/piix4.c
@@ -378,7 +378,12 @@ static void piix4_device_plug_cb(HotplugHandler *hotplug_dev,
if (s->acpi_memory_hotplug.is_enabled &&
object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
- acpi_memory_plug_cb(hotplug_dev, &s->acpi_memory_hotplug, dev, errp);
+ if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
+ nvdimm_acpi_plug_cb(hotplug_dev, dev);
+ } else {
+ acpi_memory_plug_cb(hotplug_dev, &s->acpi_memory_hotplug,
+ dev, errp);
+ }
} else if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
acpi_pcihp_device_plug_cb(hotplug_dev, &s->acpi_pci_hotplug, dev, errp);
} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index bc49958..32270c3 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2039,6 +2039,13 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
method = aml_method("_E03", 0, AML_NOTSERIALIZED);
aml_append(method, aml_call0(MEMORY_HOTPLUG_HANDLER_PATH));
aml_append(scope, method);
+
+ if (pcms->acpi_nvdimm_state.is_enabled) {
+ method = aml_method("_E04", 0, AML_NOTSERIALIZED);
+ aml_append(method, aml_notify(aml_name("\\_SB.NVDR"),
+ aml_int(0x80)));
+ aml_append(scope, method);
+ }
}
aml_append(dsdt, scope);
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index a4e1509..bc36e19 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1723,6 +1723,12 @@ static void pc_dimm_unplug_request(HotplugHandler *hotplug_dev,
goto out;
}
+ if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
+ error_setg(&local_err,
+ "nvdimm device hot unplug is not supported yet.");
+ goto out;
+ }
+
hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
hhc->unplug_request(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
diff --git a/hw/mem/nvdimm.c b/hw/mem/nvdimm.c
index 7895805..db896b0 100644
--- a/hw/mem/nvdimm.c
+++ b/hw/mem/nvdimm.c
@@ -148,13 +148,9 @@ static MemoryRegion *nvdimm_get_vmstate_memory_region(PCDIMMDevice *dimm)
static void nvdimm_class_init(ObjectClass *oc, void *data)
{
- DeviceClass *dc = DEVICE_CLASS(oc);
PCDIMMDeviceClass *ddc = PC_DIMM_CLASS(oc);
NVDIMMClass *nvc = NVDIMM_CLASS(oc);
- /* nvdimm hotplug has not been supported yet. */
- dc->hotpluggable = false;
-
ddc->realize = nvdimm_realize;
ddc->get_memory_region = nvdimm_get_memory_region;
ddc->get_vmstate_memory_region = nvdimm_get_vmstate_memory_region;
diff --git a/include/hw/acpi/acpi_dev_interface.h b/include/hw/acpi/acpi_dev_interface.h
index da4ef7f..901a4ae 100644
--- a/include/hw/acpi/acpi_dev_interface.h
+++ b/include/hw/acpi/acpi_dev_interface.h
@@ -10,6 +10,7 @@ typedef enum {
ACPI_PCI_HOTPLUG_STATUS = 2,
ACPI_CPU_HOTPLUG_STATUS = 4,
ACPI_MEMORY_HOTPLUG_STATUS = 8,
+ ACPI_NVDIMM_HOTPLUG_STATUS = 16,
} AcpiEventStatusBits;
#define TYPE_ACPI_DEVICE_IF "acpi-device-interface"
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index e704e55..5d7a8c0 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -133,4 +133,5 @@ void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
BIOSLinker *linker, AcpiNVDIMMState *state,
uint32_t ram_slots);
void nvdimm_plug(AcpiNVDIMMState *state);
+void nvdimm_acpi_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev);
#endif
--
1.8.3.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support
2016-11-03 18:36 [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
` (4 preceding siblings ...)
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 5/5] pc: memhp: enable nvdimm device hotplug Xiao Guangrong
@ 2016-11-04 3:50 ` Xiao Guangrong
2016-11-04 4:01 ` Michael S. Tsirkin
5 siblings, 1 reply; 16+ messages in thread
From: Xiao Guangrong @ 2016-11-04 3:50 UTC (permalink / raw)
To: pbonzini, imammedo
Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams, kvm,
qemu-devel
On 11/04/2016 02:36 AM, Xiao Guangrong wrote:
> Hi Michael,
>
> This patchset can replace the patches from [PULL 36/47] to [PULL 39/47]
> in your pull request:
> [PULL 36/47] nvdimm acpi: prebuild nvdimm devices for available slots
> [PULL 37/47] nvdimm acpi: introduce fit buffer
> [PULL 38/47] nvdimm acpi: introduce _FIT
> [PULL 39/47] pc: memhp: enable nvdimm device hotplug
>
> Thanks for your patience, and thanks to Igor and Stefan for their review.
Hi,
As the pull request has been merged upstream (Cool! :)), I will post
the diff changes based on that.
Thanks!
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support
2016-11-04 3:50 ` [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
@ 2016-11-04 4:01 ` Michael S. Tsirkin
2016-11-04 4:22 ` Michael S. Tsirkin
0 siblings, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2016-11-04 4:01 UTC (permalink / raw)
To: Xiao Guangrong
Cc: pbonzini, imammedo, gleb, mtosatti, stefanha, rth, ehabkost,
dan.j.williams, kvm, qemu-devel
On Fri, Nov 04, 2016 at 11:50:19AM +0800, Xiao Guangrong wrote:
>
>
> On 11/04/2016 02:36 AM, Xiao Guangrong wrote:
> > Hi Michael,
> >
> > This patchset can replace the patches from [PULL 36/47] to [PULL 39/47]
> > in your pull request:
> > [PULL 36/47] nvdimm acpi: prebuild nvdimm devices for available slots
> > [PULL 37/47] nvdimm acpi: introduce fit buffer
> > [PULL 38/47] nvdimm acpi: introduce _FIT
> > [PULL 39/47] pc: memhp: enable nvdimm device hotplug
> >
> > Thanks for your patience, and thanks to Igor and Stefan for their review.
>
> Hi,
>
> As the pull request has been merged upstream (Cool! :)), I will post
> the diff changes based on that.
>
> Thanks!
Igor prefers seeing revert+patches, I prefer seeing a diff.
Can you send both? A global diff would be ok for me
as it's small and easy enough to generate.
--
MST
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support
2016-11-04 4:01 ` Michael S. Tsirkin
@ 2016-11-04 4:22 ` Michael S. Tsirkin
2016-11-04 7:51 ` Xiao Guangrong
2016-11-04 9:15 ` Stefan Hajnoczi
0 siblings, 2 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2016-11-04 4:22 UTC (permalink / raw)
To: Xiao Guangrong
Cc: pbonzini, imammedo, gleb, mtosatti, stefanha, rth, ehabkost,
dan.j.williams, kvm, qemu-devel
On Fri, Nov 04, 2016 at 06:01:54AM +0200, Michael S. Tsirkin wrote:
> On Fri, Nov 04, 2016 at 11:50:19AM +0800, Xiao Guangrong wrote:
> >
> >
> > On 11/04/2016 02:36 AM, Xiao Guangrong wrote:
> > > Hi Michael,
> > >
> > > This patchset can replace the patches from [PULL 36/47] to [PULL 39/47]
> > > in your pull request:
> > > [PULL 36/47] nvdimm acpi: prebuild nvdimm devices for available slots
> > > [PULL 37/47] nvdimm acpi: introduce fit buffer
> > > [PULL 38/47] nvdimm acpi: introduce _FIT
> > > [PULL 39/47] pc: memhp: enable nvdimm device hotplug
> > >
> > > Thanks for your patience, and thanks to Igor and Stefan for their review.
> >
> > Hi,
> >
> > As the pull request has been merged upstream (Cool! :)), I will post
> > the diff changes based on that.
> >
> > Thanks!
>
> Igor prefers seeing revert+patches, I prefer seeing a diff.
> Can you send both? A global diff would be ok for me
> as it's small and easy enough to generate.
Stefan, I wonder what's easier for you to review?
>
> --
> MST
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support
2016-11-04 4:22 ` Michael S. Tsirkin
@ 2016-11-04 7:51 ` Xiao Guangrong
2016-11-04 9:15 ` Stefan Hajnoczi
1 sibling, 0 replies; 16+ messages in thread
From: Xiao Guangrong @ 2016-11-04 7:51 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: pbonzini, imammedo, gleb, mtosatti, stefanha, rth, ehabkost,
dan.j.williams, kvm, qemu-devel
On 11/04/2016 12:22 PM, Michael S. Tsirkin wrote:
> On Fri, Nov 04, 2016 at 06:01:54AM +0200, Michael S. Tsirkin wrote:
>> On Fri, Nov 04, 2016 at 11:50:19AM +0800, Xiao Guangrong wrote:
>>>
>>>
>>> On 11/04/2016 02:36 AM, Xiao Guangrong wrote:
>>>> Hi Michael,
>>>>
>>>> This patchset can replace the patches from [PULL 36/47] to [PULL 39/47]
>>>> in your pull request:
>>>> [PULL 36/47] nvdimm acpi: prebuild nvdimm devices for available slots
>>>> [PULL 37/47] nvdimm acpi: introduce fit buffer
>>>> [PULL 38/47] nvdimm acpi: introduce _FIT
>>>> [PULL 39/47] pc: memhp: enable nvdimm device hotplug
>>>>
>>>> Thanks for your patience, and thanks to Igor and Stefan for their review.
>>>
>>> Hi,
>>>
>>> As the pull request has been merged upstream (Cool! :)), I will post
>>> the diff changes based on that.
>>>
>>> Thanks!
>>
>> Igor prefers seeing revert+patches, I prefer seeing a diff.
>> Can you send both? A global diff would be ok for me
>> as it's small and easy enough to generate.
Okay, here is the diff comparing upstream and this version (v5):
diff --git a/default-configs/mips-softmmu-common.mak b/default-configs/mips-softmmu-common.mak
index 0394514..f0676f5 100644
--- a/default-configs/mips-softmmu-common.mak
+++ b/default-configs/mips-softmmu-common.mak
@@ -17,6 +17,7 @@ CONFIG_FDC=y
CONFIG_ACPI=y
CONFIG_ACPI_X86=y
CONFIG_ACPI_MEMORY_HOTPLUG=y
+CONFIG_ACPI_NVDIMM=y
CONFIG_ACPI_CPU_HOTPLUG=y
CONFIG_APM=y
CONFIG_I8257=y
diff --git a/docs/specs/acpi_mem_hotplug.txt b/docs/specs/acpi_mem_hotplug.txt
index cb26dd2..3df3620 100644
--- a/docs/specs/acpi_mem_hotplug.txt
+++ b/docs/specs/acpi_mem_hotplug.txt
@@ -4,9 +4,6 @@ QEMU<->ACPI BIOS memory hotplug interface
ACPI BIOS GPE.3 handler is dedicated for notifying OS about memory hot-add
and hot-remove events.
-ACPI BIOS GPE.4 handler is dedicated for notifying OS about nvdimm device
-hot-add and hot-remove events.
-
Memory hot-plug interface (IO port 0xa00-0xa17, 1-4 byte access):
---------------------------------------------------------------
0xa00:
diff --git a/docs/specs/acpi_nvdimm.txt b/docs/specs/acpi_nvdimm.txt
index 4aa5e3d..9ab6d9a 100644
--- a/docs/specs/acpi_nvdimm.txt
+++ b/docs/specs/acpi_nvdimm.txt
@@ -65,8 +65,8 @@ _FIT(Firmware Interface Table)
The detailed definition of the structure can be found at ACPI 6.0: 5.2.25
NVDIMM Firmware Interface Table (NFIT).
-QEMU NVDIMM Implemention
-========================
+QEMU NVDIMM Implementation
+==========================
QEMU uses 4 bytes IO Port starting from 0x0a18 and a RAM-based memory page
for NVDIMM ACPI.
@@ -82,6 +82,16 @@ Memory:
ACPI writes _DSM Input Data (based on the offset in the page):
[0x0 - 0x3]: 4 bytes, NVDIMM Device Handle, 0 is reserved for NVDIMM
Root device.
+
+ The handle is a completely QEMU-internal thing; values in the
+ range [1, 0xFFFF] indicate an nvdimm device. Other values are
+ reserved for other purposes.
+
+ Reserved handles:
+ 0 is reserved for the nvdimm root device, named NVDR.
+ 0x10000 is reserved for the QEMU-internal DSM function called
+ on the root device.
+
[0x4 - 0x7]: 4 bytes, Revision ID, that is the Arg1 of _DSM method.
[0x8 - 0xB]: 4 bytes. Function Index, that is the Arg2 of _DSM method.
[0xC - 0xFFF]: 4084 bytes, the Arg3 of _DSM method.
@@ -127,28 +137,17 @@ _DSM process diagram:
| result from the page | | |
+--------------------------+ +--------------+
-Device Handle Reservation
--------------------------
-As we mentioned above, byte 0 ~ byte 3 in the DSM memory save NVDIMM device
-handle. The handle is completely QEMU internal thing, the values in range
-[0, 0xFFFF] indicate nvdimm device (O means nvdimm root device named NVDR),
-other values are reserved by other purpose.
-
-Current reserved handle:
-0x10000 is reserved for QEMU internal DSM function called on the root
-device.
+NVDIMM hotplug
+--------------
+The ACPI BIOS GPE.4 handler is dedicated to notifying the OS about
+nvdimm device hot-add events.
QEMU internal use only _DSM function
------------------------------------
-UUID, 648B9CF2-CDA1-4312-8AD9-49C4AF32BD62, is reserved for QEMU internal
-DSM function.
-
-There is the function introduced by QEMU and only used by QEMU internal.
-
1) Read FIT
- As we only reserved one page for NVDIMM ACPI it is impossible to map the
- whole FIT data to guest's address space. This function is used by _FIT
- method to read a piece of FIT data from QEMU.
+ The _FIT method uses the _DSM method to fetch the NFIT structures
+ blob from QEMU in 1-page-sized increments, which are then
+ concatenated and returned as the _FIT method result.
Input parameters:
Arg0 – UUID {set to 648B9CF2-CDA1-4312-8AD9-49C4AF32BD62}
@@ -156,29 +155,34 @@ There is the function introduced by QEMU and only used by QEMU internal.
Arg2 - Function Index, 0x1
Arg3 - A package containing a buffer whose layout is as follows:
- +----------+-------------+-------------+-----------------------------------+
- | Filed | Byte Length | Byte Offset | Description |
- +----------+-------------+-------------+-----------------------------------+
- | offset | 4 | 0 | the offset of FIT buffer |
- +----------+-------------+-------------+-----------------------------------+
-
- Output:
- +----------+-------------+-------------+-----------------------------------+
- | Filed | Byte Length | Byte Offset | Description |
- +----------+-------------+-------------+-----------------------------------+
- | | | | return status codes |
- | | | | 0x100 indicates fit has been |
- | status | 4 | 0 | updated |
- | | | | other follows Chapter 3 in DSM |
- | | | | Spec Rev1 |
- +----------+-------------+-------------+-----------------------------------+
- | fit data | Varies | 4 | FIT data |
- | | | | |
- +----------+-------------+-------------+-----------------------------------+
-
- The FIT offset is maintained by the caller itself, current offset plugs
- the length returned by the function is the next offset we should read.
- When all the FIT data has been read out, zero length is returned.
-
- If it returns 0x100, OSPM should restart to read FIT (read from offset 0
- again).
+ +----------+--------+--------+-------------------------------------------+
+ | Field | Length | Offset | Description |
+ +----------+--------+--------+-------------------------------------------+
+ | offset | 4 | 0 | offset in QEMU's NFIT structures blob to |
+ | | | | read from |
+ +----------+--------+--------+-------------------------------------------+
+
+ Output layout in the dsm memory page:
+ +----------+--------+--------+-------------------------------------------+
+ | Field | Length | Offset | Description |
+ +----------+--------+--------+-------------------------------------------+
+ | length | 4 | 0 | length of entire returned data |
+ | | | | (including the header) |
+ +----------+--------+--------+-------------------------------------------+
+ | | | | return status codes |
+ | | | | 0x0 - success |
+ | | | | 0x100 - error caused by NFIT update while |
+ | status | 4 | 4 | read by _FIT wasn't completed, other |
+ | | | | codes follow Chapter 3 in DSM Spec Rev1 |
+ +----------+--------+--------+-------------------------------------------+
+ | fit data | Varies | 8 | contains FIT data, this field is present |
+ | | | | if status field is 0; |
+ +----------+--------+--------+-------------------------------------------+
+
+ The FIT offset is maintained by the OSPM itself; the current offset
+ plus the size of the fit data returned by the function is the next
+ offset the OSPM should read. When all FIT data has been read out,
+ zero length is returned.
+
+ If it returns status code 0x100, the OSPM should restart reading the
+ FIT (read from offset 0 again).
diff --git a/hw/acpi/ich9.c b/hw/acpi/ich9.c
index e5a3c18..830c475 100644
--- a/hw/acpi/ich9.c
+++ b/hw/acpi/ich9.c
@@ -490,8 +490,12 @@ void ich9_pm_device_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev,
if (lpc->pm.acpi_memory_hotplug.is_enabled &&
object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
- acpi_memory_plug_cb(hotplug_dev, &lpc->pm.acpi_memory_hotplug,
- dev, errp);
+ if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
+ nvdimm_acpi_plug_cb(hotplug_dev, dev);
+ } else {
+ acpi_memory_plug_cb(hotplug_dev, &lpc->pm.acpi_memory_hotplug,
+ dev, errp);
+ }
} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
if (lpc->pm.cpu_hotplug_legacy) {
legacy_acpi_cpu_plug_cb(hotplug_dev, &lpc->pm.gpe_cpu, dev, errp);
diff --git a/hw/acpi/memory_hotplug.c b/hw/acpi/memory_hotplug.c
index 70f6451..ec4e64b 100644
--- a/hw/acpi/memory_hotplug.c
+++ b/hw/acpi/memory_hotplug.c
@@ -2,7 +2,6 @@
#include "hw/acpi/memory_hotplug.h"
#include "hw/acpi/pc-hotplug.h"
#include "hw/mem/pc-dimm.h"
-#include "hw/mem/nvdimm.h"
#include "hw/boards.h"
#include "hw/qdev-core.h"
#include "trace.h"
@@ -233,8 +232,11 @@ void acpi_memory_plug_cb(HotplugHandler *hotplug_dev, MemHotplugState *mem_st,
DeviceState *dev, Error **errp)
{
MemStatus *mdev;
- AcpiEventStatusBits event;
- bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM);
+ DeviceClass *dc = DEVICE_GET_CLASS(dev);
+
+ if (!dc->hotpluggable) {
+ return;
+ }
mdev = acpi_memory_slot_status(mem_st, dev, errp);
if (!mdev) {
@@ -242,23 +244,10 @@ void acpi_memory_plug_cb(HotplugHandler *hotplug_dev, MemHotplugState *mem_st,
}
mdev->dimm = dev;
-
- /*
- * do not set is_enabled and is_inserting if the slot is plugged with
- * a nvdimm device to stop OSPM inquires memory region from the slot.
- */
- if (is_nvdimm) {
- event = ACPI_NVDIMM_HOTPLUG_STATUS;
- } else {
- mdev->is_enabled = true;
- event = ACPI_MEMORY_HOTPLUG_STATUS;
- }
-
+ mdev->is_enabled = true;
if (dev->hotplugged) {
- if (!is_nvdimm) {
- mdev->is_inserting = true;
- }
- acpi_send_event(DEVICE(hotplug_dev), event);
+ mdev->is_inserting = true;
+ acpi_send_event(DEVICE(hotplug_dev), ACPI_MEMORY_HOTPLUG_STATUS);
}
}
@@ -273,8 +262,6 @@ void acpi_memory_unplug_request_cb(HotplugHandler *hotplug_dev,
return;
}
- /* nvdimm device hot unplug is not supported yet. */
- assert(!object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM));
mdev->is_removing = true;
acpi_send_event(DEVICE(hotplug_dev), ACPI_MEMORY_HOTPLUG_STATUS);
}
@@ -289,8 +276,6 @@ void acpi_memory_unplug_cb(MemHotplugState *mem_st,
return;
}
- /* nvdimm device hot unplug is not supported yet. */
- assert(!object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM));
mdev->is_enabled = false;
mdev->dimm = NULL;
}
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index fc1a012..3026e3e 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -33,35 +33,30 @@
#include "hw/nvram/fw_cfg.h"
#include "hw/mem/nvdimm.h"
-static int nvdimm_plugged_device_list(Object *obj, void *opaque)
+static int nvdimm_device_list(Object *obj, void *opaque)
{
GSList **list = opaque;
if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
- DeviceState *dev = DEVICE(obj);
-
- if (dev->realized) { /* only realized NVDIMMs matter */
- *list = g_slist_append(*list, DEVICE(obj));
- }
+ *list = g_slist_append(*list, DEVICE(obj));
}
- object_child_foreach(obj, nvdimm_plugged_device_list, opaque);
+ object_child_foreach(obj, nvdimm_device_list, opaque);
return 0;
}
/*
- * inquire plugged NVDIMM devices and link them into the list which is
+ * inquire NVDIMM devices and link them into the list which is
* returned to the caller.
*
* Note: it is the caller's responsibility to free the list to avoid
* memory leak.
*/
-static GSList *nvdimm_get_plugged_device_list(void)
+static GSList *nvdimm_get_device_list(void)
{
GSList *list = NULL;
- object_child_foreach(qdev_get_machine(), nvdimm_plugged_device_list,
- &list);
+ object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list);
return list;
}
@@ -219,7 +214,7 @@ static uint32_t nvdimm_slot_to_dcr_index(int slot)
static NVDIMMDevice *nvdimm_get_device_by_handle(uint32_t handle)
{
NVDIMMDevice *nvdimm = NULL;
- GSList *list, *device_list = nvdimm_get_plugged_device_list();
+ GSList *list, *device_list = nvdimm_get_device_list();
for (list = device_list; list; list = list->next) {
NVDIMMDevice *nvd = list->data;
@@ -350,7 +345,7 @@ static void nvdimm_build_structure_dcr(GArray *structures, DeviceState *dev)
static GArray *nvdimm_build_device_structure(void)
{
- GSList *device_list = nvdimm_get_plugged_device_list();
+ GSList *device_list = nvdimm_get_device_list();
GArray *structures = g_array_new(false, true /* clear */, 1);
for (; device_list; device_list = device_list->next) {
@@ -375,20 +370,17 @@ static GArray *nvdimm_build_device_structure(void)
static void nvdimm_init_fit_buffer(NvdimmFitBuffer *fit_buf)
{
- qemu_mutex_init(&fit_buf->lock);
fit_buf->fit = g_array_new(false, true /* clear */, 1);
}
static void nvdimm_build_fit_buffer(NvdimmFitBuffer *fit_buf)
{
- qemu_mutex_lock(&fit_buf->lock);
g_array_free(fit_buf->fit, true);
fit_buf->fit = nvdimm_build_device_structure();
fit_buf->dirty = true;
- qemu_mutex_unlock(&fit_buf->lock);
}
-void nvdimm_acpi_hotplug(AcpiNVDIMMState *state)
+void nvdimm_plug(AcpiNVDIMMState *state)
{
nvdimm_build_fit_buffer(&state->fit_buf);
}
@@ -399,13 +391,6 @@ static void nvdimm_build_nfit(AcpiNVDIMMState *state, GArray *table_offsets,
NvdimmFitBuffer *fit_buf = &state->fit_buf;
unsigned int header;
- qemu_mutex_lock(&fit_buf->lock);
-
- /* NVDIMM device is not plugged? */
- if (!fit_buf->fit->len) {
- goto exit;
- }
-
acpi_add_table(table_offsets, table_data);
/* NFIT header. */
@@ -417,9 +402,6 @@ static void nvdimm_build_nfit(AcpiNVDIMMState *state, GArray *table_offsets,
build_header(linker, table_data,
(void *)(table_data->data + header), "NFIT",
sizeof(NvdimmNfitHeader) + fit_buf->fit->len, 1, NULL, NULL);
-
-exit:
- qemu_mutex_unlock(&fit_buf->lock);
}
struct NvdimmDsmIn {
@@ -497,7 +479,7 @@ QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncSetLabelDataIn) +
offsetof(NvdimmDsmIn, arg3) > 4096);
struct NvdimmFuncReadFITIn {
- uint32_t offset; /* the offset of FIT buffer. */
+ uint32_t offset; /* the offset into FIT buffer. */
} QEMU_PACKED;
typedef struct NvdimmFuncReadFITIn NvdimmFuncReadFITIn;
QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITIn) +
@@ -532,7 +514,13 @@ nvdimm_dsm_no_payload(uint32_t func_ret_status, hwaddr dsm_mem_addr)
cpu_physical_memory_write(dsm_mem_addr, &out, sizeof(out));
}
-#define NVDIMM_QEMU_RSVD_HANDLE_ROOT 0x10000
+#define NVDIMM_DSM_RET_STATUS_SUCCESS 0 /* Success */
+#define NVDIMM_DSM_RET_STATUS_UNSUPPORT 1 /* Not Supported */
+#define NVDIMM_DSM_RET_STATUS_NOMEMDEV 2 /* Non-Existing Memory Device */
+#define NVDIMM_DSM_RET_STATUS_INVALID 3 /* Invalid Input Parameters */
+#define NVDIMM_DSM_RET_STATUS_FIT_CHANGED 0x100 /* FIT Changed */
+
+#define NVDIMM_QEMU_RSVD_HANDLE_ROOT 0x10000
/* Read FIT data, defined in docs/specs/acpi_nvdimm.txt. */
static void nvdimm_dsm_func_read_fit(AcpiNVDIMMState *state, NvdimmDsmIn *in,
@@ -542,34 +530,32 @@ static void nvdimm_dsm_func_read_fit(AcpiNVDIMMState *state, NvdimmDsmIn *in,
NvdimmFuncReadFITIn *read_fit;
NvdimmFuncReadFITOut *read_fit_out;
GArray *fit;
- uint32_t read_len = 0, func_ret_status;
+ uint32_t read_len = 0, func_ret_status, offset;
int size;
read_fit = (NvdimmFuncReadFITIn *)in->arg3;
- le32_to_cpus(&read_fit->offset);
+ offset = le32_to_cpu(read_fit->offset);
- qemu_mutex_lock(&fit_buf->lock);
fit = fit_buf->fit;
nvdimm_debug("Read FIT: offset %#x FIT size %#x Dirty %s.\n",
- read_fit->offset, fit->len, fit_buf->dirty ? "Yes" : "No");
-
- if (read_fit->offset > fit->len) {
- func_ret_status = 3 /* Invalid Input Parameters */;
- goto exit;
- }
+ offset, fit->len, fit_buf->dirty ? "Yes" : "No");
/* It is the first time to read FIT. */
- if (!read_fit->offset) {
+ if (!offset) {
fit_buf->dirty = false;
} else if (fit_buf->dirty) { /* FIT has been changed during RFIT. */
- func_ret_status = 0x100 /* fit changed */;
+ func_ret_status = NVDIMM_DSM_RET_STATUS_FIT_CHANGED;
+ goto exit;
+ }
+
+ if (offset > fit->len) {
+ func_ret_status = NVDIMM_DSM_RET_STATUS_INVALID;
goto exit;
}
- func_ret_status = 0 /* Success */;
- read_len = MIN(fit->len - read_fit->offset,
- 4096 - sizeof(NvdimmFuncReadFITOut));
+ func_ret_status = NVDIMM_DSM_RET_STATUS_SUCCESS;
+ read_len = MIN(fit->len - offset, 4096 - sizeof(NvdimmFuncReadFITOut));
exit:
size = sizeof(NvdimmFuncReadFITOut) + read_len;
@@ -577,27 +563,27 @@ exit:
read_fit_out->len = cpu_to_le32(size);
read_fit_out->func_ret_status = cpu_to_le32(func_ret_status);
- memcpy(read_fit_out->fit, fit->data + read_fit->offset, read_len);
+ memcpy(read_fit_out->fit, fit->data + offset, read_len);
cpu_physical_memory_write(dsm_mem_addr, read_fit_out, size);
g_free(read_fit_out);
- qemu_mutex_unlock(&fit_buf->lock);
}
-static void nvdimm_dsm_reserved_root(AcpiNVDIMMState *state, NvdimmDsmIn *in,
- hwaddr dsm_mem_addr)
+static void
+nvdimm_dsm_handle_reserved_root_method(AcpiNVDIMMState *state,
+ NvdimmDsmIn *in, hwaddr dsm_mem_addr)
{
switch (in->function) {
case 0x0:
nvdimm_dsm_function0(0x1 | 1 << 1 /* Read FIT */, dsm_mem_addr);
return;
- case 0x1 /*Read FIT */:
+ case 0x1 /* Read FIT */:
nvdimm_dsm_func_read_fit(state, in, dsm_mem_addr);
return;
}
- nvdimm_dsm_no_payload(1 /* Not Supported */, dsm_mem_addr);
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
}
static void nvdimm_dsm_root(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
@@ -613,7 +599,7 @@ static void nvdimm_dsm_root(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
}
/* No function except function 0 is supported yet. */
- nvdimm_dsm_no_payload(1 /* Not Supported */, dsm_mem_addr);
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
}
/*
@@ -659,7 +645,7 @@ static void nvdimm_dsm_label_size(NVDIMMDevice *nvdimm, hwaddr dsm_mem_addr)
nvdimm_debug("label_size %#x, max_xfer %#x.\n", label_size, mxfer);
- label_size_out.func_ret_status = cpu_to_le32(0 /* Success */);
+ label_size_out.func_ret_status = cpu_to_le32(NVDIMM_DSM_RET_STATUS_SUCCESS);
label_size_out.label_size = cpu_to_le32(label_size);
label_size_out.max_xfer = cpu_to_le32(mxfer);
@@ -670,7 +656,7 @@ static void nvdimm_dsm_label_size(NVDIMMDevice *nvdimm, hwaddr dsm_mem_addr)
static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
uint32_t offset, uint32_t length)
{
- uint32_t ret = 3 /* Invalid Input Parameters */;
+ uint32_t ret = NVDIMM_DSM_RET_STATUS_INVALID;
if (offset + length < offset) {
nvdimm_debug("offset %#x + length %#x is overflow.\n", offset,
@@ -690,7 +676,7 @@ static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
return ret;
}
- return 0 /* Success */;
+ return NVDIMM_DSM_RET_STATUS_SUCCESS;
}
/*
@@ -714,7 +700,7 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
status = nvdimm_rw_label_data_check(nvdimm, get_label_data->offset,
get_label_data->length);
- if (status != 0 /* Success */) {
+ if (status != NVDIMM_DSM_RET_STATUS_SUCCESS) {
nvdimm_dsm_no_payload(status, dsm_mem_addr);
return;
}
@@ -724,7 +710,8 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
get_label_data_out = g_malloc(size);
get_label_data_out->len = cpu_to_le32(size);
- get_label_data_out->func_ret_status = cpu_to_le32(0 /* Success */);
+ get_label_data_out->func_ret_status =
+ cpu_to_le32(NVDIMM_DSM_RET_STATUS_SUCCESS);
nvc->read_label_data(nvdimm, get_label_data_out->out_buf,
get_label_data->length, get_label_data->offset);
@@ -752,7 +739,7 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
status = nvdimm_rw_label_data_check(nvdimm, set_label_data->offset,
set_label_data->length);
- if (status != 0 /* Success */) {
+ if (status != NVDIMM_DSM_RET_STATUS_SUCCESS) {
nvdimm_dsm_no_payload(status, dsm_mem_addr);
return;
}
@@ -762,7 +749,7 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
nvc->write_label_data(nvdimm, set_label_data->in_buf,
set_label_data->length, set_label_data->offset);
- nvdimm_dsm_no_payload(0 /* Success */, dsm_mem_addr);
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_SUCCESS, dsm_mem_addr);
}
static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
@@ -786,7 +773,7 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
}
if (!nvdimm) {
- nvdimm_dsm_no_payload(2 /* Non-Existing Memory Device */,
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_NOMEMDEV,
dsm_mem_addr);
return;
}
@@ -813,7 +800,7 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
break;
}
- nvdimm_dsm_no_payload(1 /* Not Supported */, dsm_mem_addr);
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
}
static uint64_t
@@ -850,14 +837,14 @@ nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
if (in->revision != 0x1 /* Currently we only support DSM Spec Rev1. */) {
nvdimm_debug("Revision %#x is not supported, expect %#x.\n",
in->revision, 0x1);
- nvdimm_dsm_no_payload(1 /* Not Supported */, dsm_mem_addr);
+ nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
goto exit;
}
if (in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
- nvdimm_dsm_reserved_root(state, in, dsm_mem_addr);
+ nvdimm_dsm_handle_reserved_root_method(state, in, dsm_mem_addr);
goto exit;
- }
+ }
/* Handle 0 is reserved for NVDIMM Root Device. */
if (!in->handle) {
@@ -881,6 +868,13 @@ static const MemoryRegionOps nvdimm_dsm_ops = {
},
};
+void nvdimm_acpi_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev)
+{
+ if (dev->hotplugged) {
+ acpi_send_event(DEVICE(hotplug_dev), ACPI_NVDIMM_HOTPLUG_STATUS);
+ }
+}
+
void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
FWCfgState *fw_cfg, Object *owner)
{
@@ -1031,7 +1025,7 @@ static void nvdimm_build_common_dsm(Aml *dev)
aml_append(unsupport, ifctx);
/* No function is supported yet. */
- byte_list[0] = 1 /* Not Supported */;
+ byte_list[0] = NVDIMM_DSM_RET_STATUS_UNSUPPORT;
aml_append(unsupport, aml_return(aml_buffer(1, byte_list)));
aml_append(method, unsupport);
@@ -1094,7 +1088,7 @@ static void nvdimm_build_device_dsm(Aml *dev, uint32_t handle)
aml_append(dev, method);
}
-static void nvdimm_build_fit(Aml *dev)
+static void nvdimm_build_fit_method(Aml *dev)
{
Aml *method, *pkg, *buf, *buf_size, *offset, *call_result;
Aml *whilectx, *ifcond, *ifctx, *elsectx, *fit;
@@ -1103,13 +1097,11 @@ static void nvdimm_build_fit(Aml *dev)
buf_size = aml_local(1);
fit = aml_local(2);
- aml_append(dev, aml_create_dword_field(aml_buffer(4, NULL),
- aml_int(0), NVDIMM_DSM_RFIT_STATUS));
+ aml_append(dev, aml_name_decl(NVDIMM_DSM_RFIT_STATUS, aml_int(0)));
/* build helper function, RFIT. */
method = aml_method("RFIT", 1, AML_SERIALIZED);
- aml_append(method, aml_create_dword_field(aml_buffer(4, NULL),
- aml_int(0), "OFST"));
+ aml_append(method, aml_name_decl("OFST", aml_int(0)));
/* prepare input package. */
pkg = aml_package(1);
@@ -1139,19 +1131,16 @@ static void nvdimm_build_fit(Aml *dev)
aml_append(method, aml_store(aml_sizeof(buf), buf_size));
aml_append(method, aml_subtract(buf_size,
- aml_int(4) /* the size of "STAU" */,
- buf_size));
+ aml_int(4) /* the size of "STAU" */, buf_size));
/* if we read the end of fit. */
ifctx = aml_if(aml_equal(buf_size, aml_int(0)));
aml_append(ifctx, aml_return(aml_buffer(0, NULL)));
aml_append(method, ifctx);
- aml_append(method, aml_store(aml_shiftleft(buf_size, aml_int(3)),
- buf_size));
aml_append(method, aml_create_field(buf,
aml_int(4 * BITS_PER_BYTE), /* offset at byte 4.*/
- buf_size, "BUFF"));
+ aml_shiftleft(buf_size, aml_int(3)), "BUFF"));
aml_append(method, aml_return(aml_name("BUFF")));
aml_append(dev, method);
@@ -1171,7 +1160,7 @@ static void nvdimm_build_fit(Aml *dev)
* again.
*/
ifctx = aml_if(aml_equal(aml_name(NVDIMM_DSM_RFIT_STATUS),
- aml_int(0x100 /* fit changed */)));
+ aml_int(NVDIMM_DSM_RET_STATUS_FIT_CHANGED)));
aml_append(ifctx, aml_store(aml_buffer(0, NULL), fit));
aml_append(ifctx, aml_store(aml_int(0), offset));
aml_append(whilectx, ifctx);
@@ -1251,7 +1240,7 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
/* 0 is reserved for root device. */
nvdimm_build_device_dsm(dev, 0);
- nvdimm_build_fit(dev);
+ nvdimm_build_fit_method(dev);
nvdimm_build_nvdimm_devices(dev, ram_slots);
@@ -1281,14 +1270,22 @@ void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
BIOSLinker *linker, AcpiNVDIMMState *state,
uint32_t ram_slots)
{
- nvdimm_build_nfit(state, table_offsets, table_data, linker);
+ GSList *device_list;
- /*
- * NVDIMM device is allowed to be plugged only if there is available
- * slot.
- */
- if (ram_slots) {
- nvdimm_build_ssdt(table_offsets, table_data, linker, state->dsm_mem,
- ram_slots);
+ /* no nvdimm device can be plugged. */
+ if (!ram_slots) {
+ return;
+ }
+
+ nvdimm_build_ssdt(table_offsets, table_data, linker, state->dsm_mem,
+ ram_slots);
+
+ device_list = nvdimm_get_device_list();
+ /* no NVDIMM device is plugged. */
+ if (!device_list) {
+ return;
}
+
+ nvdimm_build_nfit(state, table_offsets, table_data, linker);
+ g_slist_free(device_list);
}
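
In outline, nvdimm_build_acpi() behaves as follows after this change
(pseudocode, not literal code from the patch):

    if (!ram_slots)
        return;         /* no slots: an NVDIMM can never be plugged     */
    build_ssdt();       /* always: prebuilt NVxx devices, _DSM and _FIT */
    if (any NVDIMM is present)
        build_nfit();   /* describes devices present at boot            */

The SSDT has to be built whenever slots exist so that the prebuilt
NVDIMM objects and _FIT are in place before any hot add happens;
devices added later are reported through _FIT rather than the NFIT.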
diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
index 2adc246..17d36bd 100644
--- a/hw/acpi/piix4.c
+++ b/hw/acpi/piix4.c
@@ -378,7 +378,12 @@ static void piix4_device_plug_cb(HotplugHandler *hotplug_dev,
if (s->acpi_memory_hotplug.is_enabled &&
object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
- acpi_memory_plug_cb(hotplug_dev, &s->acpi_memory_hotplug, dev, errp);
+ if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
+ nvdimm_acpi_plug_cb(hotplug_dev, dev);
+ } else {
+ acpi_memory_plug_cb(hotplug_dev, &s->acpi_memory_hotplug,
+ dev, errp);
+ }
} else if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
acpi_pcihp_device_plug_cb(hotplug_dev, &s->acpi_pci_hotplug, dev, errp);
} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
diff --git a/hw/core/hotplug.c b/hw/core/hotplug.c
index ab34c19..17ac986 100644
--- a/hw/core/hotplug.c
+++ b/hw/core/hotplug.c
@@ -35,17 +35,6 @@ void hotplug_handler_plug(HotplugHandler *plug_handler,
}
}
-void hotplug_handler_post_plug(HotplugHandler *plug_handler,
- DeviceState *plugged_dev,
- Error **errp)
-{
- HotplugHandlerClass *hdc = HOTPLUG_HANDLER_GET_CLASS(plug_handler);
-
- if (hdc->post_plug) {
- hdc->post_plug(plug_handler, plugged_dev, errp);
- }
-}
-
void hotplug_handler_unplug_request(HotplugHandler *plug_handler,
DeviceState *plugged_dev,
Error **errp)
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
index d835e62..5783442 100644
--- a/hw/core/qdev.c
+++ b/hw/core/qdev.c
@@ -945,21 +945,10 @@ static void device_set_realized(Object *obj, bool value, Error **errp)
goto child_realize_fail;
}
}
-
if (dev->hotplugged) {
device_reset(dev);
}
dev->pending_deleted_event = false;
- dev->realized = value;
-
- if (hotplug_ctrl) {
- hotplug_handler_post_plug(hotplug_ctrl, dev, &local_err);
- }
-
- if (local_err != NULL) {
- dev->realized = value;
- goto post_realize_fail;
- }
} else if (!value && dev->realized) {
Error **local_errp = NULL;
QLIST_FOREACH(bus, &dev->child_bus, sibling) {
@@ -976,14 +965,13 @@ static void device_set_realized(Object *obj, bool value, Error **errp)
}
dev->pending_deleted_event = true;
DEVICE_LISTENER_CALL(unrealize, Reverse, dev);
+ }
- if (local_err != NULL) {
- goto fail;
- }
-
- dev->realized = value;
+ if (local_err != NULL) {
+ goto fail;
}
+ dev->realized = value;
return;
child_realize_fail:
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 200963f..bc36e19 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1700,22 +1700,16 @@ static void pc_dimm_plug(HotplugHandler *hotplug_dev,
goto out;
}
+ if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
+ nvdimm_plug(&pcms->acpi_nvdimm_state);
+ }
+
hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
hhc->plug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &error_abort);
out:
error_propagate(errp, local_err);
}
-static void pc_dimm_post_plug(HotplugHandler *hotplug_dev,
- DeviceState *dev, Error **errp)
-{
- PCMachineState *pcms = PC_MACHINE(hotplug_dev);
-
- if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
- nvdimm_acpi_hotplug(&pcms->acpi_nvdimm_state);
- }
-}
-
static void pc_dimm_unplug_request(HotplugHandler *hotplug_dev,
DeviceState *dev, Error **errp)
{
@@ -1752,12 +1746,6 @@ static void pc_dimm_unplug(HotplugHandler *hotplug_dev,
HotplugHandlerClass *hhc;
Error *local_err = NULL;
- if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
- error_setg(&local_err,
- "nvdimm device hot unplug is not supported yet.");
- goto out;
- }
-
hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
hhc->unplug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
@@ -1988,14 +1976,6 @@ static void pc_machine_device_plug_cb(HotplugHandler *hotplug_dev,
}
}
-static void pc_machine_device_post_plug_cb(HotplugHandler *hotplug_dev,
- DeviceState *dev, Error **errp)
-{
- if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
- pc_dimm_post_plug(hotplug_dev, dev, errp);
- }
-}
-
static void pc_machine_device_unplug_request_cb(HotplugHandler *hotplug_dev,
DeviceState *dev, Error **errp)
{
@@ -2330,7 +2310,6 @@ static void pc_machine_class_init(ObjectClass *oc, void *data)
mc->reset = pc_machine_reset;
hc->pre_plug = pc_machine_device_pre_plug_cb;
hc->plug = pc_machine_device_plug_cb;
- hc->post_plug = pc_machine_device_post_plug_cb;
hc->unplug_request = pc_machine_device_unplug_request_cb;
hc->unplug = pc_machine_device_unplug_cb;
nc->nmi_monitor_handler = x86_nmi;
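
With post_plug gone, the NVDIMM notification is raised entirely inside
the regular plug path. For an NVDIMM hot add the resulting call chain
is roughly (function names as in the hunks above, not literal code):

    device_set_realized()
      -> hotplug_handler_plug()             /* machine hotplug handler  */
         -> pc_machine_device_plug_cb()
            -> pc_dimm_plug()
               -> nvdimm_plug()             /* rebuild FIT, mark dirty  */
               -> hhc->plug()               /* piix4/ich9 plug callback */
                  -> nvdimm_acpi_plug_cb()  /* send NVDIMM hotplug GPE  */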
diff --git a/include/hw/hotplug.h b/include/hw/hotplug.h
index 10ca5b6..c0db869 100644
--- a/include/hw/hotplug.h
+++ b/include/hw/hotplug.h
@@ -47,7 +47,6 @@ typedef void (*hotplug_fn)(HotplugHandler *plug_handler,
* @parent: Opaque parent interface.
* @pre_plug: pre plug callback called at start of device.realize(true)
* @plug: plug callback called at end of device.realize(true).
- * @post_pug: post plug callback called after device is successfully plugged.
* @unplug_request: unplug request callback.
* Used as a means to initiate device unplug for devices that
* require asynchronous unplug handling.
@@ -62,7 +61,6 @@ typedef struct HotplugHandlerClass {
/* <public> */
hotplug_fn pre_plug;
hotplug_fn plug;
- hotplug_fn post_plug;
hotplug_fn unplug_request;
hotplug_fn unplug;
} HotplugHandlerClass;
@@ -85,14 +83,6 @@ void hotplug_handler_pre_plug(HotplugHandler *plug_handler,
DeviceState *plugged_dev,
Error **errp);
-/**
- * hotplug_handler_post_plug:
- *
- * Call #HotplugHandlerClass.post_plug callback of @plug_handler.
- */
-void hotplug_handler_post_plug(HotplugHandler *plug_handler,
- DeviceState *plugged_dev,
- Error **errp);
/**
* hotplug_handler_unplug_request:
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 33cd421..5d7a8c0 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -99,20 +99,15 @@ typedef struct NVDIMMClass NVDIMMClass;
#define NVDIMM_ACPI_IO_LEN 4
/*
- * The buffer, @fit, saves the FIT info for all the presented NVDIMM
- * devices which is updated after the NVDIMM device is plugged or
- * unplugged.
- *
- * Rules to use the buffer:
- * 1) the user should hold the @lock to access the buffer.
- * 2) mark @dirty whenever the buffer is updated.
- *
- * These rules preserve NVDIMM ACPI _FIT method to read incomplete
- * or obsolete fit info if fit update happens during multiple RFIT
- * calls.
+ * NvdimmFitBuffer:
+ * @fit: FIT structures for present NVDIMMs. It is updated after
+ * the NVDIMM device is plugged or unplugged.
+ * @dirty: It is set whenever the buffer is updated so that it
+ * preserves NVDIMM ACPI _FIT method to read incomplete or
+ * obsolete fit info if fit update happens during multiple RFIT
+ * calls.
*/
struct NvdimmFitBuffer {
- QemuMutex lock;
GArray *fit;
bool dirty;
};
@@ -137,5 +132,6 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
BIOSLinker *linker, AcpiNVDIMMState *state,
uint32_t ram_slots);
-void nvdimm_acpi_hotplug(AcpiNVDIMMState *state);
+void nvdimm_plug(AcpiNVDIMMState *state);
+void nvdimm_acpi_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev);
#endif
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support
2016-11-04 4:22 ` Michael S. Tsirkin
2016-11-04 7:51 ` Xiao Guangrong
@ 2016-11-04 9:15 ` Stefan Hajnoczi
2016-11-04 9:51 ` Igor Mammedov
1 sibling, 1 reply; 16+ messages in thread
From: Stefan Hajnoczi @ 2016-11-04 9:15 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Xiao Guangrong, pbonzini, imammedo, gleb, mtosatti, rth, ehabkost,
dan.j.williams, kvm, qemu-devel
On Fri, Nov 04, 2016 at 06:22:26AM +0200, Michael S. Tsirkin wrote:
> On Fri, Nov 04, 2016 at 06:01:54AM +0200, Michael S. Tsirkin wrote:
> > On Fri, Nov 04, 2016 at 11:50:19AM +0800, Xiao Guangrong wrote:
> > > On 11/04/2016 02:36 AM, Xiao Guangrong wrote:
> > > > Hi Michael,
> > > >
> > > > This patchset can replace the patches from [PULL 36/47] to [PULL 39/47]
> > > > in your pull request:
> > > > [PULL 36/47] nvdimm acpi: prebuild nvdimm devices for available slots
> > > > [PULL 37/47] nvdimm acpi: introduce fit buffer
> > > > [PULL 38/47] nvdimm acpi: introduce _FIT
> > > > [PULL 39/47] pc: memhp: enable nvdimm device hotplug
> > > >
> > > > Thanks for your patience also thank Igor and Stefan for their review.
> > >
> > > Hi,
> > >
> > > As the pull request has been upstream (Cool! :)), i will post
> > > diff changes based on that.
> > >
> > > Thanks!
> >
> > Igor prefers seeing revert+patches, I prefer seeing a diff.
> > Can you send both? A global diff would be ok for me
> > as it's small and easy enough to generate.
>
> Stefan, I wonder what's easier for you to review?
Since it has been merged into qemu.git/master I'd now like to see
follow-up patches. Not a global diff but real individual changes on top
of qemu.git/master. These fixes can be merged during softfreeze.
Stefan
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support
2016-11-04 9:15 ` Stefan Hajnoczi
@ 2016-11-04 9:51 ` Igor Mammedov
2016-11-04 10:01 ` Xiao Guangrong
2016-11-07 13:04 ` Stefan Hajnoczi
0 siblings, 2 replies; 16+ messages in thread
From: Igor Mammedov @ 2016-11-04 9:51 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Michael S. Tsirkin, Xiao Guangrong, pbonzini, gleb, mtosatti, rth,
ehabkost, dan.j.williams, kvm, qemu-devel
On Fri, 4 Nov 2016 09:15:48 +0000
Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Fri, Nov 04, 2016 at 06:22:26AM +0200, Michael S. Tsirkin wrote:
> > On Fri, Nov 04, 2016 at 06:01:54AM +0200, Michael S. Tsirkin wrote:
> > > On Fri, Nov 04, 2016 at 11:50:19AM +0800, Xiao Guangrong wrote:
> > > > On 11/04/2016 02:36 AM, Xiao Guangrong wrote:
> > > > > Hi Michael,
> > > > >
> > > > > This patchset can replace the patches from [PULL 36/47] to [PULL 39/47]
> > > > > in your pull request:
> > > > > [PULL 36/47] nvdimm acpi: prebuild nvdimm devices for available slots
> > > > > [PULL 37/47] nvdimm acpi: introduce fit buffer
> > > > > [PULL 38/47] nvdimm acpi: introduce _FIT
> > > > > [PULL 39/47] pc: memhp: enable nvdimm device hotplug
> > > > >
> > > > > Thanks for your patience also thank Igor and Stefan for their review.
> > > >
> > > > Hi,
> > > >
> > > > As the pull request has been upstream (Cool! :)), i will post
> > > > diff changes based on that.
> > > >
> > > > Thanks!
> > >
> > > Igor prefers seeing revert+patches, I prefer seeing a diff.
> > > Can you send both? A global diff would be ok for me
> > > as it's small and easy enough to generate.
> >
> > Stefan, I wonder what's easier for you to review?
>
> Since it has been merged into qemu.git/master I'd now like to see
> follow-up patches. Not a global diff but real individual changes on top
> of qemu.git/master. These fixes can be merged during softfreeze.
Incremental followup patches will make review a bit harder as
they would remove code that shouldn't have been there in
the first place and would be fixing an existing mess.
But since the majority prefers incremental followup patches let's do
it this way.
>
> Stefan
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support
2016-11-04 9:51 ` Igor Mammedov
@ 2016-11-04 10:01 ` Xiao Guangrong
2016-11-07 13:04 ` Stefan Hajnoczi
1 sibling, 0 replies; 16+ messages in thread
From: Xiao Guangrong @ 2016-11-04 10:01 UTC (permalink / raw)
To: Igor Mammedov, Stefan Hajnoczi
Cc: Michael S. Tsirkin, pbonzini, gleb, mtosatti, rth, ehabkost,
dan.j.williams, kvm, qemu-devel
On 11/04/2016 05:51 PM, Igor Mammedov wrote:
> But since majority prefers incremental followup patches lets do
> it this way.
Okay, then I will make this patchset on top of upstream.
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce fit buffer
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce fit buffer Xiao Guangrong
@ 2016-11-04 10:08 ` Igor Mammedov
0 siblings, 0 replies; 16+ messages in thread
From: Igor Mammedov @ 2016-11-04 10:08 UTC (permalink / raw)
To: Xiao Guangrong
Cc: pbonzini, gleb, mtosatti, stefanha, mst, rth, ehabkost,
dan.j.williams, kvm, qemu-devel
On Fri, 4 Nov 2016 02:36:23 +0800
Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
> The buffer is used to save the FIT info for all the presented nvdimm
> devices which is updated after the nvdimm device is plugged or
> unplugged. In the later patch, it will be used to construct NVDIMM
> ACPI _FIT method which reflects the presented nvdimm devices after
> nvdimm hotplug
>
> As FIT buffer can not completely mapped into guest address space,
^^ be
> OSPM will exit to QEMU multiple times, however, there is the race
> condition - FIT may be changed during these multiple exits, so that
> we mark @dirty whenever the buffer is updated.
>
> @dirty is cleared for the first time OSPM gets fit buffer, if
> dirty is detected in the later access, OSPM will restart the
> access
s/presented/present/g
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> ---
> hw/acpi/nvdimm.c | 59 +++++++++++++++++++++++++++++++------------------
> hw/i386/acpi-build.c | 2 +-
> hw/i386/pc.c | 4 ++++
> include/hw/mem/nvdimm.h | 21 +++++++++++++++++-
> 4 files changed, 62 insertions(+), 24 deletions(-)
>
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index 58bc5c6..73db3a7 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -33,35 +33,30 @@
> #include "hw/nvram/fw_cfg.h"
> #include "hw/mem/nvdimm.h"
>
> -static int nvdimm_plugged_device_list(Object *obj, void *opaque)
> +static int nvdimm_device_list(Object *obj, void *opaque)
> {
> GSList **list = opaque;
>
> if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
> - DeviceState *dev = DEVICE(obj);
> -
> - if (dev->realized) { /* only realized NVDIMMs matter */
> - *list = g_slist_append(*list, DEVICE(obj));
> - }
> + *list = g_slist_append(*list, DEVICE(obj));
> }
>
> - object_child_foreach(obj, nvdimm_plugged_device_list, opaque);
> + object_child_foreach(obj, nvdimm_device_list, opaque);
> return 0;
> }
above hunk should be in a separate patch, to reduce noise caused by rename
>
> /*
> - * inquire plugged NVDIMM devices and link them into the list which is
> + * inquire NVDIMM devices and link them into the list which is
> * returned to the caller.
> *
> * Note: it is the caller's responsibility to free the list to avoid
> * memory leak.
> */
> -static GSList *nvdimm_get_plugged_device_list(void)
> +static GSList *nvdimm_get_device_list(void)
> {
> GSList *list = NULL;
>
> - object_child_foreach(qdev_get_machine(), nvdimm_plugged_device_list,
> - &list);
> + object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list);
> return list;
> }
>
> @@ -219,7 +214,7 @@ static uint32_t nvdimm_slot_to_dcr_index(int slot)
> static NVDIMMDevice *nvdimm_get_device_by_handle(uint32_t handle)
> {
> NVDIMMDevice *nvdimm = NULL;
> - GSList *list, *device_list = nvdimm_get_plugged_device_list();
> + GSList *list, *device_list = nvdimm_get_device_list();
>
> for (list = device_list; list; list = list->next) {
> NVDIMMDevice *nvd = list->data;
> @@ -348,8 +343,9 @@ static void nvdimm_build_structure_dcr(GArray *structures, DeviceState *dev)
> (DSM) in DSM Spec Rev1.*/);
> }
>
> -static GArray *nvdimm_build_device_structure(GSList *device_list)
> +static GArray *nvdimm_build_device_structure(void)
> {
> + GSList *device_list = nvdimm_get_device_list();
> GArray *structures = g_array_new(false, true /* clear */, 1);
>
> for (; device_list; device_list = device_list->next) {
> @@ -367,14 +363,32 @@ static GArray *nvdimm_build_device_structure(GSList *device_list)
> /* build NVDIMM Control Region Structure. */
> nvdimm_build_structure_dcr(structures, dev);
> }
> + g_slist_free(device_list);
>
> return structures;
> }
>
> -static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
> +static void nvdimm_init_fit_buffer(NvdimmFitBuffer *fit_buf)
> +{
> + fit_buf->fit = g_array_new(false, true /* clear */, 1);
> +}
> +
> +static void nvdimm_build_fit_buffer(NvdimmFitBuffer *fit_buf)
> +{
> + g_array_free(fit_buf->fit, true);
> + fit_buf->fit = nvdimm_build_device_structure();
> + fit_buf->dirty = true;
> +}
> +
> +void nvdimm_plug(AcpiNVDIMMState *state)
> +{
> + nvdimm_build_fit_buffer(&state->fit_buf);
> +}
> +
> +static void nvdimm_build_nfit(AcpiNVDIMMState *state, GArray *table_offsets,
> GArray *table_data, BIOSLinker *linker)
> {
> - GArray *structures = nvdimm_build_device_structure(device_list);
> + NvdimmFitBuffer *fit_buf = &state->fit_buf;
> unsigned int header;
>
> acpi_add_table(table_offsets, table_data);
> @@ -383,12 +397,11 @@ static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
> header = table_data->len;
> acpi_data_push(table_data, sizeof(NvdimmNfitHeader));
> /* NVDIMM device structures. */
> - g_array_append_vals(table_data, structures->data, structures->len);
> + g_array_append_vals(table_data, fit_buf->fit->data, fit_buf->fit->len);
>
> build_header(linker, table_data,
> (void *)(table_data->data + header), "NFIT",
> - sizeof(NvdimmNfitHeader) + structures->len, 1, NULL, NULL);
> - g_array_free(structures, true);
> + sizeof(NvdimmNfitHeader) + fit_buf->fit->len, 1, NULL, NULL);
> }
>
> struct NvdimmDsmIn {
> @@ -771,6 +784,8 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
> acpi_data_push(state->dsm_mem, sizeof(NvdimmDsmIn));
> fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem->data,
> state->dsm_mem->len);
> +
> + nvdimm_init_fit_buffer(&state->fit_buf);
> }
>
> #define NVDIMM_COMMON_DSM "NCAL"
> @@ -1045,7 +1060,7 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
> }
>
> void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
> - BIOSLinker *linker, GArray *dsm_dma_arrea,
> + BIOSLinker *linker, AcpiNVDIMMState *state,
> uint32_t ram_slots)
> {
> GSList *device_list;
> @@ -1055,15 +1070,15 @@ void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
> return;
> }
>
> - nvdimm_build_ssdt(table_offsets, table_data, linker, dsm_dma_arrea,
> + nvdimm_build_ssdt(table_offsets, table_data, linker, state->dsm_mem,
> ram_slots);
>
> - device_list = nvdimm_get_plugged_device_list();
> + device_list = nvdimm_get_device_list();
> /* no NVDIMM device is plugged. */
> if (!device_list) {
> return;
> }
>
> - nvdimm_build_nfit(device_list, table_offsets, table_data, linker);
> + nvdimm_build_nfit(state, table_offsets, table_data, linker);
> g_slist_free(device_list);
> }
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 6ae4769..bc49958 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -2767,7 +2767,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
> }
> if (pcms->acpi_nvdimm_state.is_enabled) {
> nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
> - pcms->acpi_nvdimm_state.dsm_mem, machine->ram_slots);
> + &pcms->acpi_nvdimm_state, machine->ram_slots);
> }
>
> /* Add tables supplied by user (if any) */
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 93ff49c..a4e1509 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -1700,6 +1700,10 @@ static void pc_dimm_plug(HotplugHandler *hotplug_dev,
> goto out;
> }
>
> + if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
> + nvdimm_plug(&pcms->acpi_nvdimm_state);
> + }
> +
> hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
> hhc->plug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &error_abort);
> out:
> diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
> index 63a2b20..e704e55 100644
> --- a/include/hw/mem/nvdimm.h
> +++ b/include/hw/mem/nvdimm.h
> @@ -98,12 +98,30 @@ typedef struct NVDIMMClass NVDIMMClass;
> #define NVDIMM_ACPI_IO_BASE 0x0a18
> #define NVDIMM_ACPI_IO_LEN 4
>
> +/*
> + * NvdimmFitBuffer:
> + * @fit: FIT structures for present NVDIMMs. It is updated after
s/after/when/
> + * the NVDIMM device is plugged or unplugged.
> + * @dirty: It is set whenever the buffer is updated
. It allows OSPM to detect change and restart read in progress if there is any.
> + so that it
> + * preserves NVDIMM ACPI _FIT method to read incomplete or
> + * obsolete fit info if fit update happens during multiple RFIT
> + * calls.
drop this
> + */
> +struct NvdimmFitBuffer {
> + GArray *fit;
> + bool dirty;
> +};
> +typedef struct NvdimmFitBuffer NvdimmFitBuffer;
> +
> struct AcpiNVDIMMState {
> /* detect if NVDIMM support is enabled. */
> bool is_enabled;
>
> /* the data of the fw_cfg file NVDIMM_DSM_MEM_FILE. */
> GArray *dsm_mem;
> +
> + NvdimmFitBuffer fit_buf;
> +
> /* the IO region used by OSPM to transfer control to QEMU. */
> MemoryRegion io_mr;
> };
> @@ -112,6 +130,7 @@ typedef struct AcpiNVDIMMState AcpiNVDIMMState;
> void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
> FWCfgState *fw_cfg, Object *owner);
> void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
> - BIOSLinker *linker, GArray *dsm_dma_arrea,
> + BIOSLinker *linker, AcpiNVDIMMState *state,
> uint32_t ram_slots);
> +void nvdimm_plug(AcpiNVDIMMState *state);
> #endif
otherwise patch looks good
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: introduce _FIT method
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: introduce _FIT method Xiao Guangrong
@ 2016-11-04 10:24 ` Igor Mammedov
0 siblings, 0 replies; 16+ messages in thread
From: Igor Mammedov @ 2016-11-04 10:24 UTC (permalink / raw)
To: Xiao Guangrong
Cc: pbonzini, gleb, mtosatti, stefanha, mst, rth, ehabkost,
dan.j.williams, kvm, qemu-devel
On Fri, 4 Nov 2016 02:36:25 +0800
Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
> _FIT is required for hotplug support, guest will inquire the
> updated device info from it if a hotplug event is received
>
> As FIT buffer is not completely mapped into guest address space,
> Read_FIT method is introduced to read NFIT structures blob from
> QEMU, The buffer is concatenated before _FIT return
>
> Refer to docs/specs/acpi-nvdimm.txt for detailed design
>
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> ---
> docs/specs/acpi_nvdimm.txt | 61 ++++++++++++--
> hw/acpi/nvdimm.c | 198 ++++++++++++++++++++++++++++++++++++++++++++-
> 2 files changed, 252 insertions(+), 7 deletions(-)
>
> diff --git a/docs/specs/acpi_nvdimm.txt b/docs/specs/acpi_nvdimm.txt
> index 0fdd251..63cb63f 100644
> --- a/docs/specs/acpi_nvdimm.txt
> +++ b/docs/specs/acpi_nvdimm.txt
> @@ -65,8 +65,8 @@ _FIT(Firmware Interface Table)
> The detailed definition of the structure can be found at ACPI 6.0: 5.2.25
> NVDIMM Firmware Interface Table (NFIT).
>
> -QEMU NVDIMM Implemention
> -========================
> +QEMU NVDIMM Implementation
> +==========================
> QEMU uses 4 bytes IO Port starting from 0x0a18 and a RAM-based memory page
> for NVDIMM ACPI.
>
> @@ -82,6 +82,16 @@ Memory:
> ACPI writes _DSM Input Data (based on the offset in the page):
> [0x0 - 0x3]: 4 bytes, NVDIMM Device Handle, 0 is reserved for NVDIMM
> Root device.
> +
> + The handle is completely QEMU internal thing, the values in
> + range [1, 0xFFFF] indicate nvdimm device. Other values are
> + reserved for other purposes.
> +
> + Reserved handles:
> + 0 is reserved for nvdimm root device named NVDR.
> + 0x10000 is reserved for QEMU internal DSM function called on
> + the root device.
> +
> [0x4 - 0x7]: 4 bytes, Revision ID, that is the Arg1 of _DSM method.
> [0x8 - 0xB]: 4 bytes. Function Index, that is the Arg2 of _DSM method.
> [0xC - 0xFFF]: 4084 bytes, the Arg3 of _DSM method.
> @@ -127,6 +137,47 @@ _DSM process diagram:
> | result from the page | | |
> +--------------------------+ +--------------+
>
> - _FIT implementation
> - -------------------
> - TODO (will fill it when nvdimm hotplug is introduced)
> +QEMU internal use only _DSM function
> +------------------------------------
> +1) Read FIT
> + _FIT method uses _DSM method to fetch NFIT structures blob from QEMU
> + in 1 page sized increments which are then concatenated and returned
> + as _FIT method result.
> +
> + Input parameters:
> + Arg0 – UUID {set to 648B9CF2-CDA1-4312-8AD9-49C4AF32BD62}
> + Arg1 – Revision ID (set to 1)
> + Arg2 - Function Index, 0x1
> + Arg3 - A package containing a buffer whose layout is as follows:
> +
> + +----------+--------+--------+-------------------------------------------+
> + | Field | Length | Offset | Description |
> + +----------+--------+--------+-------------------------------------------+
> + | offset | 4 | 0 | offset in QEMU's NFIT structures blob to |
> + | | | | read from |
> + +----------+--------+--------+-------------------------------------------+
> +
> + Output layout in the dsm memory page:
> + +----------+--------+--------+-------------------------------------------+
> + | Field | Length | Offset | Description |
> + +----------+--------+--------+-------------------------------------------+
> + | length | 4 | 0 | length of entire returned data |
> + | | | | (including the header) |
s/the header/this header/
> + +----------+-----------------+-------------------------------------------+
> + | | | | return status codes |
> + | | | | 0x0 - success |
> + | | | | 0x100 - error caused by NFIT update while |
> + | status | 4 | 4 | read by _FIT wasn't completed, other |
> + | | | | codes follow Chapter 3 in DSM Spec Rev1 |
> + +----------+-----------------+-------------------------------------------+
> + | fit data | Varies | 8 | contains FIT data, this field is present |
> + | | | | if status field is 0; |
> + +----------+--------+--------+-------------------------------------------+
> +
> + The FIT offset is maintained by the OSPM itself, current offset plus
> + the size of the fit data returned by the function is the next offset
> + OSPM should read. When all FIT data has been read out, zero length is
> + returned.
you mean here the fit data length rather than what is in the 'length' field;
you should talk here in terms of the 'length' field.
> +
> + If it returns status code 0x100, OSPM should restart reading the FIT
> + (read from offset 0 again).
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index d9d1ef7..f82ae4a 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -478,6 +478,22 @@ typedef struct NvdimmFuncSetLabelDataIn NvdimmFuncSetLabelDataIn;
> QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncSetLabelDataIn) +
> offsetof(NvdimmDsmIn, arg3) > 4096);
>
> +struct NvdimmFuncReadFITIn {
> + uint32_t offset; /* the offset into FIT buffer. */
> +} QEMU_PACKED;
> +typedef struct NvdimmFuncReadFITIn NvdimmFuncReadFITIn;
> +QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITIn) +
> + offsetof(NvdimmDsmIn, arg3) > 4096);
> +
> +struct NvdimmFuncReadFITOut {
> + /* the size of buffer filled by QEMU. */
> + uint32_t len;
> + uint32_t func_ret_status; /* return status code. */
> + uint8_t fit[0]; /* the FIT data. */
> +} QEMU_PACKED;
> +typedef struct NvdimmFuncReadFITOut NvdimmFuncReadFITOut;
> +QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITOut) > 4096);
> +
> static void
> nvdimm_dsm_function0(uint32_t supported_func, hwaddr dsm_mem_addr)
> {
> @@ -502,6 +518,73 @@ nvdimm_dsm_no_payload(uint32_t func_ret_status, hwaddr dsm_mem_addr)
> #define NVDIMM_DSM_RET_STATUS_UNSUPPORT 1 /* Not Supported */
> #define NVDIMM_DSM_RET_STATUS_NOMEMDEV 2 /* Non-Existing Memory Device */
> #define NVDIMM_DSM_RET_STATUS_INVALID 3 /* Invalid Input Parameters */
> +#define NVDIMM_DSM_RET_STATUS_FIT_CHANGED 0x100 /* FIT Changed */
> +
> +#define NVDIMM_QEMU_RSVD_HANDLE_ROOT 0x10000
> +
> +/* Read FIT data, defined in docs/specs/acpi_nvdimm.txt. */
> +static void nvdimm_dsm_func_read_fit(AcpiNVDIMMState *state, NvdimmDsmIn *in,
> + hwaddr dsm_mem_addr)
> +{
> + NvdimmFitBuffer *fit_buf = &state->fit_buf;
> + NvdimmFuncReadFITIn *read_fit;
> + NvdimmFuncReadFITOut *read_fit_out;
> + GArray *fit;
> + uint32_t read_len = 0, func_ret_status, offset;
> + int size;
> +
> + read_fit = (NvdimmFuncReadFITIn *)in->arg3;
> + offset = le32_to_cpu(read_fit->offset);
> +
> + fit = fit_buf->fit;
> +
> + nvdimm_debug("Read FIT: offset %#x FIT size %#x Dirty %s.\n",
> + offset, fit->len, fit_buf->dirty ? "Yes" : "No");
> +
> + /* It is the first time to read FIT. */
> + if (!offset) {
> + fit_buf->dirty = false;
> + } else if (fit_buf->dirty) { /* FIT has been changed during RFIT. */
> + func_ret_status = NVDIMM_DSM_RET_STATUS_FIT_CHANGED;
> + goto exit;
> + }
> +
> + if (offset > fit->len) {
> + func_ret_status = NVDIMM_DSM_RET_STATUS_INVALID;
> + goto exit;
> + }
> +
> + func_ret_status = NVDIMM_DSM_RET_STATUS_SUCCESS;
> + read_len = MIN(fit->len - offset, 4096 - sizeof(NvdimmFuncReadFITOut));
^^^
should be a macro that's used at the place where the page is allocated,
so that usage places won't go out of sync someday
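For instance (a sketch; NVDIMM_DSM_MEMORY_SIZE is a made-up name here,
not something the patch defines):

    /* one shared constant instead of a bare 4096 at every use site */
    #define NVDIMM_DSM_MEMORY_SIZE 4096

    read_len = MIN(fit->len - offset,
                   NVDIMM_DSM_MEMORY_SIZE - sizeof(NvdimmFuncReadFITOut));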
> +
> +exit:
> + size = sizeof(NvdimmFuncReadFITOut) + read_len;
> + read_fit_out = g_malloc(size);
> +
> + read_fit_out->len = cpu_to_le32(size);
> + read_fit_out->func_ret_status = cpu_to_le32(func_ret_status);
> + memcpy(read_fit_out->fit, fit->data + offset, read_len);
> +
> + cpu_physical_memory_write(dsm_mem_addr, read_fit_out, size);
> +
> + g_free(read_fit_out);
> +}
> +
> +static void
> +nvdimm_dsm_handle_reserved_root_method(AcpiNVDIMMState *state,
> + NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> +{
> + switch (in->function) {
> + case 0x0:
> + nvdimm_dsm_function0(0x1 | 1 << 1 /* Read FIT */, dsm_mem_addr);
> + return;
> + case 0x1 /* Read FIT */:
> + nvdimm_dsm_func_read_fit(state, in, dsm_mem_addr);
> + return;
> + }
> +
> + nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
> +}
>
> static void nvdimm_dsm_root(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> {
> @@ -730,6 +813,7 @@ nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
> static void
> nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
> {
> + AcpiNVDIMMState *state = opaque;
> NvdimmDsmIn *in;
> hwaddr dsm_mem_addr = val;
>
> @@ -757,6 +841,11 @@ nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
> goto exit;
> }
>
> + if (in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
> + nvdimm_dsm_handle_reserved_root_method(state, in, dsm_mem_addr);
> + goto exit;
> + }
> +
> /* Handle 0 is reserved for NVDIMM Root Device. */
> if (!in->handle) {
> nvdimm_dsm_root(in, dsm_mem_addr);
> @@ -809,9 +898,13 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
> #define NVDIMM_DSM_OUT_BUF_SIZE "RLEN"
> #define NVDIMM_DSM_OUT_BUF "ODAT"
>
> +#define NVDIMM_DSM_RFIT_STATUS "RSTA"
> +
> +#define NVDIMM_QEMU_RSVD_UUID "648B9CF2-CDA1-4312-8AD9-49C4AF32BD62"
> +
> static void nvdimm_build_common_dsm(Aml *dev)
> {
> - Aml *method, *ifctx, *function, *handle, *uuid, *dsm_mem;
> + Aml *method, *ifctx, *function, *handle, *uuid, *dsm_mem, *elsectx2;
> Aml *elsectx, *unsupport, *unpatched, *expected_uuid, *uuid_invalid;
> Aml *pckg, *pckg_index, *pckg_buf, *field, *dsm_out_buf, *dsm_out_buf_size;
> uint8_t byte_list[1];
> @@ -900,9 +993,15 @@ static void nvdimm_build_common_dsm(Aml *dev)
> /* UUID for NVDIMM Root Device */, expected_uuid));
> aml_append(method, ifctx);
> elsectx = aml_else();
> - aml_append(elsectx, aml_store(
> + ifctx = aml_if(aml_equal(handle, aml_int(NVDIMM_QEMU_RSVD_HANDLE_ROOT)));
> + aml_append(ifctx, aml_store(aml_touuid(NVDIMM_QEMU_RSVD_UUID
> + /* UUID for QEMU internal use */), expected_uuid));
> + aml_append(elsectx, ifctx);
> + elsectx2 = aml_else();
> + aml_append(elsectx2, aml_store(
> aml_touuid("4309AC30-0D11-11E4-9191-0800200C9A66")
> /* UUID for NVDIMM Devices */, expected_uuid));
> + aml_append(elsectx, elsectx2);
> aml_append(method, elsectx);
>
> uuid_invalid = aml_lnot(aml_equal(uuid, expected_uuid));
> @@ -982,6 +1081,100 @@ static void nvdimm_build_device_dsm(Aml *dev, uint32_t handle)
> aml_append(dev, method);
> }
>
> +static void nvdimm_build_fit_method(Aml *dev)
> +{
> + Aml *method, *pkg, *buf, *buf_size, *offset, *call_result;
> + Aml *whilectx, *ifcond, *ifctx, *elsectx, *fit;
> +
> + buf = aml_local(0);
> + buf_size = aml_local(1);
> + fit = aml_local(2);
> +
> + aml_append(dev, aml_name_decl(NVDIMM_DSM_RFIT_STATUS, aml_int(0)));
> +
> + /* build helper function, RFIT. */
> + method = aml_method("RFIT", 1, AML_SERIALIZED);
> + aml_append(method, aml_name_decl("OFST", aml_int(0)));
> +
> + /* prepare input package. */
> + pkg = aml_package(1);
> + aml_append(method, aml_store(aml_arg(0), aml_name("OFST")));
> + aml_append(pkg, aml_name("OFST"));
> +
> + /* call Read_FIT function. */
> + call_result = aml_call5(NVDIMM_COMMON_DSM,
> + aml_touuid(NVDIMM_QEMU_RSVD_UUID),
> + aml_int(1) /* Revision 1 */,
> + aml_int(0x1) /* Read FIT */,
> + pkg, aml_int(NVDIMM_QEMU_RSVD_HANDLE_ROOT));
> + aml_append(method, aml_store(call_result, buf));
> +
> + /* handle _DSM result. */
> + aml_append(method, aml_create_dword_field(buf,
> + aml_int(0) /* offset at byte 0 */, "STAU"));
> +
> + aml_append(method, aml_store(aml_name("STAU"),
> + aml_name(NVDIMM_DSM_RFIT_STATUS)));
> +
> + /* if something is wrong during _DSM. */
> + ifcond = aml_equal(aml_int(0 /* Success */), aml_name("STAU"));
> + ifctx = aml_if(aml_lnot(ifcond));
> + aml_append(ifctx, aml_return(aml_buffer(0, NULL)));
> + aml_append(method, ifctx);
> +
> + aml_append(method, aml_store(aml_sizeof(buf), buf_size));
> + aml_append(method, aml_subtract(buf_size,
> + aml_int(4) /* the size of "STAU" */, buf_size));
> +
> + /* if we read the end of fit. */
> + ifctx = aml_if(aml_equal(buf_size, aml_int(0)));
> + aml_append(ifctx, aml_return(aml_buffer(0, NULL)));
> + aml_append(method, ifctx);
> +
> + aml_append(method, aml_create_field(buf,
> + aml_int(4 * BITS_PER_BYTE), /* offset at byte 4.*/
> + aml_shiftleft(buf_size, aml_int(3)), "BUFF"));
> + aml_append(method, aml_return(aml_name("BUFF")));
> + aml_append(dev, method);
> +
> + /* build _FIT. */
> + method = aml_method("_FIT", 0, AML_SERIALIZED);
> + offset = aml_local(3);
> +
> + aml_append(method, aml_store(aml_buffer(0, NULL), fit));
> + aml_append(method, aml_store(aml_int(0), offset));
> +
> + whilectx = aml_while(aml_int(1));
> + aml_append(whilectx, aml_store(aml_call1("RFIT", offset), buf));
> + aml_append(whilectx, aml_store(aml_sizeof(buf), buf_size));
> +
> + /*
> + * if fit buffer was changed during RFIT, read from the beginning
> + * again.
> + */
> + ifctx = aml_if(aml_equal(aml_name(NVDIMM_DSM_RFIT_STATUS),
> + aml_int(NVDIMM_DSM_RET_STATUS_FIT_CHANGED)));
> + aml_append(ifctx, aml_store(aml_buffer(0, NULL), fit));
> + aml_append(ifctx, aml_store(aml_int(0), offset));
> + aml_append(whilectx, ifctx);
> +
> + elsectx = aml_else();
> +
> + /* finish fit read if no data is read out. */
> + ifctx = aml_if(aml_equal(buf_size, aml_int(0)));
> + aml_append(ifctx, aml_return(fit));
> + aml_append(elsectx, ifctx);
> +
> + /* update the offset. */
> + aml_append(elsectx, aml_add(offset, buf_size, offset));
> + /* append the data we read out to the fit buffer. */
> + aml_append(elsectx, aml_concatenate(fit, buf, fit));
> + aml_append(whilectx, elsectx);
> + aml_append(method, whilectx);
> +
> + aml_append(dev, method);
> +}
> +
> static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
> {
> uint32_t slot;
> @@ -1040,6 +1233,7 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
>
> /* 0 is reserved for root device. */
> nvdimm_build_device_dsm(dev, 0);
> + nvdimm_build_fit_method(dev);
>
> nvdimm_build_nvdimm_devices(dev, ram_slots);
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support
2016-11-04 9:51 ` Igor Mammedov
2016-11-04 10:01 ` Xiao Guangrong
@ 2016-11-07 13:04 ` Stefan Hajnoczi
1 sibling, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2016-11-07 13:04 UTC (permalink / raw)
To: Igor Mammedov
Cc: Stefan Hajnoczi, Xiao Guangrong, ehabkost, kvm,
Michael S. Tsirkin, gleb, mtosatti, qemu-devel, pbonzini,
dan.j.williams, rth
On Fri, Nov 04, 2016 at 10:51:43AM +0100, Igor Mammedov wrote:
> On Fri, 4 Nov 2016 09:15:48 +0000
> Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > On Fri, Nov 04, 2016 at 06:22:26AM +0200, Michael S. Tsirkin wrote:
> > > On Fri, Nov 04, 2016 at 06:01:54AM +0200, Michael S. Tsirkin wrote:
> > > > On Fri, Nov 04, 2016 at 11:50:19AM +0800, Xiao Guangrong wrote:
> > > > > On 11/04/2016 02:36 AM, Xiao Guangrong wrote:
> > > > > > Hi Michael,
> > > > > >
> > > > > > This patchset can replace the patches from [PULL 36/47] to [PULL 39/47]
> > > > > > in your pull request:
> > > > > > [PULL 36/47] nvdimm acpi: prebuild nvdimm devices for available slots
> > > > > > [PULL 37/47] nvdimm acpi: introduce fit buffer
> > > > > > [PULL 38/47] nvdimm acpi: introduce _FIT
> > > > > > [PULL 39/47] pc: memhp: enable nvdimm device hotplug
> > > > > >
> > > > > > Thanks for your patience also thank Igor and Stefan for their review.
> > > > >
> > > > > Hi,
> > > > >
> > > > > As the pull request has been upstream (Cool! :)), i will post
> > > > > diff changes based on that.
> > > > >
> > > > > Thanks!
> > > >
> > > > Igor prefers seeing revert+patches, I prefer seeing a diff.
> > > > Can you send both? A global diff would be ok for me
> > > > as it's small and easy enough to generate.
> > >
> > > Stefan, I wonder what's easier for you to review?
> >
> > Since it has been merged into qemu.git/master I'd now like to see
> > follow-up patches. Not a global diff but real individual changes on top
> > of qemu.git/master. These fixes can be merged during softfreeze.
>
> Incremental followup patches will make review a bit harder as
> they would remove code that shouldn't have been there in
> the first place and would be fixing an existing mess.
>
> But since the majority prefers incremental followup patches let's do
> it this way.
For the record I would have preferred to get the patch series right
before merging it.
I guess that Michael favors development velocity and there are arguments
why merging code before it's perfect can help it mature more quickly.
If we merge things before they are ready then it's important that
incomplete features are marked experimental/unstable. But in this case
I guess the strategy will be to merge the fixes during soft-freeze
(before Nov 15th) so that QEMU 2.8 ships the finalized code.
Stefan
^ permalink raw reply [flat|nested] 16+ messages in thread
Thread overview: 16+ messages
2016-11-03 18:36 [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: prebuild nvdimm devices for available slots Xiao Guangrong
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce fit buffer Xiao Guangrong
2016-11-04 10:08 ` Igor Mammedov
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: define DSM return codes Xiao Guangrong
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: introduce _FIT method Xiao Guangrong
2016-11-04 10:24 ` Igor Mammedov
2016-11-03 18:36 ` [Qemu-devel] [PATCH v5 5/5] pc: memhp: enable nvdimm device hotplug Xiao Guangrong
2016-11-04 3:50 ` [Qemu-devel] [PATCH v5 0/5] nvdimm: hotplug support Xiao Guangrong
2016-11-04 4:01 ` Michael S. Tsirkin
2016-11-04 4:22 ` Michael S. Tsirkin
2016-11-04 7:51 ` Xiao Guangrong
2016-11-04 9:15 ` Stefan Hajnoczi
2016-11-04 9:51 ` Igor Mammedov
2016-11-04 10:01 ` Xiao Guangrong
2016-11-07 13:04 ` Stefan Hajnoczi