qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM
@ 2016-03-02 11:50 Xiao Guangrong
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: initialize the resource used by NVDIMM ACPI Xiao Guangrong
                   ` (5 more replies)
  0 siblings, 6 replies; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-02 11:50 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: Xiao Guangrong, ehabkost, kvm, mst, gleb, mtosatti, qemu-devel,
	stefanha, dan.j.williams, rth

This patchset is against commit 429fb940caadf9 (fw-cfg: support writeable
blobs) on the pci branch of Michael's git tree and can be found at:
      https://github.com/xiaogr/qemu.git nvdimm-acpi-v5

Changelog in v5:
Thanks to Michael's review, the changes in this version are:
- use nvdimm_debug() instead of fprintf() on the path triggered by read
  access to the IO port
- introduce a data struct to represent the output information, dropping
  the open-coded values
- improve the comment to better explain that currently no function
  other than function 0 is supported

Changelog in v4:
- drop the unnecessary assert() in aml_concatenate() based on Igor's
  suggestion
- introduce build_append_named_dword() and use it to simplify the code,
  as Michael suggested

Changelog in v3:
Changes addressing Michael's comment:
- rebase the patchset against current code

Changes addressing Igor's comment:
- rename the parameters of aml_create_field() to reflect the ACPI spec
- fix the issue that the @target operand could not be optional in
  aml_concatenate(); this is also cleaned up by using build_opcode_2arg_dst()

Others:
- split the test patches into a separate series that will be posted later
  
These changes are based on Igor's comments:
- drop ssdt.rev2 support as the memory addresses allocated by BIOS/OVMF
  are always 32-bit
- support to test NVDIMM tables (NFIT and NVDIMM SSDT)
- add _CRS to report its operation region
- make the AML API changes separate patches

This is the second part of the vNVDIMM implementation: it implements the
BIOS-patched DSM memory and introduces the framework that allows QEMU
to emulate the DSM method.

Thanks to Michael's idea, we do not reserve any memory for NVDIMM ACPI;
instead we let the BIOS allocate the memory and patch its address into
the offset we want.

An IO port is still used as the way to notify QEMU and pass the patched
DSM memory address: the IO port region 0x0a18 - 0x0a20 is reserved, and
it is divided into two 32-bit ports used to pass the low 32 bits and the
high 32 bits of the DSM memory address to QEMU.

Thanks to Igor's idea, this patchset also extends the DSDT/SSDT to
revision 2 in order to use 64-bit operations; to keep compatibility, old
machine types (<= 2.5) still use revision 1. Since 64-bit operations
break old guests (such as Windows XP), we keep the 64-bit logic in a
private place that common ACPI operations do not touch.
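
A minimal sketch of the QEMU side of this handshake, assuming the
two-port split described above (the names and the address accumulator
are illustrative, not the literal patch code):

    #include <stdint.h>

    static uint64_t dsm_mem_addr;   /* DSM memory address being assembled */

    /* offset is relative to the reserved base 0x0a18 */
    static void nvdimm_acpi_io_write(uint64_t offset, uint32_t val)
    {
        if (offset == 0) {          /* first port: low 32 bits */
            dsm_mem_addr = (dsm_mem_addr & ~0xffffffffULL) | val;
        } else {                    /* second port: high 32 bits */
            dsm_mem_addr = (uint32_t)dsm_mem_addr | ((uint64_t)val << 32);
            /* address complete: QEMU can now emulate the DSM call and
             * write the result back into the guest page */
        }
    }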

Xiao Guangrong (5):
  nvdimm acpi: initialize the resource used by NVDIMM ACPI
  nvdimm acpi: introduce patched dsm memory
  nvdimm acpi: let qemu handle _DSM method
  nvdimm acpi: emulate dsm method
  nvdimm acpi: add _CRS

 hw/acpi/Makefile.objs   |   2 +-
 hw/acpi/nvdimm.c        | 255 ++++++++++++++++++++++++++++++++++++++++++++++--
 hw/i386/acpi-build.c    |  10 +-
 hw/i386/pc.c            |   6 +-
 hw/i386/pc_piix.c       |   5 +
 hw/i386/pc_q35.c        |   8 +-
 include/hw/i386/pc.h    |   4 +-
 include/hw/mem/nvdimm.h |  36 ++++++-
 8 files changed, 302 insertions(+), 24 deletions(-)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: initialize the resource used by NVDIMM ACPI
  2016-03-02 11:50 [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Xiao Guangrong
@ 2016-03-02 11:50 ` Xiao Guangrong
  2016-03-02 11:58   ` Michael S. Tsirkin
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce patched dsm memory Xiao Guangrong
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-02 11:50 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: Xiao Guangrong, ehabkost, kvm, mst, gleb, mtosatti, qemu-devel,
	stefanha, dan.j.williams, rth

A 32-bit IO port starting at 0x0a18 in the guest is reserved for NVDIMM
ACPI emulation. The table, NVDIMM_DSM_MEM_FILE, will be patched into
the NVDIMM ACPI binary code.

OSPM uses this port to tell QEMU the final address of the DSM memory
and to notify QEMU to emulate the DSM method.
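
As a rough illustration (not part of the patch), the OSPM-side
notification boils down to a single 32-bit port write that traps into
QEMU; outl() below is the usual x86 port-write helper:

    #include <stdint.h>

    static inline void outl(uint32_t val, uint16_t port)
    {
        __asm__ volatile("outl %0, %1" : : "a"(val), "Nd"(port));
    }

    /* e.g. outl(dsm_page_address, 0x0a18); -> VM-exit -> QEMU handler */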

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/Makefile.objs   |  2 +-
 hw/acpi/nvdimm.c        | 35 +++++++++++++++++++++++++++++++++++
 hw/i386/acpi-build.c    | 10 +---------
 hw/i386/pc.c            |  6 +++---
 hw/i386/pc_piix.c       |  5 +++++
 hw/i386/pc_q35.c        |  8 +++++++-
 include/hw/i386/pc.h    |  4 +++-
 include/hw/mem/nvdimm.h | 28 +++++++++++++++++++++++++++-
 8 files changed, 82 insertions(+), 16 deletions(-)

diff --git a/hw/acpi/Makefile.objs b/hw/acpi/Makefile.objs
index f3ade9a..faee86c 100644
--- a/hw/acpi/Makefile.objs
+++ b/hw/acpi/Makefile.objs
@@ -2,7 +2,7 @@ common-obj-$(CONFIG_ACPI_X86) += core.o piix4.o pcihp.o
 common-obj-$(CONFIG_ACPI_X86_ICH) += ich9.o tco.o
 common-obj-$(CONFIG_ACPI_CPU_HOTPLUG) += cpu_hotplug.o cpu_hotplug_acpi_table.o
 common-obj-$(CONFIG_ACPI_MEMORY_HOTPLUG) += memory_hotplug.o memory_hotplug_acpi_table.o
-common-obj-$(CONFIG_ACPI_NVDIMM) += nvdimm.o
+obj-$(CONFIG_ACPI_NVDIMM) += nvdimm.o
 common-obj-$(CONFIG_ACPI) += acpi_interface.o
 common-obj-$(CONFIG_ACPI) += bios-linker-loader.o
 common-obj-$(CONFIG_ACPI) += aml-build.o
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 49ee68e..8568b20 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -29,6 +29,7 @@
 #include "qemu/osdep.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/aml-build.h"
+#include "hw/nvram/fw_cfg.h"
 #include "hw/mem/nvdimm.h"
 
 static int nvdimm_plugged_device_list(Object *obj, void *opaque)
@@ -370,6 +371,40 @@ static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
     g_array_free(structures, true);
 }
 
+static uint64_t
+nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
+{
+    return 0;
+}
+
+static void
+nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
+{
+}
+
+static const MemoryRegionOps nvdimm_dsm_ops = {
+    .read = nvdimm_dsm_read,
+    .write = nvdimm_dsm_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+};
+
+void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
+                            FWCfgState *fw_cfg, Object *owner)
+{
+    memory_region_init_io(&state->io_mr, owner, &nvdimm_dsm_ops, state,
+                          "nvdimm-acpi-io", NVDIMM_ACPI_IO_LEN);
+    memory_region_add_subregion(io, NVDIMM_ACPI_IO_BASE, &state->io_mr);
+
+    state->dsm_mem = g_array_new(false, true /* clear */, 1);
+    acpi_data_push(state->dsm_mem, TARGET_PAGE_SIZE);
+    fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem->data,
+                    state->dsm_mem->len);
+}
+
 #define NVDIMM_COMMON_DSM      "NCAL"
 
 static void nvdimm_build_common_dsm(Aml *dev)
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 6d8d23b..f8ff89d 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -38,7 +38,6 @@
 #include "hw/loader.h"
 #include "hw/isa/isa.h"
 #include "hw/acpi/memory_hotplug.h"
-#include "hw/mem/nvdimm.h"
 #include "sysemu/tpm.h"
 #include "hw/acpi/tpm.h"
 #include "sysemu/tpm_backend.h"
@@ -2582,13 +2581,6 @@ static bool acpi_has_iommu(void)
     return intel_iommu && !ambiguous;
 }
 
-static bool acpi_has_nvdimm(void)
-{
-    PCMachineState *pcms = PC_MACHINE(qdev_get_machine());
-
-    return pcms->nvdimm;
-}
-
 static
 void acpi_build(AcpiBuildTables *tables)
 {
@@ -2673,7 +2665,7 @@ void acpi_build(AcpiBuildTables *tables)
         build_dmar_q35(tables_blob, tables->linker);
     }
 
-    if (acpi_has_nvdimm()) {
+    if (pcms->acpi_nvdimm_state.is_enabled) {
         nvdimm_build_acpi(table_offsets, tables_blob, tables->linker);
     }
 
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 0aeefd2..5194acd 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1854,14 +1854,14 @@ static bool pc_machine_get_nvdimm(Object *obj, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
 
-    return pcms->nvdimm;
+    return pcms->acpi_nvdimm_state.is_enabled;
 }
 
 static void pc_machine_set_nvdimm(Object *obj, bool value, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
 
-    pcms->nvdimm = value;
+    pcms->acpi_nvdimm_state.is_enabled = value;
 }
 
 static void pc_machine_initfn(Object *obj)
@@ -1900,7 +1900,7 @@ static void pc_machine_initfn(Object *obj)
                                     &error_abort);
 
     /* nvdimm is disabled on default. */
-    pcms->nvdimm = false;
+    pcms->acpi_nvdimm_state.is_enabled = false;
     object_property_add_bool(obj, PC_MACHINE_NVDIMM, pc_machine_get_nvdimm,
                              pc_machine_set_nvdimm, &error_abort);
 }
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 6f8c2cd..6a69b23 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -274,6 +274,11 @@ static void pc_init1(MachineState *machine,
     if (pcmc->pci_enabled) {
         pc_pci_device_init(pci_bus);
     }
+
+    if (pcms->acpi_nvdimm_state.is_enabled) {
+        nvdimm_init_acpi_state(&pcms->acpi_nvdimm_state, system_io,
+                               pcms->fw_cfg, OBJECT(pcms));
+    }
 }
 
 /* Looking for a pc_compat_2_4() function? It doesn't exist.
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 46522c9..17915b0 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -61,6 +61,7 @@ static void pc_q35_init(MachineState *machine)
     PCIDevice *lpc;
     BusState *idebus[MAX_SATA_PORTS];
     ISADevice *rtc_state;
+    MemoryRegion *system_io = get_system_io();
     MemoryRegion *pci_memory;
     MemoryRegion *rom_memory;
     MemoryRegion *ram_memory;
@@ -160,7 +161,7 @@ static void pc_q35_init(MachineState *machine)
     q35_host->mch.ram_memory = ram_memory;
     q35_host->mch.pci_address_space = pci_memory;
     q35_host->mch.system_memory = get_system_memory();
-    q35_host->mch.address_space_io = get_system_io();
+    q35_host->mch.address_space_io = system_io;
     q35_host->mch.below_4g_mem_size = pcms->below_4g_mem_size;
     q35_host->mch.above_4g_mem_size = pcms->above_4g_mem_size;
     /* pci */
@@ -251,6 +252,11 @@ static void pc_q35_init(MachineState *machine)
     if (pcmc->pci_enabled) {
         pc_pci_device_init(host_bus);
     }
+
+    if (pcms->acpi_nvdimm_state.is_enabled) {
+        nvdimm_init_acpi_state(&pcms->acpi_nvdimm_state, system_io,
+                               pcms->fw_cfg, OBJECT(pcms));
+    }
 }
 
 #define DEFINE_Q35_MACHINE(suffix, name, compatfn, optionfn) \
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 8b3546e..3937f54 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -17,6 +17,7 @@
 #include "hw/boards.h"
 #include "hw/compat.h"
 #include "hw/mem/pc-dimm.h"
+#include "hw/mem/nvdimm.h"
 
 #define HPET_INTCAP "hpet-intcap"
 
@@ -57,7 +58,8 @@ struct PCMachineState {
     uint64_t max_ram_below_4g;
     OnOffAuto vmport;
     OnOffAuto smm;
-    bool nvdimm;
+
+    AcpiNVDIMMState acpi_nvdimm_state;
 
     /* RAM information (sizes, addresses, configuration): */
     ram_addr_t below_4g_mem_size, above_4g_mem_size;
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 49183c1..634c60b 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -25,8 +25,34 @@
 
 #include "hw/mem/pc-dimm.h"
 
-#define TYPE_NVDIMM      "nvdimm"
+#define TYPE_NVDIMM             "nvdimm"
 
+#define NVDIMM_DSM_MEM_FILE     "etc/acpi/nvdimm-mem"
+
+/*
+ * 32 bits IO port starting from 0x0a18 in guest is reserved for
+ * NVDIMM ACPI emulation.
+ */
+#define NVDIMM_ACPI_IO_BASE     0x0a18
+#define NVDIMM_ACPI_IO_LEN      4
+
+/*
+ * AcpiNVDIMMState:
+ * @is_enabled: detect if NVDIMM support is enabled.
+ *
+ * @dsm_mem: the data of the fw_cfg file NVDIMM_DSM_MEM_FILE.
+ * @io_mr: the IO region used by OSPM to transfer control to QEMU.
+ */
+struct AcpiNVDIMMState {
+    bool is_enabled;
+
+    GArray *dsm_mem;
+    MemoryRegion io_mr;
+};
+typedef struct AcpiNVDIMMState AcpiNVDIMMState;
+
+void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
+                            FWCfgState *fw_cfg, Object *owner);
 void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
                        GArray *linker);
 #endif
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce patched dsm memory
  2016-03-02 11:50 [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Xiao Guangrong
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: initialize the resource used by NVDIMM ACPI Xiao Guangrong
@ 2016-03-02 11:50 ` Xiao Guangrong
  2016-03-03 13:12   ` Michael S. Tsirkin
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: let qemu handle _DSM method Xiao Guangrong
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-02 11:50 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: Xiao Guangrong, ehabkost, kvm, mst, gleb, mtosatti, qemu-devel,
	stefanha, dan.j.williams, rth

The DSM memory is used to save the input parameters and to store
the DSM result, which is filled in by QEMU.

The address of the DSM memory is decided by the BIOS and patched into
a 32-bit integer object named "MEMA".
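
To make the patching concrete, here is a simplified stand-in for
build_append_named_dword() (an assumption based on the AML encoding,
not the real QEMU implementation): the SSDT gains a
Name(MEMA, 0x00000000) object, and the BIOS-linker alloc/add_pointer
commands in the patch below make the firmware overwrite its 32-bit
payload with the address of the NVDIMM_DSM_MEM_FILE allocation:

    #include <stdint.h>
    #include <glib.h>

    static int append_named_dword_sketch(GArray *table, const char *nameseg)
    {
        const uint8_t name_op = 0x08, dword_prefix = 0x0C;
        uint32_t placeholder = 0;      /* patched by the BIOS linker */
        int payload_offset;

        g_array_append_val(table, name_op);
        g_array_append_vals(table, nameseg, 4);    /* e.g. "MEMA" */
        g_array_append_val(table, dword_prefix);
        payload_offset = table->len;
        g_array_append_vals(table, &placeholder, sizeof(placeholder));
        return payload_offset;   /* offset handed to add_pointer */
    }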

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/nvdimm.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 8568b20..90032e5 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -29,6 +29,7 @@
 #include "qemu/osdep.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/aml-build.h"
+#include "hw/acpi/bios-linker-loader.h"
 #include "hw/nvram/fw_cfg.h"
 #include "hw/mem/nvdimm.h"
 
@@ -406,6 +407,7 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
 }
 
 #define NVDIMM_COMMON_DSM      "NCAL"
+#define NVDIMM_ACPI_MEM_ADDR   "MEMA"
 
 static void nvdimm_build_common_dsm(Aml *dev)
 {
@@ -471,6 +473,7 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
                               GArray *table_data, GArray *linker)
 {
     Aml *ssdt, *sb_scope, *dev;
+    int mem_addr_offset, nvdimm_ssdt;
 
     acpi_add_table(table_offsets, table_data);
 
@@ -500,13 +503,24 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
     nvdimm_build_nvdimm_devices(device_list, dev);
 
     aml_append(sb_scope, dev);
-
     aml_append(ssdt, sb_scope);
+
+    nvdimm_ssdt = table_data->len;
+
     /* copy AML table into ACPI tables blob and patch header there */
     g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
+    mem_addr_offset = build_append_named_dword(table_data,
+                                               NVDIMM_ACPI_MEM_ADDR);
+
+    bios_linker_loader_alloc(linker, NVDIMM_DSM_MEM_FILE, TARGET_PAGE_SIZE,
+                             false /* high memory */);
+    bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
+                                   NVDIMM_DSM_MEM_FILE, table_data,
+                                   table_data->data + mem_addr_offset,
+                                   sizeof(uint32_t));
     build_header(linker, table_data,
-        (void *)(table_data->data + table_data->len - ssdt->buf->len),
-        "SSDT", ssdt->buf->len, 1, NULL, "NVDIMM");
+        (void *)(table_data->data + nvdimm_ssdt),
+        "SSDT", table_data->len - nvdimm_ssdt, 1, NULL, "NVDIMM");
     free_aml_allocator();
 }
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: let qemu handle _DSM method
  2016-03-02 11:50 [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Xiao Guangrong
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: initialize the resource used by NVDIMM ACPI Xiao Guangrong
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce patched dsm memory Xiao Guangrong
@ 2016-03-02 11:50 ` Xiao Guangrong
  2016-03-03 13:23   ` Michael S. Tsirkin
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: emulate dsm method Xiao Guangrong
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-02 11:50 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: Xiao Guangrong, ehabkost, kvm, mst, gleb, mtosatti, qemu-devel,
	stefanha, dan.j.williams, rth

If the DSM memory address is successfully patched, we let QEMU fully
emulate the DSM method.

This patch saves the _DSM input parameters into the DSM memory, tells
QEMU the DSM memory address, then fetches the result from the DSM memory.
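
For reference, the shared page ends up being used roughly like this; the
structs below are condensed from the NvdimmDsmIn/NvdimmDsmOut definitions
added in this patch, and the asserts document the offsets the AML fields
rely on:

    #include <stddef.h>
    #include <stdint.h>

    struct NvdimmDsmIn {           /* input, written by AML */
        uint32_t handle;           /* HDLE */
        uint32_t revision;         /* REVS, _DSM Arg1 */
        uint32_t function;         /* FUNC, _DSM Arg2 */
        uint8_t  arg3[];           /* ARG3, rest of the page */
    } __attribute__((packed));

    struct NvdimmDsmOut {          /* output, QEMU reuses the same page */
        uint32_t len;              /* RLEN, result length in bytes */
        uint8_t  data[];           /* ODAT, result payload */
    } __attribute__((packed));

    /* RLEN is a byte count while CreateField() takes a bit length,
     * hence the ShiftLeft(RLEN, 3) before carving OBUF out of ODAT. */
    _Static_assert(offsetof(struct NvdimmDsmIn, arg3) == 12, "ARG3 offset");
    _Static_assert(offsetof(struct NvdimmDsmOut, data) == 4, "ODAT offset");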

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/nvdimm.c | 117 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 112 insertions(+), 5 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 90032e5..781f6c1 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -372,6 +372,24 @@ static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
     g_array_free(structures, true);
 }
 
+struct NvdimmDsmIn {
+    uint32_t handle;
+    uint32_t revision;
+    uint32_t function;
+   /* the remaining size in the page is used by arg3. */
+    union {
+        uint8_t arg3[0];
+    };
+} QEMU_PACKED;
+typedef struct NvdimmDsmIn NvdimmDsmIn;
+
+struct NvdimmDsmOut {
+    /* the size of buffer filled by QEMU. */
+    uint32_t len;
+    uint8_t data[0];
+} QEMU_PACKED;
+typedef struct NvdimmDsmOut NvdimmDsmOut;
+
 static uint64_t
 nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
 {
@@ -411,11 +429,18 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
 
 static void nvdimm_build_common_dsm(Aml *dev)
 {
-    Aml *method, *ifctx, *function;
+    Aml *method, *ifctx, *function, *dsm_mem, *unpatched, *result_size;
     uint8_t byte_list[1];
 
-    method = aml_method(NVDIMM_COMMON_DSM, 4, AML_NOTSERIALIZED);
+    method = aml_method(NVDIMM_COMMON_DSM, 4, AML_SERIALIZED);
     function = aml_arg(2);
+    dsm_mem = aml_name(NVDIMM_ACPI_MEM_ADDR);
+
+    /*
+     * do not support any method if DSM memory address has not been
+     * patched.
+     */
+    unpatched = aml_if(aml_equal(dsm_mem, aml_int(0x0)));
 
     /*
      * function 0 is called to inquire what functions are supported by
@@ -424,12 +449,36 @@ static void nvdimm_build_common_dsm(Aml *dev)
     ifctx = aml_if(aml_equal(function, aml_int(0)));
     byte_list[0] = 0 /* No function Supported */;
     aml_append(ifctx, aml_return(aml_buffer(1, byte_list)));
-    aml_append(method, ifctx);
+    aml_append(unpatched, ifctx);
 
     /* No function is supported yet. */
     byte_list[0] = 1 /* Not Supported */;
-    aml_append(method, aml_return(aml_buffer(1, byte_list)));
+    aml_append(unpatched, aml_return(aml_buffer(1, byte_list)));
+    aml_append(method, unpatched);
+
+    /*
+     * Currently no function is supported for both root device and NVDIMM
+     * devices, let's temporarily set handle to 0x0 at this time.
+     */
+    aml_append(method, aml_store(aml_int(0x0), aml_name("HDLE")));
+    aml_append(method, aml_store(aml_arg(1), aml_name("REVS")));
+    aml_append(method, aml_store(aml_arg(2), aml_name("FUNC")));
 
+    /*
+     * tell QEMU about the real address of DSM memory, then QEMU begins
+     * to emulate the method and fills the result to DSM memory.
+     */
+    aml_append(method, aml_store(dsm_mem, aml_name("NTFI")));
+
+    result_size = aml_local(1);
+    aml_append(method, aml_store(aml_name("RLEN"), result_size));
+    aml_append(method, aml_store(aml_shiftleft(result_size, aml_int(3)),
+                                 result_size));
+    aml_append(method, aml_create_field(aml_name("ODAT"), aml_int(0),
+                                        result_size, "OBUF"));
+    aml_append(method, aml_concatenate(aml_buffer(0, NULL), aml_name("OBUF"),
+                                       aml_arg(6)));
+    aml_append(method, aml_return(aml_arg(6)));
     aml_append(dev, method);
 }
 
@@ -472,7 +521,7 @@ static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
 static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
                               GArray *table_data, GArray *linker)
 {
-    Aml *ssdt, *sb_scope, *dev;
+    Aml *ssdt, *sb_scope, *dev, *field;
     int mem_addr_offset, nvdimm_ssdt;
 
     acpi_add_table(table_offsets, table_data);
@@ -497,6 +546,64 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
      */
     aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
 
+    /* map DSM memory and IO into ACPI namespace. */
+    aml_append(dev, aml_operation_region("NPIO", AML_SYSTEM_IO,
+               aml_int(NVDIMM_ACPI_IO_BASE), NVDIMM_ACPI_IO_LEN));
+    aml_append(dev, aml_operation_region("NRAM", AML_SYSTEM_MEMORY,
+               aml_name(NVDIMM_ACPI_MEM_ADDR), TARGET_PAGE_SIZE));
+
+    /*
+     * DSM notifier:
+     * NTFI: write the address of DSM memory and notify QEMU to emulate
+     *       the access.
+     *
+     * It is the IO port so that accessing them will cause VM-exit, the
+     * control will be transferred to QEMU.
+     */
+    field = aml_field("NPIO", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
+    aml_append(field, aml_named_field("NTFI",
+               sizeof(uint32_t) * BITS_PER_BYTE));
+    aml_append(dev, field);
+
+    /*
+     * DSM input:
+     * @HDLE: store device's handle, it's zero if the _DSM call happens
+     *        on NVDIMM Root Device.
+     * @REVS: store the Arg1 of _DSM call.
+     * @FUNC: store the Arg2 of _DSM call.
+     * @ARG3: store the Arg3 of _DSM call.
+     *
+     * They are RAM mapping on host so that these accesses never cause
+     * VM-EXIT.
+     */
+    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
+    aml_append(field, aml_named_field("HDLE",
+               sizeof(typeof_field(NvdimmDsmIn, handle)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("REVS",
+               sizeof(typeof_field(NvdimmDsmIn, revision)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("FUNC",
+               sizeof(typeof_field(NvdimmDsmIn, function)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("ARG3",
+               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmIn, arg3)) *
+                     BITS_PER_BYTE));
+    aml_append(dev, field);
+
+    /*
+     * DSM output:
+     * @RLEN: the size of the buffer filled by QEMU.
+     * @ODAT: the buffer QEMU uses to store the result.
+     *
+     * Since the page is reused by both input and out, the input data
+     * will be lost after storing new result into @ODAT.
+    */
+    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
+    aml_append(field, aml_named_field("RLEN",
+               sizeof(typeof_field(NvdimmDsmOut, len)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("ODAT",
+               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmOut, data)) *
+                     BITS_PER_BYTE));
+    aml_append(dev, field);
+
     nvdimm_build_common_dsm(dev);
     nvdimm_build_device_dsm(dev);
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: emulate dsm method
  2016-03-02 11:50 [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Xiao Guangrong
                   ` (2 preceding siblings ...)
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: let qemu handle _DSM method Xiao Guangrong
@ 2016-03-02 11:50 ` Xiao Guangrong
  2016-03-03 13:25   ` Michael S. Tsirkin
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS Xiao Guangrong
  2016-03-03 13:43 ` [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Michael S. Tsirkin
  5 siblings, 1 reply; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-02 11:50 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: Xiao Guangrong, ehabkost, kvm, mst, gleb, mtosatti, qemu-devel,
	stefanha, dan.j.williams, rth

Emulate the DSM method after the IO VM-exit.

Currently we only introduce the framework; no function is actually
supported.
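
One convention worth spelling out: per the ACPI _DSM definition, the
function-0 reply is a bitmap of supported function indexes, and this
patch returns 0 (nothing but function 0 itself). A hypothetical device
supporting functions 1 and 2 would instead reply:

    #include <stdint.h>

    /* hypothetical example, not in this patch */
    uint32_t supported_func = (1u << 0)   /* functions beyond 0 exist */
                            | (1u << 1)   /* function 1 supported */
                            | (1u << 2);  /* function 2 supported */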

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/nvdimm.c        | 56 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/hw/mem/nvdimm.h |  8 +++++++
 2 files changed, 64 insertions(+)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 781f6c1..5a17ee2 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -390,15 +390,71 @@ struct NvdimmDsmOut {
 } QEMU_PACKED;
 typedef struct NvdimmDsmOut NvdimmDsmOut;
 
+struct NvdimmDsmFunc0Out {
+    /* the size of buffer filled by QEMU. */
+     uint32_t len;
+     uint32_t supported_func;
+} QEMU_PACKED;
+typedef struct NvdimmDsmFunc0Out NvdimmDsmFunc0Out;
+
+struct NvdimmDsmFuncNoPayloadOut {
+    /* the size of buffer filled by QEMU. */
+     uint32_t len;
+     uint32_t func_ret_status;
+} QEMU_PACKED;
+typedef struct NvdimmDsmFuncNoPayloadOut NvdimmDsmFuncNoPayloadOut;
+
 static uint64_t
 nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
 {
+    nvdimm_debug("BUG: we never read _DSM IO Port.\n");
     return 0;
 }
 
 static void
 nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
 {
+    NvdimmDsmIn *in;
+    hwaddr dsm_mem_addr = val;
+
+    nvdimm_debug("dsm memory address %#lx.\n", dsm_mem_addr);
+
+    /*
+     * The DSM memory is mapped to guest address space so an evil guest
+     * can change its content while we are doing DSM emulation. Avoid
+     * this by copying DSM memory to QEMU local memory.
+     */
+    in = g_malloc(TARGET_PAGE_SIZE);
+    cpu_physical_memory_read(dsm_mem_addr, in, TARGET_PAGE_SIZE);
+
+    le32_to_cpus(&in->revision);
+    le32_to_cpus(&in->function);
+    le32_to_cpus(&in->handle);
+
+    nvdimm_debug("Revision %#x Handler %#x Function %#x.\n", in->revision,
+                 in->handle, in->function);
+
+    /*
+     * function 0 is called to inquire what functions are supported by
+     * OSPM
+     */
+    if (in->function == 0) {
+        NvdimmDsmFunc0Out func0 = {
+            .len = cpu_to_le32(sizeof(func0)),
+             /* No function supported other than function 0 */
+            .supported_func = cpu_to_le32(0),
+        };
+        cpu_physical_memory_write(dsm_mem_addr, &func0, sizeof func0);
+    } else {
+        /* No function except function 0 is supported yet. */
+        NvdimmDsmFuncNoPayloadOut out = {
+            .len = cpu_to_le32(sizeof(out)),
+            .func_ret_status = cpu_to_le32(1)  /* Not Supported */,
+        };
+        cpu_physical_memory_write(dsm_mem_addr, &out, sizeof(out));
+    }
+
+    g_free(in);
 }
 
 static const MemoryRegionOps nvdimm_dsm_ops = {
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 634c60b..aaa2608 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -25,6 +25,14 @@
 
 #include "hw/mem/pc-dimm.h"
 
+#define NVDIMM_DEBUG 0
+#define nvdimm_debug(fmt, ...)                                \
+    do {                                                      \
+        if (NVDIMM_DEBUG) {                                   \
+            fprintf(stderr, "nvdimm: " fmt, ## __VA_ARGS__);  \
+        }                                                     \
+    } while (0)
+
 #define TYPE_NVDIMM             "nvdimm"
 
 #define NVDIMM_DSM_MEM_FILE     "etc/acpi/nvdimm-mem"
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-02 11:50 [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Xiao Guangrong
                   ` (3 preceding siblings ...)
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: emulate dsm method Xiao Guangrong
@ 2016-03-02 11:50 ` Xiao Guangrong
  2016-03-03 13:29   ` Michael S. Tsirkin
  2016-03-03 13:43 ` [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Michael S. Tsirkin
  5 siblings, 1 reply; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-02 11:50 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: Xiao Guangrong, ehabkost, kvm, mst, gleb, mtosatti, qemu-devel,
	stefanha, dan.j.williams, rth

As Igor suggested, we can report the BIOS-patched operation region so
that OSPM can see that this particular range is in use and notice
conflicts if they ever happen.
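
The byte offsets 10 and 14 used with aml_create_dword_field() below come
from the DWord Address Space Descriptor layout (my reading of the ACPI
resource-descriptor encoding, so treat this table as an assumption):

    /* DWord Address Space Descriptor (the MR32 resource buffer):
     *   byte  0      0x87, descriptor type
     *   bytes 1-2    descriptor length
     *   byte  3      resource type
     *   byte  4      general flags
     *   byte  5      type-specific flags
     *   bytes 6-9    _GRA, address granularity
     *   bytes 10-13  _MIN, range minimum  <- MEMA is stored here at runtime
     *   bytes 14-17  _MAX, range maximum  <- set to MIN + TARGET_PAGE_SIZE - 1
     *   bytes 18-25  translation offset and address length
     */
    #define MR32_MIN_OFFSET 10   /* matches aml_create_dword_field(..., 10, "MIN") */
    #define MR32_MAX_OFFSET 14   /* matches aml_create_dword_field(..., 14, "MAX") */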

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/nvdimm.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 5a17ee2..43fd4c5 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -578,6 +578,7 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
                               GArray *table_data, GArray *linker)
 {
     Aml *ssdt, *sb_scope, *dev, *field;
+    Aml *min_addr, *max_addr, *mr32, *method, *crs;
     int mem_addr_offset, nvdimm_ssdt;
 
     acpi_add_table(table_offsets, table_data);
@@ -602,6 +603,32 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
      */
     aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
 
+    /*
+     * report the dsm memory so that OSPM could see that particular range is
+     * in use and be able to notice conflicts if it happens some day.
+     */
+    method = aml_method("_CRS", 0, AML_SERIALIZED);
+    crs = aml_resource_template();
+    aml_append(crs, aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED,
+                                     AML_MAX_FIXED, AML_CACHEABLE,
+                                     AML_READ_WRITE,
+                                     0, 0x0, 0xFFFFFFFE, 0,
+                                     TARGET_PAGE_SIZE));
+    aml_append(method, aml_name_decl("MR32", crs));
+    mr32 = aml_name("MR32");
+    aml_append(method, aml_create_dword_field(mr32, aml_int(10), "MIN"));
+    aml_append(method, aml_create_dword_field(mr32, aml_int(14), "MAX"));
+
+    min_addr = aml_name("MIN");
+    max_addr = aml_name("MAX");
+
+    aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR), min_addr));
+    aml_append(method, aml_add(min_addr, aml_int(TARGET_PAGE_SIZE),
+                               max_addr));
+    aml_append(method, aml_decrement(max_addr));
+    aml_append(method, aml_return(mr32));
+    aml_append(dev, method);
+
     /* map DSM memory and IO into ACPI namespace. */
     aml_append(dev, aml_operation_region("NPIO", AML_SYSTEM_IO,
                aml_int(NVDIMM_ACPI_IO_BASE), NVDIMM_ACPI_IO_LEN));
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: initialize the resource used by NVDIMM ACPI
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: initialize the resource used by NVDIMM ACPI Xiao Guangrong
@ 2016-03-02 11:58   ` Michael S. Tsirkin
  2016-03-02 16:10     ` Xiao Guangrong
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2016-03-02 11:58 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth

On Wed, Mar 02, 2016 at 07:50:37PM +0800, Xiao Guangrong wrote:
> A 32-bit IO port starting at 0x0a18 in the guest is reserved for NVDIMM
> ACPI emulation. The table, NVDIMM_DSM_MEM_FILE, will be patched into
> the NVDIMM ACPI binary code.
> 
> OSPM uses this port to tell QEMU the final address of the DSM memory
> and to notify QEMU to emulate the DSM method.
> 
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>


two minor comments: don't respin just for these, can
be addressed later if necessary.

> ---
>  hw/acpi/Makefile.objs   |  2 +-
>  hw/acpi/nvdimm.c        | 35 +++++++++++++++++++++++++++++++++++
>  hw/i386/acpi-build.c    | 10 +---------
>  hw/i386/pc.c            |  6 +++---
>  hw/i386/pc_piix.c       |  5 +++++
>  hw/i386/pc_q35.c        |  8 +++++++-
>  include/hw/i386/pc.h    |  4 +++-
>  include/hw/mem/nvdimm.h | 28 +++++++++++++++++++++++++++-
>  8 files changed, 82 insertions(+), 16 deletions(-)

How about a spec document to document the interface?

> 
> diff --git a/hw/acpi/Makefile.objs b/hw/acpi/Makefile.objs
> index f3ade9a..faee86c 100644
> --- a/hw/acpi/Makefile.objs
> +++ b/hw/acpi/Makefile.objs
> @@ -2,7 +2,7 @@ common-obj-$(CONFIG_ACPI_X86) += core.o piix4.o pcihp.o
>  common-obj-$(CONFIG_ACPI_X86_ICH) += ich9.o tco.o
>  common-obj-$(CONFIG_ACPI_CPU_HOTPLUG) += cpu_hotplug.o cpu_hotplug_acpi_table.o
>  common-obj-$(CONFIG_ACPI_MEMORY_HOTPLUG) += memory_hotplug.o memory_hotplug_acpi_table.o
> -common-obj-$(CONFIG_ACPI_NVDIMM) += nvdimm.o
> +obj-$(CONFIG_ACPI_NVDIMM) += nvdimm.o
>  common-obj-$(CONFIG_ACPI) += acpi_interface.o
>  common-obj-$(CONFIG_ACPI) += bios-linker-loader.o
>  common-obj-$(CONFIG_ACPI) += aml-build.o
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index 49ee68e..8568b20 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -29,6 +29,7 @@
>  #include "qemu/osdep.h"
>  #include "hw/acpi/acpi.h"
>  #include "hw/acpi/aml-build.h"
> +#include "hw/nvram/fw_cfg.h"
>  #include "hw/mem/nvdimm.h"
>  
>  static int nvdimm_plugged_device_list(Object *obj, void *opaque)
> @@ -370,6 +371,40 @@ static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
>      g_array_free(structures, true);
>  }
>  
> +static uint64_t
> +nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
> +{
> +    return 0;
> +}
> +
> +static void
> +nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
> +{
> +}
> +
> +static const MemoryRegionOps nvdimm_dsm_ops = {
> +    .read = nvdimm_dsm_read,
> +    .write = nvdimm_dsm_write,
> +    .endianness = DEVICE_LITTLE_ENDIAN,
> +    .valid = {
> +        .min_access_size = 4,
> +        .max_access_size = 4,
> +    },
> +};
> +
> +void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
> +                            FWCfgState *fw_cfg, Object *owner)
> +{
> +    memory_region_init_io(&state->io_mr, owner, &nvdimm_dsm_ops, state,
> +                          "nvdimm-acpi-io", NVDIMM_ACPI_IO_LEN);
> +    memory_region_add_subregion(io, NVDIMM_ACPI_IO_BASE, &state->io_mr);
> +
> +    state->dsm_mem = g_array_new(false, true /* clear */, 1);
> +    acpi_data_push(state->dsm_mem, TARGET_PAGE_SIZE);
> +    fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem->data,
> +                    state->dsm_mem->len);
> +}
> +
>  #define NVDIMM_COMMON_DSM      "NCAL"
>  
>  static void nvdimm_build_common_dsm(Aml *dev)
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 6d8d23b..f8ff89d 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -38,7 +38,6 @@
>  #include "hw/loader.h"
>  #include "hw/isa/isa.h"
>  #include "hw/acpi/memory_hotplug.h"
> -#include "hw/mem/nvdimm.h"
>  #include "sysemu/tpm.h"
>  #include "hw/acpi/tpm.h"
>  #include "sysemu/tpm_backend.h"
> @@ -2582,13 +2581,6 @@ static bool acpi_has_iommu(void)
>      return intel_iommu && !ambiguous;
>  }
>  
> -static bool acpi_has_nvdimm(void)
> -{
> -    PCMachineState *pcms = PC_MACHINE(qdev_get_machine());
> -
> -    return pcms->nvdimm;
> -}
> -
>  static
>  void acpi_build(AcpiBuildTables *tables)
>  {
> @@ -2673,7 +2665,7 @@ void acpi_build(AcpiBuildTables *tables)
>          build_dmar_q35(tables_blob, tables->linker);
>      }
>  
> -    if (acpi_has_nvdimm()) {
> +    if (pcms->acpi_nvdimm_state.is_enabled) {
>          nvdimm_build_acpi(table_offsets, tables_blob, tables->linker);
>      }
>  
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 0aeefd2..5194acd 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -1854,14 +1854,14 @@ static bool pc_machine_get_nvdimm(Object *obj, Error **errp)
>  {
>      PCMachineState *pcms = PC_MACHINE(obj);
>  
> -    return pcms->nvdimm;
> +    return pcms->acpi_nvdimm_state.is_enabled;
>  }
>  
>  static void pc_machine_set_nvdimm(Object *obj, bool value, Error **errp)
>  {
>      PCMachineState *pcms = PC_MACHINE(obj);
>  
> -    pcms->nvdimm = value;
> +    pcms->acpi_nvdimm_state.is_enabled = value;
>  }
>  
>  static void pc_machine_initfn(Object *obj)
> @@ -1900,7 +1900,7 @@ static void pc_machine_initfn(Object *obj)
>                                      &error_abort);
>  
>      /* nvdimm is disabled on default. */
> -    pcms->nvdimm = false;
> +    pcms->acpi_nvdimm_state.is_enabled = false;
>      object_property_add_bool(obj, PC_MACHINE_NVDIMM, pc_machine_get_nvdimm,
>                               pc_machine_set_nvdimm, &error_abort);
>  }
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 6f8c2cd..6a69b23 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -274,6 +274,11 @@ static void pc_init1(MachineState *machine,
>      if (pcmc->pci_enabled) {
>          pc_pci_device_init(pci_bus);
>      }
> +
> +    if (pcms->acpi_nvdimm_state.is_enabled) {
> +        nvdimm_init_acpi_state(&pcms->acpi_nvdimm_state, system_io,
> +                               pcms->fw_cfg, OBJECT(pcms));
> +    }
>  }
>  
>  /* Looking for a pc_compat_2_4() function? It doesn't exist.
> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> index 46522c9..17915b0 100644
> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -61,6 +61,7 @@ static void pc_q35_init(MachineState *machine)
>      PCIDevice *lpc;
>      BusState *idebus[MAX_SATA_PORTS];
>      ISADevice *rtc_state;
> +    MemoryRegion *system_io = get_system_io();
>      MemoryRegion *pci_memory;
>      MemoryRegion *rom_memory;
>      MemoryRegion *ram_memory;
> @@ -160,7 +161,7 @@ static void pc_q35_init(MachineState *machine)
>      q35_host->mch.ram_memory = ram_memory;
>      q35_host->mch.pci_address_space = pci_memory;
>      q35_host->mch.system_memory = get_system_memory();
> -    q35_host->mch.address_space_io = get_system_io();
> +    q35_host->mch.address_space_io = system_io;
>      q35_host->mch.below_4g_mem_size = pcms->below_4g_mem_size;
>      q35_host->mch.above_4g_mem_size = pcms->above_4g_mem_size;
>      /* pci */
> @@ -251,6 +252,11 @@ static void pc_q35_init(MachineState *machine)
>      if (pcmc->pci_enabled) {
>          pc_pci_device_init(host_bus);
>      }
> +
> +    if (pcms->acpi_nvdimm_state.is_enabled) {
> +        nvdimm_init_acpi_state(&pcms->acpi_nvdimm_state, system_io,
> +                               pcms->fw_cfg, OBJECT(pcms));
> +    }
>  }
>  
>  #define DEFINE_Q35_MACHINE(suffix, name, compatfn, optionfn) \
> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> index 8b3546e..3937f54 100644
> --- a/include/hw/i386/pc.h
> +++ b/include/hw/i386/pc.h
> @@ -17,6 +17,7 @@
>  #include "hw/boards.h"
>  #include "hw/compat.h"
>  #include "hw/mem/pc-dimm.h"
> +#include "hw/mem/nvdimm.h"
>  
>  #define HPET_INTCAP "hpet-intcap"
>  
> @@ -57,7 +58,8 @@ struct PCMachineState {
>      uint64_t max_ram_below_4g;
>      OnOffAuto vmport;
>      OnOffAuto smm;
> -    bool nvdimm;
> +
> +    AcpiNVDIMMState acpi_nvdimm_state;
>  
>      /* RAM information (sizes, addresses, configuration): */
>      ram_addr_t below_4g_mem_size, above_4g_mem_size;
> diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
> index 49183c1..634c60b 100644
> --- a/include/hw/mem/nvdimm.h
> +++ b/include/hw/mem/nvdimm.h
> @@ -25,8 +25,34 @@
>  
>  #include "hw/mem/pc-dimm.h"
>  
> -#define TYPE_NVDIMM      "nvdimm"
> +#define TYPE_NVDIMM             "nvdimm"
>  
> +#define NVDIMM_DSM_MEM_FILE     "etc/acpi/nvdimm-mem"
> +
> +/*
> + * 32 bits IO port starting from 0x0a18 in guest is reserved for
> + * NVDIMM ACPI emulation.
> + */
> +#define NVDIMM_ACPI_IO_BASE     0x0a18
> +#define NVDIMM_ACPI_IO_LEN      4
> +
> +/*
> + * AcpiNVDIMMState:
> + * @is_enabled: detect if NVDIMM support is enabled.
> + *
> + * @dsm_mem: the data of the fw_cfg file NVDIMM_DSM_MEM_FILE.
> + * @io_mr: the IO region used by OSPM to transfer control to QEMU.
> + */

this is not the way we document structures normally.
comments belong near fields.

> +struct AcpiNVDIMMState {
> +    bool is_enabled;
> +
> +    GArray *dsm_mem;
> +    MemoryRegion io_mr;
> +};
> +typedef struct AcpiNVDIMMState AcpiNVDIMMState;
> +
> +void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
> +                            FWCfgState *fw_cfg, Object *owner);
>  void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
>                         GArray *linker);
>  #endif
> -- 
> 1.8.3.1

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: initialize the resource used by NVDIMM ACPI
  2016-03-02 11:58   ` Michael S. Tsirkin
@ 2016-03-02 16:10     ` Xiao Guangrong
  0 siblings, 0 replies; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-02 16:10 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth



On 03/02/2016 07:58 PM, Michael S. Tsirkin wrote:
> On Wed, Mar 02, 2016 at 07:50:37PM +0800, Xiao Guangrong wrote:
>> A 32-bit IO port starting at 0x0a18 in the guest is reserved for NVDIMM
>> ACPI emulation. The table, NVDIMM_DSM_MEM_FILE, will be patched into
>> the NVDIMM ACPI binary code.
>>
>> OSPM uses this port to tell QEMU the final address of the DSM memory
>> and to notify QEMU to emulate the DSM method.
>>
>> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
>
>
> two minor comments: don't respin just for these, can
> be addressed later if necessary.

Okay.

>
>> ---
>>   hw/acpi/Makefile.objs   |  2 +-
>>   hw/acpi/nvdimm.c        | 35 +++++++++++++++++++++++++++++++++++
>>   hw/i386/acpi-build.c    | 10 +---------
>>   hw/i386/pc.c            |  6 +++---
>>   hw/i386/pc_piix.c       |  5 +++++
>>   hw/i386/pc_q35.c        |  8 +++++++-
>>   include/hw/i386/pc.h    |  4 +++-
>>   include/hw/mem/nvdimm.h | 28 +++++++++++++++++++++++++++-
>>   8 files changed, 82 insertions(+), 16 deletions(-)
>
> How about a spec document to document the interface?

Sure, that works for me. Actually, there was a spec file in the old versions (e.g.,
https://github.com/xiaogr/qemu/commit/3e30799182ec53fd56af1e753a24ccf9f6a8f047); I will
add it in the last part, which will implement the real DSM functions.

>> +/*
>> + * AcpiNVDIMMState:
>> + * @is_enabled: detect if NVDIMM support is enabled.
>> + *
>> + * @dsm_mem: the data of the fw_cfg file NVDIMM_DSM_MEM_FILE.
>> + * @io_mr: the IO region used by OSPM to transfer control to QEMU.
>> + */
>
> this is not the way we document structures normally.
> comments belong near fields.
>

Okay, I'll keep that in mind. ;)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce patched dsm memory
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce patched dsm memory Xiao Guangrong
@ 2016-03-03 13:12   ` Michael S. Tsirkin
  2016-03-03 13:35     ` Xiao Guangrong
  2016-03-04 15:32     ` Xiao Guangrong
  0 siblings, 2 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2016-03-03 13:12 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth

On Wed, Mar 02, 2016 at 07:50:38PM +0800, Xiao Guangrong wrote:
> The DSM memory is used to save the input parameters and to store
> the DSM result, which is filled in by QEMU.
> 
> The address of the DSM memory is decided by the BIOS and patched into
> a 32-bit integer object named "MEMA".
> 
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> ---
>  hw/acpi/nvdimm.c | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index 8568b20..90032e5 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -29,6 +29,7 @@
>  #include "qemu/osdep.h"
>  #include "hw/acpi/acpi.h"
>  #include "hw/acpi/aml-build.h"
> +#include "hw/acpi/bios-linker-loader.h"
>  #include "hw/nvram/fw_cfg.h"
>  #include "hw/mem/nvdimm.h"
>  
> @@ -406,6 +407,7 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
>  }
>  
>  #define NVDIMM_COMMON_DSM      "NCAL"
> +#define NVDIMM_ACPI_MEM_ADDR   "MEMA"
>  
>  static void nvdimm_build_common_dsm(Aml *dev)
>  {
> @@ -471,6 +473,7 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>                                GArray *table_data, GArray *linker)
>  {
>      Aml *ssdt, *sb_scope, *dev;
> +    int mem_addr_offset, nvdimm_ssdt;
>  
>      acpi_add_table(table_offsets, table_data);
>  
> @@ -500,13 +503,24 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>      nvdimm_build_nvdimm_devices(device_list, dev);
>  
>      aml_append(sb_scope, dev);
> -
>      aml_append(ssdt, sb_scope);
> +
> +    nvdimm_ssdt = table_data->len;
> +
>      /* copy AML table into ACPI tables blob and patch header there */
>      g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
> +    mem_addr_offset = build_append_named_dword(table_data,
> +                                               NVDIMM_ACPI_MEM_ADDR);
> +
> +    bios_linker_loader_alloc(linker, NVDIMM_DSM_MEM_FILE, TARGET_PAGE_SIZE,
> +                             false /* high memory */);
> +    bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
> +                                   NVDIMM_DSM_MEM_FILE, table_data,
> +                                   table_data->data + mem_addr_offset,
> +                                   sizeof(uint32_t));
>      build_header(linker, table_data,
> -        (void *)(table_data->data + table_data->len - ssdt->buf->len),
> -        "SSDT", ssdt->buf->len, 1, NULL, "NVDIMM");
> +        (void *)(table_data->data + nvdimm_ssdt),
> +        "SSDT", table_data->len - nvdimm_ssdt, 1, NULL, "NVDIMM");
>      free_aml_allocator();

I prefer ssdt->buf->len to table_data->len - nvdimm_ssdt.
Pls fix by a follow-up patch unless there is a respin.

>  }
>  
> -- 
> 1.8.3.1

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: let qemu handle _DSM method
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: let qemu handle _DSM method Xiao Guangrong
@ 2016-03-03 13:23   ` Michael S. Tsirkin
  2016-03-03 14:00     ` Xiao Guangrong
  2016-03-04 15:03     ` Xiao Guangrong
  0 siblings, 2 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2016-03-03 13:23 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth

On Wed, Mar 02, 2016 at 07:50:39PM +0800, Xiao Guangrong wrote:
> If the DSM memory address is successfully patched, we let QEMU fully
> emulate the DSM method.
> 
> This patch saves the _DSM input parameters into the DSM memory, tells
> QEMU the DSM memory address, then fetches the result from the DSM memory.
> 
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> ---
>  hw/acpi/nvdimm.c | 117 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 112 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index 90032e5..781f6c1 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -372,6 +372,24 @@ static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
>      g_array_free(structures, true);
>  }
>  
> +struct NvdimmDsmIn {
> +    uint32_t handle;
> +    uint32_t revision;
> +    uint32_t function;
> +   /* the remaining size in the page is used by arg3. */

Pls fix indentation so /* aligns with union.
> +    union {
> +        uint8_t arg3[0];
> +    };
> +} QEMU_PACKED;
> +typedef struct NvdimmDsmIn NvdimmDsmIn;
> +
> +struct NvdimmDsmOut {
> +    /* the size of buffer filled by QEMU. */
> +    uint32_t len;
> +    uint8_t data[0];
> +} QEMU_PACKED;
> +typedef struct NvdimmDsmOut NvdimmDsmOut;
> +
>  static uint64_t
>  nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
>  {
> @@ -411,11 +429,18 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
>  
>  static void nvdimm_build_common_dsm(Aml *dev)
>  {
> -    Aml *method, *ifctx, *function;
> +    Aml *method, *ifctx, *function, *dsm_mem, *unpatched, *result_size;
>      uint8_t byte_list[1];
>  
> -    method = aml_method(NVDIMM_COMMON_DSM, 4, AML_NOTSERIALIZED);
> +    method = aml_method(NVDIMM_COMMON_DSM, 4, AML_SERIALIZED);
>      function = aml_arg(2);
> +    dsm_mem = aml_name(NVDIMM_ACPI_MEM_ADDR);
> +
> +    /*
> +     * do not support any method if DSM memory address has not been
> +     * patched.
> +     */

I am confused. When can this happen?

> +    unpatched = aml_if(aml_equal(dsm_mem, aml_int(0x0)));
>  
>      /*
>       * function 0 is called to inquire what functions are supported by
> @@ -424,12 +449,36 @@ static void nvdimm_build_common_dsm(Aml *dev)
>      ifctx = aml_if(aml_equal(function, aml_int(0)));
>      byte_list[0] = 0 /* No function Supported */;
>      aml_append(ifctx, aml_return(aml_buffer(1, byte_list)));
> -    aml_append(method, ifctx);
> +    aml_append(unpatched, ifctx);
>  
>      /* No function is supported yet. */
>      byte_list[0] = 1 /* Not Supported */;
> -    aml_append(method, aml_return(aml_buffer(1, byte_list)));
> +    aml_append(unpatched, aml_return(aml_buffer(1, byte_list)));
> +    aml_append(method, unpatched);
> +
> +    /*
> +     * Currently no function is supported for both root device and NVDIMM
> +     * devices, let's temporarily set handle to 0x0 at this time.

This comment confuses me. What does temporarily mean here?
At what time?
And when is it set to another value?

Pls explain that instead of "set handle to 0" - that can
be seen at a glance.

> +     */
> +    aml_append(method, aml_store(aml_int(0x0), aml_name("HDLE")));
> +    aml_append(method, aml_store(aml_arg(1), aml_name("REVS")));
> +    aml_append(method, aml_store(aml_arg(2), aml_name("FUNC")));
>  
> +    /*
> +     * tell QEMU about the real address of DSM memory, then QEMU begins
> +     * to emulate the method

emulate the method? I'd just drop this.

> and fills the result to DSM memory.

in DSM memory

> +     */
> +    aml_append(method, aml_store(dsm_mem, aml_name("NTFI")));
> +
> +    result_size = aml_local(1);
> +    aml_append(method, aml_store(aml_name("RLEN"), result_size));
> +    aml_append(method, aml_store(aml_shiftleft(result_size, aml_int(3)),
> +                                 result_size));
> +    aml_append(method, aml_create_field(aml_name("ODAT"), aml_int(0),
> +                                        result_size, "OBUF"));
> +    aml_append(method, aml_concatenate(aml_buffer(0, NULL), aml_name("OBUF"),
> +                                       aml_arg(6)));
> +    aml_append(method, aml_return(aml_arg(6)));
>      aml_append(dev, method);
>  }
>  
> @@ -472,7 +521,7 @@ static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
>  static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>                                GArray *table_data, GArray *linker)
>  {
> -    Aml *ssdt, *sb_scope, *dev;
> +    Aml *ssdt, *sb_scope, *dev, *field;
>      int mem_addr_offset, nvdimm_ssdt;
>  
>      acpi_add_table(table_offsets, table_data);
> @@ -497,6 +546,64 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>       */
>      aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
>  
> +    /* map DSM memory and IO into ACPI namespace. */
> +    aml_append(dev, aml_operation_region("NPIO", AML_SYSTEM_IO,
> +               aml_int(NVDIMM_ACPI_IO_BASE), NVDIMM_ACPI_IO_LEN));
> +    aml_append(dev, aml_operation_region("NRAM", AML_SYSTEM_MEMORY,
> +               aml_name(NVDIMM_ACPI_MEM_ADDR), TARGET_PAGE_SIZE));
> +
> +    /*
> +     * DSM notifier:
> +     * NTFI: write the address of DSM memory and notify QEMU to emulate
> +     *       the access.
> +     *
> +     * It is the IO port so that accessing them will cause VM-exit, the
> +     * control will be transferred to QEMU.
> +     */
> +    field = aml_field("NPIO", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
> +    aml_append(field, aml_named_field("NTFI",
> +               sizeof(uint32_t) * BITS_PER_BYTE));
> +    aml_append(dev, field);
> +
> +    /*
> +     * DSM input:
> +     * @HDLE: store device's handle, it's zero if the _DSM call happens
> +     *        on NVDIMM Root Device.
> +     * @REVS: store the Arg1 of _DSM call.
> +     * @FUNC: store the Arg2 of _DSM call.
> +     * @ARG3: store the Arg3 of _DSM call.

Don't use @ prefixes - they're for function arguments.

> +     *
> +     * They are RAM mapping on host so that these accesses never cause
> +     * VM-EXIT.
> +     */
> +    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
> +    aml_append(field, aml_named_field("HDLE",
> +               sizeof(typeof_field(NvdimmDsmIn, handle)) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("REVS",
> +               sizeof(typeof_field(NvdimmDsmIn, revision)) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("FUNC",
> +               sizeof(typeof_field(NvdimmDsmIn, function)) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("ARG3",
> +               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmIn, arg3)) *
> +                     BITS_PER_BYTE));

drop the extra () here please, and align BITS_PER_BYTE with
TARGET_PAGE_SIZE.

> +    aml_append(dev, field);
> +
> +    /*
> +     * DSM output:
> +     * @RLEN: the size of the buffer filled by QEMU.
> +     * @ODAT: the buffer QEMU uses to store the result.
> +     *
> +     * Since the page is reused by both input and out, the input data
> +     * will be lost after storing new result into @ODAT.

And so?

> +    */
> +    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
> +    aml_append(field, aml_named_field("RLEN",
> +               sizeof(typeof_field(NvdimmDsmOut, len)) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("ODAT",
> +               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmOut, data)) *
> +                     BITS_PER_BYTE));
> +    aml_append(dev, field);
> +
>      nvdimm_build_common_dsm(dev);
>      nvdimm_build_device_dsm(dev);
>  
> -- 
> 1.8.3.1

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: emulate dsm method
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: emulate dsm method Xiao Guangrong
@ 2016-03-03 13:25   ` Michael S. Tsirkin
  2016-03-03 14:01     ` Xiao Guangrong
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2016-03-03 13:25 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth

On Wed, Mar 02, 2016 at 07:50:40PM +0800, Xiao Guangrong wrote:
> Emulate the DSM method after the IO VM-exit.
> 
> Currently we only introduce the framework; no function is actually
> supported.
> 
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> ---
>  hw/acpi/nvdimm.c        | 56 +++++++++++++++++++++++++++++++++++++++++++++++++
>  include/hw/mem/nvdimm.h |  8 +++++++
>  2 files changed, 64 insertions(+)
> 
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index 781f6c1..5a17ee2 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -390,15 +390,71 @@ struct NvdimmDsmOut {
>  } QEMU_PACKED;
>  typedef struct NvdimmDsmOut NvdimmDsmOut;
>  
> +struct NvdimmDsmFunc0Out {
> +    /* the size of buffer filled by QEMU. */
> +     uint32_t len;
> +     uint32_t supported_func;
> +} QEMU_PACKED;
> +typedef struct NvdimmDsmFunc0Out NvdimmDsmFunc0Out;
> +
> +struct NvdimmDsmFuncNoPayloadOut {
> +    /* the size of buffer filled by QEMU. */
> +     uint32_t len;
> +     uint32_t func_ret_status;
> +} QEMU_PACKED;
> +typedef struct NvdimmDsmFuncNoPayloadOut NvdimmDsmFuncNoPayloadOut;
> +
>  static uint64_t
>  nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
>  {
> +    nvdimm_debug("BUG: we never read _DSM IO Port.\n");
>      return 0;
>  }
>  
>  static void
>  nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
>  {
> +    NvdimmDsmIn *in;
> +    hwaddr dsm_mem_addr = val;
> +
> +    nvdimm_debug("dsm memory address %#lx.\n", dsm_mem_addr);
> +
> +    /*
> +     * The DSM memory is mapped to guest address space so an evil guest
> +     * can change its content while we are doing DSM emulation. Avoid
> +     * this by copying DSM memory to QEMU local memory.
> +     */
> +    in = g_malloc(TARGET_PAGE_SIZE);
> +    cpu_physical_memory_read(dsm_mem_addr, in, TARGET_PAGE_SIZE);
> +
> +    le32_to_cpus(&in->revision);
> +    le32_to_cpus(&in->function);
> +    le32_to_cpus(&in->handle);
> +
> +    nvdimm_debug("Revision %#x Handler %#x Function %#x.\n", in->revision,
> +                 in->handle, in->function);
> +
> +    /*
> +     * function 0 is called to inquire what functions

which functions

> are supported by
> +     * OSPM
> +     */
> +    if (in->function == 0) {
> +        NvdimmDsmFunc0Out func0 = {
> +            .len = cpu_to_le32(sizeof(func0)),
> +             /* No function supported other than function 0 */
> +            .supported_func = cpu_to_le32(0),
> +        };
> +        cpu_physical_memory_write(dsm_mem_addr, &func0, sizeof func0);
> +    } else {
> +        /* No function except function 0 is supported yet. */
> +        NvdimmDsmFuncNoPayloadOut out = {
> +            .len = cpu_to_le32(sizeof(out)),
> +            .func_ret_status = cpu_to_le32(1)  /* Not Supported */,
> +        };
> +        cpu_physical_memory_write(dsm_mem_addr, &out, sizeof(out));
> +    }
> +
> +    g_free(in);
>  }
>  
>  static const MemoryRegionOps nvdimm_dsm_ops = {
> diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
> index 634c60b..aaa2608 100644
> --- a/include/hw/mem/nvdimm.h
> +++ b/include/hw/mem/nvdimm.h
> @@ -25,6 +25,14 @@
>  
>  #include "hw/mem/pc-dimm.h"
>  
> +#define NVDIMM_DEBUG 0
> +#define nvdimm_debug(fmt, ...)                                \
> +    do {                                                      \
> +        if (NVDIMM_DEBUG) {                                   \
> +            fprintf(stderr, "nvdimm: " fmt, ## __VA_ARGS__);  \
> +        }                                                     \
> +    } while (0)
> +
>  #define TYPE_NVDIMM             "nvdimm"
>  
>  #define NVDIMM_DSM_MEM_FILE     "etc/acpi/nvdimm-mem"
> -- 
> 1.8.3.1
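
A note on the function-0 convention this framework relies on: per the ACPI
_DSM definition, the function-0 reply is a bitmask with bit N set when
function N is supported, and bit 0 set when anything beyond function 0 is
available. The numbers below are illustrative, not from the patch:

    /*
     * hypothetical example: if functions 1 and 4 were implemented one
     * day, the reply would carry bits 0, 1 and 4:
     */
    uint32_t supported = (1u << 0) | (1u << 1) | (1u << 4);  /* 0x13 */
    NvdimmDsmFunc0Out func0 = {
        .len = cpu_to_le32(sizeof(func0)),
        .supported_func = cpu_to_le32(supported),
    };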


* Re: [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS Xiao Guangrong
@ 2016-03-03 13:29   ` Michael S. Tsirkin
  2016-03-03 14:05     ` Xiao Guangrong
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2016-03-03 13:29 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth

On Wed, Mar 02, 2016 at 07:50:41PM +0800, Xiao Guangrong wrote:
> As Igor suggested that we can report the BIOS patched operation region
> so that OSPM could see that particular range is in use and be able to
> notice conflicts if it happens some day
> 
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>

This is reserved RAM; exposing it in _CRS makes no sense to me.


> ---
>  hw/acpi/nvdimm.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index 5a17ee2..43fd4c5 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -578,6 +578,7 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>                                GArray *table_data, GArray *linker)
>  {
>      Aml *ssdt, *sb_scope, *dev, *field;
> +    Aml *min_addr, *max_addr, *mr32, *method, *crs;
>      int mem_addr_offset, nvdimm_ssdt;
>  
>      acpi_add_table(table_offsets, table_data);
> @@ -602,6 +603,32 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>       */
>      aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
>  
> +    /*
> +     * report the dsm memory so that OSPM could see that particular range is
> +     * in use and be able to notice conflicts if it happens some day.
> +     */
> +    method = aml_method("_CRS", 0, AML_SERIALIZED);
> +    crs = aml_resource_template();
> +    aml_append(crs, aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> +                                     AML_MAX_FIXED, AML_CACHEABLE,
> +                                     AML_READ_WRITE,
> +                                     0, 0x0, 0xFFFFFFFE, 0,
> +                                     TARGET_PAGE_SIZE));
> +    aml_append(method, aml_name_decl("MR32", crs));
> +    mr32 = aml_name("MR32");
> +    aml_append(method, aml_create_dword_field(mr32, aml_int(10), "MIN"));
> +    aml_append(method, aml_create_dword_field(mr32, aml_int(14), "MAX"));
> +
> +    min_addr = aml_name("MIN");
> +    max_addr = aml_name("MAX");
> +
> +    aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR), min_addr));
> +    aml_append(method, aml_add(min_addr, aml_int(TARGET_PAGE_SIZE),
> +                               max_addr));
> +    aml_append(method, aml_decrement(max_addr));
> +    aml_append(method, aml_return(mr32));
> +    aml_append(dev, method);
> +
>      /* map DSM memory and IO into ACPI namespace. */
>      aml_append(dev, aml_operation_region("NPIO", AML_SYSTEM_IO,
>                 aml_int(NVDIMM_ACPI_IO_BASE), NVDIMM_ACPI_IO_LEN));
> -- 
> 1.8.3.1
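
For readers wondering about the magic offsets 10 and 14 passed to
aml_create_dword_field(): they index into the DWord Address Space Descriptor
that aml_dword_memory() emits inside the resource template. Its layout, per
the ACPI specification, is:

    /*
     * DWord Address Space Descriptor:
     *   byte  0      0x87 (descriptor type)
     *   bytes 1-2    length
     *   byte  3      resource type
     *   byte  4      general flags
     *   byte  5      type-specific flags
     *   bytes 6-9    _GRA, granularity
     *   bytes 10-13  _MIN, range minimum   <- the "MIN" field above
     *   bytes 14-17  _MAX, range maximum   <- the "MAX" field above
     */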


* Re: [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce patched dsm memory
  2016-03-03 13:12   ` Michael S. Tsirkin
@ 2016-03-03 13:35     ` Xiao Guangrong
  2016-03-04 15:32     ` Xiao Guangrong
  1 sibling, 0 replies; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-03 13:35 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth



On 03/03/2016 09:12 PM, Michael S. Tsirkin wrote:

>> -        (void *)(table_data->data + table_data->len - ssdt->buf->len),
>> -        "SSDT", ssdt->buf->len, 1, NULL, "NVDIMM");
>> +        (void *)(table_data->data + nvdimm_ssdt),
>> +        "SSDT", table_data->len - nvdimm_ssdt, 1, NULL, "NVDIMM");
>>       free_aml_allocator();
>
> I prefer ssdt->buf->len to table_data->len - nvdimm_ssdt.
> Pls fix by a follow-up patch unless there is a respin.

Okay, will do.


* Re: [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM
  2016-03-02 11:50 [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Xiao Guangrong
                   ` (4 preceding siblings ...)
  2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS Xiao Guangrong
@ 2016-03-03 13:43 ` Michael S. Tsirkin
  2016-03-03 14:07   ` Xiao Guangrong
  5 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2016-03-03 13:43 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth

On Wed, Mar 02, 2016 at 07:50:36PM +0800, Xiao Guangrong wrote:
> This patchset is against commit 429fb940caadf9 (fw-cfg: support writeable
> blobs) on pci branch of Michael's git tree and can be found at:
>       https://github.com/xiaogr/qemu.git nvdimm-acpi-v5

This looks good overall.  There are some stylistic changes needed all over,
so I think this warrants a v6 with all comments addressed.


> Changelog in v5:
> Thanks to Michael's review, the changes in this version are:
> - use nvdimm_debug() instead of fprintf() on the path triggered by read
>   access of IO port
> - introduce the data struct to represent output information to drop the
>   open-code
> - improve the comment to better explain the fact that currently no function
>   other than function 0 is supported
> 
> Changelog in v4:
> - drop the unnecessary assert() in aml_concatenate() based on Igor's
>   suggestion
> - introduce build_append_named_dword() and use it to simplify the code as
>   Michael's suggestion
> 
> Changelog in v3:
> Changes addressing Michael's comment:
> - rebase the patchset against current code
> 
> Changes addressing Igor's comment:
> - rename the parameters of aml_create_field() to reflect the ACPI spec
> - fix the issue that the @target operand can not be optional in
>   aml_concatenate() that is also cleaned up by using build_opcode_2arg_dst()
> 
> Others:
> - separate the test patches to the single set and will be posted on later 
>   
> These changes are based on Igor's comments:
> - drop ssdt.rev2 support as the memory address allocated by BIOS/OVMF
>   are always 32 bits
> - support to test NVDIMM tables (NFIT and NVDIMM SSDT)
> - add _CRS to report its operation region
> - make AML APIs change be the separated patches
> 
> This is the second part of vNVDIMM implementation which implements the
> BIOS patched dsm memory and introduces the framework that allows QEMU
> to emulate DSM method
> 
> Thanks to Michael's idea, we do not reserve any memory for NVDIMM ACPI,
> instead we let BIOS allocate the memory and patch the address to the
> offset we want
> 
> IO port is still enabled as it plays as the way to notify QEMU and pass
> the patched dsm memory address, so that IO port region, 0x0a18 - 0xa20,
> is reserved and it is divided into two 32 bits ports and used to pass
> the low 32 bits and high 32 bits of dsm memory address to QEMU
> 
> Thanks Igor's idea, this patchset also extends DSDT/SSDT to revision 2
> to apply 64 bit operations, in order to keeping compatibility, old
> version (<= 2.5) still uses revision 1. Since 64 bit operations breaks
> old guests (such as windows XP), we should keep the 64 bits stuff in
> the private place where common ACPI operation does not touch it
> 
> Xiao Guangrong (5):
>   nvdimm acpi: initialize the resource used by NVDIMM ACPI
>   nvdimm acpi: introduce patched dsm memory
>   nvdimm acpi: let qemu handle _DSM method
>   nvdimm acpi: emulate dsm method
>   nvdimm acpi: add _CRS
> 
>  hw/acpi/Makefile.objs   |   2 +-
>  hw/acpi/nvdimm.c        | 255 ++++++++++++++++++++++++++++++++++++++++++++++--
>  hw/i386/acpi-build.c    |  10 +-
>  hw/i386/pc.c            |   6 +-
>  hw/i386/pc_piix.c       |   5 +
>  hw/i386/pc_q35.c        |   8 +-
>  include/hw/i386/pc.h    |   4 +-
>  include/hw/mem/nvdimm.h |  36 ++++++-
>  8 files changed, 302 insertions(+), 24 deletions(-)
> 
> -- 
> 1.8.3.1


* Re: [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: let qemu handle _DSM method
  2016-03-03 13:23   ` Michael S. Tsirkin
@ 2016-03-03 14:00     ` Xiao Guangrong
  2016-03-04 15:03     ` Xiao Guangrong
  1 sibling, 0 replies; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-03 14:00 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth



On 03/03/2016 09:23 PM, Michael S. Tsirkin wrote:

>>
>> +struct NvdimmDsmIn {
>> +    uint32_t handle;
>> +    uint32_t revision;
>> +    uint32_t function;
>> +   /* the remaining size in the page is used by arg3. */
>
> Pls fix indentation so /* aligns with union.

Good eyes.

It surprises me that checkpatch did not report an error/warning. Sorry for
the careless typo; I will fix it.

>> +    union {
>> +        uint8_t arg3[0];
>> +    };
>> +} QEMU_PACKED;
>> +typedef struct NvdimmDsmIn NvdimmDsmIn;
>> +
>> +struct NvdimmDsmOut {
>> +    /* the size of buffer filled by QEMU. */
>> +    uint32_t len;
>> +    uint8_t data[0];
>> +} QEMU_PACKED;
>> +typedef struct NvdimmDsmOut NvdimmDsmOut;
>> +
>>   static uint64_t
>>   nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
>>   {
>> @@ -411,11 +429,18 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
>>
>>   static void nvdimm_build_common_dsm(Aml *dev)
>>   {
>> -    Aml *method, *ifctx, *function;
>> +    Aml *method, *ifctx, *function, *dsm_mem, *unpatched, *result_size;
>>       uint8_t byte_list[1];
>>
>> -    method = aml_method(NVDIMM_COMMON_DSM, 4, AML_NOTSERIALIZED);
>> +    method = aml_method(NVDIMM_COMMON_DSM, 4, AML_SERIALIZED);
>>       function = aml_arg(2);
>> +    dsm_mem = aml_name(NVDIMM_ACPI_MEM_ADDR);
>> +
>> +    /*
>> +     * do not support any method if DSM memory address has not been
>> +     * patched.
>> +     */
>
> I am confused. When can this happen?

It can happen if the BIOS cannot allocate the memory due to insufficient
memory.

It's also possible if the same approach is used by other devices in the
future...

>
>> +    unpatched = aml_if(aml_equal(dsm_mem, aml_int(0x0)));
>>
>>       /*
>>        * function 0 is called to inquire what functions are supported by
>> @@ -424,12 +449,36 @@ static void nvdimm_build_common_dsm(Aml *dev)
>>       ifctx = aml_if(aml_equal(function, aml_int(0)));
>>       byte_list[0] = 0 /* No function Supported */;
>>       aml_append(ifctx, aml_return(aml_buffer(1, byte_list)));
>> -    aml_append(method, ifctx);
>> +    aml_append(unpatched, ifctx);
>>
>>       /* No function is supported yet. */
>>       byte_list[0] = 1 /* Not Supported */;
>> -    aml_append(method, aml_return(aml_buffer(1, byte_list)));
>> +    aml_append(unpatched, aml_return(aml_buffer(1, byte_list)));
>> +    aml_append(method, unpatched);
>> +
>> +    /*
>> +     * Currently no function is supported for both root device and NVDIMM
>> +     * devices, let's temporarily set handle to 0x0 at this time.
>
> This comment confuses me. What does temporarily mean here?
> At what time?
> And when is it set to another value?

The HDLE field indicates which device the DSM function was issued from; it
is 0 if it comes from the root device, otherwise it is the handle of the
NVDIMM device.

As we do not support any real function yet, I made it always 0x0 for all the
devices. It will be set to the appropriate value when we implement label
support in the last part of this long patch series.

>
> Pls explain that instead of "set handle to 0" - that can
> be seen at a glance.

How about changing the comment to this:

HDLE indicates which device the DSM function is issued from; it is not
used at this time as no function is supported yet. Currently we make it
always 0 for all the devices and will set it to the appropriate value
once a real function is implemented.
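
For illustration only, a minimal sketch of what that future per-device
wiring might look like, reusing the AML helpers from this series (the
function name is hypothetical and this is not part of this patchset):

    static void nvdimm_build_device_dsm_sketch(Aml *dev, uint32_t handle)
    {
        Aml *method = aml_method("_DSM", 4, AML_SERIALIZED);

        /* record which device issued the call before forwarding it */
        aml_append(method, aml_store(aml_int(handle), aml_name("HDLE")));
        aml_append(method, aml_return(aml_call4(NVDIMM_COMMON_DSM,
                                                aml_arg(0), aml_arg(1),
                                                aml_arg(2), aml_arg(3))));
        aml_append(dev, method);
    }

The hard-coded aml_store(aml_int(0x0), aml_name("HDLE")) in the common
method would then be dropped, or the handle passed as an extra argument.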

>
>> +     */
>> +    aml_append(method, aml_store(aml_int(0x0), aml_name("HDLE")));
>> +    aml_append(method, aml_store(aml_arg(1), aml_name("REVS")));
>> +    aml_append(method, aml_store(aml_arg(2), aml_name("FUNC")));
>>
>> +    /*
>> +     * tell QEMU about the real address of DSM memory, then QEMU begins
>> +     * to emulate the method
>
> emulate the method? I'd just drop this.

Okay, will drop "then QEMU begins to emulate the method".

>
>> and fills the result to DSM memory.
>
> in DSM memory

Okay.

>
>> +     */
>> +    aml_append(method, aml_store(dsm_mem, aml_name("NTFI")));
>> +
>> +    result_size = aml_local(1);
>> +    aml_append(method, aml_store(aml_name("RLEN"), result_size));
>> +    aml_append(method, aml_store(aml_shiftleft(result_size, aml_int(3)),
>> +                                 result_size));
>> +    aml_append(method, aml_create_field(aml_name("ODAT"), aml_int(0),
>> +                                        result_size, "OBUF"));
>> +    aml_append(method, aml_concatenate(aml_buffer(0, NULL), aml_name("OBUF"),
>> +                                       aml_arg(6)));
>> +    aml_append(method, aml_return(aml_arg(6)));
>>       aml_append(dev, method);
>>   }
>>
>> @@ -472,7 +521,7 @@ static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
>>   static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>>                                 GArray *table_data, GArray *linker)
>>   {
>> -    Aml *ssdt, *sb_scope, *dev;
>> +    Aml *ssdt, *sb_scope, *dev, *field;
>>       int mem_addr_offset, nvdimm_ssdt;
>>
>>       acpi_add_table(table_offsets, table_data);
>> @@ -497,6 +546,64 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>>        */
>>       aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
>>
>> +    /* map DSM memory and IO into ACPI namespace. */
>> +    aml_append(dev, aml_operation_region("NPIO", AML_SYSTEM_IO,
>> +               aml_int(NVDIMM_ACPI_IO_BASE), NVDIMM_ACPI_IO_LEN));
>> +    aml_append(dev, aml_operation_region("NRAM", AML_SYSTEM_MEMORY,
>> +               aml_name(NVDIMM_ACPI_MEM_ADDR), TARGET_PAGE_SIZE));
>> +
>> +    /*
>> +     * DSM notifier:
>> +     * NTFI: write the address of DSM memory and notify QEMU to emulate
>> +     *       the access.
>> +     *
>> +     * It is the IO port so that accessing them will cause VM-exit, the
>> +     * control will be transferred to QEMU.
>> +     */
>> +    field = aml_field("NPIO", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
>> +    aml_append(field, aml_named_field("NTFI",
>> +               sizeof(uint32_t) * BITS_PER_BYTE));
>> +    aml_append(dev, field);
>> +
>> +    /*
>> +     * DSM input:
>> +     * @HDLE: store device's handle, it's zero if the _DSM call happens
>> +     *        on NVDIMM Root Device.
>> +     * @REVS: store the Arg1 of _DSM call.
>> +     * @FUNC: store the Arg2 of _DSM call.
>> +     * @ARG3: store the Arg3 of _DSM call.
>
> Don't use @ prefixes - there are for function arguments.

Okay.

>
>> +     *
>> +     * They are RAM mapping on host so that these accesses never cause
>> +     * VM-EXIT.
>> +     */
>> +    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
>> +    aml_append(field, aml_named_field("HDLE",
>> +               sizeof(typeof_field(NvdimmDsmIn, handle)) * BITS_PER_BYTE));
>> +    aml_append(field, aml_named_field("REVS",
>> +               sizeof(typeof_field(NvdimmDsmIn, revision)) * BITS_PER_BYTE));
>> +    aml_append(field, aml_named_field("FUNC",
>> +               sizeof(typeof_field(NvdimmDsmIn, function)) * BITS_PER_BYTE));
>> +    aml_append(field, aml_named_field("ARG3",
>> +               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmIn, arg3)) *
>> +                     BITS_PER_BYTE));
>
> drop the extra () here please, and align BITS_PER_BYTE with
> TARGET_PAGE_SIZE.
>

Okay.

>> +    aml_append(dev, field);
>> +
>> +    /*
>> +     * DSM output:
>> +     * @RLEN: the size of the buffer filled by QEMU.
>> +     * @ODAT: the buffer QEMU uses to store the result.
>> +     *
>> +     * Since the page is reused by both input and out, the input data
>> +     * will be lost after storing new result into @ODAT.
>
> And so?


So we should fetch all the input data before writing the result.

I will append it to the end of the comment.
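
Concretely, the ordering constraint distilled from nvdimm_dsm_write() in
patch 4/5 (a sketch; 'out' is built as in that patch):

    /* the page at dsm_mem_addr is shared between input and output */
    NvdimmDsmIn *in = g_malloc(TARGET_PAGE_SIZE);

    /* 1: snapshot the whole input page first */
    cpu_physical_memory_read(dsm_mem_addr, in, TARGET_PAGE_SIZE);

    /* ... decide on the reply using 'in' ... */

    /* 2: only now overwrite the shared page with the result */
    cpu_physical_memory_write(dsm_mem_addr, &out, sizeof(out));
    g_free(in);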


* Re: [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: emulate dsm method
  2016-03-03 13:25   ` Michael S. Tsirkin
@ 2016-03-03 14:01     ` Xiao Guangrong
  0 siblings, 0 replies; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-03 14:01 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth



On 03/03/2016 09:25 PM, Michael S. Tsirkin wrote:

>> +
>> +    /*
>> +     * function 0 is called to inquire what functions
>
> which functions

Okay, will fix.


* Re: [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-03 13:29   ` Michael S. Tsirkin
@ 2016-03-03 14:05     ` Xiao Guangrong
  2016-03-03 14:48       ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-03 14:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth



On 03/03/2016 09:29 PM, Michael S. Tsirkin wrote:
> On Wed, Mar 02, 2016 at 07:50:41PM +0800, Xiao Guangrong wrote:
>> As Igor suggested that we can report the BIOS patched operation region
>> so that OSPM could see that particular range is in use and be able to
>> notice conflicts if it happens some day
>>
>> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
>
> This is reserved RAM, exposing it in _CRS makes no sense to me.

As more and more memory is reserved by BIOS/QEMU, reporting the
information to OSPM and letting it check for potential conflicts is not
a bad idea, no? :)


* Re: [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM
  2016-03-03 13:43 ` [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Michael S. Tsirkin
@ 2016-03-03 14:07   ` Xiao Guangrong
  0 siblings, 0 replies; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-03 14:07 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth



On 03/03/2016 09:43 PM, Michael S. Tsirkin wrote:
> On Wed, Mar 02, 2016 at 07:50:36PM +0800, Xiao Guangrong wrote:
>> This patchset is against commit 429fb940caadf9 (fw-cfg: support writeable
>> blobs) on pci branch of Michael's git tree and can be found at:
>>        https://github.com/xiaogr/qemu.git nvdimm-acpi-v5
>
> This looks good overall.  There are some stylistic changes all over, I
> think this warrants v6 with all comments addressed.

Indeed. I will make a v6 that addresses all your comments.

Thanks!


* Re: [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-03 14:05     ` Xiao Guangrong
@ 2016-03-03 14:48       ` Michael S. Tsirkin
  2016-03-07 12:16         ` Igor Mammedov
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2016-03-03 14:48 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth

On Thu, Mar 03, 2016 at 10:05:31PM +0800, Xiao Guangrong wrote:
> 
> 
> On 03/03/2016 09:29 PM, Michael S. Tsirkin wrote:
> >On Wed, Mar 02, 2016 at 07:50:41PM +0800, Xiao Guangrong wrote:
> >>As Igor suggested that we can report the BIOS patched operation region
> >>so that OSPM could see that particular range is in use and be able to
> >>notice conflicts if it happens some day
> >>
> >>Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> >
> >This is reserved RAM, exposing it in _CRS makes no sense to me.
> 
> As more and more memory will be reserved by BIOS/QEMU, report the
> information to OSPM and let it check the potential error is bad,
> no? :)

The guest has enough info to detect conflicts if it wishes to.
IIUC _CRS is not intended for RAM; it's for MMIO
resources, and if it works for RAM that's an accident.

-- 
MST


* Re: [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: let qemu handle _DSM method
  2016-03-03 13:23   ` Michael S. Tsirkin
  2016-03-03 14:00     ` Xiao Guangrong
@ 2016-03-04 15:03     ` Xiao Guangrong
  1 sibling, 0 replies; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-04 15:03 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth



On 03/03/2016 09:23 PM, Michael S. Tsirkin wrote:

>> +     *
>> +     * They are RAM mapping on host so that these accesses never cause
>> +     * VM-EXIT.
>> +     */
>> +    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
>> +    aml_append(field, aml_named_field("HDLE",
>> +               sizeof(typeof_field(NvdimmDsmIn, handle)) * BITS_PER_BYTE));
>> +    aml_append(field, aml_named_field("REVS",
>> +               sizeof(typeof_field(NvdimmDsmIn, revision)) * BITS_PER_BYTE));
>> +    aml_append(field, aml_named_field("FUNC",
>> +               sizeof(typeof_field(NvdimmDsmIn, function)) * BITS_PER_BYTE));
>> +    aml_append(field, aml_named_field("ARG3",
>> +               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmIn, arg3)) *
>> +                     BITS_PER_BYTE));
>
> drop the extra () here please, and align BITS_PER_BYTE with
> TARGET_PAGE_SIZE.

Michael, the () is necessary here, as it is calculating the number of bits:

(PAGE_SIZE - offset_of_arg3) * BITS_PER_BYTE
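
For concreteness: assuming the usual 4 KiB TARGET_PAGE_SIZE on x86 and the
three packed uint32_t members, offsetof(NvdimmDsmIn, arg3) == 12, so ARG3
spans (4096 - 12) * 8 = 32672 bits.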


* Re: [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce patched dsm memory
  2016-03-03 13:12   ` Michael S. Tsirkin
  2016-03-03 13:35     ` Xiao Guangrong
@ 2016-03-04 15:32     ` Xiao Guangrong
  1 sibling, 0 replies; 28+ messages in thread
From: Xiao Guangrong @ 2016-03-04 15:32 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: ehabkost, kvm, gleb, mtosatti, qemu-devel, stefanha, imammedo,
	pbonzini, dan.j.williams, rth



On 03/03/2016 09:12 PM, Michael S. Tsirkin wrote:

>>       /* copy AML table into ACPI tables blob and patch header there */
>>       g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
>> +    mem_addr_offset = build_append_named_dword(table_data,
>> +                                               NVDIMM_ACPI_MEM_ADDR);
>> +
>> +    bios_linker_loader_alloc(linker, NVDIMM_DSM_MEM_FILE, TARGET_PAGE_SIZE,
>> +                             false /* high memory */);
>> +    bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
>> +                                   NVDIMM_DSM_MEM_FILE, table_data,
>> +                                   table_data->data + mem_addr_offset,
>> +                                   sizeof(uint32_t));
>>       build_header(linker, table_data,
>> -        (void *)(table_data->data + table_data->len - ssdt->buf->len),
>> -        "SSDT", ssdt->buf->len, 1, NULL, "NVDIMM");
>> +        (void *)(table_data->data + nvdimm_ssdt),
>> +        "SSDT", table_data->len - nvdimm_ssdt, 1, NULL, "NVDIMM");
>>       free_aml_allocator();
>
> I prefer ssdt->buf->len to table_data->len - nvdimm_ssdt.
> Pls fix by a follow-up patch unless there is a respin.

Ah, we cannot do that, as NVDIMM_ACPI_MEM_ADDR is appended to the table but
is not accounted for in ssdt->buf->len.

Sorry, I just spotted it while addressing all your comments in the new
version. :(
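
A sketch of the resulting layout inside table_data (widths illustrative),
showing why the header length must be table_data->len - nvdimm_ssdt rather
than ssdt->buf->len:

    /*
     *  nvdimm_ssdt                                   table_data->len
     *  |<--------- ssdt->buf->len --------->|<- sizeof(uint32_t) ->|
     *  [ SSDT AML emitted by the builder    ][ NVDIMM_ACPI_MEM_ADDR ]
     *
     *  header length = ssdt->buf->len + sizeof(uint32_t)
     *                = table_data->len - nvdimm_ssdt
     */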


* Re: [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-03 14:48       ` Michael S. Tsirkin
@ 2016-03-07 12:16         ` Igor Mammedov
  2016-03-07 12:22           ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Igor Mammedov @ 2016-03-07 12:16 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Xiao Guangrong, ehabkost, kvm, gleb, mtosatti, qemu-devel,
	stefanha, pbonzini, dan.j.williams, rth

On Thu, 3 Mar 2016 16:48:55 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Thu, Mar 03, 2016 at 10:05:31PM +0800, Xiao Guangrong wrote:
> > 
> > 
> > On 03/03/2016 09:29 PM, Michael S. Tsirkin wrote:  
> > >On Wed, Mar 02, 2016 at 07:50:41PM +0800, Xiao Guangrong wrote:  
> > >>As Igor suggested that we can report the BIOS patched operation region
> > >>so that OSPM could see that particular range is in use and be able to
> > >>notice conflicts if it happens some day
> > >>
> > >>Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>  
> > >
> > >This is reserved RAM, exposing it in _CRS makes no sense to me.  
> > 
> > As more and more memory will be reserved by BIOS/QEMU, report the
> > information to OSPM and let it check the potential error is bad,
> > no? :)  
> 
> guest has enough info to detect conflicts if it wishes to.
> IIUC _CRS is not intended for RAM, it's for MMIO
> resources, if it works for RAM that's an accident.
If the range isn't reserved here, then the guest might assume that it's
free to use for a PCI device, since PCI0._CRS reports it as available.
So we should either reserve the range or punch a hole in PCI0._CRS.
Reserving ranges is simpler, and that's what we've switched to from
manual hole punching; see PCI/CPU/Memory hotplug and other motherboard
resources.
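
For reference, a minimal sketch of the reservation pattern referred to here:
a PNP0C02 "motherboard resources" device whose _CRS claims the range. The
device name and the fixed address are hypothetical; QEMU's real hotplug code
differs in detail:

    Aml *dev = aml_device("PRES");
    Aml *crs = aml_resource_template();

    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C02")));
    /* claim the page so OSPM will not hand it out to anything else */
    aml_append(crs, aml_memory32_fixed(0xFF000000 /* hypothetical */,
                                       TARGET_PAGE_SIZE, AML_READ_WRITE));
    aml_append(dev, aml_name_decl("_CRS", crs));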


* Re: [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-07 12:16         ` Igor Mammedov
@ 2016-03-07 12:22           ` Michael S. Tsirkin
  2016-03-07 14:49             ` Igor Mammedov
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2016-03-07 12:22 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Xiao Guangrong, ehabkost, kvm, gleb, mtosatti, qemu-devel,
	stefanha, pbonzini, dan.j.williams, rth

On Mon, Mar 07, 2016 at 01:16:48PM +0100, Igor Mammedov wrote:
> On Thu, 3 Mar 2016 16:48:55 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Thu, Mar 03, 2016 at 10:05:31PM +0800, Xiao Guangrong wrote:
> > > 
> > > 
> > > On 03/03/2016 09:29 PM, Michael S. Tsirkin wrote:  
> > > >On Wed, Mar 02, 2016 at 07:50:41PM +0800, Xiao Guangrong wrote:  
> > > >>As Igor suggested that we can report the BIOS patched operation region
> > > >>so that OSPM could see that particular range is in use and be able to
> > > >>notice conflicts if it happens some day
> > > >>
> > > >>Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>  
> > > >
> > > >This is reserved RAM, exposing it in _CRS makes no sense to me.  
> > > 
> > > As more and more memory will be reserved by BIOS/QEMU, report the
> > > information to OSPM and let it check the potential error is bad,
> > > no? :)  
> > 
> > guest has enough info to detect conflicts if it wishes to.
> > IIUC _CRS is not intended for RAM, it's for MMIO
> > resources, if it works for RAM that's an accident.
> If range isn't reserved here, then guest might assume that it's
> free to use it for a PCI device since PCI0._CRS reports it
> as available.

Does it really? I thought it's guest RAM allocated by BIOS, as opposed
to PCI memory. Am I wrong?

> So we should either reserve range or punch a hole in PCI0._CRS.
> Reserving ranges is simpler and that's what we've switched to
> from manual hole punching, see PCI/CPU/Memory hotplug and other
> motherboard resources.


* Re: [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-07 12:22           ` Michael S. Tsirkin
@ 2016-03-07 14:49             ` Igor Mammedov
  2016-03-07 15:09               ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Igor Mammedov @ 2016-03-07 14:49 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Xiao Guangrong, ehabkost, kvm, gleb, mtosatti, qemu-devel,
	stefanha, pbonzini, dan.j.williams, rth

On Mon, 7 Mar 2016 14:22:38 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Mon, Mar 07, 2016 at 01:16:48PM +0100, Igor Mammedov wrote:
> > On Thu, 3 Mar 2016 16:48:55 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> >   
> > > On Thu, Mar 03, 2016 at 10:05:31PM +0800, Xiao Guangrong wrote:  
> > > > 
> > > > 
> > > > On 03/03/2016 09:29 PM, Michael S. Tsirkin wrote:    
> > > > >On Wed, Mar 02, 2016 at 07:50:41PM +0800, Xiao Guangrong wrote:    
> > > > >>As Igor suggested that we can report the BIOS patched operation region
> > > > >>so that OSPM could see that particular range is in use and be able to
> > > > >>notice conflicts if it happens some day
> > > > >>
> > > > >>Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>    
> > > > >
> > > > >This is reserved RAM, exposing it in _CRS makes no sense to me.    
> > > > 
> > > > As more and more memory will be reserved by BIOS/QEMU, report the
> > > > information to OSPM and let it check the potential error is bad,
> > > > no? :)    
> > > 
> > > guest has enough info to detect conflicts if it wishes to.
> > > IIUC _CRS is not intended for RAM, it's for MMIO
> > > resources, if it works for RAM that's an accident.  
> > If range isn't reserved here, then guest might assume that it's
> > free to use it for a PCI device since PCI0._CRS reports it
> > as available.  
> 
> Does it really? I thought it's guest RAM allocated by BIOS, as opposed
> to PCI memory. Am I wrong?
Maybe I'm wrong,
but aren't RAM and PCI memory mapped into the same physical address space?
So what would happen if a PCI MMIO BAR were mapped over the above range,
given that the guest thinks it's free to use as an unused resource?


> 
> > So we should either reserve range or punch a hole in PCI0._CRS.
> > Reserving ranges is simpler and that's what we've switched to
> > from manual hole punching, see PCI/CPU/Memory hotplug and other
> > motherboard resources.  


* Re: [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-07 14:49             ` Igor Mammedov
@ 2016-03-07 15:09               ` Michael S. Tsirkin
  2016-03-07 16:17                 ` Igor Mammedov
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2016-03-07 15:09 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Xiao Guangrong, ehabkost, kvm, gleb, mtosatti, qemu-devel,
	stefanha, pbonzini, dan.j.williams, rth

On Mon, Mar 07, 2016 at 03:49:47PM +0100, Igor Mammedov wrote:
> On Mon, 7 Mar 2016 14:22:38 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Mon, Mar 07, 2016 at 01:16:48PM +0100, Igor Mammedov wrote:
> > > On Thu, 3 Mar 2016 16:48:55 +0200
> > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > >   
> > > > On Thu, Mar 03, 2016 at 10:05:31PM +0800, Xiao Guangrong wrote:  
> > > > > 
> > > > > 
> > > > > On 03/03/2016 09:29 PM, Michael S. Tsirkin wrote:    
> > > > > >On Wed, Mar 02, 2016 at 07:50:41PM +0800, Xiao Guangrong wrote:    
> > > > > >>As Igor suggested that we can report the BIOS patched operation region
> > > > > >>so that OSPM could see that particular range is in use and be able to
> > > > > >>notice conflicts if it happens some day
> > > > > >>
> > > > > >>Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>    
> > > > > >
> > > > > >This is reserved RAM, exposing it in _CRS makes no sense to me.    
> > > > > 
> > > > > As more and more memory will be reserved by BIOS/QEMU, report the
> > > > > information to OSPM and let it check the potential error is bad,
> > > > > no? :)    
> > > > 
> > > > guest has enough info to detect conflicts if it wishes to.
> > > > IIUC _CRS is not intended for RAM, it's for MMIO
> > > > resources, if it works for RAM that's an accident.  
> > > If range isn't reserved here, then guest might assume that it's
> > > free to use it for a PCI device since PCI0._CRS reports it
> > > as available.  
> > 
> > Does it really? I thought it's guest RAM allocated by BIOS, as opposed
> > to PCI memory. Am I wrong?
> Maybe I'm wrong,
> but aren't RAM and PCI memory mapped into the same physical address space?

They are in the same address space but IIRC MMIO has lower priority.

> So what would happen when PCI MMIO BAR would be mapped over above range,
> since guest thinks it's free to use it as unused resource?

IIRC, allocating an MMIO BAR over RAM would make the MMIO invisible,
irrespective of whether the RAM range is being used for anything.

> 
> > 
> > > So we should either reserve range or punch a hole in PCI0._CRS.
> > > Reserving ranges is simpler and that's what we've switched to
> > > from manual hole punching, see PCI/CPU/Memory hotplug and other
> > > motherboard resources.  
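
The priority point above can be illustrated with QEMU's memory API; a
generic sketch with hypothetical region names, not the actual PC wiring:

    /*
     * when two subregions overlap, the higher priority one is visible
     * to the guest; RAM mapped with higher priority hides an MMIO BAR
     * at the same guest-physical range
     */
    memory_region_add_subregion_overlap(sysmem, base, ram_mr,  1); /* visible */
    memory_region_add_subregion_overlap(sysmem, base, mmio_mr, 0); /* hidden  */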


* Re: [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-07 15:09               ` Michael S. Tsirkin
@ 2016-03-07 16:17                 ` Igor Mammedov
  2016-03-07 16:19                   ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Igor Mammedov @ 2016-03-07 16:17 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Xiao Guangrong, ehabkost, kvm, gleb, mtosatti, qemu-devel,
	stefanha, pbonzini, dan.j.williams, rth

On Mon, 7 Mar 2016 17:09:32 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Mon, Mar 07, 2016 at 03:49:47PM +0100, Igor Mammedov wrote:
> > On Mon, 7 Mar 2016 14:22:38 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> >   
> > > On Mon, Mar 07, 2016 at 01:16:48PM +0100, Igor Mammedov wrote:  
> > > > On Thu, 3 Mar 2016 16:48:55 +0200
> > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > >     
> > > > > On Thu, Mar 03, 2016 at 10:05:31PM +0800, Xiao Guangrong wrote:    
> > > > > > 
> > > > > > 
> > > > > > On 03/03/2016 09:29 PM, Michael S. Tsirkin wrote:      
> > > > > > >On Wed, Mar 02, 2016 at 07:50:41PM +0800, Xiao Guangrong wrote:      
> > > > > > >>As Igor suggested that we can report the BIOS patched operation region
> > > > > > >>so that OSPM could see that particular range is in use and be able to
> > > > > > >>notice conflicts if it happens some day
> > > > > > >>
> > > > > > >>Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>      
> > > > > > >
> > > > > > >This is reserved RAM, exposing it in _CRS makes no sense to me.      
> > > > > > 
> > > > > > As more and more memory will be reserved by BIOS/QEMU, report the
> > > > > > information to OSPM and let it check the potential error is bad,
> > > > > > no? :)      
> > > > > 
> > > > > guest has enough info to detect conflicts if it wishes to.
> > > > > IIUC _CRS is not intended for RAM, it's for MMIO
> > > > > resources, if it works for RAM that's an accident.    
> > > > If range isn't reserved here, then guest might assume that it's
> > > > free to use it for a PCI device since PCI0._CRS reports it
> > > > as available.    
> > > 
> > > Does it really? I thought it's guest RAM allocated by BIOS, as opposed
> > > to PCI memory. Am I wrong?  
> > Maybe I'm wrong,
> > but aren't RAM and PCI memory mapped into the same physical address space?  
> 
> They are in the same address space but IIRC MMIO has lower priority.
> 
> > So what would happen when PCI MMIO BAR would be mapped over above range,
> > since guest thinks it's free to use it as unused resource?  
> 
> IIRC, allocating MMIO BAR over RAM would make the MMIO invisible,
> irrespective of whether the RAM range is being used for anything.
And then the driver would start writing 'garbage' to RAM, resulting in
strange guest behavior and a non-working PCI device.

That's why reserving the region is a good idea if it can be done.

> 
> >   
> > >   
> > > > So we should either reserve range or punch a hole in PCI0._CRS.
> > > > Reserving ranges is simpler and that's what we've switched to
> > > > from manual hole punching, see PCI/CPU/Memory hotplug and other
> > > > motherboard resources.    


* Re: [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-07 16:17                 ` Igor Mammedov
@ 2016-03-07 16:19                   ` Michael S. Tsirkin
  2016-03-08  9:06                     ` Igor Mammedov
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2016-03-07 16:19 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Xiao Guangrong, ehabkost, kvm, gleb, mtosatti, qemu-devel,
	stefanha, pbonzini, dan.j.williams, rth

On Mon, Mar 07, 2016 at 05:17:19PM +0100, Igor Mammedov wrote:
> > > So what would happen when PCI MMIO BAR would be mapped over above range,
> > > since guest thinks it's free to use it as unused resource?  
> > 
> > IIRC, allocating MMIO BAR over RAM would make the MMIO invisible,
> > irrespective of whether the RAM range is being used for anything.
> An then driver would start writing 'garbage' to RAM, resulting in
> strange guest behavior and not work PCI device.

Do you observe such behaviour?

> that's why reserving region is a good idea if it could be done.

Which region? Reserve all of RAM in _CRS?

> > 
> > >   
> > > >   
> > > > > So we should either reserve range or punch a hole in PCI0._CRS.
> > > > > Reserving ranges is simpler and that's what we've switched to
> > > > > from manual hole punching, see PCI/CPU/Memory hotplug and other
> > > > > motherboard resources.    


* Re: [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS
  2016-03-07 16:19                   ` Michael S. Tsirkin
@ 2016-03-08  9:06                     ` Igor Mammedov
  0 siblings, 0 replies; 28+ messages in thread
From: Igor Mammedov @ 2016-03-08  9:06 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Xiao Guangrong, ehabkost, kvm, gleb, mtosatti, qemu-devel,
	stefanha, pbonzini, dan.j.williams, rth

On Mon, 7 Mar 2016 18:19:46 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Mon, Mar 07, 2016 at 05:17:19PM +0100, Igor Mammedov wrote:
> > > > So what would happen when PCI MMIO BAR would be mapped over above range,
> > > > since guest thinks it's free to use it as unused resource?    
> > > 
> > > IIRC, allocating MMIO BAR over RAM would make the MMIO invisible,
> > > irrespective of whether the RAM range is being used for anything.  
> > An then driver would start writing 'garbage' to RAM, resulting in
> > strange guest behavior and not work PCI device.  
> 
> Do you observe such behaviour?
> 
> > that's why reserving region is a good idea if it could be done.  
> 
> Which region? Reserve all of RAM in _CRS?
Oops, I'm wrong here: since PCI0._CRS doesn't include RAM, there is no
need to reserve the region there.

> 
> > >   
> > > >     
> > > > >     
> > > > > > So we should either reserve range or punch a hole in PCI0._CRS.
> > > > > > Reserving ranges is simpler and that's what we've switched to
> > > > > > from manual hole punching, see PCI/CPU/Memory hotplug and other
> > > > > > motherboard resources.      
> 


end of thread

Thread overview: 28+ messages
2016-03-02 11:50 [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Xiao Guangrong
2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 1/5] nvdimm acpi: initialize the resource used by NVDIMM ACPI Xiao Guangrong
2016-03-02 11:58   ` Michael S. Tsirkin
2016-03-02 16:10     ` Xiao Guangrong
2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 2/5] nvdimm acpi: introduce patched dsm memory Xiao Guangrong
2016-03-03 13:12   ` Michael S. Tsirkin
2016-03-03 13:35     ` Xiao Guangrong
2016-03-04 15:32     ` Xiao Guangrong
2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 3/5] nvdimm acpi: let qemu handle _DSM method Xiao Guangrong
2016-03-03 13:23   ` Michael S. Tsirkin
2016-03-03 14:00     ` Xiao Guangrong
2016-03-04 15:03     ` Xiao Guangrong
2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 4/5] nvdimm acpi: emulate dsm method Xiao Guangrong
2016-03-03 13:25   ` Michael S. Tsirkin
2016-03-03 14:01     ` Xiao Guangrong
2016-03-02 11:50 ` [Qemu-devel] [PATCH v5 5/5] nvdimm acpi: add _CRS Xiao Guangrong
2016-03-03 13:29   ` Michael S. Tsirkin
2016-03-03 14:05     ` Xiao Guangrong
2016-03-03 14:48       ` Michael S. Tsirkin
2016-03-07 12:16         ` Igor Mammedov
2016-03-07 12:22           ` Michael S. Tsirkin
2016-03-07 14:49             ` Igor Mammedov
2016-03-07 15:09               ` Michael S. Tsirkin
2016-03-07 16:17                 ` Igor Mammedov
2016-03-07 16:19                   ` Michael S. Tsirkin
2016-03-08  9:06                     ` Igor Mammedov
2016-03-03 13:43 ` [Qemu-devel] [PATCH v5 0/5] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Michael S. Tsirkin
2016-03-03 14:07   ` Xiao Guangrong
