* [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism
@ 2012-03-14 2:03 Wen Congyang
2012-03-14 2:05 ` [Qemu-devel] [RFC][PATCH 01/14 v9] Add API to create memory mapping list Wen Congyang
` (14 more replies)
0 siblings, 15 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:03 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
Hi, all
'virsh dump' cannot work when a host PCI device is assigned to the guest. We have
discussed this issue here:
http://lists.nongnu.org/archive/html/qemu-devel/2011-10/msg00736.html
The last version is here:
http://lists.nongnu.org/archive/html/qemu-devel/2012-03/msg00235.html
We have decided to introduce a new command, dump, to dump the guest's memory. The core
file's format is ELF.
Note:
1. The guest should be x86 or x86_64. Other architectures are not supported yet.
2. If you use an old gdb, it may crash. I use gdb-7.3.1, and it does not crash.
3. If the guest OS is in the second kernel, gdb may not work well, but crash can
   work by specifying '--machdep phys_addr=xxx' on the command line (see the
   example session after these notes). The reason is that the second kernel
   updates the page table, so we cannot get the page table of the first kernel.
4. The CPU's state is stored in a QEMU note. You need to modify crash to use
   it to calculate phys_base.
5. If the guest OS is 32 bit and the memory size is larger than 4G, the vmcore
   is in ELF64 format. You should use a gdb built with --enable-64-bit-bfd.
6. This patchset is based on the upstream tree, and applies one patch that is still
   in Luiz Capitulino's tree, because I use the API qemu_get_fd() in this patchset.
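The example session referenced in note 3 (the vmlinux image, vmcore path and
phys_addr value below are placeholders, not taken from a real run):

    $ gdb vmlinux vmcore        # gdb-7.3.1 works for me; for an ELF64 core of a
                                # 32-bit guest, use a gdb built with --enable-64-bit-bfd
    $ crash vmlinux vmcore      # needs a crash modified to read the QEMU note (note 4)
    $ crash --machdep phys_addr=0x200000 vmlinux vmcore   # guest dumped from the second kernel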
Changes from v8 to v9:
1. remove async support (it will be reimplemented after QAPI async command support
   is finished)
2. fix some typos
Changes from v7 to v8:
1. addressed Hatayama's comments
Changes from v6 to v7:
1. addressed Jan's comments
2. fix some bugs
3. store cpu's state into the vmcore
Changes from v5 to v6:
1. allow user to dump a fraction of the memory
2. fix some bugs
Changes from v4 to v5:
1. convert the new command dump to QAPI
Changes from v3 to v4:
1. support running it asynchronously
2. add API to cancel dumping and query dumping progress
3. add API to control dumping speed
4. automatically cancel dumping when the user resumes the VM; the status then becomes failed.
Changes from v2 to v3:
1. address Jan Kiszka's comment
Changes from v1 to v2:
1. fix virt addr in the vmcore.
Wen Congyang (14):
Add API to create memory mapping list
Add API to check whether a physical address is I/O address
implement cpu_get_memory_mapping()
Add API to check whether paging mode is enabled
Add API to get memory mapping
Add API to get memory mapping without do paging
target-i386: Add API to write elf notes to core file
target-i386: Add API to write cpu status to core file
target-i386: add API to get dump info
make gdb_id() generally available
introduce a new monitor command 'dump' to dump guest's memory
support to cancel the current dumping
support to query dumping status
allow user to dump a fraction of the memory
Makefile.target | 3 +
configure | 8 +
cpu-all.h | 66 +++
cpu-common.h | 2 +
dump.c | 886 +++++++++++++++++++++++++++++++++++++
dump.h | 23 +
elf.h | 5 +
exec.c | 9 +
gdbstub.c | 9 -
gdbstub.h | 9 +
hmp-commands.hx | 43 ++
hmp.c | 43 ++
hmp.h | 3 +
memory_mapping.c | 238 ++++++++++
memory_mapping.h | 61 +++
monitor.c | 7 +
qapi-schema.json | 58 +++
qmp-commands.hx | 109 +++++
target-i386/arch_dump.c | 433 ++++++++++++++++++
target-i386/arch_memory_mapping.c | 271 +++++++++++
20 files changed, 2277 insertions(+), 9 deletions(-)
create mode 100644 dump.c
create mode 100644 dump.h
create mode 100644 memory_mapping.c
create mode 100644 memory_mapping.h
create mode 100644 target-i386/arch_dump.c
create mode 100644 target-i386/arch_memory_mapping.c
* [Qemu-devel] [RFC][PATCH 01/14 v9] Add API to create memory mapping list
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
@ 2012-03-14 2:05 ` Wen Congyang
2012-03-14 2:06 ` [Qemu-devel] [RFC][PATCH 02/14 v9] Add API to check whether a physical address is I/O address Wen Congyang
` (13 subsequent siblings)
14 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:05 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
The memory mapping list stores virtual-to-physical address mappings. Within each
mapping, the virtual addresses and physical addresses are contiguous.
The following patch will use this information to create PT_LOAD segments in the vmcore.
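For reference, a minimal sketch of how this list is meant to be used (the addresses
are made up, and the sketch only builds inside the QEMU tree with the header added below):

#include "memory_mapping.h"

static void memory_mapping_list_example(void)
{
    MemoryMappingList list;

    memory_mapping_list_init(&list);

    /* two pages that are contiguous both physically and virtually ... */
    memory_mapping_list_add_merge_sorted(&list, 0x1000, 0xffff880000001000ULL, 0x1000);
    memory_mapping_list_add_merge_sorted(&list, 0x2000, 0xffff880000002000ULL, 0x1000);
    /* ... end up as a single mapping: phys 0x1000, virt 0xffff880000001000,
     * length 0x2000, i.e. one PT_LOAD segment in the vmcore */

    memory_mapping_list_free(&list);
}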
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
Makefile.target | 1 +
memory_mapping.c | 166 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
memory_mapping.h | 47 +++++++++++++++
3 files changed, 214 insertions(+), 0 deletions(-)
create mode 100644 memory_mapping.c
create mode 100644 memory_mapping.h
diff --git a/Makefile.target b/Makefile.target
index eb25941..ce4d815 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -211,6 +211,7 @@ obj-$(CONFIG_KVM) += kvm.o kvm-all.o
obj-$(CONFIG_NO_KVM) += kvm-stub.o
obj-$(CONFIG_VGA) += vga.o
obj-y += memory.o savevm.o
+obj-y += memory_mapping.o
LIBS+=-lz
obj-i386-$(CONFIG_KVM) += hyperv.o
diff --git a/memory_mapping.c b/memory_mapping.c
new file mode 100644
index 0000000..718f271
--- /dev/null
+++ b/memory_mapping.c
@@ -0,0 +1,166 @@
+/*
+ * QEMU memory mapping
+ *
+ * Copyright Fujitsu, Corp. 2011, 2012
+ *
+ * Authors:
+ * Wen Congyang <wency@cn.fujitsu.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#include "cpu.h"
+#include "cpu-all.h"
+#include "memory_mapping.h"
+
+static void memory_mapping_list_add_mapping_sorted(MemoryMappingList *list,
+ MemoryMapping *mapping)
+{
+ MemoryMapping *p;
+
+ QTAILQ_FOREACH(p, &list->head, next) {
+ if (p->phys_addr >= mapping->phys_addr) {
+ QTAILQ_INSERT_BEFORE(p, mapping, next);
+ return;
+ }
+ }
+ QTAILQ_INSERT_TAIL(&list->head, mapping, next);
+}
+
+static void create_new_memory_mapping(MemoryMappingList *list,
+ target_phys_addr_t phys_addr,
+ target_phys_addr_t virt_addr,
+ ram_addr_t length)
+{
+ MemoryMapping *memory_mapping;
+
+ memory_mapping = g_malloc(sizeof(MemoryMapping));
+ memory_mapping->phys_addr = phys_addr;
+ memory_mapping->virt_addr = virt_addr;
+ memory_mapping->length = length;
+ list->last_mapping = memory_mapping;
+ list->num++;
+ memory_mapping_list_add_mapping_sorted(list, memory_mapping);
+}
+
+static inline bool mapping_contiguous(MemoryMapping *map,
+ target_phys_addr_t phys_addr,
+ target_phys_addr_t virt_addr)
+{
+ return phys_addr == map->phys_addr + map->length &&
+ virt_addr == map->virt_addr + map->length;
+}
+
+/*
+ * [map->phys_addr, map->phys_addr + map->length) and
+ * [phys_addr, phys_addr + length) have intersection?
+ */
+static inline bool mapping_have_same_region(MemoryMapping *map,
+ target_phys_addr_t phys_addr,
+ ram_addr_t length)
+{
+ return !(phys_addr + length < map->phys_addr ||
+ phys_addr >= map->phys_addr + map->length);
+}
+
+/*
+ * [map->phys_addr, map->phys_addr + map->length) and
+ * [phys_addr, phys_addr + length) have intersection. The virtual address in the
+ * intersection are the same?
+ */
+static inline bool mapping_conflict(MemoryMapping *map,
+ target_phys_addr_t phys_addr,
+ target_phys_addr_t virt_addr)
+{
+ return virt_addr - map->virt_addr != phys_addr - map->phys_addr;
+}
+
+/*
+ * [map->virt_addr, map->virt_addr + map->length) and
+ * [virt_addr, virt_addr + length) have intersection. And the physical address
+ * in the intersection are the same.
+ */
+static inline void mapping_merge(MemoryMapping *map,
+ target_phys_addr_t virt_addr,
+ ram_addr_t length)
+{
+ if (virt_addr < map->virt_addr) {
+ map->length += map->virt_addr - virt_addr;
+ map->virt_addr = virt_addr;
+ }
+
+ if ((virt_addr + length) >
+ (map->virt_addr + map->length)) {
+ map->length = virt_addr + length - map->virt_addr;
+ }
+}
+
+void memory_mapping_list_add_merge_sorted(MemoryMappingList *list,
+ target_phys_addr_t phys_addr,
+ target_phys_addr_t virt_addr,
+ ram_addr_t length)
+{
+ MemoryMapping *memory_mapping, *last_mapping;
+
+ if (QTAILQ_EMPTY(&list->head)) {
+ create_new_memory_mapping(list, phys_addr, virt_addr, length);
+ return;
+ }
+
+ last_mapping = list->last_mapping;
+ if (last_mapping) {
+ if (mapping_contiguous(last_mapping, phys_addr, virt_addr)) {
+ last_mapping->length += length;
+ return;
+ }
+ }
+
+ QTAILQ_FOREACH(memory_mapping, &list->head, next) {
+ if (mapping_contiguous(memory_mapping, phys_addr, virt_addr)) {
+ memory_mapping->length += length;
+ list->last_mapping = memory_mapping;
+ return;
+ }
+
+ if (phys_addr + length < memory_mapping->phys_addr) {
+ /* create a new region before memory_mapping */
+ break;
+ }
+
+ if (mapping_have_same_region(memory_mapping, phys_addr, length)) {
+ if (mapping_conflict(memory_mapping, phys_addr, virt_addr)) {
+ continue;
+ }
+
+ /* merge this region into memory_mapping */
+ mapping_merge(memory_mapping, virt_addr, length);
+ list->last_mapping = memory_mapping;
+ return;
+ }
+ }
+
+ /* this region can not be merged into any existed memory mapping. */
+ create_new_memory_mapping(list, phys_addr, virt_addr, length);
+}
+
+void memory_mapping_list_free(MemoryMappingList *list)
+{
+ MemoryMapping *p, *q;
+
+ QTAILQ_FOREACH_SAFE(p, &list->head, next, q) {
+ QTAILQ_REMOVE(&list->head, p, next);
+ g_free(p);
+ }
+
+ list->num = 0;
+ list->last_mapping = NULL;
+}
+
+void memory_mapping_list_init(MemoryMappingList *list)
+{
+ list->num = 0;
+ list->last_mapping = NULL;
+ QTAILQ_INIT(&list->head);
+}
diff --git a/memory_mapping.h b/memory_mapping.h
new file mode 100644
index 0000000..836b047
--- /dev/null
+++ b/memory_mapping.h
@@ -0,0 +1,47 @@
+/*
+ * QEMU memory mapping
+ *
+ * Copyright Fujitsu, Corp. 2011, 2012
+ *
+ * Authors:
+ * Wen Congyang <wency@cn.fujitsu.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef MEMORY_MAPPING_H
+#define MEMORY_MAPPING_H
+
+#include "qemu-queue.h"
+
+/* The physical and virtual address in the memory mapping are contiguous. */
+typedef struct MemoryMapping {
+ target_phys_addr_t phys_addr;
+ target_ulong virt_addr;
+ ram_addr_t length;
+ QTAILQ_ENTRY(MemoryMapping) next;
+} MemoryMapping;
+
+typedef struct MemoryMappingList {
+ unsigned int num;
+ MemoryMapping *last_mapping;
+ QTAILQ_HEAD(, MemoryMapping) head;
+} MemoryMappingList;
+
+/*
+ * add or merge the memory region [phys_addr, phys_addr + length) into the
+ * memory mapping's list. The region's virtual address starts with virt_addr,
+ * and is contiguous. The list is sorted by phys_addr.
+ */
+void memory_mapping_list_add_merge_sorted(MemoryMappingList *list,
+ target_phys_addr_t phys_addr,
+ target_phys_addr_t virt_addr,
+ ram_addr_t length);
+
+void memory_mapping_list_free(MemoryMappingList *list);
+
+void memory_mapping_list_init(MemoryMappingList *list);
+
+#endif
--
1.7.1
* [Qemu-devel] [RFC][PATCH 02/14 v9] Add API to check whether a physical address is I/O address
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
2012-03-14 2:05 ` [Qemu-devel] [RFC][PATCH 01/14 v9] Add API to create memory mapping list Wen Congyang
@ 2012-03-14 2:06 ` Wen Congyang
2012-03-14 9:18 ` [Qemu-devel] [RESEND][PATCH " Wen Congyang
2012-03-14 2:06 ` [Qemu-devel] [RFC][PATCH 03/14 v9] implement cpu_get_memory_mapping() Wen Congyang
` (12 subsequent siblings)
14 siblings, 1 reply; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:06 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
This API will be used in the following patch.
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
cpu-common.h | 2 ++
exec.c | 9 +++++++++
2 files changed, 11 insertions(+), 0 deletions(-)
diff --git a/cpu-common.h b/cpu-common.h
index dca5175..fcd50dc 100644
--- a/cpu-common.h
+++ b/cpu-common.h
@@ -71,6 +71,8 @@ void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque));
void cpu_unregister_map_client(void *cookie);
+bool cpu_physical_memory_is_io(target_phys_addr_t phys_addr);
+
/* Coalesced MMIO regions are areas where write operations can be reordered.
* This usually implies that write operations are side-effect free. This allows
* batching which can make a major impact on performance when using
diff --git a/exec.c b/exec.c
index 0c86bce..952bbad 100644
--- a/exec.c
+++ b/exec.c
@@ -4646,3 +4646,12 @@ bool virtio_is_big_endian(void)
#undef env
#endif
+
+bool cpu_physical_memory_is_io(target_phys_addr_t phys_addr)
+{
+ MemoryRegionSection section;
+
+ section = phys_page_find(phys_addr >> TARGET_PAGE_BITS);
+
+ return !is_ram_rom_romd(&section);
+}
--
1.7.1
* [Qemu-devel] [RFC][PATCH 03/14 v9] implement cpu_get_memory_mapping()
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
2012-03-14 2:05 ` [Qemu-devel] [RFC][PATCH 01/14 v9] Add API to create memory mapping list Wen Congyang
2012-03-14 2:06 ` [Qemu-devel] [RFC][PATCH 02/14 v9] Add API to check whether a physical address is I/O address Wen Congyang
@ 2012-03-14 2:06 ` Wen Congyang
2012-03-14 2:07 ` [Qemu-devel] [RFC][PATCH 04/14 v9] Add API to check whether paging mode is enabled Wen Congyang
` (11 subsequent siblings)
14 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:06 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
Walk the CPU's page table and collect all virtual-to-physical address mappings.
Then add these mappings to the memory mapping list. If the guest does not use paging,
it does nothing. Note: I/O memory is skipped.
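For reference, a small standalone sketch of how the IA-32e walk below composes the
virtual address of a 4KiB page from the table indexes (the indexes in the comment
are made-up examples, not from a real guest):

#include <stdint.h>

static uint64_t ia32e_virt_addr(uint64_t pml4e_idx, uint64_t pdpe_idx,
                                uint64_t pde_idx, uint64_t pte_idx)
{
    /* mirrors how walk_pml4e()/walk_pdpe()/walk_pde()/walk_pte() build start_vaddr */
    return (0xffffULL << 48) |      /* high bits set in walk_pml4e() */
           (pml4e_idx << 39) |
           (pdpe_idx << 30) |
           (pde_idx << 21) |
           (pte_idx << 12);
}

/* ia32e_virt_addr(0x1ff, 0x1fe, 0, 0x10) == 0xffffffff80010000 */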
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
Makefile.target | 1 +
configure | 4 +
cpu-all.h | 10 ++
target-i386/arch_memory_mapping.c | 266 +++++++++++++++++++++++++++++++++++++
4 files changed, 281 insertions(+), 0 deletions(-)
create mode 100644 target-i386/arch_memory_mapping.c
diff --git a/Makefile.target b/Makefile.target
index ce4d815..e05a7dc 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -84,6 +84,7 @@ libobj-y += op_helper.o helper.o
ifeq ($(TARGET_BASE_ARCH), i386)
libobj-y += cpuid.o
endif
+libobj-$(CONFIG_HAVE_GET_MEMORY_MAPPING) += arch_memory_mapping.o
libobj-$(TARGET_SPARC64) += vis_helper.o
libobj-$(CONFIG_NEED_MMU) += mmu.o
libobj-$(TARGET_ARM) += neon_helper.o iwmmxt_helper.o
diff --git a/configure b/configure
index fe4fc4f..be10710 100755
--- a/configure
+++ b/configure
@@ -3652,6 +3652,10 @@ case "$target_arch2" in
fi
fi
esac
+case "$target_arch2" in
+ i386|x86_64)
+ echo "CONFIG_HAVE_GET_MEMORY_MAPPING=y" >> $config_target_mak
+esac
if test "$target_arch2" = "ppc64" -a "$fdt" = "yes"; then
echo "CONFIG_PSERIES=y" >> $config_target_mak
fi
diff --git a/cpu-all.h b/cpu-all.h
index f7f6e7a..786dbea 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -22,6 +22,7 @@
#include "qemu-common.h"
#include "qemu-tls.h"
#include "cpu-common.h"
+#include "memory_mapping.h"
/* some important defines:
*
@@ -516,4 +517,13 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf);
int cpu_memory_rw_debug(CPUState *env, target_ulong addr,
uint8_t *buf, int len, int is_write);
+#if defined(CONFIG_HAVE_GET_MEMORY_MAPPING)
+int cpu_get_memory_mapping(MemoryMappingList *list, CPUState *env);
+#else
+static inline int cpu_get_memory_mapping(MemoryMappingList *list, CPUState *env)
+{
+ return -1;
+}
+#endif
+
#endif /* CPU_ALL_H */
diff --git a/target-i386/arch_memory_mapping.c b/target-i386/arch_memory_mapping.c
new file mode 100644
index 0000000..10d9b2c
--- /dev/null
+++ b/target-i386/arch_memory_mapping.c
@@ -0,0 +1,266 @@
+/*
+ * i386 memory mapping
+ *
+ * Copyright Fujitsu, Corp. 2011
+ *
+ * Authors:
+ * Wen Congyang <wency@cn.fujitsu.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#include "cpu.h"
+#include "cpu-all.h"
+
+/* PAE Paging or IA-32e Paging */
+static void walk_pte(MemoryMappingList *list, target_phys_addr_t pte_start_addr,
+ int32_t a20_mask, target_ulong start_line_addr)
+{
+ target_phys_addr_t pte_addr, start_paddr;
+ uint64_t pte;
+ target_ulong start_vaddr;
+ int i;
+
+ for (i = 0; i < 512; i++) {
+ pte_addr = (pte_start_addr + i * 8) & a20_mask;
+ pte = ldq_phys(pte_addr);
+ if (!(pte & PG_PRESENT_MASK)) {
+ /* not present */
+ continue;
+ }
+
+ start_paddr = (pte & ~0xfff) & ~(0x1ULL << 63);
+ if (cpu_physical_memory_is_io(start_paddr)) {
+ /* I/O region */
+ continue;
+ }
+
+ start_vaddr = start_line_addr | ((i & 0x1fff) << 12);
+ memory_mapping_list_add_merge_sorted(list, start_paddr,
+ start_vaddr, 1 << 12);
+ }
+}
+
+/* 32-bit Paging */
+static void walk_pte2(MemoryMappingList *list,
+ target_phys_addr_t pte_start_addr, int32_t a20_mask,
+ target_ulong start_line_addr)
+{
+ target_phys_addr_t pte_addr, start_paddr;
+ uint32_t pte;
+ target_ulong start_vaddr;
+ int i;
+
+ for (i = 0; i < 1024; i++) {
+ pte_addr = (pte_start_addr + i * 4) & a20_mask;
+ pte = ldl_phys(pte_addr);
+ if (!(pte & PG_PRESENT_MASK)) {
+ /* not present */
+ continue;
+ }
+
+ start_paddr = pte & ~0xfff;
+ if (cpu_physical_memory_is_io(start_paddr)) {
+ /* I/O region */
+ continue;
+ }
+
+ start_vaddr = start_line_addr | ((i & 0x3ff) << 12);
+ memory_mapping_list_add_merge_sorted(list, start_paddr,
+ start_vaddr, 1 << 12);
+ }
+}
+
+/* PAE Paging or IA-32e Paging */
+static void walk_pde(MemoryMappingList *list, target_phys_addr_t pde_start_addr,
+ int32_t a20_mask, target_ulong start_line_addr)
+{
+ target_phys_addr_t pde_addr, pte_start_addr, start_paddr;
+ uint64_t pde;
+ target_ulong line_addr, start_vaddr;
+ int i;
+
+ for (i = 0; i < 512; i++) {
+ pde_addr = (pde_start_addr + i * 8) & a20_mask;
+ pde = ldq_phys(pde_addr);
+ if (!(pde & PG_PRESENT_MASK)) {
+ /* not present */
+ continue;
+ }
+
+ line_addr = start_line_addr | ((i & 0x1ff) << 21);
+ if (pde & PG_PSE_MASK) {
+ /* 2 MB page */
+ start_paddr = (pde & ~0x1fffff) & ~(0x1ULL << 63);
+ if (cpu_physical_memory_is_io(start_paddr)) {
+ /* I/O region */
+ continue;
+ }
+ start_vaddr = line_addr;
+ memory_mapping_list_add_merge_sorted(list, start_paddr,
+ start_vaddr, 1 << 21);
+ continue;
+ }
+
+ pte_start_addr = (pde & ~0xfff) & a20_mask;
+ walk_pte(list, pte_start_addr, a20_mask, line_addr);
+ }
+}
+
+/* 32-bit Paging */
+static void walk_pde2(MemoryMappingList *list,
+ target_phys_addr_t pde_start_addr, int32_t a20_mask,
+ bool pse)
+{
+ target_phys_addr_t pde_addr, pte_start_addr, start_paddr;
+ uint32_t pde;
+ target_ulong line_addr, start_vaddr;
+ int i;
+
+ for (i = 0; i < 1024; i++) {
+ pde_addr = (pde_start_addr + i * 4) & a20_mask;
+ pde = ldl_phys(pde_addr);
+ if (!(pde & PG_PRESENT_MASK)) {
+ /* not present */
+ continue;
+ }
+
+ line_addr = (((unsigned int)i & 0x3ff) << 22);
+ if ((pde & PG_PSE_MASK) && pse) {
+ /* 4 MB page */
+ start_paddr = (pde & ~0x3fffff) | ((pde & 0x1fe000) << 19);
+ if (cpu_physical_memory_is_io(start_paddr)) {
+ /* I/O region */
+ continue;
+ }
+ start_vaddr = line_addr;
+ memory_mapping_list_add_merge_sorted(list, start_paddr,
+ start_vaddr, 1 << 22);
+ continue;
+ }
+
+ pte_start_addr = (pde & ~0xfff) & a20_mask;
+ walk_pte2(list, pte_start_addr, a20_mask, line_addr);
+ }
+}
+
+/* PAE Paging */
+static void walk_pdpe2(MemoryMappingList *list,
+ target_phys_addr_t pdpe_start_addr, int32_t a20_mask)
+{
+ target_phys_addr_t pdpe_addr, pde_start_addr;
+ uint64_t pdpe;
+ target_ulong line_addr;
+ int i;
+
+ for (i = 0; i < 4; i++) {
+ pdpe_addr = (pdpe_start_addr + i * 8) & a20_mask;
+ pdpe = ldq_phys(pdpe_addr);
+ if (!(pdpe & PG_PRESENT_MASK)) {
+ /* not present */
+ continue;
+ }
+
+ line_addr = (((unsigned int)i & 0x3) << 30);
+ pde_start_addr = (pdpe & ~0xfff) & a20_mask;
+ walk_pde(list, pde_start_addr, a20_mask, line_addr);
+ }
+}
+
+#ifdef TARGET_X86_64
+/* IA-32e Paging */
+static void walk_pdpe(MemoryMappingList *list,
+ target_phys_addr_t pdpe_start_addr, int32_t a20_mask,
+ target_ulong start_line_addr)
+{
+ target_phys_addr_t pdpe_addr, pde_start_addr, start_paddr;
+ uint64_t pdpe;
+ target_ulong line_addr, start_vaddr;
+ int i;
+
+ for (i = 0; i < 512; i++) {
+ pdpe_addr = (pdpe_start_addr + i * 8) & a20_mask;
+ pdpe = ldq_phys(pdpe_addr);
+ if (!(pdpe & PG_PRESENT_MASK)) {
+ /* not present */
+ continue;
+ }
+
+ line_addr = start_line_addr | ((i & 0x1ffULL) << 30);
+ if (pdpe & PG_PSE_MASK) {
+ /* 1 GB page */
+ start_paddr = (pdpe & ~0x3fffffff) & ~(0x1ULL << 63);
+ if (cpu_physical_memory_is_io(start_paddr)) {
+ /* I/O region */
+ continue;
+ }
+ start_vaddr = line_addr;
+ memory_mapping_list_add_merge_sorted(list, start_paddr,
+ start_vaddr, 1 << 30);
+ continue;
+ }
+
+ pde_start_addr = (pdpe & ~0xfff) & a20_mask;
+ walk_pde(list, pde_start_addr, a20_mask, line_addr);
+ }
+}
+
+/* IA-32e Paging */
+static void walk_pml4e(MemoryMappingList *list,
+ target_phys_addr_t pml4e_start_addr, int32_t a20_mask)
+{
+ target_phys_addr_t pml4e_addr, pdpe_start_addr;
+ uint64_t pml4e;
+ target_ulong line_addr;
+ int i;
+
+ for (i = 0; i < 512; i++) {
+ pml4e_addr = (pml4e_start_addr + i * 8) & a20_mask;
+ pml4e = ldq_phys(pml4e_addr);
+ if (!(pml4e & PG_PRESENT_MASK)) {
+ /* not present */
+ continue;
+ }
+
+ line_addr = ((i & 0x1ffULL) << 39) | (0xffffULL << 48);
+ pdpe_start_addr = (pml4e & ~0xfff) & a20_mask;
+ walk_pdpe(list, pdpe_start_addr, a20_mask, line_addr);
+ }
+}
+#endif
+
+int cpu_get_memory_mapping(MemoryMappingList *list, CPUState *env)
+{
+ if (!(env->cr[0] & CR0_PG_MASK)) {
+ /* paging is disabled */
+ return 0;
+ }
+
+ if (env->cr[4] & CR4_PAE_MASK) {
+#ifdef TARGET_X86_64
+ if (env->hflags & HF_LMA_MASK) {
+ target_phys_addr_t pml4e_addr;
+
+ pml4e_addr = (env->cr[3] & ~0xfff) & env->a20_mask;
+ walk_pml4e(list, pml4e_addr, env->a20_mask);
+ } else
+#endif
+ {
+ target_phys_addr_t pdpe_addr;
+
+ pdpe_addr = (env->cr[3] & ~0x1f) & env->a20_mask;
+ walk_pdpe2(list, pdpe_addr, env->a20_mask);
+ }
+ } else {
+ target_phys_addr_t pde_addr;
+ bool pse;
+
+ pde_addr = (env->cr[3] & ~0xfff) & env->a20_mask;
+ pse = !!(env->cr[4] & CR4_PSE_MASK);
+ walk_pde2(list, pde_addr, env->a20_mask, pse);
+ }
+
+ return 0;
+}
--
1.7.1
* [Qemu-devel] [RFC][PATCH 04/14 v9] Add API to check whether paging mode is enabled
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (2 preceding siblings ...)
2012-03-14 2:06 ` [Qemu-devel] [RFC][PATCH 03/14 v9] implement cpu_get_memory_mapping() Wen Congyang
@ 2012-03-14 2:07 ` Wen Congyang
2012-03-14 2:07 ` [Qemu-devel] [RFC][PATCH 05/14 v9] Add API to get memory mapping Wen Congyang
` (10 subsequent siblings)
14 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:07 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
This API will be used in the following patch.
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
cpu-all.h | 6 ++++++
target-i386/arch_memory_mapping.c | 7 ++++++-
2 files changed, 12 insertions(+), 1 deletions(-)
diff --git a/cpu-all.h b/cpu-all.h
index 786dbea..4fe7174 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -519,11 +519,17 @@ int cpu_memory_rw_debug(CPUState *env, target_ulong addr,
#if defined(CONFIG_HAVE_GET_MEMORY_MAPPING)
int cpu_get_memory_mapping(MemoryMappingList *list, CPUState *env);
+bool cpu_paging_enabled(CPUState *env);
#else
static inline int cpu_get_memory_mapping(MemoryMappingList *list, CPUState *env)
{
return -1;
}
+
+static inline bool cpu_paging_enabled(CPUState *env)
+{
+ return true;
+}
#endif
#endif /* CPU_ALL_H */
diff --git a/target-i386/arch_memory_mapping.c b/target-i386/arch_memory_mapping.c
index 10d9b2c..824f293 100644
--- a/target-i386/arch_memory_mapping.c
+++ b/target-i386/arch_memory_mapping.c
@@ -233,7 +233,7 @@ static void walk_pml4e(MemoryMappingList *list,
int cpu_get_memory_mapping(MemoryMappingList *list, CPUState *env)
{
- if (!(env->cr[0] & CR0_PG_MASK)) {
+ if (!cpu_paging_enabled(env)) {
/* paging is disabled */
return 0;
}
@@ -264,3 +264,8 @@ int cpu_get_memory_mapping(MemoryMappingList *list, CPUState *env)
return 0;
}
+
+bool cpu_paging_enabled(CPUState *env)
+{
+ return env->cr[0] & CR0_PG_MASK;
+}
--
1.7.1
* [Qemu-devel] [RFC][PATCH 05/14 v9] Add API to get memory mapping
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (3 preceding siblings ...)
2012-03-14 2:07 ` [Qemu-devel] [RFC][PATCH 04/14 v9] Add API to check whether paging mode is enabled Wen Congyang
@ 2012-03-14 2:07 ` Wen Congyang
2012-03-16 3:52 ` HATAYAMA Daisuke
2012-03-16 6:38 ` HATAYAMA Daisuke
2012-03-14 2:08 ` [Qemu-devel] [RFC][PATCH 06/14 v9] Add API to get memory mapping without do paging Wen Congyang
` (9 subsequent siblings)
14 siblings, 2 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:07 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
Add an API to get all virtual-to-physical address mappings.
If the guest doesn't use paging, the virtual address is equal to the physical
address. The virtual-to-physical mapping is meant for gdb users, and it does not
include memory that is not referenced by the page table. So if you want to use
crash to analyze the vmcore, please do not specify the -p option.
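A minimal caller sketch (QEMU-internal; the -1 failure case is left as a comment):

#include "cpu-all.h"
#include "memory_mapping.h"

static void guest_memory_mapping_example(void)
{
    MemoryMappingList list;
    MemoryMapping *m;
    int ret;

    memory_mapping_list_init(&list);
    ret = qemu_get_guest_memory_mapping(&list);
    if (ret == -2) {
        /* unsupported target; the next patch adds a fallback that maps
         * all physical memory without walking the page tables */
    } else if (ret == 0) {
        QTAILQ_FOREACH(m, &list.head, next) {
            /* m->virt_addr, m->phys_addr and m->length describe one
             * contiguous range; each becomes a PT_LOAD in the vmcore */
        }
    } /* ret == -1: walking the guest page tables failed */
    memory_mapping_list_free(&list);
}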
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
memory_mapping.c | 36 ++++++++++++++++++++++++++++++++++++
memory_mapping.h | 8 ++++++++
2 files changed, 44 insertions(+), 0 deletions(-)
diff --git a/memory_mapping.c b/memory_mapping.c
index 718f271..2ae8160 100644
--- a/memory_mapping.c
+++ b/memory_mapping.c
@@ -164,3 +164,39 @@ void memory_mapping_list_init(MemoryMappingList *list)
list->last_mapping = NULL;
QTAILQ_INIT(&list->head);
}
+
+int qemu_get_guest_memory_mapping(MemoryMappingList *list)
+{
+ CPUState *env;
+ RAMBlock *block;
+ ram_addr_t offset, length;
+ int ret;
+ bool paging_mode;
+
+#if defined(CONFIG_HAVE_GET_MEMORY_MAPPING)
+ paging_mode = cpu_paging_enabled(first_cpu);
+ if (paging_mode) {
+ for (env = first_cpu; env != NULL; env = env->next_cpu) {
+ ret = cpu_get_memory_mapping(list, env);
+ if (ret < 0) {
+ return -1;
+ }
+ }
+ return 0;
+ }
+#else
+ return -2;
+#endif
+
+ /*
+ * If the guest doesn't use paging, the virtual address is equal to physical
+ * address.
+ */
+ QLIST_FOREACH(block, &ram_list.blocks, next) {
+ offset = block->offset;
+ length = block->length;
+ create_new_memory_mapping(list, offset, offset, length);
+ }
+
+ return 0;
+}
diff --git a/memory_mapping.h b/memory_mapping.h
index 836b047..ebd7cf6 100644
--- a/memory_mapping.h
+++ b/memory_mapping.h
@@ -44,4 +44,12 @@ void memory_mapping_list_free(MemoryMappingList *list);
void memory_mapping_list_init(MemoryMappingList *list);
+/*
+ * Return value:
+ * 0: success
+ * -1: failed
+ * -2: unsupported
+ */
+int qemu_get_guest_memory_mapping(MemoryMappingList *list);
+
#endif
--
1.7.1
* [Qemu-devel] [RFC][PATCH 06/14 v9] Add API to get memory mapping without do paging
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (4 preceding siblings ...)
2012-03-14 2:07 ` [Qemu-devel] [RFC][PATCH 05/14 v9] Add API to get memory mapping Wen Congyang
@ 2012-03-14 2:08 ` Wen Congyang
2012-03-14 2:08 ` [Qemu-devel] [RFC][PATCH 07/14 v9] target-i386: Add API to write elf notes to core file Wen Congyang
` (8 subsequent siblings)
14 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:08 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
crash does not need the virtual-to-physical address mapping, and that mapping does
not include memory that is not referenced by the page table. Since crash does not
use the virtual address, we can create a mapping for all physical memory (the
virtual address is always 0). This patch provides an API to do this, and it will
be used in the following patch.
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
memory_mapping.c | 9 +++++++++
memory_mapping.h | 3 +++
2 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/memory_mapping.c b/memory_mapping.c
index 2ae8160..8dd0750 100644
--- a/memory_mapping.c
+++ b/memory_mapping.c
@@ -200,3 +200,12 @@ int qemu_get_guest_memory_mapping(MemoryMappingList *list)
return 0;
}
+
+void qemu_get_guest_simple_memory_mapping(MemoryMappingList *list)
+{
+ RAMBlock *block;
+
+ QLIST_FOREACH(block, &ram_list.blocks, next) {
+ create_new_memory_mapping(list, block->offset, 0, block->length);
+ }
+}
diff --git a/memory_mapping.h b/memory_mapping.h
index ebd7cf6..50b1f25 100644
--- a/memory_mapping.h
+++ b/memory_mapping.h
@@ -52,4 +52,7 @@ void memory_mapping_list_init(MemoryMappingList *list);
*/
int qemu_get_guest_memory_mapping(MemoryMappingList *list);
+/* get guest's memory mapping without do paging(virtual address is 0). */
+void qemu_get_guest_simple_memory_mapping(MemoryMappingList *list);
+
#endif
--
1.7.1
* [Qemu-devel] [RFC][PATCH 07/14 v9] target-i386: Add API to write elf notes to core file
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (5 preceding siblings ...)
2012-03-14 2:08 ` [Qemu-devel] [RFC][PATCH 06/14 v9] Add API to get memory mapping without do paging Wen Congyang
@ 2012-03-14 2:08 ` Wen Congyang
2012-03-16 1:17 ` HATAYAMA Daisuke
2012-03-14 2:09 ` [Qemu-devel] [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status " Wen Congyang
` (7 subsequent siblings)
14 siblings, 1 reply; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:08 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
The core file contains the registers' values. These APIs write the registers to
the core file, and they will be called in a following patch.
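A minimal sketch of a write_core_dump_function callback writing to a plain file
descriptor, and of how the note writer might be driven (the fd handling and the
cpuid numbering are illustrative, not part of this patch):

#include <unistd.h>
#include "cpu-all.h"

static int fd_write(target_phys_addr_t offset, void *buf, size_t size,
                    void *opaque)
{
    int fd = *(int *)opaque;
    ssize_t n;

    if (lseek(fd, offset, SEEK_SET) < 0) {
        return -1;
    }
    n = write(fd, buf, size);   /* a real caller should retry short writes */
    return (n < 0 || (size_t)n != size) ? -1 : 0;
}

/* write one NT_PRSTATUS note per vcpu, advancing 'offset' as we go */
static int write_all_prstatus_notes(int fd, target_phys_addr_t offset)
{
    CPUState *env;
    int id = 1;

    for (env = first_cpu; env != NULL; env = env->next_cpu) {
        if (cpu_write_elf64_note(fd_write, env, id++, &offset, &fd) < 0) {
            return -1;
        }
    }
    return 0;
}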
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
Makefile.target | 1 +
configure | 4 +
cpu-all.h | 23 +++++
target-i386/arch_dump.c | 249 +++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 277 insertions(+), 0 deletions(-)
create mode 100644 target-i386/arch_dump.c
diff --git a/Makefile.target b/Makefile.target
index e05a7dc..c81c4fa 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -213,6 +213,7 @@ obj-$(CONFIG_NO_KVM) += kvm-stub.o
obj-$(CONFIG_VGA) += vga.o
obj-y += memory.o savevm.o
obj-y += memory_mapping.o
+obj-$(CONFIG_HAVE_CORE_DUMP) += arch_dump.o
LIBS+=-lz
obj-i386-$(CONFIG_KVM) += hyperv.o
diff --git a/configure b/configure
index be10710..6900534 100755
--- a/configure
+++ b/configure
@@ -3671,6 +3671,10 @@ if test "$target_softmmu" = "yes" ; then
if test "$smartcard_nss" = "yes" ; then
echo "subdir-$target: subdir-libcacard" >> $config_host_mak
fi
+ case "$target_arch2" in
+ i386|x86_64)
+ echo "CONFIG_HAVE_CORE_DUMP=y" >> $config_target_mak
+ esac
fi
if test "$target_user_only" = "yes" ; then
echo "CONFIG_USER_ONLY=y" >> $config_target_mak
diff --git a/cpu-all.h b/cpu-all.h
index 4fe7174..1d681f8 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -532,4 +532,27 @@ static inline bool cpu_paging_enabled(CPUState *env)
}
#endif
+typedef int (*write_core_dump_function)
+ (target_phys_addr_t offset, void *buf, size_t size, void *opaque);
+#if defined(CONFIG_HAVE_CORE_DUMP)
+int cpu_write_elf64_note(write_core_dump_function f, CPUState *env, int cpuid,
+ target_phys_addr_t *offset, void *opaque);
+int cpu_write_elf32_note(write_core_dump_function f, CPUState *env, int cpuid,
+ target_phys_addr_t *offset, void *opaque);
+#else
+static inline int cpu_write_elf64_note(write_core_dump_function f,
+ CPUState *env, int cpuid,
+ target_phys_addr_t *offset, void *opaque)
+{
+ return -1;
+}
+
+static inline int cpu_write_elf32_note(write_core_dump_function f,
+ CPUState *env, int cpuid,
+ target_phys_addr_t *offset, void *opaque)
+{
+ return -1;
+}
+#endif
+
#endif /* CPU_ALL_H */
diff --git a/target-i386/arch_dump.c b/target-i386/arch_dump.c
new file mode 100644
index 0000000..3239c40
--- /dev/null
+++ b/target-i386/arch_dump.c
@@ -0,0 +1,249 @@
+/*
+ * i386 memory mapping
+ *
+ * Copyright Fujitsu, Corp. 2011
+ *
+ * Authors:
+ * Wen Congyang <wency@cn.fujitsu.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#include "cpu.h"
+#include "cpu-all.h"
+#include "elf.h"
+
+#ifdef TARGET_X86_64
+typedef struct {
+ target_ulong r15, r14, r13, r12, rbp, rbx, r11, r10;
+ target_ulong r9, r8, rax, rcx, rdx, rsi, rdi, orig_rax;
+ target_ulong rip, cs, eflags;
+ target_ulong rsp, ss;
+ target_ulong fs_base, gs_base;
+ target_ulong ds, es, fs, gs;
+} x86_64_user_regs_struct;
+
+static int x86_64_write_elf64_note(write_core_dump_function f, CPUState *env,
+ int id, target_phys_addr_t *offset,
+ void *opaque)
+{
+ x86_64_user_regs_struct regs;
+ Elf64_Nhdr *note;
+ char *buf;
+ int descsz, note_size, name_size = 5;
+ const char *name = "CORE";
+ int ret;
+
+ regs.r15 = env->regs[15];
+ regs.r14 = env->regs[14];
+ regs.r13 = env->regs[13];
+ regs.r12 = env->regs[12];
+ regs.r11 = env->regs[11];
+ regs.r10 = env->regs[10];
+ regs.r9 = env->regs[9];
+ regs.r8 = env->regs[8];
+ regs.rbp = env->regs[R_EBP];
+ regs.rsp = env->regs[R_ESP];
+ regs.rdi = env->regs[R_EDI];
+ regs.rsi = env->regs[R_ESI];
+ regs.rdx = env->regs[R_EDX];
+ regs.rcx = env->regs[R_ECX];
+ regs.rbx = env->regs[R_EBX];
+ regs.rax = env->regs[R_EAX];
+ regs.rip = env->eip;
+ regs.eflags = env->eflags;
+
+ regs.orig_rax = 0; /* FIXME */
+ regs.cs = env->segs[R_CS].selector;
+ regs.ss = env->segs[R_SS].selector;
+ regs.fs_base = env->segs[R_FS].base;
+ regs.gs_base = env->segs[R_GS].base;
+ regs.ds = env->segs[R_DS].selector;
+ regs.es = env->segs[R_ES].selector;
+ regs.fs = env->segs[R_FS].selector;
+ regs.gs = env->segs[R_GS].selector;
+
+ descsz = 336; /* sizeof(prstatus_t) is 336 on x86_64 box */
+ note_size = ((sizeof(Elf64_Nhdr) + 3) / 4 + (name_size + 3) / 4 +
+ (descsz + 3) / 4) * 4;
+ note = g_malloc(note_size);
+
+ memset(note, 0, note_size);
+ note->n_namesz = cpu_to_le32(name_size);
+ note->n_descsz = cpu_to_le32(descsz);
+ note->n_type = cpu_to_le32(NT_PRSTATUS);
+ buf = (char *)note;
+ buf += ((sizeof(Elf64_Nhdr) + 3) / 4) * 4;
+ memcpy(buf, name, name_size);
+ buf += ((name_size + 3) / 4) * 4;
+ memcpy(buf + 32, &id, 4); /* pr_pid */
+ buf += descsz - sizeof(x86_64_user_regs_struct)-sizeof(target_ulong);
+ memcpy(buf, &regs, sizeof(x86_64_user_regs_struct));
+
+ ret = f(*offset, note, note_size, opaque);
+ g_free(note);
+ if (ret < 0) {
+ return -1;
+ }
+
+ *offset += note_size;
+
+ return 0;
+}
+#endif
+
+typedef struct {
+ uint32_t ebx, ecx, edx, esi, edi, ebp, eax;
+ unsigned short ds, __ds, es, __es;
+ unsigned short fs, __fs, gs, __gs;
+ uint32_t orig_eax, eip;
+ unsigned short cs, __cs;
+ uint32_t eflags, esp;
+ unsigned short ss, __ss;
+} x86_user_regs_struct;
+
+static int x86_write_elf64_note(write_core_dump_function f, CPUState *env,
+ int id, target_phys_addr_t *offset,
+ void *opaque)
+{
+ x86_user_regs_struct regs;
+ Elf64_Nhdr *note;
+ char *buf;
+ int descsz, note_size, name_size = 5;
+ const char *name = "CORE";
+ int ret;
+
+ regs.ebp = env->regs[R_EBP] & 0xffffffff;
+ regs.esp = env->regs[R_ESP] & 0xffffffff;
+ regs.edi = env->regs[R_EDI] & 0xffffffff;
+ regs.esi = env->regs[R_ESI] & 0xffffffff;
+ regs.edx = env->regs[R_EDX] & 0xffffffff;
+ regs.ecx = env->regs[R_ECX] & 0xffffffff;
+ regs.ebx = env->regs[R_EBX] & 0xffffffff;
+ regs.eax = env->regs[R_EAX] & 0xffffffff;
+ regs.eip = env->eip & 0xffffffff;
+ regs.eflags = env->eflags & 0xffffffff;
+
+ regs.cs = env->segs[R_CS].selector;
+ regs.__cs = 0;
+ regs.ss = env->segs[R_SS].selector;
+ regs.__ss = 0;
+ regs.ds = env->segs[R_DS].selector;
+ regs.__ds = 0;
+ regs.es = env->segs[R_ES].selector;
+ regs.__es = 0;
+ regs.fs = env->segs[R_FS].selector;
+ regs.__fs = 0;
+ regs.gs = env->segs[R_GS].selector;
+ regs.__gs = 0;
+
+ descsz = 144; /* sizeof(prstatus_t) is 144 on x86 box */
+ note_size = ((sizeof(Elf64_Nhdr) + 3) / 4 + (name_size + 3) / 4 +
+ (descsz + 3) / 4) * 4;
+ note = g_malloc(note_size);
+
+ memset(note, 0, note_size);
+ note->n_namesz = cpu_to_le32(name_size);
+ note->n_descsz = cpu_to_le32(descsz);
+ note->n_type = cpu_to_le32(NT_PRSTATUS);
+ buf = (char *)note;
+ buf += ((sizeof(Elf64_Nhdr) + 3) / 4) * 4;
+ memcpy(buf, name, name_size);
+ buf += ((name_size + 3) / 4) * 4;
+ memcpy(buf + 24, &id, 4); /* pr_pid */
+ buf += descsz - sizeof(x86_user_regs_struct)-4;
+ memcpy(buf, &regs, sizeof(x86_user_regs_struct));
+
+ ret = f(*offset, note, note_size, opaque);
+ g_free(note);
+ if (ret < 0) {
+ return -1;
+ }
+
+ *offset += note_size;
+
+ return 0;
+}
+
+int cpu_write_elf64_note(write_core_dump_function f, CPUState *env, int cpuid,
+ target_phys_addr_t *offset, void *opaque)
+{
+ int ret;
+#ifdef TARGET_X86_64
+ bool lma = !!(first_cpu->hflags & HF_LMA_MASK);
+
+ if (lma) {
+ ret = x86_64_write_elf64_note(f, env, cpuid, offset, opaque);
+ } else {
+#endif
+ ret = x86_write_elf64_note(f, env, cpuid, offset, opaque);
+#ifdef TARGET_X86_64
+ }
+#endif
+
+ return ret;
+}
+
+int cpu_write_elf32_note(write_core_dump_function f, CPUState *env, int cpuid,
+ target_phys_addr_t *offset, void *opaque)
+{
+ x86_user_regs_struct regs;
+ Elf32_Nhdr *note;
+ char *buf;
+ int descsz, note_size, name_size = 5;
+ const char *name = "CORE";
+ int ret;
+
+ regs.ebp = env->regs[R_EBP] & 0xffffffff;
+ regs.esp = env->regs[R_ESP] & 0xffffffff;
+ regs.edi = env->regs[R_EDI] & 0xffffffff;
+ regs.esi = env->regs[R_ESI] & 0xffffffff;
+ regs.edx = env->regs[R_EDX] & 0xffffffff;
+ regs.ecx = env->regs[R_ECX] & 0xffffffff;
+ regs.ebx = env->regs[R_EBX] & 0xffffffff;
+ regs.eax = env->regs[R_EAX] & 0xffffffff;
+ regs.eip = env->eip & 0xffffffff;
+ regs.eflags = env->eflags & 0xffffffff;
+
+ regs.cs = env->segs[R_CS].selector;
+ regs.__cs = 0;
+ regs.ss = env->segs[R_SS].selector;
+ regs.__ss = 0;
+ regs.ds = env->segs[R_DS].selector;
+ regs.__ds = 0;
+ regs.es = env->segs[R_ES].selector;
+ regs.__es = 0;
+ regs.fs = env->segs[R_FS].selector;
+ regs.__fs = 0;
+ regs.gs = env->segs[R_GS].selector;
+ regs.__gs = 0;
+
+ descsz = 144; /* sizeof(prstatus_t) is 144 on x86 box */
+ note_size = ((sizeof(Elf32_Nhdr) + 3) / 4 + (name_size + 3) / 4 +
+ (descsz + 3) / 4) * 4;
+ note = g_malloc(note_size);
+
+ memset(note, 0, note_size);
+ note->n_namesz = cpu_to_le32(name_size);
+ note->n_descsz = cpu_to_le32(descsz);
+ note->n_type = cpu_to_le32(NT_PRSTATUS);
+ buf = (char *)note;
+ buf += ((sizeof(Elf32_Nhdr) + 3) / 4) * 4;
+ memcpy(buf, name, name_size);
+ buf += ((name_size + 3) / 4) * 4;
+ memcpy(buf + 24, &cpuid, 4); /* pr_pid */
+ buf += descsz - sizeof(x86_user_regs_struct)-4;
+ memcpy(buf, &regs, sizeof(x86_user_regs_struct));
+
+ ret = f(*offset, note, note_size, opaque);
+ g_free(note);
+ if (ret < 0) {
+ return -1;
+ }
+
+ *offset += note_size;
+
+ return 0;
+}
--
1.7.1
* [Qemu-devel] [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status to core file
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (6 preceding siblings ...)
2012-03-14 2:08 ` [Qemu-devel] [RFC][PATCH 07/14 v9] target-i386: Add API to write elf notes to core file Wen Congyang
@ 2012-03-14 2:09 ` Wen Congyang
2012-03-16 1:48 ` HATAYAMA Daisuke
2012-03-14 2:09 ` [Qemu-devel] [RFC][PATCH 09/14 v9] target-i386: add API to get dump info Wen Congyang
` (6 subsequent siblings)
14 siblings, 1 reply; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:09 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
The core file has the registers' values, but it does not include all of them.
Store the CPU state in a QEMU note so that the user can get more information
from the vmcore. If you change QEMUCPUState, please bump QEMUCPUSTATE_VERSION.
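For tool authors, a rough consumer-side sketch of recognizing the QEMU note in an
ELF64 vmcore and checking the version field mentioned above (assumes a little-endian
host, matching how the note is written here; buffer bounds checks omitted):

#include <elf.h>
#include <stdint.h>
#include <string.h>

static int is_qemu_cpustate_note(const unsigned char *note)
{
    const Elf64_Nhdr *nhdr = (const Elf64_Nhdr *)note;
    size_t name_off = ((sizeof(Elf64_Nhdr) + 3) / 4) * 4;
    size_t desc_off = name_off + ((nhdr->n_namesz + 3) / 4) * 4;
    uint32_t version;

    if (nhdr->n_namesz != 5 || memcmp(note + name_off, "QEMU", 5) != 0) {
        return 0;
    }
    /* QEMUCPUState begins with its version and size fields */
    memcpy(&version, note + desc_off, sizeof(version));
    return version == 1;    /* QEMUCPUSTATE_VERSION in this patch */
}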
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
cpu-all.h | 20 ++++++
target-i386/arch_dump.c | 150 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 170 insertions(+), 0 deletions(-)
diff --git a/cpu-all.h b/cpu-all.h
index 1d681f8..f30c822 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -539,6 +539,10 @@ int cpu_write_elf64_note(write_core_dump_function f, CPUState *env, int cpuid,
target_phys_addr_t *offset, void *opaque);
int cpu_write_elf32_note(write_core_dump_function f, CPUState *env, int cpuid,
target_phys_addr_t *offset, void *opaque);
+int cpu_write_elf64_qemunote(write_core_dump_function f, CPUState *env,
+ target_phys_addr_t *offset, void *opaque);
+int cpu_write_elf32_qemunote(write_core_dump_function f, CPUState *env,
+ target_phys_addr_t *offset, void *opaque);
#else
static inline int cpu_write_elf64_note(write_core_dump_function f,
CPUState *env, int cpuid,
@@ -553,6 +557,22 @@ static inline int cpu_write_elf32_note(write_core_dump_function f,
{
return -1;
}
+
+static inline int cpu_write_elf64_qemunote(write_core_dump_function f,
+ CPUState *env,
+ target_phys_addr_t *offset,
+ void *opaque);
+{
+ return -1;
+}
+
+static inline int cpu_write_elf32_qemunote(write_core_dump_function f,
+ CPUState *env,
+ target_phys_addr_t *offset,
+ void *opaque)
+{
+ return -1;
+}
#endif
#endif /* CPU_ALL_H */
diff --git a/target-i386/arch_dump.c b/target-i386/arch_dump.c
index 3239c40..274bbec 100644
--- a/target-i386/arch_dump.c
+++ b/target-i386/arch_dump.c
@@ -247,3 +247,153 @@ int cpu_write_elf32_note(write_core_dump_function f, CPUState *env, int cpuid,
return 0;
}
+
+/*
+ * please count up QEMUCPUSTATE_VERSION if you have changed definition of
+ * QEMUCPUState, and modify the tools using this information accordingly.
+ */
+#define QEMUCPUSTATE_VERSION (1)
+
+struct QEMUCPUSegment {
+ uint32_t selector;
+ uint32_t limit;
+ uint32_t flags;
+ uint32_t pad;
+ uint64_t base;
+};
+
+typedef struct QEMUCPUSegment QEMUCPUSegment;
+
+struct QEMUCPUState {
+ uint32_t version;
+ uint32_t size;
+ uint64_t rax, rbx, rcx, rdx, rsi, rdi, rsp, rbp;
+ uint64_t r8, r9, r10, r11, r12, r13, r14, r15;
+ uint64_t rip, rflags;
+ QEMUCPUSegment cs, ds, es, fs, gs, ss;
+ QEMUCPUSegment ldt, tr, gdt, idt;
+ uint64_t cr[5];
+};
+
+typedef struct QEMUCPUState QEMUCPUState;
+
+static void copy_segment(QEMUCPUSegment *d, SegmentCache *s)
+{
+ d->pad = 0;
+ d->selector = s->selector;
+ d->limit = s->limit;
+ d->flags = s->flags;
+ d->base = s->base;
+}
+
+static void qemu_get_cpustate(QEMUCPUState *s, CPUState *env)
+{
+ memset(s, 0, sizeof(QEMUCPUState));
+
+ s->version = QEMUCPUSTATE_VERSION;
+ s->size = sizeof(QEMUCPUState);
+
+ s->rax = env->regs[R_EAX];
+ s->rbx = env->regs[R_EBX];
+ s->rcx = env->regs[R_ECX];
+ s->rdx = env->regs[R_EDX];
+ s->rsi = env->regs[R_ESI];
+ s->rdi = env->regs[R_EDI];
+ s->rsp = env->regs[R_ESP];
+ s->rbp = env->regs[R_EBP];
+#ifdef TARGET_X86_64
+ s->r8 = env->regs[8];
+ s->r9 = env->regs[9];
+ s->r10 = env->regs[10];
+ s->r11 = env->regs[11];
+ s->r12 = env->regs[12];
+ s->r13 = env->regs[13];
+ s->r14 = env->regs[14];
+ s->r15 = env->regs[15];
+#endif
+ s->rip = env->eip;
+ s->rflags = env->eflags;
+
+ copy_segment(&s->cs, &env->segs[R_CS]);
+ copy_segment(&s->ds, &env->segs[R_DS]);
+ copy_segment(&s->es, &env->segs[R_ES]);
+ copy_segment(&s->fs, &env->segs[R_FS]);
+ copy_segment(&s->gs, &env->segs[R_GS]);
+ copy_segment(&s->ss, &env->segs[R_SS]);
+ copy_segment(&s->ldt, &env->ldt);
+ copy_segment(&s->tr, &env->tr);
+ copy_segment(&s->gdt, &env->gdt);
+ copy_segment(&s->idt, &env->idt);
+
+ s->cr[0] = env->cr[0];
+ s->cr[1] = env->cr[1];
+ s->cr[2] = env->cr[2];
+ s->cr[3] = env->cr[3];
+ s->cr[4] = env->cr[4];
+}
+
+static inline int cpu_write_qemu_note(write_core_dump_function f, CPUState *env,
+ target_phys_addr_t *offset, void *opaque,
+ int type)
+{
+ QEMUCPUState state;
+ Elf64_Nhdr *note64;
+ Elf32_Nhdr *note32;
+ void *note;
+ char *buf;
+ int descsz, note_size, name_size = 5, note_head_size;
+ const char *name = "QEMU";
+ int ret;
+
+ qemu_get_cpustate(&state, env);
+
+ descsz = sizeof(state);
+ if (type == 0) {
+ note_head_size = sizeof(Elf32_Nhdr);
+ } else {
+ note_head_size = sizeof(Elf64_Nhdr);
+ }
+ note_size = ((note_head_size + 3) / 4 + (name_size + 3) / 4 +
+ (descsz + 3) / 4) * 4;
+ note = g_malloc(note_size);
+
+ memset(note, 0, note_size);
+ if (type == 0) {
+ note32 = note;
+ note32->n_namesz = cpu_to_le32(name_size);
+ note32->n_descsz = cpu_to_le32(descsz);
+ note32->n_type = 0;
+ } else {
+ note64 = note;
+ note64->n_namesz = cpu_to_le32(name_size);
+ note64->n_descsz = cpu_to_le32(descsz);
+ note64->n_type = 0;
+ }
+ buf = note;
+ buf += ((note_head_size + 3) / 4) * 4;
+ memcpy(buf, name, name_size);
+ buf += ((name_size + 3) / 4) * 4;
+ memcpy(buf, &state, sizeof(state));
+
+ ret = f(*offset, note, note_size, opaque);
+ g_free(note);
+ if (ret < 0) {
+ return -1;
+ }
+
+ *offset += note_size;
+
+ return 0;
+}
+
+int cpu_write_elf64_qemunote(write_core_dump_function f, CPUState *env,
+ target_phys_addr_t *offset, void *opaque)
+{
+ return cpu_write_qemu_note(f, env, offset, opaque, 1);
+}
+
+int cpu_write_elf32_qemunote(write_core_dump_function f, CPUState *env,
+ target_phys_addr_t *offset, void *opaque)
+{
+ return cpu_write_qemu_note(f, env, offset, opaque, 0);
+}
--
1.7.1
* [Qemu-devel] [RFC][PATCH 09/14 v9] target-i386: add API to get dump info
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (7 preceding siblings ...)
2012-03-14 2:09 ` [Qemu-devel] [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status " Wen Congyang
@ 2012-03-14 2:09 ` Wen Congyang
2012-03-14 2:10 ` [Qemu-devel] [RFC][PATCH 10/14 v9] make gdb_id() generally available Wen Congyang
` (5 subsequent siblings)
14 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:09 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
Dump info contains the endianness, ELF class and architecture. The next
patch will use this information to create the vmcore. Note: on an x86 box,
the class is ELFCLASS64 if the memory is larger than 4G.
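A minimal sketch of how a caller (such as the dump code added later in this series)
can use this API to choose the ELF class:

#include "cpu-all.h"
#include "dump.h"
#include "elf.h"

static int choose_elf_class(void)
{
    ArchDumpInfo info;

    if (cpu_get_dump_info(&info) < 0) {
        return -1;                  /* this target has no dump support */
    }
    if (info.d_class == ELFCLASS64) {
        /* emit Elf64 headers, byte-swapped according to info.d_endian */
    } else {
        /* emit Elf32 headers */
    }
    return 0;
}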
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
cpu-all.h | 7 +++++++
dump.h | 23 +++++++++++++++++++++++
target-i386/arch_dump.c | 34 ++++++++++++++++++++++++++++++++++
3 files changed, 64 insertions(+), 0 deletions(-)
create mode 100644 dump.h
diff --git a/cpu-all.h b/cpu-all.h
index f30c822..b4a28d9 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -23,6 +23,7 @@
#include "qemu-tls.h"
#include "cpu-common.h"
#include "memory_mapping.h"
+#include "dump.h"
/* some important defines:
*
@@ -543,6 +544,7 @@ int cpu_write_elf64_qemunote(write_core_dump_function f, CPUState *env,
target_phys_addr_t *offset, void *opaque);
int cpu_write_elf32_qemunote(write_core_dump_function f, CPUState *env,
target_phys_addr_t *offset, void *opaque);
+int cpu_get_dump_info(ArchDumpInfo *info);
#else
static inline int cpu_write_elf64_note(write_core_dump_function f,
CPUState *env, int cpuid,
@@ -573,6 +575,11 @@ static inline int cpu_write_elf32_qemunote(write_core_dump_function f,
{
return -1;
}
+
+static inline int cpu_get_dump_info(ArchDumpInfo *info)
+{
+ return -1;
+}
#endif
#endif /* CPU_ALL_H */
diff --git a/dump.h b/dump.h
new file mode 100644
index 0000000..28340cf
--- /dev/null
+++ b/dump.h
@@ -0,0 +1,23 @@
+/*
+ * QEMU dump
+ *
+ * Copyright Fujitsu, Corp. 2011, 2012
+ *
+ * Authors:
+ * Wen Congyang <wency@cn.fujitsu.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef DUMP_H
+#define DUMP_H
+
+typedef struct ArchDumpInfo {
+ int d_machine; /* Architecture */
+ int d_endian; /* ELFDATA2LSB or ELFDATA2MSB */
+ int d_class; /* ELFCLASS32 or ELFCLASS64 */
+} ArchDumpInfo;
+
+#endif
diff --git a/target-i386/arch_dump.c b/target-i386/arch_dump.c
index 274bbec..1518df7 100644
--- a/target-i386/arch_dump.c
+++ b/target-i386/arch_dump.c
@@ -13,6 +13,7 @@
#include "cpu.h"
#include "cpu-all.h"
+#include "dump.h"
#include "elf.h"
#ifdef TARGET_X86_64
@@ -397,3 +398,36 @@ int cpu_write_elf32_qemunote(write_core_dump_function f, CPUState *env,
{
return cpu_write_qemu_note(f, env, offset, opaque, 0);
}
+
+int cpu_get_dump_info(ArchDumpInfo *info)
+{
+ bool lma = false;
+ RAMBlock *block;
+
+#ifdef TARGET_X86_64
+ lma = !!(first_cpu->hflags & HF_LMA_MASK);
+#endif
+
+ if (lma) {
+ info->d_machine = EM_X86_64;
+ } else {
+ info->d_machine = EM_386;
+ }
+ info->d_endian = ELFDATA2LSB;
+
+ if (lma) {
+ info->d_class = ELFCLASS64;
+ } else {
+ info->d_class = ELFCLASS32;
+
+ QLIST_FOREACH(block, &ram_list.blocks, next) {
+ if (block->offset + block->length > UINT_MAX) {
+ /* The memory size is greater than 4G */
+ info->d_class = ELFCLASS64;
+ break;
+ }
+ }
+ }
+
+ return 0;
+}
--
1.7.1
* [Qemu-devel] [RFC][PATCH 10/14 v9] make gdb_id() generally available
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (8 preceding siblings ...)
2012-03-14 2:09 ` [Qemu-devel] [RFC][PATCH 09/14 v9] target-i386: add API to get dump info Wen Congyang
@ 2012-03-14 2:10 ` Wen Congyang
2012-03-14 2:11 ` [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory Wen Congyang
` (4 subsequent siblings)
14 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:10 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
The following patch also needs this API, so make it generally available
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
gdbstub.c | 9 ---------
gdbstub.h | 9 +++++++++
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/gdbstub.c b/gdbstub.c
index ef95ac2..7522b42 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -1939,15 +1939,6 @@ static void gdb_set_cpu_pc(GDBState *s, target_ulong pc)
#endif
}
-static inline int gdb_id(CPUState *env)
-{
-#if defined(CONFIG_USER_ONLY) && defined(CONFIG_USE_NPTL)
- return env->host_tid;
-#else
- return env->cpu_index + 1;
-#endif
-}
-
static CPUState *find_cpu(uint32_t thread_id)
{
CPUState *env;
diff --git a/gdbstub.h b/gdbstub.h
index d82334f..f30bfe8 100644
--- a/gdbstub.h
+++ b/gdbstub.h
@@ -30,6 +30,15 @@ void gdb_register_coprocessor(CPUState *env,
gdb_reg_cb get_reg, gdb_reg_cb set_reg,
int num_regs, const char *xml, int g_pos);
+static inline int gdb_id(CPUState *env)
+{
+#if defined(CONFIG_USER_ONLY) && defined(CONFIG_USE_NPTL)
+ return env->host_tid;
+#else
+ return env->cpu_index + 1;
+#endif
+}
+
#endif
#ifdef CONFIG_USER_ONLY
--
1.7.1
* [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (9 preceding siblings ...)
2012-03-14 2:10 ` [Qemu-devel] [RFC][PATCH 10/14 v9] make gdb_id() generally available Wen Congyang
@ 2012-03-14 2:11 ` Wen Congyang
2012-03-14 17:18 ` Luiz Capitulino
2012-03-16 3:23 ` HATAYAMA Daisuke
2012-03-14 2:12 ` [Qemu-devel] [RFC][PATCH 12/14 v9] support to cancel the current dumping Wen Congyang
` (3 subsequent siblings)
14 siblings, 2 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:11 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
The command's usage:
dump [-p] file
file should start with "file:" (the file's path) or "fd:" (the fd's name).
Note:
1. If you want to use gdb to analyse the core, please specify the -p option.
2. This command doesn't support an fd that is associated with a pipe,
socket, or FIFO (lseek will fail with such an fd).
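An example monitor session (the file path and fd name are placeholders; 'getfd' is
the existing monitor command for passing a descriptor in over a UNIX-socket monitor):

    (qemu) dump -p file:/tmp/vmcore
    (qemu) getfd dumpfd
    (qemu) dump fd:dumpfd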
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
Makefile.target | 2 +-
dump.c | 714 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
elf.h | 5 +
hmp-commands.hx | 21 ++
hmp.c | 10 +
hmp.h | 1 +
qapi-schema.json | 14 +
qmp-commands.hx | 34 +++
8 files changed, 800 insertions(+), 1 deletions(-)
create mode 100644 dump.c
diff --git a/Makefile.target b/Makefile.target
index c81c4fa..287fbe7 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -213,7 +213,7 @@ obj-$(CONFIG_NO_KVM) += kvm-stub.o
obj-$(CONFIG_VGA) += vga.o
obj-y += memory.o savevm.o
obj-y += memory_mapping.o
-obj-$(CONFIG_HAVE_CORE_DUMP) += arch_dump.o
+obj-$(CONFIG_HAVE_CORE_DUMP) += arch_dump.o dump.o
LIBS+=-lz
obj-i386-$(CONFIG_KVM) += hyperv.o
diff --git a/dump.c b/dump.c
new file mode 100644
index 0000000..42e1681
--- /dev/null
+++ b/dump.c
@@ -0,0 +1,714 @@
+/*
+ * QEMU dump
+ *
+ * Copyright Fujitsu, Corp. 2011
+ *
+ * Authors:
+ * Wen Congyang <wency@cn.fujitsu.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu-common.h"
+#include <unistd.h>
+#include "elf.h"
+#include <sys/procfs.h>
+#include <glib.h>
+#include "cpu.h"
+#include "cpu-all.h"
+#include "targphys.h"
+#include "monitor.h"
+#include "kvm.h"
+#include "dump.h"
+#include "sysemu.h"
+#include "bswap.h"
+#include "memory_mapping.h"
+#include "error.h"
+#include "qmp-commands.h"
+#include "gdbstub.h"
+
+static inline uint16_t cpu_convert_to_target16(uint16_t val, int endian)
+{
+ if (endian == ELFDATA2LSB) {
+ val = cpu_to_le16(val);
+ } else {
+ val = cpu_to_be16(val);
+ }
+
+ return val;
+}
+
+static inline uint32_t cpu_convert_to_target32(uint32_t val, int endian)
+{
+ if (endian == ELFDATA2LSB) {
+ val = cpu_to_le32(val);
+ } else {
+ val = cpu_to_be32(val);
+ }
+
+ return val;
+}
+
+static inline uint64_t cpu_convert_to_target64(uint64_t val, int endian)
+{
+ if (endian == ELFDATA2LSB) {
+ val = cpu_to_le64(val);
+ } else {
+ val = cpu_to_be64(val);
+ }
+
+ return val;
+}
+
+enum {
+ DUMP_STATE_ERROR,
+ DUMP_STATE_SETUP,
+ DUMP_STATE_CANCELLED,
+ DUMP_STATE_ACTIVE,
+ DUMP_STATE_COMPLETED,
+};
+
+typedef struct DumpState {
+ ArchDumpInfo dump_info;
+ MemoryMappingList list;
+ uint16_t phdr_num;
+ uint32_t sh_info;
+ bool have_section;
+ int state;
+ bool resume;
+ char *error;
+ target_phys_addr_t memory_offset;
+ write_core_dump_function f;
+ void (*cleanup)(void *opaque);
+ void *opaque;
+} DumpState;
+
+static DumpState *dump_get_current(void)
+{
+ static DumpState current_dump = {
+ .state = DUMP_STATE_SETUP,
+ };
+
+ return &current_dump;
+}
+
+static int dump_cleanup(DumpState *s)
+{
+ int ret = 0;
+
+ memory_mapping_list_free(&s->list);
+ s->cleanup(s->opaque);
+ if (s->resume) {
+ vm_start();
+ }
+
+ return ret;
+}
+
+static void dump_error(DumpState *s, const char *reason)
+{
+ s->state = DUMP_STATE_ERROR;
+ s->error = g_strdup(reason);
+ dump_cleanup(s);
+}
+
+static int write_elf64_header(DumpState *s)
+{
+ Elf64_Ehdr elf_header;
+ int ret;
+ int endian = s->dump_info.d_endian;
+
+ memset(&elf_header, 0, sizeof(Elf64_Ehdr));
+ memcpy(&elf_header, ELFMAG, 4);
+ elf_header.e_ident[EI_CLASS] = ELFCLASS64;
+ elf_header.e_ident[EI_DATA] = s->dump_info.d_endian;
+ elf_header.e_ident[EI_VERSION] = EV_CURRENT;
+ elf_header.e_type = cpu_convert_to_target16(ET_CORE, endian);
+ elf_header.e_machine = cpu_convert_to_target16(s->dump_info.d_machine,
+ endian);
+ elf_header.e_version = cpu_convert_to_target32(EV_CURRENT, endian);
+ elf_header.e_ehsize = cpu_convert_to_target16(sizeof(elf_header), endian);
+ elf_header.e_phoff = cpu_convert_to_target64(sizeof(Elf64_Ehdr), endian);
+ elf_header.e_phentsize = cpu_convert_to_target16(sizeof(Elf64_Phdr),
+ endian);
+ elf_header.e_phnum = cpu_convert_to_target16(s->phdr_num, endian);
+ if (s->have_section) {
+ uint64_t shoff = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr) * s->sh_info;
+
+ elf_header.e_shoff = cpu_convert_to_target64(shoff, endian);
+ elf_header.e_shentsize = cpu_convert_to_target16(sizeof(Elf64_Shdr),
+ endian);
+ elf_header.e_shnum = cpu_convert_to_target16(1, endian);
+ }
+
+ ret = s->f(0, &elf_header, sizeof(elf_header), s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write elf header.\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int write_elf32_header(DumpState *s)
+{
+ Elf32_Ehdr elf_header;
+ int ret;
+ int endian = s->dump_info.d_endian;
+
+ memset(&elf_header, 0, sizeof(Elf32_Ehdr));
+ memcpy(&elf_header, ELFMAG, 4);
+ elf_header.e_ident[EI_CLASS] = ELFCLASS32;
+ elf_header.e_ident[EI_DATA] = endian;
+ elf_header.e_ident[EI_VERSION] = EV_CURRENT;
+ elf_header.e_type = cpu_convert_to_target16(ET_CORE, endian);
+ elf_header.e_machine = cpu_convert_to_target16(s->dump_info.d_machine,
+ endian);
+ elf_header.e_version = cpu_convert_to_target32(EV_CURRENT, endian);
+ elf_header.e_ehsize = cpu_convert_to_target16(sizeof(elf_header), endian);
+ elf_header.e_phoff = cpu_convert_to_target32(sizeof(Elf32_Ehdr), endian);
+ elf_header.e_phentsize = cpu_convert_to_target16(sizeof(Elf32_Phdr),
+ endian);
+ elf_header.e_phnum = cpu_convert_to_target16(s->phdr_num, endian);
+ if (s->have_section) {
+ uint32_t shoff = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr) * s->sh_info;
+
+ elf_header.e_shoff = cpu_convert_to_target32(shoff, endian);
+ elf_header.e_shentsize = cpu_convert_to_target16(sizeof(Elf32_Shdr),
+ endian);
+ elf_header.e_shnum = cpu_convert_to_target16(1, endian);
+ }
+
+ ret = s->f(0, &elf_header, sizeof(elf_header), s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write elf header.\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int write_elf64_load(DumpState *s, MemoryMapping *memory_mapping,
+ int phdr_index, target_phys_addr_t offset)
+{
+ Elf64_Phdr phdr;
+ off_t phdr_offset;
+ int ret;
+ int endian = s->dump_info.d_endian;
+
+ memset(&phdr, 0, sizeof(Elf64_Phdr));
+ phdr.p_type = cpu_convert_to_target32(PT_LOAD, endian);
+ phdr.p_offset = cpu_convert_to_target64(offset, endian);
+ phdr.p_paddr = cpu_convert_to_target64(memory_mapping->phys_addr, endian);
+ if (offset == -1) {
+ phdr.p_filesz = 0;
+ } else {
+ phdr.p_filesz = cpu_convert_to_target64(memory_mapping->length, endian);
+ }
+ phdr.p_memsz = cpu_convert_to_target64(memory_mapping->length, endian);
+ phdr.p_vaddr = cpu_convert_to_target64(memory_mapping->virt_addr, endian);
+
+ phdr_offset = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr)*phdr_index;
+ ret = s->f(phdr_offset, &phdr, sizeof(Elf64_Phdr), s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write program header table.\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int write_elf32_load(DumpState *s, MemoryMapping *memory_mapping,
+ int phdr_index, target_phys_addr_t offset)
+{
+ Elf32_Phdr phdr;
+ off_t phdr_offset;
+ int ret;
+ int endian = s->dump_info.d_endian;
+
+ memset(&phdr, 0, sizeof(Elf32_Phdr));
+ phdr.p_type = cpu_convert_to_target32(PT_LOAD, endian);
+ phdr.p_offset = cpu_convert_to_target32(offset, endian);
+ phdr.p_paddr = cpu_convert_to_target32(memory_mapping->phys_addr, endian);
+ if (offset == -1) {
+ phdr.p_filesz = 0;
+ } else {
+ phdr.p_filesz = cpu_convert_to_target32(memory_mapping->length, endian);
+ }
+ phdr.p_memsz = cpu_convert_to_target32(memory_mapping->length, endian);
+ phdr.p_vaddr = cpu_convert_to_target32(memory_mapping->virt_addr, endian);
+
+ phdr_offset = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr)*phdr_index;
+ ret = s->f(phdr_offset, &phdr, sizeof(Elf32_Phdr), s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write program header table.\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int write_elf64_notes(DumpState *s, int phdr_index,
+ target_phys_addr_t *offset)
+{
+ CPUState *env;
+ int ret;
+ target_phys_addr_t begin = *offset;
+ Elf64_Phdr phdr;
+ off_t phdr_offset;
+ int id;
+ int endian = s->dump_info.d_endian;
+
+ for (env = first_cpu; env != NULL; env = env->next_cpu) {
+ id = gdb_id(env);
+ ret = cpu_write_elf64_note(s->f, env, id, offset, s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write elf notes.\n");
+ return -1;
+ }
+ }
+
+ for (env = first_cpu; env != NULL; env = env->next_cpu) {
+ ret = cpu_write_elf64_qemunote(s->f, env, offset, s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write CPU status.\n");
+ return -1;
+ }
+ }
+
+ memset(&phdr, 0, sizeof(Elf64_Phdr));
+ phdr.p_type = cpu_convert_to_target32(PT_NOTE, endian);
+ phdr.p_offset = cpu_convert_to_target64(begin, endian);
+ phdr.p_paddr = 0;
+ phdr.p_filesz = cpu_convert_to_target64(*offset - begin, endian);
+ phdr.p_memsz = cpu_convert_to_target64(*offset - begin, endian);
+ phdr.p_vaddr = 0;
+
+ phdr_offset = sizeof(Elf64_Ehdr);
+ ret = s->f(phdr_offset, &phdr, sizeof(Elf64_Phdr), s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write program header table.\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int write_elf32_notes(DumpState *s, int phdr_index,
+ target_phys_addr_t *offset)
+{
+ CPUState *env;
+ int ret;
+ target_phys_addr_t begin = *offset;
+ Elf32_Phdr phdr;
+ off_t phdr_offset;
+ int id;
+ int endian = s->dump_info.d_endian;
+
+ for (env = first_cpu; env != NULL; env = env->next_cpu) {
+ id = gdb_id(env);
+ ret = cpu_write_elf32_note(s->f, env, id, offset, s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write elf notes.\n");
+ return -1;
+ }
+ }
+
+ for (env = first_cpu; env != NULL; env = env->next_cpu) {
+ ret = cpu_write_elf32_qemunote(s->f, env, offset, s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write CPU status.\n");
+ return -1;
+ }
+ }
+
+ memset(&phdr, 0, sizeof(Elf32_Phdr));
+ phdr.p_type = cpu_convert_to_target32(PT_NOTE, endian);
+ phdr.p_offset = cpu_convert_to_target32(begin, endian);
+ phdr.p_paddr = 0;
+ phdr.p_filesz = cpu_convert_to_target32(*offset - begin, endian);
+ phdr.p_memsz = cpu_convert_to_target32(*offset - begin, endian);
+ phdr.p_vaddr = 0;
+
+ phdr_offset = sizeof(Elf32_Ehdr);
+ ret = s->f(phdr_offset, &phdr, sizeof(Elf32_Phdr), s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write program header table.\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int write_elf_section(DumpState *s, target_phys_addr_t *offset, int type)
+{
+ Elf32_Shdr shdr32;
+ Elf64_Shdr shdr64;
+ int endian = s->dump_info.d_endian;
+ int shdr_size;
+ void *shdr;
+ int ret;
+
+ if (type == 0) {
+ shdr_size = sizeof(Elf32_Shdr);
+ memset(&shdr32, 0, shdr_size);
+ shdr32.sh_info = cpu_convert_to_target32(s->sh_info, endian);
+ shdr = &shdr32;
+ } else {
+ shdr_size = sizeof(Elf64_Shdr);
+ memset(&shdr64, 0, shdr_size);
+ shdr64.sh_info = cpu_convert_to_target32(s->sh_info, endian);
+ shdr = &shdr64;
+ }
+
+ ret = s->f(*offset, &shdr, shdr_size, s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to write section header table.\n");
+ return -1;
+ }
+
+ *offset += shdr_size;
+ return 0;
+}
+
+static int write_data(DumpState *s, void *buf, int length,
+ target_phys_addr_t *offset)
+{
+ int ret;
+
+ ret = s->f(*offset, buf, length, s->opaque);
+ if (ret < 0) {
+ dump_error(s, "dump: failed to save memory.\n");
+ return -1;
+ }
+
+ *offset += length;
+ return 0;
+}
+
+/* write the memory to vmcore. 1 page per I/O. */
+static int write_memory(DumpState *s, RAMBlock *block,
+ target_phys_addr_t *offset)
+{
+ int i, ret;
+
+ for (i = 0; i < block->length / TARGET_PAGE_SIZE; i++) {
+ ret = write_data(s, block->host + i * TARGET_PAGE_SIZE,
+ TARGET_PAGE_SIZE, offset);
+ if (ret < 0) {
+ return -1;
+ }
+ }
+
+ if ((block->length % TARGET_PAGE_SIZE) != 0) {
+ ret = write_data(s, block->host + i * TARGET_PAGE_SIZE,
+ block->length % TARGET_PAGE_SIZE, offset);
+ if (ret < 0) {
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+/* get the memory's offset in the vmcore */
+static target_phys_addr_t get_offset(target_phys_addr_t phys_addr,
+ target_phys_addr_t memory_offset)
+{
+ RAMBlock *block;
+ target_phys_addr_t offset = memory_offset;
+
+ QLIST_FOREACH(block, &ram_list.blocks, next) {
+ if (phys_addr >= block->offset &&
+ phys_addr < block->offset + block->length) {
+ return phys_addr - block->offset + offset;
+ }
+ offset += block->length;
+ }
+
+ return -1;
+}
+
+/* write elf header, PT_NOTE and elf note to vmcore. */
+static int dump_begin(DumpState *s)
+{
+ target_phys_addr_t offset;
+ int ret;
+
+ s->state = DUMP_STATE_ACTIVE;
+
+ /*
+ * the vmcore's format is:
+ * --------------
+ * | elf header |
+ * --------------
+ * | PT_NOTE |
+ * --------------
+ * | PT_LOAD |
+ * --------------
+ * | ...... |
+ * --------------
+ * | PT_LOAD |
+ * --------------
+ * | sec_hdr |
+ * --------------
+ * | elf note |
+ * --------------
+ * | memory |
+ * --------------
+ *
+ * we only know where the memory is saved after we write elf note into
+ * vmcore.
+ */
+
+ /* write elf header to vmcore */
+ if (s->dump_info.d_class == ELFCLASS64) {
+ ret = write_elf64_header(s);
+ } else {
+ ret = write_elf32_header(s);
+ }
+ if (ret < 0) {
+ return -1;
+ }
+
+ /* write elf section and notes to vmcore */
+ if (s->dump_info.d_class == ELFCLASS64) {
+ if (s->have_section) {
+ offset = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr)*s->sh_info;
+ if (write_elf_section(s, &offset, 1) < 0) {
+ return -1;
+ }
+ } else {
+ offset = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr)*s->phdr_num;
+ }
+ ret = write_elf64_notes(s, 0, &offset);
+ } else {
+ if (s->have_section) {
+ offset = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr)*s->sh_info;
+ if (write_elf_section(s, &offset, 0) < 0) {
+ return -1;
+ }
+ } else {
+ offset = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr)*s->phdr_num;
+ }
+ ret = write_elf32_notes(s, 0, &offset);
+ }
+
+ if (ret < 0) {
+ return -1;
+ }
+
+ s->memory_offset = offset;
+ return 0;
+}
+
+/* write PT_LOAD to vmcore */
+static int dump_completed(DumpState *s)
+{
+ target_phys_addr_t offset;
+ MemoryMapping *memory_mapping;
+ int phdr_index = 1, ret;
+
+ QTAILQ_FOREACH(memory_mapping, &s->list.head, next) {
+ offset = get_offset(memory_mapping->phys_addr, s->memory_offset);
+ if (s->dump_info.d_class == ELFCLASS64) {
+ ret = write_elf64_load(s, memory_mapping, phdr_index++, offset);
+ } else {
+ ret = write_elf32_load(s, memory_mapping, phdr_index++, offset);
+ }
+ if (ret < 0) {
+ return -1;
+ }
+ }
+
+ s->state = DUMP_STATE_COMPLETED;
+ dump_cleanup(s);
+ return 0;
+}
+
+/* write all memory to vmcore */
+static int dump_iterate(DumpState *s)
+{
+ RAMBlock *block;
+ target_phys_addr_t offset = s->memory_offset;
+ int ret;
+
+ /* write all memory to vmcore */
+ QLIST_FOREACH(block, &ram_list.blocks, next) {
+ ret = write_memory(s, block, &offset);
+ if (ret < 0) {
+ return -1;
+ }
+ }
+
+ return dump_completed(s);
+}
+
+static int create_vmcore(DumpState *s)
+{
+ int ret;
+
+ ret = dump_begin(s);
+ if (ret < 0) {
+ return -1;
+ }
+
+ ret = dump_iterate(s);
+ if (ret < 0) {
+ return -1;
+ }
+
+ return 0;
+}
+
+static DumpState *dump_init(bool paging, Error **errp)
+{
+ CPUState *env;
+ DumpState *s = dump_get_current();
+ int ret;
+
+ if (runstate_is_running()) {
+ vm_stop(RUN_STATE_PAUSED);
+ s->resume = true;
+ } else {
+ s->resume = false;
+ }
+ s->state = DUMP_STATE_SETUP;
+ if (s->error) {
+ g_free(s->error);
+ s->error = NULL;
+ }
+
+ /*
+ * get dump info: endian, class and architecture.
+ * If the target architecture is not supported, cpu_get_dump_info() will
+ * return -1.
+ *
+ * if we use kvm, we should synchronize the register before we get dump
+ * info.
+ */
+ for (env = first_cpu; env != NULL; env = env->next_cpu) {
+ cpu_synchronize_state(env);
+ }
+
+ ret = cpu_get_dump_info(&s->dump_info);
+ if (ret < 0) {
+ error_set(errp, QERR_UNSUPPORTED);
+ return NULL;
+ }
+
+ /* get memory mapping */
+ memory_mapping_list_init(&s->list);
+ if (paging) {
+ qemu_get_guest_memory_mapping(&s->list);
+ } else {
+ qemu_get_guest_simple_memory_mapping(&s->list);
+ }
+
+ /*
+ * calculate phdr_num
+ *
+ * the type of phdr->num is uint16_t, so we should avoid overflow
+ */
+ s->phdr_num = 1; /* PT_NOTE */
+ if (s->list.num < (1 << 16) - 2) {
+ s->phdr_num += s->list.num;
+ s->have_section = false;
+ } else {
+ s->have_section = true;
+ s->phdr_num = PN_XNUM;
+
+ /* the type of shdr->sh_info is uint32_t, so we should avoid overflow */
+ if (s->list.num > (1ULL << 32) - 2) {
+ s->sh_info = 0xffffffff;
+ } else {
+ s->sh_info += s->list.num;
+ }
+ }
+
+ return s;
+}
+
+static int fd_write_vmcore(target_phys_addr_t offset, void *buf, size_t size,
+ void *opaque)
+{
+ int fd = (int)(intptr_t)opaque;
+ int ret;
+
+ ret = lseek(fd, offset, SEEK_SET);
+ if (ret < 0) {
+ return -1;
+ }
+
+ ret = write(fd, buf, size);
+ if (ret != size) {
+ return -1;
+ }
+
+ return 0;
+}
+
+static void fd_cleanup(void *opaque)
+{
+ int fd = (int)(intptr_t)opaque;
+
+ if (fd != -1) {
+ close(fd);
+ }
+}
+
+static DumpState *dump_init_fd(int fd, bool paging, Error **errp)
+{
+ DumpState *s = dump_init(paging, errp);
+
+ if (s == NULL) {
+ return NULL;
+ }
+
+ s->f = fd_write_vmcore;
+ s->cleanup = fd_cleanup;
+ s->opaque = (void *)(intptr_t)fd;
+
+ return s;
+}
+
+void qmp_dump(bool paging, const char *file, Error **errp)
+{
+ const char *p;
+ int fd = -1;
+ DumpState *s;
+
+#if !defined(WIN32)
+ if (strstart(file, "fd:", &p)) {
+ fd = qemu_get_fd(p);
+ if (fd == -1) {
+ error_set(errp, QERR_FD_NOT_FOUND, p);
+ return;
+ }
+ }
+#endif
+
+ if (strstart(file, "file:", &p)) {
+ fd = open(p, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR);
+ if (fd < 0) {
+ error_set(errp, QERR_OPEN_FILE_FAILED, p);
+ return;
+ }
+ }
+
+ if (fd == -1) {
+ error_set(errp, QERR_INVALID_PARAMETER, "file");
+ return;
+ }
+
+ s = dump_init_fd(fd, paging, errp);
+ if (!s) {
+ return;
+ }
+
+ if (create_vmcore(s) < 0) {
+ error_set(errp, QERR_IO_ERROR);
+ }
+}
diff --git a/elf.h b/elf.h
index 2e05d34..6a10657 100644
--- a/elf.h
+++ b/elf.h
@@ -1000,6 +1000,11 @@ typedef struct elf64_sym {
#define EI_NIDENT 16
+/* Special value for e_phnum. This indicates that the real number of
+ program headers is too large to fit into e_phnum. Instead the real
+ value is in the field sh_info of section 0. */
+#define PN_XNUM 0xffff
+
typedef struct elf32_hdr{
unsigned char e_ident[EI_NIDENT];
Elf32_Half e_type;
diff --git a/hmp-commands.hx b/hmp-commands.hx
index 6980214..d4cf2e5 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -880,6 +880,27 @@ server will ask the spice/vnc client to automatically reconnect using the
new parameters (if specified) once the vm migration finished successfully.
ETEXI
+#if defined(CONFIG_HAVE_CORE_DUMP)
+ {
+ .name = "dump",
+ .args_type = "paging:-p,file:s",
+ .params = "[-p] file",
+ .help = "dump to file",
+ .user_print = monitor_user_noop,
+ .mhandler.cmd = hmp_dump,
+ },
+
+
+STEXI
+@item dump [-p] @var{file}
+@findex dump
+Dump to @var{file}. The file can be processed with crash or gdb.
+ file: destination file (starting with "file:") or destination file descriptor
+ (starting with "fd:")
+ paging: do paging to get guest's memory mapping
+ETEXI
+#endif
+
{
.name = "snapshot_blkdev",
.args_type = "reuse:-n,device:B,snapshot-file:s?,format:s?",
diff --git a/hmp.c b/hmp.c
index 290c43d..e13b793 100644
--- a/hmp.c
+++ b/hmp.c
@@ -860,3 +860,13 @@ void hmp_block_job_cancel(Monitor *mon, const QDict *qdict)
hmp_handle_error(mon, &error);
}
+
+void hmp_dump(Monitor *mon, const QDict *qdict)
+{
+ Error *errp = NULL;
+ int paging = qdict_get_try_bool(qdict, "paging", 0);
+ const char *file = qdict_get_str(qdict, "file");
+
+ qmp_dump(!!paging, file, &errp);
+ hmp_handle_error(mon, &errp);
+}
diff --git a/hmp.h b/hmp.h
index 5409464..b055e50 100644
--- a/hmp.h
+++ b/hmp.h
@@ -59,5 +59,6 @@ void hmp_block_set_io_throttle(Monitor *mon, const QDict *qdict);
void hmp_block_stream(Monitor *mon, const QDict *qdict);
void hmp_block_job_set_speed(Monitor *mon, const QDict *qdict);
void hmp_block_job_cancel(Monitor *mon, const QDict *qdict);
+void hmp_dump(Monitor *mon, const QDict *qdict);
#endif
diff --git a/qapi-schema.json b/qapi-schema.json
index 04fa84f..81b8c7c 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -1663,3 +1663,17 @@
{ 'command': 'qom-list-types',
'data': { '*implements': 'str', '*abstract': 'bool' },
'returns': [ 'ObjectTypeInfo' ] }
+
+##
+# @dump
+#
+# Dump guest's memory to vmcore.
+#
+# @paging: if true, do paging to get guest's memory mapping
+# @file: the filename or file descriptor of the vmcore.
+#
+# Returns: nothing on success
+#
+# Since: 1.1
+##
+{ 'command': 'dump', 'data': { 'paging': 'bool', 'file': 'str' } }
diff --git a/qmp-commands.hx b/qmp-commands.hx
index dfe8a5b..9e39bd9 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -586,6 +586,40 @@ Example:
EQMP
+#if defined(CONFIG_HAVE_CORE_DUMP)
+ {
+ .name = "dump",
+ .args_type = "paging:-p,file:s",
+ .params = "[-p] file",
+ .help = "dump to file",
+ .user_print = monitor_user_noop,
+ .mhandler.cmd_new = qmp_marshal_input_dump,
+ },
+
+SQMP
+dump
+
+
+Dump to file. The file can be processed with crash or gdb.
+
+Arguments:
+
+- "paging": do paging to get guest's memory mapping (json-bool)
+- "file": destination file(started with "file:") or destination file descriptor
+ (started with "fd:") (json-string)
+
+Example:
+
+-> { "execute": "dump", "arguments": { "file": "fd:dump" } }
+<- { "return": {} }
+
+Notes:
+
+(1) All boolean arguments default to false
+
+EQMP
+#endif
+
{
.name = "netdev_add",
.args_type = "netdev:O",
--
1.7.1
^ permalink raw reply related [flat|nested] 40+ messages in thread
* [Qemu-devel] [RFC][PATCH 12/14 v9] support to cancel the current dumping
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (10 preceding siblings ...)
2012-03-14 2:11 ` [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory Wen Congyang
@ 2012-03-14 2:12 ` Wen Congyang
2012-03-14 17:19 ` Luiz Capitulino
2012-03-14 2:13 ` [Qemu-devel] [RFC][PATCH 13/14 v9] support to query dumping status Wen Congyang
` (2 subsequent siblings)
14 siblings, 1 reply; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:12 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
Add API to allow the user to cancel the current dumping. It can only work after
async dumping is supported.
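The reason it depends on async dumping: with the synchronous command the whole
dump runs inside a single monitor command, so there is no point at which a
cancel could be observed. Once the dump loop yields between iterations,
cancellation becomes a flag check, roughly like this standalone sketch
(illustrative names only, not the patch's code):

    #include <stdio.h>

    enum { ACTIVE, CANCELLED, COMPLETED };

    static int dump_state = ACTIVE;

    /* stand-in for writing one RAM block to the vmcore */
    static void write_one_block(int i)
    {
        printf("wrote block %d\n", i);
        if (i == 1) {
            dump_state = CANCELLED;   /* simulate a cancel request arriving */
        }
    }

    static void dump_iterate_async(int nr_blocks)
    {
        for (int i = 0; i < nr_blocks; i++) {
            if (dump_state == CANCELLED) {
                return;               /* honoured only between iterations */
            }
            write_one_block(i);
        }
        dump_state = COMPLETED;
    }

    int main(void)
    {
        dump_iterate_async(5);
        printf("final state: %d\n", dump_state);
        return 0;
    }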
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
dump.c | 12 ++++++++++++
hmp-commands.hx | 14 ++++++++++++++
hmp.c | 5 +++++
hmp.h | 1 +
qapi-schema.json | 13 +++++++++++++
qmp-commands.hx | 21 +++++++++++++++++++++
6 files changed, 66 insertions(+), 0 deletions(-)
diff --git a/dump.c b/dump.c
index 42e1681..dab0c84 100644
--- a/dump.c
+++ b/dump.c
@@ -712,3 +712,15 @@ void qmp_dump(bool paging, const char *file, Error **errp)
error_set(errp, QERR_IO_ERROR);
}
}
+
+void qmp_dump_cancel(Error **errp)
+{
+ DumpState *s = dump_get_current();
+
+ if (s->state != DUMP_STATE_ACTIVE) {
+ return;
+ }
+
+ s->state = DUMP_STATE_CANCELLED;
+ dump_cleanup(s);
+}
diff --git a/hmp-commands.hx b/hmp-commands.hx
index d4cf2e5..313f876 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -902,6 +902,20 @@ ETEXI
#endif
{
+ .name = "dump_cancel",
+ .args_type = "",
+ .params = "",
+ .help = "cancel the current VM dumping",
+ .mhandler.cmd = hmp_dump_cancel,
+ },
+
+STEXI
+@item dump_cancel
+@findex dump_cancel
+Cancel the current VM dumping.
+ETEXI
+
+ {
.name = "snapshot_blkdev",
.args_type = "reuse:-n,device:B,snapshot-file:s?,format:s?",
.params = "[-n] device [new-image-file] [format]",
diff --git a/hmp.c b/hmp.c
index e13b793..31e85d3 100644
--- a/hmp.c
+++ b/hmp.c
@@ -870,3 +870,8 @@ void hmp_dump(Monitor *mon, const QDict *qdict)
qmp_dump(!!paging, file, &errp);
hmp_handle_error(mon, &errp);
}
+
+void hmp_dump_cancel(Monitor *mon, const QDict *qdict)
+{
+ qmp_dump_cancel(NULL);
+}
diff --git a/hmp.h b/hmp.h
index b055e50..75c6c1d 100644
--- a/hmp.h
+++ b/hmp.h
@@ -60,5 +60,6 @@ void hmp_block_stream(Monitor *mon, const QDict *qdict);
void hmp_block_job_set_speed(Monitor *mon, const QDict *qdict);
void hmp_block_job_cancel(Monitor *mon, const QDict *qdict);
void hmp_dump(Monitor *mon, const QDict *qdict);
+void hmp_dump_cancel(Monitor *mon, const QDict *qdict);
#endif
diff --git a/qapi-schema.json b/qapi-schema.json
index 81b8c7c..1fdfee8 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -1677,3 +1677,16 @@
# Since: 1.1
##
{ 'command': 'dump', 'data': { 'paging': 'bool', 'file': 'str' } }
+
+##
+# @dump_cancel
+#
+# Cancel the currently executing dumping process.
+#
+# Returns: nothing on success
+#
+# Notes: This command succeeds even if there is no dumping process running.
+#
+# Since: 1.1
+##
+{ 'command': 'dump_cancel' }
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 9e39bd9..cbe5b91 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -621,6 +621,27 @@ EQMP
#endif
{
+ .name = "dump_cancel",
+ .args_type = "",
+ .mhandler.cmd_new = qmp_marshal_input_dump_cancel,
+ },
+
+SQMP
+dump_cancel
+
+
+Cancel the current dumping.
+
+Arguments: None.
+
+Example:
+
+-> { "execute": "dump_cancel" }
+<- { "return": {} }
+
+EQMP
+
+ {
.name = "netdev_add",
.args_type = "netdev:O",
.params = "[user|tap|socket],id=str[,prop=value][,...]",
--
1.7.1
^ permalink raw reply related [flat|nested] 40+ messages in thread
* [Qemu-devel] [RFC][PATCH 13/14 v9] support to query dumping status
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (11 preceding siblings ...)
2012-03-14 2:12 ` [Qemu-devel] [RFC][PATCH 12/14 v9] support to cancel the current dumping Wen Congyang
@ 2012-03-14 2:13 ` Wen Congyang
2012-03-14 17:19 ` Luiz Capitulino
2012-03-14 2:13 ` [Qemu-devel] [RFC][PATCH 14/14 v9] allow user to dump a fraction of the memory Wen Congyang
2012-03-14 17:26 ` [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Luiz Capitulino
14 siblings, 1 reply; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:13 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
Add API to allow the user to query dumping status. It can only work after
async dumping is supported.
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
dump.c | 32 ++++++++++++++++++++++++++++++++
hmp-commands.hx | 2 ++
hmp.c | 17 +++++++++++++++++
hmp.h | 1 +
monitor.c | 7 +++++++
qapi-schema.json | 27 +++++++++++++++++++++++++++
qmp-commands.hx | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
7 files changed, 136 insertions(+), 0 deletions(-)
diff --git a/dump.c b/dump.c
index dab0c84..7f9ea09 100644
--- a/dump.c
+++ b/dump.c
@@ -724,3 +724,35 @@ void qmp_dump_cancel(Error **errp)
s->state = DUMP_STATE_CANCELLED;
dump_cleanup(s);
}
+
+DumpInfo *qmp_query_dump(Error **errp)
+{
+ DumpInfo *info = g_malloc0(sizeof(*info));
+ DumpState *s = dump_get_current();
+
+ switch (s->state) {
+ case DUMP_STATE_SETUP:
+ /* no dumping has happened ever */
+ break;
+ case DUMP_STATE_ACTIVE:
+ info->has_status = true;
+ info->status = g_strdup("active");
+ break;
+ case DUMP_STATE_COMPLETED:
+ info->has_status = true;
+ info->status = g_strdup("completed");
+ break;
+ case DUMP_STATE_ERROR:
+ info->has_status = true;
+ info->status = g_strdup("failed");
+ info->has_error = true;
+ info->error = g_strdup(s->error);
+ break;
+ case DUMP_STATE_CANCELLED:
+ info->has_status = true;
+ info->status = g_strdup("cancelled");
+ break;
+ }
+
+ return info;
+}
diff --git a/hmp-commands.hx b/hmp-commands.hx
index 313f876..abd412e 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1438,6 +1438,8 @@ show device tree
show qdev device model list
@item info roms
show roms
+@item info dump
+show dumping status
@end table
ETEXI
diff --git a/hmp.c b/hmp.c
index 31e85d3..d0aa94b 100644
--- a/hmp.c
+++ b/hmp.c
@@ -875,3 +875,20 @@ void hmp_dump_cancel(Monitor *mon, const QDict *qdict)
{
qmp_dump_cancel(NULL);
}
+
+void hmp_info_dump(Monitor *mon)
+{
+ DumpInfo *info;
+
+ info = qmp_query_dump(NULL);
+
+ if (info->has_status) {
+ monitor_printf(mon, "Dumping status: %s\n", info->status);
+ }
+
+ if (info->has_error) {
+ monitor_printf(mon, "Dumping failed reason: %s\n", info->error);
+ }
+
+ qapi_free_DumpInfo(info);
+}
diff --git a/hmp.h b/hmp.h
index 75c6c1d..3d105a9 100644
--- a/hmp.h
+++ b/hmp.h
@@ -61,5 +61,6 @@ void hmp_block_job_set_speed(Monitor *mon, const QDict *qdict);
void hmp_block_job_cancel(Monitor *mon, const QDict *qdict);
void hmp_dump(Monitor *mon, const QDict *qdict);
void hmp_dump_cancel(Monitor *mon, const QDict *qdict);
+void hmp_info_dump(Monitor *mon);
#endif
diff --git a/monitor.c b/monitor.c
index fe54b7c..8348637 100644
--- a/monitor.c
+++ b/monitor.c
@@ -2603,6 +2603,13 @@ static mon_cmd_t info_cmds[] = {
.mhandler.info = do_trace_print_events,
},
{
+ .name = "dump",
+ .args_type = "",
+ .params = "",
+ .help = "show dumping status",
+ .mhandler.info = hmp_info_dump,
+ },
+ {
.name = NULL,
},
};
diff --git a/qapi-schema.json b/qapi-schema.json
index 1fdfee8..9728f5f 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -1690,3 +1690,30 @@
# Since: 1.1
##
{ 'command': 'dump_cancel' }
+
+##
+# @DumpInfo
+#
+# Information about current dumping process.
+#
+# @status: #optional string describing the current dumping status.
+# As of 1.1 this can be 'active', 'completed', 'failed' or
+# 'cancelled'. If this field is not returned, no dumping process
+# has been initiated
+# @error: #optional string describing the current dumping failed reason.
+#
+# Since: 1.1
+##
+{ 'type': 'DumpInfo',
+ 'data': { '*status': 'str', '*error': 'str' } }
+
+##
+# @query-dump
+#
+# Returns information about current dumping process.
+#
+# Returns: @DumpInfo
+#
+# Since: 1.1
+##
+{ 'command': 'query-dump', 'returns': 'DumpInfo' }
diff --git a/qmp-commands.hx b/qmp-commands.hx
index cbe5b91..036e111 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -616,6 +616,8 @@ Example:
Notes:
(1) All boolean arguments default to false
+(2) The 'info dump' command should be used to check dumping's progress
+ and final result (this information is provided by the 'status' member)
EQMP
#endif
@@ -2106,6 +2108,54 @@ EQMP
},
SQMP
+query-dump
+-------------
+
+Dumping status.
+
+Return a json-object.
+
+The main json-object contains the following:
+
+- "status": dumping status (json-string)
+ - Possible values: "active", "completed", "failed", "cancelled"
+
+Examples:
+
+1. Before the first dumping
+
+-> { "execute": "query-dump" }
+<- { "return": {} }
+
+2. Dumping is done and has succeeded
+
+-> { "execute": "query-dump" }
+<- { "return": { "status": "completed" } }
+
+3. Dumping is done and has failed
+
+-> { "execute": "query-dump" }
+<- { "return": { "status": "failed",
+ "error": "dump: failed to save memory." } }
+
+4. Dumping is being performed:
+
+-> { "execute": "query-dump" }
+<- {
+ "return":{
+ "status":"active",
+ }
+ }
+
+EQMP
+
+ {
+ .name = "query-dump",
+ .args_type = "",
+ .mhandler.cmd_new = qmp_marshal_input_query_dump,
+ },
+
+SQMP
query-balloon
-------------
--
1.7.1
^ permalink raw reply related [flat|nested] 40+ messages in thread
* [Qemu-devel] [RFC][PATCH 14/14 v9] allow user to dump a fraction of the memory
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (12 preceding siblings ...)
2012-03-14 2:13 ` [Qemu-devel] [RFC][PATCH 13/14 v9] support to query dumping status Wen Congyang
@ 2012-03-14 2:13 ` Wen Congyang
2012-03-14 17:20 ` Luiz Capitulino
2012-03-14 17:26 ` [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Luiz Capitulino
14 siblings, 1 reply; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 2:13 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
This API allows the user to limit how much memory is dumped by specifying a
starting physical address and a length, rather than forcing the whole guest
memory to be dumped at once.
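The heart of the filtering is clamping each RAM block (and each memory
mapping) against the requested window [begin, begin + length). A standalone
sketch of that arithmetic; the names are illustrative and not taken from the
patch:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Clamp [blk_start, blk_start + blk_len) to the window
     * [begin, begin + length).  Returns false if they do not overlap. */
    static bool clamp_to_window(uint64_t blk_start, uint64_t blk_len,
                                uint64_t begin, uint64_t length,
                                uint64_t *out_start, uint64_t *out_len)
    {
        uint64_t win_end = begin + length;
        uint64_t blk_end = blk_start + blk_len;

        if (blk_start >= win_end || blk_end <= begin) {
            return false;           /* block is entirely outside the window */
        }

        *out_start = blk_start > begin ? blk_start : begin;
        *out_len = (blk_end < win_end ? blk_end : win_end) - *out_start;
        return true;
    }

    int main(void)
    {
        uint64_t start, len;

        /* a 1 GiB block at 3 GiB, window covering 3.5 GiB .. 4.5 GiB */
        if (clamp_to_window(3ULL << 30, 1ULL << 30,
                            3584ULL << 20, 1ULL << 30, &start, &len)) {
            printf("dump 0x%" PRIx64 " bytes starting at 0x%" PRIx64 "\n",
                   len, start);
        }
        return 0;
    }

get_offset(), get_start_block(), get_next_block() and memory_mapping_filter()
below all perform essentially this same clamp against s->begin and s->length.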
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
dump.c | 186 +++++++++++++++++++++++++++++++++++++++++++++---------
hmp-commands.hx | 14 +++-
hmp.c | 13 ++++-
memory_mapping.c | 27 ++++++++
memory_mapping.h | 3 +
qapi-schema.json | 6 ++-
qmp-commands.hx | 8 ++-
7 files changed, 220 insertions(+), 37 deletions(-)
diff --git a/dump.c b/dump.c
index 7f9ea09..cd65488 100644
--- a/dump.c
+++ b/dump.c
@@ -83,6 +83,12 @@ typedef struct DumpState {
write_core_dump_function f;
void (*cleanup)(void *opaque);
void *opaque;
+
+ RAMBlock *block;
+ ram_addr_t start;
+ bool has_filter;
+ int64_t begin;
+ int64_t length;
} DumpState;
static DumpState *dump_get_current(void)
@@ -389,24 +395,30 @@ static int write_data(DumpState *s, void *buf, int length,
}
/* write the memory to vmcore. 1 page per I/O. */
-static int write_memory(DumpState *s, RAMBlock *block,
- target_phys_addr_t *offset)
+static int write_memory(DumpState *s, RAMBlock *block, ram_addr_t start,
+ target_phys_addr_t *offset, int64_t *size)
{
int i, ret;
+ int64_t writen_size = 0;
- for (i = 0; i < block->length / TARGET_PAGE_SIZE; i++) {
- ret = write_data(s, block->host + i * TARGET_PAGE_SIZE,
+ *size = block->length - start;
+ for (i = 0; i < *size / TARGET_PAGE_SIZE; i++) {
+ ret = write_data(s, block->host + start + i * TARGET_PAGE_SIZE,
TARGET_PAGE_SIZE, offset);
if (ret < 0) {
- return -1;
+ *size = writen_size;
+ return ret;
}
+
+ writen_size += TARGET_PAGE_SIZE;
}
- if ((block->length % TARGET_PAGE_SIZE) != 0) {
- ret = write_data(s, block->host + i * TARGET_PAGE_SIZE,
- block->length % TARGET_PAGE_SIZE, offset);
+ if ((*size % TARGET_PAGE_SIZE) != 0) {
+ ret = write_data(s, block->host + start + i * TARGET_PAGE_SIZE,
+ *size % TARGET_PAGE_SIZE, offset);
if (ret < 0) {
- return -1;
+ *size = writen_size;
+ return ret;
}
}
@@ -415,17 +427,47 @@ static int write_memory(DumpState *s, RAMBlock *block,
/* get the memory's offset in the vmcore */
static target_phys_addr_t get_offset(target_phys_addr_t phys_addr,
- target_phys_addr_t memory_offset)
+ DumpState *s)
{
RAMBlock *block;
- target_phys_addr_t offset = memory_offset;
+ target_phys_addr_t offset = s->memory_offset;
+ int64_t size_in_block, start;
+
+ if (s->has_filter) {
+ if (phys_addr < s->begin || phys_addr >= s->begin + s->length) {
+ return -1;
+ }
+ }
QLIST_FOREACH(block, &ram_list.blocks, next) {
- if (phys_addr >= block->offset &&
- phys_addr < block->offset + block->length) {
- return phys_addr - block->offset + offset;
+ if (s->has_filter) {
+ if (block->offset >= s->begin + s->length ||
+ block->offset + block->length <= s->begin) {
+ /* This block is out of the range */
+ continue;
+ }
+
+ if (s->begin <= block->offset) {
+ start = block->offset;
+ } else {
+ start = s->begin;
+ }
+
+ size_in_block = block->length - (start - block->offset);
+ if (s->begin + s->length < block->offset + block->length) {
+ size_in_block -= block->offset + block->length -
+ (s->begin + s->length);
+ }
+ } else {
+ start = block->offset;
+ size_in_block = block->length;
}
- offset += block->length;
+
+ if (phys_addr >= start && phys_addr < start + size_in_block) {
+ return phys_addr - start + offset;
+ }
+
+ offset += size_in_block;
}
return -1;
@@ -512,7 +554,7 @@ static int dump_completed(DumpState *s)
int phdr_index = 1, ret;
QTAILQ_FOREACH(memory_mapping, &s->list.head, next) {
- offset = get_offset(memory_mapping->phys_addr, s->memory_offset);
+ offset = get_offset(memory_mapping->phys_addr, s);
if (s->dump_info.d_class == ELFCLASS64) {
ret = write_elf64_load(s, memory_mapping, phdr_index++, offset);
} else {
@@ -528,22 +570,55 @@ static int dump_completed(DumpState *s)
return 0;
}
+static int get_next_block(DumpState *s, RAMBlock *block)
+{
+ while (1) {
+ block = QLIST_NEXT(block, next);
+ if (!block) {
+ /* no more block */
+ return 1;
+ }
+
+ s->start = 0;
+ s->block = block;
+ if (s->has_filter) {
+ if (block->offset >= s->begin + s->length ||
+ block->offset + block->length <= s->begin) {
+ /* This block is out of the range */
+ continue;
+ }
+
+ if (s->begin > block->offset) {
+ s->start = s->begin - block->offset;
+ }
+ }
+
+ return 0;
+ }
+}
+
/* write all memory to vmcore */
-static int dump_iterate(DumpState *s)
+static int dump_iterate(void *opaque)
{
+ DumpState *s = opaque;
RAMBlock *block;
target_phys_addr_t offset = s->memory_offset;
+ int64_t size;
int ret;
- /* write all memory to vmcore */
- QLIST_FOREACH(block, &ram_list.blocks, next) {
- ret = write_memory(s, block, &offset);
- if (ret < 0) {
- return -1;
+ while(1) {
+ block = s->block;
+ ret = write_memory(s, block, s->start, &offset, &size);
+ if (ret == -1) {
+ return ret;
}
- }
- return dump_completed(s);
+ ret = get_next_block(s, block);
+ if (ret == 1) {
+ dump_completed(s);
+ return 0;
+ }
+ }
}
static int create_vmcore(DumpState *s)
@@ -563,7 +638,36 @@ static int create_vmcore(DumpState *s)
return 0;
}
-static DumpState *dump_init(bool paging, Error **errp)
+static ram_addr_t get_start_block(DumpState *s)
+{
+ RAMBlock *block;
+
+ if (!s->has_filter) {
+ s->block = QLIST_FIRST(&ram_list.blocks);
+ return 0;
+ }
+
+ QLIST_FOREACH(block, &ram_list.blocks, next) {
+ if (block->offset >= s->begin + s->length ||
+ block->offset + block->length <= s->begin) {
+ /* This block is out of the range */
+ continue;
+ }
+
+ s->block = block;
+ if (s->begin > block->offset ) {
+ s->start = s->begin - block->offset;
+ } else {
+ s->start = 0;
+ }
+ return s->start;
+ }
+
+ return -1;
+}
+
+static DumpState *dump_init(bool paging, bool has_filter, int64_t begin,
+ int64_t length, Error **errp)
{
CPUState *env;
DumpState *s = dump_get_current();
@@ -581,6 +685,15 @@ static DumpState *dump_init(bool paging, Error **errp)
s->error = NULL;
}
+ s->has_filter = has_filter;
+ s->begin = begin;
+ s->length = length;
+ s->start = get_start_block(s);
+ if (s->start == -1) {
+ error_set(errp, QERR_INVALID_PARAMETER, "begin");
+ return NULL;
+ }
+
/*
* get dump info: endian, class and architecture.
* If the target architecture is not supported, cpu_get_dump_info() will
@@ -607,6 +720,10 @@ static DumpState *dump_init(bool paging, Error **errp)
qemu_get_guest_simple_memory_mapping(&s->list);
}
+ if (s->has_filter) {
+ memory_mapping_filter(&s->list, s->begin, s->length);
+ }
+
/*
* calculate phdr_num
*
@@ -659,9 +776,10 @@ static void fd_cleanup(void *opaque)
}
}
-static DumpState *dump_init_fd(int fd, bool paging, Error **errp)
+static DumpState *dump_init_fd(int fd, bool paging, bool has_filter,
+ int64_t begin, int64_t length, Error **errp)
{
- DumpState *s = dump_init(paging, errp);
+ DumpState *s = dump_init(paging, has_filter, begin, length, errp);
if (s == NULL) {
return NULL;
@@ -674,12 +792,22 @@ static DumpState *dump_init_fd(int fd, bool paging, Error **errp)
return s;
}
-void qmp_dump(bool paging, const char *file, Error **errp)
+void qmp_dump(bool paging, const char *file, bool has_begin,
+ int64_t begin, bool has_length, int64_t length, Error **errp)
{
const char *p;
int fd = -1;
DumpState *s;
+ if (has_begin && !has_length) {
+ error_set(errp, QERR_MISSING_PARAMETER, "length");
+ return;
+ }
+ if (!has_begin && has_length) {
+ error_set(errp, QERR_MISSING_PARAMETER, "begin");
+ return;
+ }
+
#if !defined(WIN32)
if (strstart(file, "fd:", &p)) {
fd = qemu_get_fd(p);
@@ -703,7 +831,7 @@ void qmp_dump(bool paging, const char *file, Error **errp)
return;
}
- s = dump_init_fd(fd, paging, errp);
+ s = dump_init_fd(fd, paging, has_begin, begin, length, errp);
if (!s) {
return;
}
diff --git a/hmp-commands.hx b/hmp-commands.hx
index abd412e..af0f112 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -883,21 +883,27 @@ ETEXI
#if defined(CONFIG_HAVE_CORE_DUMP)
{
.name = "dump",
- .args_type = "paging:-p,file:s",
- .params = "[-p] file",
- .help = "dump to file",
+ .args_type = "paging:-p,file:s,begin:i?,length:i?",
+ .params = "[-p] file [begin] [length]",
+ .help = "dump to file"
+ "\n\t\t\t begin(optional): the starting physical address"
+ "\n\t\t\t length(optional): the memory size, in bytes",
.user_print = monitor_user_noop,
.mhandler.cmd = hmp_dump,
},
STEXI
-@item dump [-p] @var{file}
+@item dump [-p] @var{file} @var{begin} @var{length}
@findex dump
Dump to @var{file}. The file can be processed with crash or gdb.
file: destination file (starting with "file:") or destination file descriptor
(starting with "fd:")
paging: do paging to get guest's memory mapping
+ begin: the starting physical address. It's optional, and should be specified
+ with length together.
+ length: the memory size, in bytes. It's optional, and should be specified with
+ begin together.
ETEXI
#endif
diff --git a/hmp.c b/hmp.c
index d0aa94b..b399119 100644
--- a/hmp.c
+++ b/hmp.c
@@ -866,8 +866,19 @@ void hmp_dump(Monitor *mon, const QDict *qdict)
Error *errp = NULL;
int paging = qdict_get_try_bool(qdict, "paging", 0);
const char *file = qdict_get_str(qdict, "file");
+ bool has_begin = qdict_haskey(qdict, "begin");
+ bool has_length = qdict_haskey(qdict, "length");
+ int64_t begin = 0;
+ int64_t length = 0;
- qmp_dump(!!paging, file, &errp);
+ if (has_begin) {
+ begin = qdict_get_int(qdict, "begin");
+ }
+ if (has_length) {
+ length = qdict_get_int(qdict, "length");
+ }
+
+ qmp_dump(!!paging, file, has_begin, begin, has_length, length, &errp);
hmp_handle_error(mon, &errp);
}
diff --git a/memory_mapping.c b/memory_mapping.c
index 8dd0750..f2b8252 100644
--- a/memory_mapping.c
+++ b/memory_mapping.c
@@ -209,3 +209,30 @@ void qemu_get_guest_simple_memory_mapping(MemoryMappingList *list)
create_new_memory_mapping(list, block->offset, 0, block->length);
}
}
+
+void memory_mapping_filter(MemoryMappingList *list, int64_t begin,
+ int64_t length)
+{
+ MemoryMapping *cur, *next;
+
+ QTAILQ_FOREACH_SAFE(cur, &list->head, next, next) {
+ if (cur->phys_addr >= begin + length ||
+ cur->phys_addr + cur->length <= begin) {
+ QTAILQ_REMOVE(&list->head, cur, next);
+ list->num--;
+ continue;
+ }
+
+ if (cur->phys_addr < begin) {
+ cur->length -= begin - cur->phys_addr;
+ if (cur->virt_addr) {
+ cur->virt_addr += begin - cur->phys_addr;
+ }
+ cur->phys_addr = begin;
+ }
+
+ if (cur->phys_addr + cur->length > begin + length) {
+ cur->length -= cur->phys_addr + cur->length - begin - length;
+ }
+ }
+}
diff --git a/memory_mapping.h b/memory_mapping.h
index 50b1f25..c5004ed 100644
--- a/memory_mapping.h
+++ b/memory_mapping.h
@@ -55,4 +55,7 @@ int qemu_get_guest_memory_mapping(MemoryMappingList *list);
/* get guest's memory mapping without do paging(virtual address is 0). */
void qemu_get_guest_simple_memory_mapping(MemoryMappingList *list);
+void memory_mapping_filter(MemoryMappingList *list, int64_t begin,
+ int64_t length);
+
#endif
diff --git a/qapi-schema.json b/qapi-schema.json
index 9728f5f..cc42fa0 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -1671,12 +1671,16 @@
#
# @paging: if true, do paging to get guest's memory mapping
# @file: the filename or file descriptor of the vmcore.
+# @begin: if specified, the starting physical address.
+# @length: if specified, the memory size, in bytes.
#
# Returns: nothing on success
#
# Since: 1.1
##
-{ 'command': 'dump', 'data': { 'paging': 'bool', 'file': 'str' } }
+{ 'command': 'dump',
+ 'data': { 'paging': 'bool', 'file': 'str', '*begin': 'int',
+ '*length': 'int' } }
##
# @dump_cancel
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 036e111..738fff8 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -589,8 +589,8 @@ EQMP
#if defined(CONFIG_HAVE_CORE_DUMP)
{
.name = "dump",
- .args_type = "paging:-p,file:s",
- .params = "[-p] file",
+ .args_type = "paging:-p,file:s,begin:i?,end:i?",
+ .params = "[-p] file [begin] [length]",
.help = "dump to file",
.user_print = monitor_user_noop,
.mhandler.cmd_new = qmp_marshal_input_dump,
@@ -607,6 +607,10 @@ Arguments:
- "paging": do paging to get guest's memory mapping (json-bool)
- "file": destination file(started with "file:") or destination file descriptor
(started with "fd:") (json-string)
+- "begin": the starting physical address. It's optional, and should be specified
+ with length together (json-int)
+- "length": the memory size, in bytes. It's optional, and should be specified
+ with begin together (json-int)
Example:
--
1.7.1
^ permalink raw reply related [flat|nested] 40+ messages in thread
* [Qemu-devel] [RESEND][PATCH 02/14 v9] Add API to check whether a physical address is I/O address
2012-03-14 2:06 ` [Qemu-devel] [RFC][PATCH 02/14 v9] Add API to check whether a physical address is I/O address Wen Congyang
@ 2012-03-14 9:18 ` Wen Congyang
0 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-14 9:18 UTC (permalink / raw)
To: qemu-devel, Jan Kiszka, Dave Anderson, HATAYAMA Daisuke,
Luiz Capitulino, Eric Blake
This API will be used in the following patch.
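As a hypothetical illustration of the interface (this is not what the next
patch actually does), a caller inside QEMU, assuming the usual cpu-common.h
and cpu-all.h definitions, could for example measure how much of a
guest-physical range is MMIO rather than RAM:

    /* hypothetical helper, not part of this series: walk a guest-physical
     * range page by page and report how many bytes are MMIO rather than
     * RAM/ROM */
    static target_phys_addr_t count_io_bytes(target_phys_addr_t start,
                                             target_phys_addr_t size)
    {
        target_phys_addr_t addr, io_bytes = 0;

        for (addr = start; addr < start + size; addr += TARGET_PAGE_SIZE) {
            if (cpu_physical_memory_is_io(addr)) {
                io_bytes += TARGET_PAGE_SIZE;
            }
        }

        return io_bytes;
    }

Presumably the page-table walker in the following patch needs this kind of
check so it never reads page tables out of MMIO regions.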
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
cpu-common.h | 2 ++
exec.c | 9 +++++++++
2 files changed, 11 insertions(+), 0 deletions(-)
diff --git a/cpu-common.h b/cpu-common.h
index dca5175..fcd50dc 100644
--- a/cpu-common.h
+++ b/cpu-common.h
@@ -71,6 +71,8 @@ void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque));
void cpu_unregister_map_client(void *cookie);
+bool cpu_physical_memory_is_io(target_phys_addr_t phys_addr);
+
/* Coalesced MMIO regions are areas where write operations can be reordered.
* This usually implies that write operations are side-effect free. This allows
* batching which can make a major impact on performance when using
diff --git a/exec.c b/exec.c
index 0c86bce..743dc12 100644
--- a/exec.c
+++ b/exec.c
@@ -4646,3 +4646,12 @@ bool virtio_is_big_endian(void)
#undef env
#endif
+
+bool cpu_physical_memory_is_io(target_phys_addr_t phys_addr)
+{
+ MemoryRegionSection* section;
+
+ section = phys_page_find(phys_addr >> TARGET_PAGE_BITS);
+
+ return !is_ram_rom_romd(section);
+}
--
1.7.1
^ permalink raw reply related [flat|nested] 40+ messages in thread
* Re: [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
2012-03-14 2:11 ` [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory Wen Congyang
@ 2012-03-14 17:18 ` Luiz Capitulino
2012-03-15 2:29 ` Wen Congyang
` (3 more replies)
2012-03-16 3:23 ` HATAYAMA Daisuke
1 sibling, 4 replies; 40+ messages in thread
From: Luiz Capitulino @ 2012-03-14 17:18 UTC (permalink / raw)
To: Wen Congyang
Cc: Jan Kiszka, HATAYAMA Daisuke, Dave Anderson, qemu-devel,
Eric Blake
On Wed, 14 Mar 2012 10:11:35 +0800
Wen Congyang <wency@cn.fujitsu.com> wrote:
> The command's usage:
> dump [-p] file
> file should start with "file:" (the file's path) or "fd:" (the fd's name).
>
> Note:
> 1. If you want to use gdb to analyse the core, please specify the -p option.
> 2. This command doesn't support an fd that is associated with a pipe,
> socket, or FIFO (lseek will fail on such an fd).
>
> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
> ---
> Makefile.target | 2 +-
> dump.c | 714 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> elf.h | 5 +
> hmp-commands.hx | 21 ++
> hmp.c | 10 +
> hmp.h | 1 +
> qapi-schema.json | 14 +
> qmp-commands.hx | 34 +++
> 8 files changed, 800 insertions(+), 1 deletions(-)
> create mode 100644 dump.c
>
> diff --git a/Makefile.target b/Makefile.target
> index c81c4fa..287fbe7 100644
> --- a/Makefile.target
> +++ b/Makefile.target
> @@ -213,7 +213,7 @@ obj-$(CONFIG_NO_KVM) += kvm-stub.o
> obj-$(CONFIG_VGA) += vga.o
> obj-y += memory.o savevm.o
> obj-y += memory_mapping.o
> -obj-$(CONFIG_HAVE_CORE_DUMP) += arch_dump.o
> +obj-$(CONFIG_HAVE_CORE_DUMP) += arch_dump.o dump.o
> LIBS+=-lz
>
> obj-i386-$(CONFIG_KVM) += hyperv.o
> diff --git a/dump.c b/dump.c
> new file mode 100644
> index 0000000..42e1681
> --- /dev/null
> +++ b/dump.c
> @@ -0,0 +1,714 @@
> +/*
> + * QEMU dump
> + *
> + * Copyright Fujitsu, Corp. 2011
> + *
> + * Authors:
> + * Wen Congyang <wency@cn.fujitsu.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2. See
> + * the COPYING file in the top-level directory.
> + *
> + */
> +
> +#include "qemu-common.h"
> +#include <unistd.h>
> +#include "elf.h"
> +#include <sys/procfs.h>
> +#include <glib.h>
> +#include "cpu.h"
> +#include "cpu-all.h"
> +#include "targphys.h"
> +#include "monitor.h"
> +#include "kvm.h"
> +#include "dump.h"
> +#include "sysemu.h"
> +#include "bswap.h"
> +#include "memory_mapping.h"
> +#include "error.h"
> +#include "qmp-commands.h"
> +#include "gdbstub.h"
> +
> +static inline uint16_t cpu_convert_to_target16(uint16_t val, int endian)
> +{
> + if (endian == ELFDATA2LSB) {
> + val = cpu_to_le16(val);
> + } else {
> + val = cpu_to_be16(val);
> + }
> +
> + return val;
> +}
> +
> +static inline uint32_t cpu_convert_to_target32(uint32_t val, int endian)
> +{
> + if (endian == ELFDATA2LSB) {
> + val = cpu_to_le32(val);
> + } else {
> + val = cpu_to_be32(val);
> + }
> +
> + return val;
> +}
> +
> +static inline uint64_t cpu_convert_to_target64(uint64_t val, int endian)
> +{
> + if (endian == ELFDATA2LSB) {
> + val = cpu_to_le64(val);
> + } else {
> + val = cpu_to_be64(val);
> + }
> +
> + return val;
> +}
> +
> +enum {
> + DUMP_STATE_ERROR,
> + DUMP_STATE_SETUP,
> + DUMP_STATE_CANCELLED,
> + DUMP_STATE_ACTIVE,
> + DUMP_STATE_COMPLETED,
> +};
> +
> +typedef struct DumpState {
> + ArchDumpInfo dump_info;
> + MemoryMappingList list;
> + uint16_t phdr_num;
> + uint32_t sh_info;
> + bool have_section;
> + int state;
> + bool resume;
> + char *error;
> + target_phys_addr_t memory_offset;
> + write_core_dump_function f;
> + void (*cleanup)(void *opaque);
> + void *opaque;
> +} DumpState;
> +
> +static DumpState *dump_get_current(void)
> +{
> + static DumpState current_dump = {
> + .state = DUMP_STATE_SETUP,
> + };
> +
> + return &current_dump;
> +}
You just dropped a few asynchronous bits and resent this as a synchronous
command, leaving all the asynchronous infrastructure in. This is bad, as the
command is more complex than it should be and doesn't make full use of the
added infrastructure.
For example, does the synchronous version really use DumpState? If it doesn't,
let's just drop it and everything else that is not necessary.
*However*, note that while it's fine with me to have this as a synchronous
command we need a few more ACKs (from libvirt and Anthony and/or Jan). So, I
wouldn't go too far on making changes before we get those ACKs.
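Just to illustrate the direction (untested sketch, assuming dump_init_fd()
were reworked to fill in a caller-provided DumpState instead of the static
one), the synchronous entry point could become as simple as:

    /* untested sketch, not a patch: purely synchronous flow with no
     * global state machine; dump_init_fd() is assumed to take &s */
    void qmp_dump(bool paging, const char *file, Error **errp)
    {
        DumpState s = {};
        const char *p;
        int fd = -1;

        if (strstart(file, "fd:", &p)) {
            fd = qemu_get_fd(p);
        } else if (strstart(file, "file:", &p)) {
            fd = open(p, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR);
        }
        if (fd < 0) {
            error_set(errp, QERR_INVALID_PARAMETER, "file");
            return;
        }

        if (dump_init_fd(&s, fd, paging, errp) < 0) {
            return;
        }

        if (create_vmcore(&s) < 0) {
            error_set(errp, QERR_IO_ERROR);
        }
    }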
> +
> +static int dump_cleanup(DumpState *s)
> +{
> + int ret = 0;
> +
> + memory_mapping_list_free(&s->list);
> + s->cleanup(s->opaque);
> + if (s->resume) {
> + vm_start();
> + }
> +
> + return ret;
> +}
> +
> +static void dump_error(DumpState *s, const char *reason)
> +{
> + s->state = DUMP_STATE_ERROR;
> + s->error = g_strdup(reason);
> + dump_cleanup(s);
> +}
> +
> +static int write_elf64_header(DumpState *s)
> +{
> + Elf64_Ehdr elf_header;
> + int ret;
> + int endian = s->dump_info.d_endian;
> +
> + memset(&elf_header, 0, sizeof(Elf64_Ehdr));
> + memcpy(&elf_header, ELFMAG, 4);
> + elf_header.e_ident[EI_CLASS] = ELFCLASS64;
> + elf_header.e_ident[EI_DATA] = s->dump_info.d_endian;
> + elf_header.e_ident[EI_VERSION] = EV_CURRENT;
> + elf_header.e_type = cpu_convert_to_target16(ET_CORE, endian);
> + elf_header.e_machine = cpu_convert_to_target16(s->dump_info.d_machine,
> + endian);
> + elf_header.e_version = cpu_convert_to_target32(EV_CURRENT, endian);
> + elf_header.e_ehsize = cpu_convert_to_target16(sizeof(elf_header), endian);
> + elf_header.e_phoff = cpu_convert_to_target64(sizeof(Elf64_Ehdr), endian);
> + elf_header.e_phentsize = cpu_convert_to_target16(sizeof(Elf64_Phdr),
> + endian);
> + elf_header.e_phnum = cpu_convert_to_target16(s->phdr_num, endian);
> + if (s->have_section) {
> + uint64_t shoff = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr) * s->sh_info;
> +
> + elf_header.e_shoff = cpu_convert_to_target64(shoff, endian);
> + elf_header.e_shentsize = cpu_convert_to_target16(sizeof(Elf64_Shdr),
> + endian);
> + elf_header.e_shnum = cpu_convert_to_target16(1, endian);
> + }
> +
> + ret = s->f(0, &elf_header, sizeof(elf_header), s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write elf header.\n");
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int write_elf32_header(DumpState *s)
> +{
> + Elf32_Ehdr elf_header;
> + int ret;
> + int endian = s->dump_info.d_endian;
> +
> + memset(&elf_header, 0, sizeof(Elf32_Ehdr));
> + memcpy(&elf_header, ELFMAG, 4);
> + elf_header.e_ident[EI_CLASS] = ELFCLASS32;
> + elf_header.e_ident[EI_DATA] = endian;
> + elf_header.e_ident[EI_VERSION] = EV_CURRENT;
> + elf_header.e_type = cpu_convert_to_target16(ET_CORE, endian);
> + elf_header.e_machine = cpu_convert_to_target16(s->dump_info.d_machine,
> + endian);
> + elf_header.e_version = cpu_convert_to_target32(EV_CURRENT, endian);
> + elf_header.e_ehsize = cpu_convert_to_target16(sizeof(elf_header), endian);
> + elf_header.e_phoff = cpu_convert_to_target32(sizeof(Elf32_Ehdr), endian);
> + elf_header.e_phentsize = cpu_convert_to_target16(sizeof(Elf32_Phdr),
> + endian);
> + elf_header.e_phnum = cpu_convert_to_target16(s->phdr_num, endian);
> + if (s->have_section) {
> + uint32_t shoff = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr) * s->sh_info;
> +
> + elf_header.e_shoff = cpu_convert_to_target32(shoff, endian);
> + elf_header.e_shentsize = cpu_convert_to_target16(sizeof(Elf32_Shdr),
> + endian);
> + elf_header.e_shnum = cpu_convert_to_target16(1, endian);
> + }
> +
> + ret = s->f(0, &elf_header, sizeof(elf_header), s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write elf header.\n");
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int write_elf64_load(DumpState *s, MemoryMapping *memory_mapping,
> + int phdr_index, target_phys_addr_t offset)
> +{
> + Elf64_Phdr phdr;
> + off_t phdr_offset;
> + int ret;
> + int endian = s->dump_info.d_endian;
> +
> + memset(&phdr, 0, sizeof(Elf64_Phdr));
> + phdr.p_type = cpu_convert_to_target32(PT_LOAD, endian);
> + phdr.p_offset = cpu_convert_to_target64(offset, endian);
> + phdr.p_paddr = cpu_convert_to_target64(memory_mapping->phys_addr, endian);
> + if (offset == -1) {
> + phdr.p_filesz = 0;
> + } else {
> + phdr.p_filesz = cpu_convert_to_target64(memory_mapping->length, endian);
> + }
> + phdr.p_memsz = cpu_convert_to_target64(memory_mapping->length, endian);
> + phdr.p_vaddr = cpu_convert_to_target64(memory_mapping->virt_addr, endian);
> +
> + phdr_offset = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr)*phdr_index;
> + ret = s->f(phdr_offset, &phdr, sizeof(Elf64_Phdr), s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write program header table.\n");
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int write_elf32_load(DumpState *s, MemoryMapping *memory_mapping,
> + int phdr_index, target_phys_addr_t offset)
> +{
> + Elf32_Phdr phdr;
> + off_t phdr_offset;
> + int ret;
> + int endian = s->dump_info.d_endian;
> +
> + memset(&phdr, 0, sizeof(Elf32_Phdr));
> + phdr.p_type = cpu_convert_to_target32(PT_LOAD, endian);
> + phdr.p_offset = cpu_convert_to_target32(offset, endian);
> + phdr.p_paddr = cpu_convert_to_target32(memory_mapping->phys_addr, endian);
> + if (offset == -1) {
> + phdr.p_filesz = 0;
> + } else {
> + phdr.p_filesz = cpu_convert_to_target32(memory_mapping->length, endian);
> + }
> + phdr.p_memsz = cpu_convert_to_target32(memory_mapping->length, endian);
> + phdr.p_vaddr = cpu_convert_to_target32(memory_mapping->virt_addr, endian);
> +
> + phdr_offset = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr)*phdr_index;
> + ret = s->f(phdr_offset, &phdr, sizeof(Elf32_Phdr), s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write program header table.\n");
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int write_elf64_notes(DumpState *s, int phdr_index,
> + target_phys_addr_t *offset)
> +{
> + CPUState *env;
> + int ret;
> + target_phys_addr_t begin = *offset;
> + Elf64_Phdr phdr;
> + off_t phdr_offset;
> + int id;
> + int endian = s->dump_info.d_endian;
> +
> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
> + id = gdb_id(env);
> + ret = cpu_write_elf64_note(s->f, env, id, offset, s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write elf notes.\n");
> + return -1;
> + }
> + }
> +
> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
> + ret = cpu_write_elf64_qemunote(s->f, env, offset, s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write CPU status.\n");
> + return -1;
> + }
> + }
> +
> + memset(&phdr, 0, sizeof(Elf64_Phdr));
> + phdr.p_type = cpu_convert_to_target32(PT_NOTE, endian);
> + phdr.p_offset = cpu_convert_to_target64(begin, endian);
> + phdr.p_paddr = 0;
> + phdr.p_filesz = cpu_convert_to_target64(*offset - begin, endian);
> + phdr.p_memsz = cpu_convert_to_target64(*offset - begin, endian);
> + phdr.p_vaddr = 0;
> +
> + phdr_offset = sizeof(Elf64_Ehdr);
> + ret = s->f(phdr_offset, &phdr, sizeof(Elf64_Phdr), s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write program header table.\n");
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int write_elf32_notes(DumpState *s, int phdr_index,
> + target_phys_addr_t *offset)
> +{
> + CPUState *env;
> + int ret;
> + target_phys_addr_t begin = *offset;
> + Elf32_Phdr phdr;
> + off_t phdr_offset;
> + int id;
> + int endian = s->dump_info.d_endian;
> +
> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
> + id = gdb_id(env);
> + ret = cpu_write_elf32_note(s->f, env, id, offset, s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write elf notes.\n");
> + return -1;
> + }
> + }
> +
> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
> + ret = cpu_write_elf32_qemunote(s->f, env, offset, s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write CPU status.\n");
> + return -1;
> + }
> + }
> +
> + memset(&phdr, 0, sizeof(Elf32_Phdr));
> + phdr.p_type = cpu_convert_to_target32(PT_NOTE, endian);
> + phdr.p_offset = cpu_convert_to_target32(begin, endian);
> + phdr.p_paddr = 0;
> + phdr.p_filesz = cpu_convert_to_target32(*offset - begin, endian);
> + phdr.p_memsz = cpu_convert_to_target32(*offset - begin, endian);
> + phdr.p_vaddr = 0;
> +
> + phdr_offset = sizeof(Elf32_Ehdr);
> + ret = s->f(phdr_offset, &phdr, sizeof(Elf32_Phdr), s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write program header table.\n");
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int write_elf_section(DumpState *s, target_phys_addr_t *offset, int type)
> +{
> + Elf32_Shdr shdr32;
> + Elf64_Shdr shdr64;
> + int endian = s->dump_info.d_endian;
> + int shdr_size;
> + void *shdr;
> + int ret;
> +
> + if (type == 0) {
> + shdr_size = sizeof(Elf32_Shdr);
> + memset(&shdr32, 0, shdr_size);
> + shdr32.sh_info = cpu_convert_to_target32(s->sh_info, endian);
> + shdr = &shdr32;
> + } else {
> + shdr_size = sizeof(Elf64_Shdr);
> + memset(&shdr64, 0, shdr_size);
> + shdr64.sh_info = cpu_convert_to_target32(s->sh_info, endian);
> + shdr = &shdr64;
> + }
> +
> + ret = s->f(*offset, &shdr, shdr_size, s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to write section header table.\n");
> + return -1;
> + }
> +
> + *offset += shdr_size;
> + return 0;
> +}
> +
> +static int write_data(DumpState *s, void *buf, int length,
> + target_phys_addr_t *offset)
> +{
> + int ret;
> +
> + ret = s->f(*offset, buf, length, s->opaque);
> + if (ret < 0) {
> + dump_error(s, "dump: failed to save memory.\n");
> + return -1;
> + }
> +
> + *offset += length;
> + return 0;
> +}
> +
> +/* write the memory to vmcore. 1 page per I/O. */
> +static int write_memory(DumpState *s, RAMBlock *block,
> + target_phys_addr_t *offset)
> +{
> + int i, ret;
> +
> + for (i = 0; i < block->length / TARGET_PAGE_SIZE; i++) {
> + ret = write_data(s, block->host + i * TARGET_PAGE_SIZE,
> + TARGET_PAGE_SIZE, offset);
> + if (ret < 0) {
> + return -1;
> + }
> + }
> +
> + if ((block->length % TARGET_PAGE_SIZE) != 0) {
> + ret = write_data(s, block->host + i * TARGET_PAGE_SIZE,
> + block->length % TARGET_PAGE_SIZE, offset);
> + if (ret < 0) {
> + return -1;
> + }
> + }
> +
> + return 0;
> +}
> +
> +/* get the memory's offset in the vmcore */
> +static target_phys_addr_t get_offset(target_phys_addr_t phys_addr,
> + target_phys_addr_t memory_offset)
> +{
> + RAMBlock *block;
> + target_phys_addr_t offset = memory_offset;
> +
> + QLIST_FOREACH(block, &ram_list.blocks, next) {
> + if (phys_addr >= block->offset &&
> + phys_addr < block->offset + block->length) {
> + return phys_addr - block->offset + offset;
> + }
> + offset += block->length;
> + }
> +
> + return -1;
> +}
> +
> +/* write elf header, PT_NOTE and elf note to vmcore. */
> +static int dump_begin(DumpState *s)
> +{
> + target_phys_addr_t offset;
> + int ret;
> +
> + s->state = DUMP_STATE_ACTIVE;
> +
> + /*
> + * the vmcore's format is:
> + * --------------
> + * | elf header |
> + * --------------
> + * | PT_NOTE |
> + * --------------
> + * | PT_LOAD |
> + * --------------
> + * | ...... |
> + * --------------
> + * | PT_LOAD |
> + * --------------
> + * | sec_hdr |
> + * --------------
> + * | elf note |
> + * --------------
> + * | memory |
> + * --------------
> + *
> + * we only know where the memory is saved after we write elf note into
> + * vmcore.
> + */
> +
> + /* write elf header to vmcore */
> + if (s->dump_info.d_class == ELFCLASS64) {
> + ret = write_elf64_header(s);
> + } else {
> + ret = write_elf32_header(s);
> + }
> + if (ret < 0) {
> + return -1;
> + }
> +
> + /* write elf section and notes to vmcore */
> + if (s->dump_info.d_class == ELFCLASS64) {
> + if (s->have_section) {
> + offset = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr)*s->sh_info;
> + if (write_elf_section(s, &offset, 1) < 0) {
> + return -1;
> + }
> + } else {
> + offset = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr)*s->phdr_num;
> + }
> + ret = write_elf64_notes(s, 0, &offset);
> + } else {
> + if (s->have_section) {
> + offset = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr)*s->sh_info;
> + if (write_elf_section(s, &offset, 0) < 0) {
> + return -1;
> + }
> + } else {
> + offset = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr)*s->phdr_num;
> + }
> + ret = write_elf32_notes(s, 0, &offset);
> + }
> +
> + if (ret < 0) {
> + return -1;
> + }
> +
> + s->memory_offset = offset;
> + return 0;
> +}
> +
> +/* write PT_LOAD to vmcore */
> +static int dump_completed(DumpState *s)
> +{
> + target_phys_addr_t offset;
> + MemoryMapping *memory_mapping;
> + int phdr_index = 1, ret;
> +
> + QTAILQ_FOREACH(memory_mapping, &s->list.head, next) {
> + offset = get_offset(memory_mapping->phys_addr, s->memory_offset);
> + if (s->dump_info.d_class == ELFCLASS64) {
> + ret = write_elf64_load(s, memory_mapping, phdr_index++, offset);
> + } else {
> + ret = write_elf32_load(s, memory_mapping, phdr_index++, offset);
> + }
> + if (ret < 0) {
> + return -1;
> + }
> + }
> +
> + s->state = DUMP_STATE_COMPLETED;
> + dump_cleanup(s);
> + return 0;
> +}
> +
> +/* write all memory to vmcore */
> +static int dump_iterate(DumpState *s)
> +{
> + RAMBlock *block;
> + target_phys_addr_t offset = s->memory_offset;
> + int ret;
> +
> + /* write all memory to vmcore */
> + QLIST_FOREACH(block, &ram_list.blocks, next) {
> + ret = write_memory(s, block, &offset);
> + if (ret < 0) {
> + return -1;
> + }
> + }
> +
> + return dump_completed(s);
> +}
> +
> +static int create_vmcore(DumpState *s)
> +{
> + int ret;
> +
> + ret = dump_begin(s);
> + if (ret < 0) {
> + return -1;
> + }
> +
> + ret = dump_iterate(s);
> + if (ret < 0) {
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static DumpState *dump_init(bool paging, Error **errp)
> +{
> + CPUState *env;
> + DumpState *s = dump_get_current();
> + int ret;
> +
> + if (runstate_is_running()) {
> + vm_stop(RUN_STATE_PAUSED);
> + s->resume = true;
Hmm, you actually stop the VM. Seems obvious now, but when people talked about
making this asynchronous I automatically assumed that what we didn't want was
having the global mutex held for too long (i.e. while this command was
running).
The only disadvantage of having this as a synchronous command is that libvirt
won't be able to cancel it and won't be able to run other commands in parallel.
Doesn't seem that serious to me.
Btw, RUN_STATE_PAUSED is not a good one. Doesn't matter that much, as this
is unlikely to be visible, but you should use RUN_STATE_SAVE_VM or
RUN_STATE_DEBUG.
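A minimal sketch of the suggested change, keeping the rest of dump_init() as posted:

    if (runstate_is_running()) {
        /* use a dump-specific run state instead of RUN_STATE_PAUSED */
        vm_stop(RUN_STATE_SAVE_VM);        /* or RUN_STATE_DEBUG */
        s->resume = true;
    } else {
        s->resume = false;
    }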
> + } else {
> + s->resume = false;
> + }
> + s->state = DUMP_STATE_SETUP;
> + if (s->error) {
> + g_free(s->error);
> + s->error = NULL;
> + }
> +
> + /*
> + * get dump info: endian, class and architecture.
> + * If the target architecture is not supported, cpu_get_dump_info() will
> + * return -1.
> + *
> + * if we use kvm, we should synchronize the register before we get dump
> + * info.
> + */
> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
> + cpu_synchronize_state(env);
> + }
> +
> + ret = cpu_get_dump_info(&s->dump_info);
> + if (ret < 0) {
> + error_set(errp, QERR_UNSUPPORTED);
This will leave the VM paused.
> + return NULL;
> + }
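A hedged sketch of one way to handle that: resume the guest on this (and every other) error path if dump_init() stopped it earlier.

    ret = cpu_get_dump_info(&s->dump_info);
    if (ret < 0) {
        error_set(errp, QERR_UNSUPPORTED);
        if (s->resume) {
            vm_start();    /* don't leave the guest paused on failure */
        }
        return NULL;
    }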
> +
> + /* get memory mapping */
> + memory_mapping_list_init(&s->list);
> + if (paging) {
> + qemu_get_guest_memory_mapping(&s->list);
> + } else {
> + qemu_get_guest_simple_memory_mapping(&s->list);
> + }
> +
> + /*
> + * calculate phdr_num
> + *
> + * the type of phdr->num is uint16_t, so we should avoid overflow
> + */
> + s->phdr_num = 1; /* PT_NOTE */
> + if (s->list.num < (1 << 16) - 2) {
> + s->phdr_num += s->list.num;
> + s->have_section = false;
> + } else {
> + s->have_section = true;
> + s->phdr_num = PN_XNUM;
> +
> + /* the type of shdr->sh_info is uint32_t, so we should avoid overflow */
> + if (s->list.num > (1ULL << 32) - 2) {
> + s->sh_info = 0xffffffff;
> + } else {
> + s->sh_info += s->list.num;
> + }
> + }
> +
> + return s;
> +}
> +
> +static int fd_write_vmcore(target_phys_addr_t offset, void *buf, size_t size,
> + void *opaque)
> +{
> + int fd = (int)(intptr_t)opaque;
> + int ret;
> +
> + ret = lseek(fd, offset, SEEK_SET);
> + if (ret < 0) {
> + return -1;
> + }
> +
> + ret = write(fd, buf, size);
> + if (ret != size) {
> + return -1;
> + }
I think you should use send_all() instead of plain write().
> +
> + return 0;
> +}
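Following the comment above, a hedged sketch of the same helper built on qemu_write_full() (mentioned later in this thread), which retries short writes, instead of a bare write():

static int fd_write_vmcore(target_phys_addr_t offset, void *buf, size_t size,
                           void *opaque)
{
    int fd = (int)(intptr_t)opaque;

    if (lseek(fd, offset, SEEK_SET) < 0) {
        return -1;
    }

    /* qemu_write_full() loops until everything is written or a real
     * error occurs, so a short return value here really is an error */
    if (qemu_write_full(fd, buf, size) != size) {
        return -1;
    }

    return 0;
}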
> +
> +static void fd_cleanup(void *opaque)
> +{
> + int fd = (int)(intptr_t)opaque;
> +
> + if (fd != -1) {
> + close(fd);
> + }
> +}
> +
> +static DumpState *dump_init_fd(int fd, bool paging, Error **errp)
> +{
> + DumpState *s = dump_init(paging, errp);
> +
> + if (s == NULL) {
> + return NULL;
> + }
> +
> + s->f = fd_write_vmcore;
> + s->cleanup = fd_cleanup;
> + s->opaque = (void *)(intptr_t)fd;
Do we really need all these indirections?
> +
> + return s;
> +}
> +
> +void qmp_dump(bool paging, const char *file, Error **errp)
> +{
> + const char *p;
> + int fd = -1;
> + DumpState *s;
> +
> +#if !defined(WIN32)
> + if (strstart(file, "fd:", &p)) {
> + fd = qemu_get_fd(p);
qemu_get_fd() won't be merged, you should use monitor_get_fd(cur_mon, p);
> + if (fd == -1) {
> + error_set(errp, QERR_FD_NOT_FOUND, p);
> + return;
> + }
> + }
> +#endif
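A sketch of the substitution being asked for, using the monitor_get_fd() call named above (cur_mon being the monitor the command arrived on):

#if !defined(WIN32)
    if (strstart(file, "fd:", &p)) {
        fd = monitor_get_fd(cur_mon, p);
        if (fd == -1) {
            error_set(errp, QERR_FD_NOT_FOUND, p);
            return;
        }
    }
#endif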
> +
> + if (strstart(file, "file:", &p)) {
> + fd = open(p, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR);
This is minor, but I'd use qemu_open() here.
> + if (fd < 0) {
> + error_set(errp, QERR_OPEN_FILE_FAILED, p);
> + return;
> + }
> + }
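And the corresponding change for the "file:" case; qemu_open() behaves like open() but also marks the descriptor close-on-exec where the platform supports it:

    if (strstart(file, "file:", &p)) {
        fd = qemu_open(p, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR);
        if (fd < 0) {
            error_set(errp, QERR_OPEN_FILE_FAILED, p);
            return;
        }
    }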
> +
> + if (fd == -1) {
> + error_set(errp, QERR_INVALID_PARAMETER, "file");
> + return;
> + }
> +
> + s = dump_init_fd(fd, paging, errp);
> + if (!s) {
> + return;
> + }
> +
> + if (create_vmcore(s) < 0) {
> + error_set(errp, QERR_IO_ERROR);
> + }
> +}
> diff --git a/elf.h b/elf.h
> index 2e05d34..6a10657 100644
> --- a/elf.h
> +++ b/elf.h
> @@ -1000,6 +1000,11 @@ typedef struct elf64_sym {
>
> #define EI_NIDENT 16
>
> +/* Special value for e_phnum. This indicates that the real number of
> + program headers is too large to fit into e_phnum. Instead the real
> + value is in the field sh_info of section 0. */
> +#define PN_XNUM 0xffff
> +
> typedef struct elf32_hdr{
> unsigned char e_ident[EI_NIDENT];
> Elf32_Half e_type;
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index 6980214..d4cf2e5 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -880,6 +880,27 @@ server will ask the spice/vnc client to automatically reconnect using the
> new parameters (if specified) once the vm migration finished successfully.
> ETEXI
>
> +#if defined(CONFIG_HAVE_CORE_DUMP)
> + {
> + .name = "dump",
> + .args_type = "paging:-p,file:s",
> + .params = "[-p] file",
> + .help = "dump to file",
> + .user_print = monitor_user_noop,
> + .mhandler.cmd = hmp_dump,
> + },
> +
> +
> +STEXI
> +@item dump [-p] @var{file}
> +@findex dump
> +Dump to @var{file}. The file can be processed with crash or gdb.
> + file: destination file (started with "file:") or destination file descriptor
> + (started with "fd:")
> + paging: do paging to get guest's memory mapping
> +ETEXI
> +#endif
> +
> {
> .name = "snapshot_blkdev",
> .args_type = "reuse:-n,device:B,snapshot-file:s?,format:s?",
> diff --git a/hmp.c b/hmp.c
> index 290c43d..e13b793 100644
> --- a/hmp.c
> +++ b/hmp.c
> @@ -860,3 +860,13 @@ void hmp_block_job_cancel(Monitor *mon, const QDict *qdict)
>
> hmp_handle_error(mon, &error);
> }
> +
> +void hmp_dump(Monitor *mon, const QDict *qdict)
> +{
> + Error *errp = NULL;
> + int paging = qdict_get_try_bool(qdict, "paging", 0);
> + const char *file = qdict_get_str(qdict, "file");
> +
> + qmp_dump(!!paging, file, &errp);
Why the double negation on 'paging'?
> + hmp_handle_error(mon, &errp);
> +}
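A sketch of the handler without the double negation; qdict_get_try_bool() already yields 0 or 1, so the value can be passed straight to qmp_dump():

void hmp_dump(Monitor *mon, const QDict *qdict)
{
    Error *errp = NULL;
    bool paging = qdict_get_try_bool(qdict, "paging", 0);
    const char *file = qdict_get_str(qdict, "file");

    qmp_dump(paging, file, &errp);
    hmp_handle_error(mon, &errp);
}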
> diff --git a/hmp.h b/hmp.h
> index 5409464..b055e50 100644
> --- a/hmp.h
> +++ b/hmp.h
> @@ -59,5 +59,6 @@ void hmp_block_set_io_throttle(Monitor *mon, const QDict *qdict);
> void hmp_block_stream(Monitor *mon, const QDict *qdict);
> void hmp_block_job_set_speed(Monitor *mon, const QDict *qdict);
> void hmp_block_job_cancel(Monitor *mon, const QDict *qdict);
> +void hmp_dump(Monitor *mon, const QDict *qdict);
>
> #endif
> diff --git a/qapi-schema.json b/qapi-schema.json
> index 04fa84f..81b8c7c 100644
> --- a/qapi-schema.json
> +++ b/qapi-schema.json
> @@ -1663,3 +1663,17 @@
> { 'command': 'qom-list-types',
> 'data': { '*implements': 'str', '*abstract': 'bool' },
> 'returns': [ 'ObjectTypeInfo' ] }
> +
> +##
> +# @dump
'dump' is too generic, please call this dump-guest-memory-vmcore or something
more descriptive.
> +#
> +# Dump guest's memory to vmcore.
> +#
> +# @paging: if true, do paging to get guest's memory mapping
> +# @file: the filename or file descriptor of the vmcore.
'file' is not a good name because it can also dump to an fd, maybe 'protocol'?
> +#
> +# Returns: nothing on success
> +#
> +# Since: 1.1
> +##
> +{ 'command': 'dump', 'data': { 'paging': 'bool', 'file': 'str' } }
> diff --git a/qmp-commands.hx b/qmp-commands.hx
> index dfe8a5b..9e39bd9 100644
> --- a/qmp-commands.hx
> +++ b/qmp-commands.hx
> @@ -586,6 +586,40 @@ Example:
>
> EQMP
>
> +#if defined(CONFIG_HAVE_CORE_DUMP)
> + {
> + .name = "dump",
> + .args_type = "paging:-p,file:s",
> + .params = "[-p] file",
> + .help = "dump to file",
> + .user_print = monitor_user_noop,
> + .mhandler.cmd_new = qmp_marshal_input_dump,
> + },
> +
> +SQMP
> +dump
> +
> +
> +Dump to file. The file can be processed with crash or gdb.
> +
> +Arguments:
> +
> +- "paging": do paging to get guest's memory mapping (json-bool)
> +- "file": destination file(started with "file:") or destination file descriptor
> + (started with "fd:") (json-string)
> +
> +Example:
> +
> +-> { "execute": "dump", "arguments": { "file": "fd:dump" } }
> +<- { "return": {} }
> +
> +Notes:
> +
> +(1) All boolean arguments default to false
> +
> +EQMP
> +#endif
> +
> {
> .name = "netdev_add",
> .args_type = "netdev:O",
* Re: [Qemu-devel] [RFC][PATCH 12/14 v9] support to cancel the current dumping
2012-03-14 2:12 ` [Qemu-devel] [RFC][PATCH 12/14 v9] support to cancel the current dumping Wen Congyang
@ 2012-03-14 17:19 ` Luiz Capitulino
0 siblings, 0 replies; 40+ messages in thread
From: Luiz Capitulino @ 2012-03-14 17:19 UTC (permalink / raw)
To: Wen Congyang
Cc: Jan Kiszka, HATAYAMA Daisuke, Dave Anderson, qemu-devel,
Eric Blake
On Wed, 14 Mar 2012 10:12:05 +0800
Wen Congyang <wency@cn.fujitsu.com> wrote:
> Add API to allow the user to cancel the current dumping. It can only work after
> async dumping is supported.
NACK, we shouldn't add it then.
>
> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
> ---
> dump.c | 12 ++++++++++++
> hmp-commands.hx | 14 ++++++++++++++
> hmp.c | 5 +++++
> hmp.h | 1 +
> qapi-schema.json | 13 +++++++++++++
> qmp-commands.hx | 21 +++++++++++++++++++++
> 6 files changed, 66 insertions(+), 0 deletions(-)
>
> diff --git a/dump.c b/dump.c
> index 42e1681..dab0c84 100644
> --- a/dump.c
> +++ b/dump.c
> @@ -712,3 +712,15 @@ void qmp_dump(bool paging, const char *file, Error **errp)
> error_set(errp, QERR_IO_ERROR);
> }
> }
> +
> +void qmp_dump_cancel(Error **errp)
> +{
> + DumpState *s = dump_get_current();
> +
> + if (s->state != DUMP_STATE_ACTIVE) {
> + return;
> + }
> +
> + s->state = DUMP_STATE_CANCELLED;
> + dump_cleanup(s);
> +}
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index d4cf2e5..313f876 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -902,6 +902,20 @@ ETEXI
> #endif
>
> {
> + .name = "dump_cancel",
> + .args_type = "",
> + .params = "",
> + .help = "cancel the current VM dumping",
> + .mhandler.cmd = hmp_dump_cancel,
> + },
> +
> +STEXI
> +@item dump_cancel
> +@findex dump_cancel
> +Cancel the current VM dumping.
> +ETEXI
> +
> + {
> .name = "snapshot_blkdev",
> .args_type = "reuse:-n,device:B,snapshot-file:s?,format:s?",
> .params = "[-n] device [new-image-file] [format]",
> diff --git a/hmp.c b/hmp.c
> index e13b793..31e85d3 100644
> --- a/hmp.c
> +++ b/hmp.c
> @@ -870,3 +870,8 @@ void hmp_dump(Monitor *mon, const QDict *qdict)
> qmp_dump(!!paging, file, &errp);
> hmp_handle_error(mon, &errp);
> }
> +
> +void hmp_dump_cancel(Monitor *mon, const QDict *qdict)
> +{
> + qmp_dump_cancel(NULL);
> +}
> diff --git a/hmp.h b/hmp.h
> index b055e50..75c6c1d 100644
> --- a/hmp.h
> +++ b/hmp.h
> @@ -60,5 +60,6 @@ void hmp_block_stream(Monitor *mon, const QDict *qdict);
> void hmp_block_job_set_speed(Monitor *mon, const QDict *qdict);
> void hmp_block_job_cancel(Monitor *mon, const QDict *qdict);
> void hmp_dump(Monitor *mon, const QDict *qdict);
> +void hmp_dump_cancel(Monitor *mon, const QDict *qdict);
>
> #endif
> diff --git a/qapi-schema.json b/qapi-schema.json
> index 81b8c7c..1fdfee8 100644
> --- a/qapi-schema.json
> +++ b/qapi-schema.json
> @@ -1677,3 +1677,16 @@
> # Since: 1.1
> ##
> { 'command': 'dump', 'data': { 'paging': 'bool', 'file': 'str' } }
> +
> +##
> +# @dump_cancel
> +#
> +# Cancel the currently executing dump process.
> +#
> +# Returns: nothing on success
> +#
> +# Notes: This command succeeds even if there is no dumping process running.
> +#
> +# Since: 1.1
> +##
> +{ 'command': 'dump_cancel' }
> diff --git a/qmp-commands.hx b/qmp-commands.hx
> index 9e39bd9..cbe5b91 100644
> --- a/qmp-commands.hx
> +++ b/qmp-commands.hx
> @@ -621,6 +621,27 @@ EQMP
> #endif
>
> {
> + .name = "dump_cancel",
> + .args_type = "",
> + .mhandler.cmd_new = qmp_marshal_input_dump_cancel,
> + },
> +
> +SQMP
> +dump_cancel
> +
> +
> +Cancel the current dumping.
> +
> +Arguments: None.
> +
> +Example:
> +
> +-> { "execute": "dump_cancel" }
> +<- { "return": {} }
> +
> +EQMP
> +
> + {
> .name = "netdev_add",
> .args_type = "netdev:O",
> .params = "[user|tap|socket],id=str[,prop=value][,...]",
* Re: [Qemu-devel] [RFC][PATCH 13/14 v9] support to query dumping status
2012-03-14 2:13 ` [Qemu-devel] [RFC][PATCH 13/14 v9] support to query dumping status Wen Congyang
@ 2012-03-14 17:19 ` Luiz Capitulino
0 siblings, 0 replies; 40+ messages in thread
From: Luiz Capitulino @ 2012-03-14 17:19 UTC (permalink / raw)
To: Wen Congyang
Cc: Jan Kiszka, HATAYAMA Daisuke, Dave Anderson, qemu-devel,
Eric Blake
On Wed, 14 Mar 2012 10:13:15 +0800
Wen Congyang <wency@cn.fujitsu.com> wrote:
> Add API to allow the user to query dumping status. It can only work after
> async dumping is supported.
NACK, we shouldn't add it then.
>
> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
> ---
> dump.c | 32 ++++++++++++++++++++++++++++++++
> hmp-commands.hx | 2 ++
> hmp.c | 17 +++++++++++++++++
> hmp.h | 1 +
> monitor.c | 7 +++++++
> qapi-schema.json | 27 +++++++++++++++++++++++++++
> qmp-commands.hx | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
> 7 files changed, 136 insertions(+), 0 deletions(-)
>
> diff --git a/dump.c b/dump.c
> index dab0c84..7f9ea09 100644
> --- a/dump.c
> +++ b/dump.c
> @@ -724,3 +724,35 @@ void qmp_dump_cancel(Error **errp)
> s->state = DUMP_STATE_CANCELLED;
> dump_cleanup(s);
> }
> +
> +DumpInfo *qmp_query_dump(Error **errp)
> +{
> + DumpInfo *info = g_malloc0(sizeof(*info));
> + DumpState *s = dump_get_current();
> +
> + switch (s->state) {
> + case DUMP_STATE_SETUP:
> + /* no dumping has happened ever */
> + break;
> + case DUMP_STATE_ACTIVE:
> + info->has_status = true;
> + info->status = g_strdup("active");
> + break;
> + case DUMP_STATE_COMPLETED:
> + info->has_status = true;
> + info->status = g_strdup("completed");
> + break;
> + case DUMP_STATE_ERROR:
> + info->has_status = true;
> + info->status = g_strdup("failed");
> + info->has_error = true;
> + info->error = g_strdup(s->error);
> + break;
> + case DUMP_STATE_CANCELLED:
> + info->has_status = true;
> + info->status = g_strdup("cancelled");
> + break;
> + }
> +
> + return info;
> +}
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index 313f876..abd412e 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -1438,6 +1438,8 @@ show device tree
> show qdev device model list
> @item info roms
> show roms
> +@item info dump
> +show dumping status
> @end table
> ETEXI
>
> diff --git a/hmp.c b/hmp.c
> index 31e85d3..d0aa94b 100644
> --- a/hmp.c
> +++ b/hmp.c
> @@ -875,3 +875,20 @@ void hmp_dump_cancel(Monitor *mon, const QDict *qdict)
> {
> qmp_dump_cancel(NULL);
> }
> +
> +void hmp_info_dump(Monitor *mon)
> +{
> + DumpInfo *info;
> +
> + info = qmp_query_dump(NULL);
> +
> + if (info->has_status) {
> + monitor_printf(mon, "Dumping status: %s\n", info->status);
> + }
> +
> + if (info->has_error) {
> + monitor_printf(mon, "Dumping failed reason: %s\n", info->error);
> + }
> +
> + qapi_free_DumpInfo(info);
> +}
> diff --git a/hmp.h b/hmp.h
> index 75c6c1d..3d105a9 100644
> --- a/hmp.h
> +++ b/hmp.h
> @@ -61,5 +61,6 @@ void hmp_block_job_set_speed(Monitor *mon, const QDict *qdict);
> void hmp_block_job_cancel(Monitor *mon, const QDict *qdict);
> void hmp_dump(Monitor *mon, const QDict *qdict);
> void hmp_dump_cancel(Monitor *mon, const QDict *qdict);
> +void hmp_info_dump(Monitor *mon);
>
> #endif
> diff --git a/monitor.c b/monitor.c
> index fe54b7c..8348637 100644
> --- a/monitor.c
> +++ b/monitor.c
> @@ -2603,6 +2603,13 @@ static mon_cmd_t info_cmds[] = {
> .mhandler.info = do_trace_print_events,
> },
> {
> + .name = "dump",
> + .args_type = "",
> + .params = "",
> + .help = "show dumping status",
> + .mhandler.info = hmp_info_dump,
> + },
> + {
> .name = NULL,
> },
> };
> diff --git a/qapi-schema.json b/qapi-schema.json
> index 1fdfee8..9728f5f 100644
> --- a/qapi-schema.json
> +++ b/qapi-schema.json
> @@ -1690,3 +1690,30 @@
> # Since: 1.1
> ##
> { 'command': 'dump_cancel' }
> +
> +##
> +# @DumpInfo
> +#
> +# Information about current dumping process.
> +#
> +# @status: #optional string describing the current dumping status.
> +# As of 1.1 this can be 'active', 'completed', 'failed' or
> +# 'cancelled'. If this field is not returned, no dumping process
> +# has been initiated
> +# @error: #optional string describing the current dumping failed reason.
> +#
> +# Since: 1.1
> +##
> +{ 'type': 'DumpInfo',
> + 'data': { '*status': 'str', '*error': 'str' } }
> +
> +##
> +# @query-dump
> +#
> +# Returns information about current dumping process.
> +#
> +# Returns: @DumpInfo
> +#
> +# Since: 1.1
> +##
> +{ 'command': 'query-dump', 'returns': 'DumpInfo' }
> diff --git a/qmp-commands.hx b/qmp-commands.hx
> index cbe5b91..036e111 100644
> --- a/qmp-commands.hx
> +++ b/qmp-commands.hx
> @@ -616,6 +616,8 @@ Example:
> Notes:
>
> (1) All boolean arguments default to false
> +(2) The 'info dump' command should be used to check dumping's progress
> + and final result (this information is provided by the 'status' member)
>
> EQMP
> #endif
> @@ -2106,6 +2108,54 @@ EQMP
> },
>
> SQMP
> +query-dump
> +-------------
> +
> +Dumping status.
> +
> +Return a json-object.
> +
> +The main json-object contains the following:
> +
> +- "status": dumping status (json-string)
> + - Possible values: "active", "completed", "failed", "cancelled"
> +
> +Examples:
> +
> +1. Before the first dumping
> +
> +-> { "execute": "query-dump" }
> +<- { "return": {} }
> +
> +2. Dumping is done and has succeeded
> +
> +-> { "execute": "query-dump" }
> +<- { "return": { "status": "completed" } }
> +
> +3. Dumping is done and has failed
> +
> +-> { "execute": "query-dump" }
> +<- { "return": { "status": "failed",
> + "error": "dump: failed to save memory." } }
> +
> +4. Dumping is being performed:
> +
> +-> { "execute": "query-dump" }
> +<- {
> + "return":{
> + "status":"active",
> + }
> + }
> +
> +EQMP
> +
> + {
> + .name = "query-dump",
> + .args_type = "",
> + .mhandler.cmd_new = qmp_marshal_input_query_dump,
> + },
> +
> +SQMP
> query-balloon
> -------------
>
* Re: [Qemu-devel] [RFC][PATCH 14/14 v9] allow user to dump a fraction of the memory
2012-03-14 2:13 ` [Qemu-devel] [RFC][PATCH 14/14 v9] allow user to dump a fraction of the memory Wen Congyang
@ 2012-03-14 17:20 ` Luiz Capitulino
0 siblings, 0 replies; 40+ messages in thread
From: Luiz Capitulino @ 2012-03-14 17:20 UTC (permalink / raw)
To: Wen Congyang
Cc: Jan Kiszka, HATAYAMA Daisuke, Dave Anderson, qemu-devel,
Eric Blake
On Wed, 14 Mar 2012 10:13:50 +0800
Wen Congyang <wency@cn.fujitsu.com> wrote:
> This API allows the user to limit how much memory is dumped,
> rather than forcing the user to dump all memory at once.
>
> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
> ---
> dump.c | 186 +++++++++++++++++++++++++++++++++++++++++++++---------
> hmp-commands.hx | 14 +++-
> hmp.c | 13 ++++-
> memory_mapping.c | 27 ++++++++
> memory_mapping.h | 3 +
> qapi-schema.json | 6 ++-
> qmp-commands.hx | 8 ++-
> 7 files changed, 220 insertions(+), 37 deletions(-)
>
> diff --git a/dump.c b/dump.c
> index 7f9ea09..cd65488 100644
> --- a/dump.c
> +++ b/dump.c
> @@ -83,6 +83,12 @@ typedef struct DumpState {
> write_core_dump_function f;
> void (*cleanup)(void *opaque);
> void *opaque;
> +
> + RAMBlock *block;
> + ram_addr_t start;
> + bool has_filter;
> + int64_t begin;
> + int64_t length;
> } DumpState;
>
> static DumpState *dump_get_current(void)
> @@ -389,24 +395,30 @@ static int write_data(DumpState *s, void *buf, int length,
> }
>
> /* write the memory to vmcore. 1 page per I/O. */
> -static int write_memory(DumpState *s, RAMBlock *block,
> - target_phys_addr_t *offset)
> +static int write_memory(DumpState *s, RAMBlock *block, ram_addr_t start,
> + target_phys_addr_t *offset, int64_t *size)
> {
> int i, ret;
> + int64_t written_size = 0;
>
> - for (i = 0; i < block->length / TARGET_PAGE_SIZE; i++) {
> - ret = write_data(s, block->host + i * TARGET_PAGE_SIZE,
> + *size = block->length - start;
> + for (i = 0; i < *size / TARGET_PAGE_SIZE; i++) {
> + ret = write_data(s, block->host + start + i * TARGET_PAGE_SIZE,
> TARGET_PAGE_SIZE, offset);
> if (ret < 0) {
> - return -1;
> + *size = written_size;
> + return ret;
> }
> +
> + written_size += TARGET_PAGE_SIZE;
> }
>
> - if ((block->length % TARGET_PAGE_SIZE) != 0) {
> - ret = write_data(s, block->host + i * TARGET_PAGE_SIZE,
> - block->length % TARGET_PAGE_SIZE, offset);
> + if ((*size % TARGET_PAGE_SIZE) != 0) {
> + ret = write_data(s, block->host + start + i * TARGET_PAGE_SIZE,
> + *size % TARGET_PAGE_SIZE, offset);
> if (ret < 0) {
> - return -1;
> + *size = written_size;
> + return ret;
> }
> }
>
> @@ -415,17 +427,47 @@ static int write_memory(DumpState *s, RAMBlock *block,
>
> /* get the memory's offset in the vmcore */
> static target_phys_addr_t get_offset(target_phys_addr_t phys_addr,
> - target_phys_addr_t memory_offset)
> + DumpState *s)
> {
> RAMBlock *block;
> - target_phys_addr_t offset = memory_offset;
> + target_phys_addr_t offset = s->memory_offset;
> + int64_t size_in_block, start;
> +
> + if (s->has_filter) {
> + if (phys_addr < s->begin || phys_addr >= s->begin + s->length) {
> + return -1;
> + }
> + }
>
> QLIST_FOREACH(block, &ram_list.blocks, next) {
> - if (phys_addr >= block->offset &&
> - phys_addr < block->offset + block->length) {
> - return phys_addr - block->offset + offset;
> + if (s->has_filter) {
> + if (block->offset >= s->begin + s->length ||
> + block->offset + block->length <= s->begin) {
> + /* This block is out of the range */
> + continue;
> + }
> +
> + if (s->begin <= block->offset) {
> + start = block->offset;
> + } else {
> + start = s->begin;
> + }
> +
> + size_in_block = block->length - (start - block->offset);
> + if (s->begin + s->length < block->offset + block->length) {
> + size_in_block -= block->offset + block->length -
> + (s->begin + s->length);
> + }
> + } else {
> + start = block->offset;
> + size_in_block = block->length;
> }
> - offset += block->length;
> +
> + if (phys_addr >= start && phys_addr < start + size_in_block) {
> + return phys_addr - start + offset;
> + }
> +
> + offset += size_in_block;
> }
>
> return -1;
> @@ -512,7 +554,7 @@ static int dump_completed(DumpState *s)
> int phdr_index = 1, ret;
>
> QTAILQ_FOREACH(memory_mapping, &s->list.head, next) {
> - offset = get_offset(memory_mapping->phys_addr, s->memory_offset);
> + offset = get_offset(memory_mapping->phys_addr, s);
> if (s->dump_info.d_class == ELFCLASS64) {
> ret = write_elf64_load(s, memory_mapping, phdr_index++, offset);
> } else {
> @@ -528,22 +570,55 @@ static int dump_completed(DumpState *s)
> return 0;
> }
>
> +static int get_next_block(DumpState *s, RAMBlock *block)
> +{
> + while (1) {
> + block = QLIST_NEXT(block, next);
> + if (!block) {
> + /* no more block */
> + return 1;
> + }
> +
> + s->start = 0;
> + s->block = block;
> + if (s->has_filter) {
> + if (block->offset >= s->begin + s->length ||
> + block->offset + block->length <= s->begin) {
> + /* This block is out of the range */
> + continue;
> + }
> +
> + if (s->begin > block->offset) {
> + s->start = s->begin - block->offset;
> + }
> + }
> +
> + return 0;
> + }
> +}
> +
> /* write all memory to vmcore */
> -static int dump_iterate(DumpState *s)
> +static int dump_iterate(void *opaque)
> {
> + DumpState *s = opaque;
> RAMBlock *block;
> target_phys_addr_t offset = s->memory_offset;
> + int64_t size;
> int ret;
>
> - /* write all memory to vmcore */
> - QLIST_FOREACH(block, &ram_list.blocks, next) {
> - ret = write_memory(s, block, &offset);
> - if (ret < 0) {
> - return -1;
> + while(1) {
> + block = s->block;
> + ret = write_memory(s, block, s->start, &offset, &size);
> + if (ret == -1) {
> + return ret;
> }
> - }
>
> - return dump_completed(s);
> + ret = get_next_block(s, block);
> + if (ret == 1) {
> + dump_completed(s);
> + return 0;
> + }
> + }
> }
>
> static int create_vmcore(DumpState *s)
> @@ -563,7 +638,36 @@ static int create_vmcore(DumpState *s)
> return 0;
> }
>
> -static DumpState *dump_init(bool paging, Error **errp)
> +static ram_addr_t get_start_block(DumpState *s)
> +{
> + RAMBlock *block;
> +
> + if (!s->has_filter) {
> + s->block = QLIST_FIRST(&ram_list.blocks);
> + return 0;
> + }
> +
> + QLIST_FOREACH(block, &ram_list.blocks, next) {
> + if (block->offset >= s->begin + s->length ||
> + block->offset + block->length <= s->begin) {
> + /* This block is out of the range */
> + continue;
> + }
> +
> + s->block = block;
> + if (s->begin > block->offset ) {
> + s->start = s->begin - block->offset;
> + } else {
> + s->start = 0;
> + }
> + return s->start;
> + }
> +
> + return -1;
> +}
> +
> +static DumpState *dump_init(bool paging, bool has_filter, int64_t begin,
> + int64_t length, Error **errp)
> {
> CPUState *env;
> DumpState *s = dump_get_current();
> @@ -581,6 +685,15 @@ static DumpState *dump_init(bool paging, Error **errp)
> s->error = NULL;
> }
>
> + s->has_filter = has_filter;
> + s->begin = begin;
> + s->length = length;
> + s->start = get_start_block(s);
> + if (s->start == -1) {
> + error_set(errp, QERR_INVALID_PARAMETER, "begin");
This will leave the VM stopped.
> + return NULL;
> + }
> +
> /*
> * get dump info: endian, class and architecture.
> * If the target architecture is not supported, cpu_get_dump_info() will
> @@ -607,6 +720,10 @@ static DumpState *dump_init(bool paging, Error **errp)
> qemu_get_guest_simple_memory_mapping(&s->list);
> }
>
> + if (s->has_filter) {
> + memory_mapping_filter(&s->list, s->begin, s->length);
> + }
> +
> /*
> * calculate phdr_num
> *
> @@ -659,9 +776,10 @@ static void fd_cleanup(void *opaque)
> }
> }
>
> -static DumpState *dump_init_fd(int fd, bool paging, Error **errp)
> +static DumpState *dump_init_fd(int fd, bool paging, bool has_filter,
> + int64_t begin, int64_t length, Error **errp)
> {
> - DumpState *s = dump_init(paging, errp);
> + DumpState *s = dump_init(paging, has_filter, begin, length, errp);
>
> if (s == NULL) {
> return NULL;
> @@ -674,12 +792,22 @@ static DumpState *dump_init_fd(int fd, bool paging, Error **errp)
> return s;
> }
>
> -void qmp_dump(bool paging, const char *file, Error **errp)
> +void qmp_dump(bool paging, const char *file, bool has_begin,
> + int64_t begin, bool has_length, int64_t length, Error **errp)
> {
> const char *p;
> int fd = -1;
> DumpState *s;
>
> + if (has_begin && !has_length) {
> + error_set(errp, QERR_MISSING_PARAMETER, "length");
> + return;
> + }
> + if (!has_begin && has_length) {
> + error_set(errp, QERR_MISSING_PARAMETER, "begin");
> + return;
> + }
> +
> #if !defined(WIN32)
> if (strstart(file, "fd:", &p)) {
> fd = qemu_get_fd(p);
> @@ -703,7 +831,7 @@ void qmp_dump(bool paging, const char *file, Error **errp)
> return;
> }
>
> - s = dump_init_fd(fd, paging, errp);
> + s = dump_init_fd(fd, paging, has_begin, begin, length, errp);
> if (!s) {
> return;
> }
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index abd412e..af0f112 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -883,21 +883,27 @@ ETEXI
> #if defined(CONFIG_HAVE_CORE_DUMP)
> {
> .name = "dump",
> - .args_type = "paging:-p,file:s",
> - .params = "[-p] file",
> - .help = "dump to file",
> + .args_type = "paging:-p,file:s,begin:i?,length:i?",
> + .params = "[-p] file [begin] [length]",
> + .help = "dump to file"
> + "\n\t\t\t begin(optional): the starting physical address"
> + "\n\t\t\t length(optional): the memory size, in bytes",
> .user_print = monitor_user_noop,
> .mhandler.cmd = hmp_dump,
> },
>
>
> STEXI
> -@item dump [-p] @var{file}
> +@item dump [-p] @var{file} @var{begin} @var{length}
> @findex dump
> Dump to @var{file}. The file can be processed with crash or gdb.
> file: destination file(started with "file:") or destination file descriptor
> (started with "fd:")
> paging: do paging to get guest's memory mapping
> + begin: the starting physical address. It's optional, and should be specified
> + with length together.
> + length: the memory size, in bytes. It's optional, and should be specified with
> + begin together.
> ETEXI
> #endif
>
> diff --git a/hmp.c b/hmp.c
> index d0aa94b..b399119 100644
> --- a/hmp.c
> +++ b/hmp.c
> @@ -866,8 +866,19 @@ void hmp_dump(Monitor *mon, const QDict *qdict)
> Error *errp = NULL;
> int paging = qdict_get_try_bool(qdict, "paging", 0);
> const char *file = qdict_get_str(qdict, "file");
> + bool has_begin = qdict_haskey(qdict, "begin");
> + bool has_length = qdict_haskey(qdict, "length");
> + int64_t begin = 0;
> + int64_t length = 0;
>
> - qmp_dump(!!paging, file, &errp);
> + if (has_begin) {
> + begin = qdict_get_int(qdict, "begin");
> + }
> + if (has_length) {
> + length = qdict_get_int(qdict, "length");
> + }
> +
> + qmp_dump(!!paging, file, has_begin, begin, has_length, length, &errp);
> hmp_handle_error(mon, &errp);
> }
>
> diff --git a/memory_mapping.c b/memory_mapping.c
> index 8dd0750..f2b8252 100644
> --- a/memory_mapping.c
> +++ b/memory_mapping.c
> @@ -209,3 +209,30 @@ void qemu_get_guest_simple_memory_mapping(MemoryMappingList *list)
> create_new_memory_mapping(list, block->offset, 0, block->length);
> }
> }
> +
> +void memory_mapping_filter(MemoryMappingList *list, int64_t begin,
> + int64_t length)
> +{
> + MemoryMapping *cur, *next;
> +
> + QTAILQ_FOREACH_SAFE(cur, &list->head, next, next) {
> + if (cur->phys_addr >= begin + length ||
> + cur->phys_addr + cur->length <= begin) {
> + QTAILQ_REMOVE(&list->head, cur, next);
> + list->num--;
> + continue;
> + }
> +
> + if (cur->phys_addr < begin) {
> + cur->length -= begin - cur->phys_addr;
> + if (cur->virt_addr) {
> + cur->virt_addr += begin - cur->phys_addr;
> + }
> + cur->phys_addr = begin;
> + }
> +
> + if (cur->phys_addr + cur->length > begin + length) {
> + cur->length -= cur->phys_addr + cur->length - begin - length;
> + }
> + }
> +}
> diff --git a/memory_mapping.h b/memory_mapping.h
> index 50b1f25..c5004ed 100644
> --- a/memory_mapping.h
> +++ b/memory_mapping.h
> @@ -55,4 +55,7 @@ int qemu_get_guest_memory_mapping(MemoryMappingList *list);
> /* get guest's memory mapping without do paging(virtual address is 0). */
> void qemu_get_guest_simple_memory_mapping(MemoryMappingList *list);
>
> +void memory_mapping_filter(MemoryMappingList *list, int64_t begin,
> + int64_t length);
> +
> #endif
> diff --git a/qapi-schema.json b/qapi-schema.json
> index 9728f5f..cc42fa0 100644
> --- a/qapi-schema.json
> +++ b/qapi-schema.json
> @@ -1671,12 +1671,16 @@
> #
> # @paging: if true, do paging to get guest's memory mapping
> # @file: the filename or file descriptor of the vmcore.
> +# @begin: if specified, the starting physical address.
> +# @length: if specified, the memory size, in bytes.
'begin' and 'length' are optional and have to be marked as such (look
in qapi-schema.json for examples).
Also, I would squash this patch into 11/14.
> #
> # Returns: nothing on success
> #
> # Since: 1.1
> ##
> -{ 'command': 'dump', 'data': { 'paging': 'bool', 'file': 'str' } }
> +{ 'command': 'dump',
> + 'data': { 'paging': 'bool', 'file': 'str', '*begin': 'int',
> + '*length': 'int' } }
>
> ##
> # @dump_cancel
> diff --git a/qmp-commands.hx b/qmp-commands.hx
> index 036e111..738fff8 100644
> --- a/qmp-commands.hx
> +++ b/qmp-commands.hx
> @@ -589,8 +589,8 @@ EQMP
> #if defined(CONFIG_HAVE_CORE_DUMP)
> {
> .name = "dump",
> - .args_type = "paging:-p,file:s",
> - .params = "[-p] file",
> + .args_type = "paging:-p,file:s,begin:i?,end:i?",
> + .params = "[-p] file [begin] [length]",
> .help = "dump to file",
> .user_print = monitor_user_noop,
> .mhandler.cmd_new = qmp_marshal_input_dump,
> @@ -607,6 +607,10 @@ Arguments:
> - "paging": do paging to get guest's memory mapping (json-bool)
> - "file": destination file(started with "file:") or destination file descriptor
> (started with "fd:") (json-string)
> +- "begin": the starting physical address. It's optional, and should be specified
> + with length together (json-int)
> +- "length": the memory size, in bytes. It's optional, and should be specified
> + with begin together (json-int)
>
> Example:
>
* Re: [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
` (13 preceding siblings ...)
2012-03-14 2:13 ` [Qemu-devel] [RFC][PATCH 14/14 v9] allow user to dump a fraction of the memory Wen Congyang
@ 2012-03-14 17:26 ` Luiz Capitulino
2012-03-14 17:37 ` Eric Blake
2012-03-14 17:49 ` Anthony Liguori
14 siblings, 2 replies; 40+ messages in thread
From: Luiz Capitulino @ 2012-03-14 17:26 UTC (permalink / raw)
To: Wen Congyang
Cc: Jan Kiszka, qemu-devel, HATAYAMA Daisuke, Dave Anderson,
Anthony Liguori, Eric Blake
On Wed, 14 Mar 2012 10:03:15 +0800
Wen Congyang <wency@cn.fujitsu.com> wrote:
> Changes from v8 to v9:
> 1. remove async support(it will be reimplemented after QAPI async commands support
> is finished)
I gave my review on this one (concentrating on the QMP part only), and one
important aspect of this command is that it's a long synchronous operation.
As it runs with vCPUs stopped, the only drawback is that libvirt won't be able
to run other commands in parallel and won't be able to cancel it either. Also
note that this command is most likely to be executed when the guest crashes.
Given all this, it's fine with me to have this as a synchronous command, but
I'd like to get an ACK from libvirt and another one from Jan and/or Anthony.
* Re: [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism
2012-03-14 17:26 ` [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Luiz Capitulino
@ 2012-03-14 17:37 ` Eric Blake
2012-03-14 17:49 ` Anthony Liguori
1 sibling, 0 replies; 40+ messages in thread
From: Eric Blake @ 2012-03-14 17:37 UTC (permalink / raw)
To: Luiz Capitulino
Cc: Jan Kiszka, qemu-devel, HATAYAMA Daisuke, Dave Anderson,
Anthony Liguori
On 03/14/2012 11:26 AM, Luiz Capitulino wrote:
> On Wed, 14 Mar 2012 10:03:15 +0800
> Wen Congyang <wency@cn.fujitsu.com> wrote:
>
>> Changes from v8 to v9:
>> 1. remove async support(it will be reimplemented after QAPI async commands support
>> is finished)
>
> I gave my review on this one (concentrating on the QMP part only), and one
> important aspect of this command is that it's a long synchronous operation.
>
> As it runs with vCPUs stopped, the only drawback is that libvirt won't be able
> to run other commands in parallel and won't be able to cancel it either. Also
> note that this command is most likely to be executed when the guest crashes.
>
> Given all this, it's fine with me to have this as a synchronous command, but
> I'd like to get an ACK from libvirt and another one from Jan and/or Anthony.
The inability to cancel is annoying (which is why I first suggested that
we add async handling and cancel back in v2 or so); but your arguments that:
1. async support will be coming in qemu 1.2, and polluting the waters
with ad-hoc async stuff in 1.1 will just make it harder
2. dump typically only gets run when a guest will no longer be running,
therefore having it be blocking is not the end of the world, as long as
the user doesn't mind the inability to cancel
are sufficient that I'm okay with a synchronous-only version for qemu
1.1. It won't be the first time libvirt has had a long-running command
it couldn't cancel (think 'savevm', for instance).
--
Eric Blake eblake@redhat.com +1-919-301-3266
Libvirt virtualization library http://libvirt.org
* Re: [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism
2012-03-14 17:26 ` [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Luiz Capitulino
2012-03-14 17:37 ` Eric Blake
@ 2012-03-14 17:49 ` Anthony Liguori
2012-03-14 18:03 ` Luiz Capitulino
1 sibling, 1 reply; 40+ messages in thread
From: Anthony Liguori @ 2012-03-14 17:49 UTC (permalink / raw)
To: Luiz Capitulino
Cc: Jan Kiszka, qemu-devel, HATAYAMA Daisuke, Dave Anderson,
Eric Blake
On 03/14/2012 12:26 PM, Luiz Capitulino wrote:
> On Wed, 14 Mar 2012 10:03:15 +0800
> Wen Congyang<wency@cn.fujitsu.com> wrote:
>
>> Changes from v8 to v9:
>> 1. remove async support(it will be reimplemented after QAPI async commands support
>> is finished)
>
> I gave my review on this one (concentrating on the QMP part only), and one
> important aspect of this command is that it's a long synchronous operation.
>
> As it runs with vCPUs stopped, the only drawback is that libvirt won't be able
> to run other commands in parallel and won't be able to cancel it either. Also
> note that this command is most likely to be executed when the guest crashes.
>
> Given all this, it's fine with me to have this as a synchronous command, but
> I'd like to get an ACK from libvirt and another one from Jan and/or Anthony.
Just a general comment.
Why is this an RFC if it's at v9? RFCs are never added to my review queue, so I
haven't looked at this series at all.
Can we start by posting a non-RFC so we can start discussing committing this?
Regards,
Anthony Liguori
* Re: [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism
2012-03-14 17:49 ` Anthony Liguori
@ 2012-03-14 18:03 ` Luiz Capitulino
0 siblings, 0 replies; 40+ messages in thread
From: Luiz Capitulino @ 2012-03-14 18:03 UTC (permalink / raw)
To: Anthony Liguori
Cc: Jan Kiszka, qemu-devel, HATAYAMA Daisuke, Dave Anderson,
Eric Blake
On Wed, 14 Mar 2012 12:49:59 -0500
Anthony Liguori <anthony@codemonkey.ws> wrote:
> On 03/14/2012 12:26 PM, Luiz Capitulino wrote:
> > On Wed, 14 Mar 2012 10:03:15 +0800
> > Wen Congyang<wency@cn.fujitsu.com> wrote:
> >
> >> Changes from v8 to v9:
> >> 1. remove async support(it will be reimplemented after QAPI async commands support
> >> is finished)
> >
> > I gave my review on this one (concentrating on the QMP part only), and one
> > important aspect of this command is that it's a long synchronous operation.
> >
> > As it runs with vCPUs stopped, the only drawback is that libvirt won't be able
> > to run other commands in parallel and won't be able to cancel it either. Also
> > note that this command is most likely to be executed when the guest crashes.
> >
> > Given all this, it's fine with me to have this as a synchronous command, but
> > I'd like to get an ACK from libvirt and another one from Jan and/or Anthony.
>
> Just a general comment.
>
> Why is this an RFC if it's at v9? RFC's are never added to my review queue so I
> haven't looked at this series at all.
>
> Can we start by posting a non-RFC because we start discussing committing this.
Ah, I noticed that too, but forgot to ask Wen to do that for the next
version. But the way I'm interpreting this is that it's a small mistake and he
intended to get the series reviewed and committed.
* Re: [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
2012-03-14 17:18 ` Luiz Capitulino
@ 2012-03-15 2:29 ` Wen Congyang
2012-03-15 14:25 ` Luiz Capitulino
` (2 subsequent siblings)
3 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-15 2:29 UTC (permalink / raw)
To: Luiz Capitulino, Jan Kiszka, Eric Blake, Anthony Liguori
Cc: HATAYAMA Daisuke, Dave Anderson, qemu-devel
At 03/15/2012 01:18 AM, Luiz Capitulino Wrote:
> On Wed, 14 Mar 2012 10:11:35 +0800
> Wen Congyang <wency@cn.fujitsu.com> wrote:
>
<cut>
>
> You just dropped a few asynchronous bits and resent this as a synchronous
> command, letting all the asynchronous infrastructure in. This is bad, as the
> command is more complex then it should be and doesn't make full use of the
> added infrastructure.
>
> For example, does the synchronous version really uses DumpState? If it doesn't,
> let's just drop it and everything else which is not necessary.
>
> *However*, note that while it's fine with me to have this as a synchronous
> command we need a few more ACKs (from libvirt and Anthony and/or Jan). So, I
> wouldn't go too far on making changes before we get those ACKs.
>
Hi, Anthony, Luiz, Eric, Jan
At 03/15/2012 01:49 AM, Anthony Liguori Wrote:
>
> Can we start by posting a non-RFC because we start discussing committing
> this.
At 03/15/2012 01:37 AM, Eric Blake Wrote:
> are sufficient that I'm okay with a synchronous-only version for qemu
> 1.1.
So I think Anthony and Eric may ACK it.
Jan reviewed the earlier versions and gave many comments, so I think he ACKs it as well.
Is it OK to post a non-RFC version?
Thanks
Wen Congyang
* Re: [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
2012-03-14 17:18 ` Luiz Capitulino
2012-03-15 2:29 ` Wen Congyang
@ 2012-03-15 14:25 ` Luiz Capitulino
2012-03-16 10:13 ` Wen Congyang
2012-03-19 2:28 ` Wen Congyang
3 siblings, 0 replies; 40+ messages in thread
From: Luiz Capitulino @ 2012-03-15 14:25 UTC (permalink / raw)
To: Wen Congyang
Cc: Jan Kiszka, HATAYAMA Daisuke, Dave Anderson, qemu-devel,
Eric Blake
On Wed, 14 Mar 2012 14:18:47 -0300
Luiz Capitulino <lcapitulino@redhat.com> wrote:
> > + ret = write(fd, buf, size);
> > + if (ret != size) {
> > + return -1;
> > + }
>
> I think you should use send_all() instead of plain write().
Missed the fact that we actually have qemu_write_full().
* Re: [Qemu-devel] [RFC][PATCH 07/14 v9] target-i386: Add API to write elf notes to core file
2012-03-14 2:08 ` [Qemu-devel] [RFC][PATCH 07/14 v9] target-i386: Add API to write elf notes to core file Wen Congyang
@ 2012-03-16 1:17 ` HATAYAMA Daisuke
0 siblings, 0 replies; 40+ messages in thread
From: HATAYAMA Daisuke @ 2012-03-16 1:17 UTC (permalink / raw)
To: wency; +Cc: jan.kiszka, anderson, qemu-devel, eblake, lcapitulino
From: Wen Congyang <wency@cn.fujitsu.com>
Subject: [RFC][PATCH 07/14 v9] target-i386: Add API to write elf notes to core file
Date: Wed, 14 Mar 2012 10:08:48 +0800
> + descsz = 336; /* sizeof(prstatus_t) is 336 on x86_64 box */
Please introduce prstatus_t for both the 32-bit and 64-bit versions. It's
more readable if the content of the note information is defined as a data type.
Unnecessary members don't need to be defined explicitly. The one in the
crash source code is a good example, where only the members being used
have meaningful types.
struct elf_prstatus_i386 {
char pad[72];
elf_gregset_i386_t pr_reg; /* GP registers */
__u32 pr_fpvalid; /* True if math co-processor being used. */
};
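And, for the 64-bit note, something along the same lines; the layout below is only an illustrative assumption derived from the 336-byte descsz quoted above and would need to be checked against the real elf_prstatus definition:

typedef unsigned long long elf_greg64_t;
typedef elf_greg64_t elf_gregset64_t[27];      /* x86_64 register set */

struct elf_prstatus_x86_64 {
    char pad[112];               /* signal/pid/time members we never fill in */
    elf_gregset64_t pr_reg;      /* GP registers */
    __u32 pr_fpvalid;            /* True if math co-processor being used. */
};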
> + descsz = 144; /* sizeof(prstatus_t) is 144 on x86 box */
Also.
> + descsz = 144; /* sizeof(prstatus_t) is 144 on x86 box */
Also.
Thanks.
HATAYAMA, Daisuke
* Re: [Qemu-devel] [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status to core file
2012-03-14 2:09 ` [Qemu-devel] [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status " Wen Congyang
@ 2012-03-16 1:48 ` HATAYAMA Daisuke
2012-03-16 6:50 ` Wen Congyang
0 siblings, 1 reply; 40+ messages in thread
From: HATAYAMA Daisuke @ 2012-03-16 1:48 UTC (permalink / raw)
To: wency; +Cc: jan.kiszka, anderson, qemu-devel, eblake, lcapitulino
From: Wen Congyang <wency@cn.fujitsu.com>
Subject: [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status to core file
Date: Wed, 14 Mar 2012 10:09:26 +0800
> + memset(note, 0, note_size);
> + if (type == 0) {
> + note32 = note;
> + note32->n_namesz = cpu_to_le32(name_size);
> + note32->n_descsz = cpu_to_le32(descsz);
> + note32->n_type = 0;
> + } else {
> + note64 = note;
> + note64->n_namesz = cpu_to_le32(name_size);
> + note64->n_descsz = cpu_to_le32(descsz);
> + note64->n_type = 0;
> + }
Why not give the new type for this note information an explicit name,
like NT_QEMUCPUSTATE? There might be other types in the future. Naming
it also has the merit that we can see all the existing notes relevant
to the qemu dump by looking at the names in a header file.
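A sketch of what that could look like; the value below is only a placeholder for illustration (the current patch writes a bare 0), and the name would have to be agreed on:

/* elf.h: note type used for the QEMU CPU state note (hypothetical name) */
#define NT_QEMUCPUSTATE 0

/* dump note code: use the named constant instead of a literal 0 */
    note32->n_type = cpu_to_le32(NT_QEMUCPUSTATE);   /* 32-bit branch */
    note64->n_type = cpu_to_le32(NT_QEMUCPUSTATE);   /* 64-bit branch */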
Thanks.
HATAYAMA, Daisuke
* Re: [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
2012-03-14 2:11 ` [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory Wen Congyang
2012-03-14 17:18 ` Luiz Capitulino
@ 2012-03-16 3:23 ` HATAYAMA Daisuke
2012-03-16 6:41 ` Wen Congyang
1 sibling, 1 reply; 40+ messages in thread
From: HATAYAMA Daisuke @ 2012-03-16 3:23 UTC (permalink / raw)
To: wency; +Cc: jan.kiszka, anderson, qemu-devel, eblake, lcapitulino
From: Wen Congyang <wency@cn.fujitsu.com>
Subject: [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
Date: Wed, 14 Mar 2012 10:11:35 +0800
> +/*
> + * QEMU dump
> + *
> + * Copyright Fujitsu, Corp. 2011
> + *
Now 2012.
> + /*
> + * calculate phdr_num
> + *
> + * the type of phdr->num is uint16_t, so we should avoid overflow
e_phnum is correct.
> + */
> + s->phdr_num = 1; /* PT_NOTE */
> + if (s->list.num < (1 << 16) - 2) {
s->list.num < UINT16_MAX is better.
> + s->phdr_num += s->list.num;
> + s->have_section = false;
> + } else {
> + s->have_section = true;
> + s->phdr_num = PN_XNUM;
> +
> + /* the type of shdr->sh_info is uint32_t, so we should avoid overflow */
> + if (s->list.num > (1ULL << 32) - 2) {
s->list.num < UINT32_MAX is better.
> + s->sh_info = 0xffffffff;
UINT32_MAX is better. This part looks a bit rough.
> + } else {
> + s->sh_info += s->list.num;
> + }
> + }
Right now the overflow and non-overflow cases for e_phnum and sh_info
are handled in different orders. It's better to keep them in the same
order; something like
if (phdr_num not overflow?) {
not overflow case;
} else {
overflow case;
if (sh_info not overflow?) {
not overflow case;
} else {
overflow case;
}
}
is better.
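Putting both suggestions together, a hedged sketch (UINT16_MAX and UINT32_MAX come from stdint.h; the sketch assumes sh_info is initialized to 1 for the PT_NOTE entry, which the current patch does not do explicitly):

    s->phdr_num = 1;                /* PT_NOTE */
    if (s->list.num < UINT16_MAX - 1) {
        /* everything fits into e_phnum */
        s->phdr_num += s->list.num;
        s->have_section = false;
    } else {
        /* e_phnum overflows: use PN_XNUM and put the real count in sh_info */
        s->have_section = true;
        s->phdr_num = PN_XNUM;
        s->sh_info = 1;             /* PT_NOTE */

        if (s->list.num < UINT32_MAX - 1) {
            s->sh_info += s->list.num;
        } else {
            s->sh_info = UINT32_MAX;
        }
    }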
Thanks.
HATAYAMA, Daisuke
* Re: [Qemu-devel] [RFC][PATCH 05/14 v9] Add API to get memory mapping
2012-03-14 2:07 ` [Qemu-devel] [RFC][PATCH 05/14 v9] Add API to get memory mapping Wen Congyang
@ 2012-03-16 3:52 ` HATAYAMA Daisuke
2012-03-16 6:50 ` Wen Congyang
2012-03-16 6:38 ` HATAYAMA Daisuke
1 sibling, 1 reply; 40+ messages in thread
From: HATAYAMA Daisuke @ 2012-03-16 3:52 UTC (permalink / raw)
To: wency; +Cc: jan.kiszka, anderson, qemu-devel, eblake, lcapitulino
From: Wen Congyang <wency@cn.fujitsu.com>
Subject: [RFC][PATCH 05/14 v9] Add API to get memory mapping
Date: Wed, 14 Mar 2012 10:07:48 +0800
> }
> +
> +int qemu_get_guest_memory_mapping(MemoryMappingList *list)
> +{
> + CPUState *env;
> + RAMBlock *block;
> + ram_addr_t offset, length;
> + int ret;
> + bool paging_mode;
> +
> +#if defined(CONFIG_HAVE_GET_MEMORY_MAPPING)
> + paging_mode = cpu_paging_enabled(first_cpu);
> + if (paging_mode) {
> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
> + ret = cpu_get_memory_mapping(list, env);
> + if (ret < 0) {
> + return -1;
> + }
> + }
> + return 0;
> + }
> +#else
> + return -2;
> +#endif
Wouldn't it be better to define the following somewhere else?
#ifndef CONFIG_HAVE_GET_MEMORY_MAPPING
static inline int qemu_get_guest_memory_mapping(MemoryMappingList *list)
{
return -2;
}
#endif
Thanks.
HATAYAMA, Daisuke
* Re: [Qemu-devel] [RFC][PATCH 05/14 v9] Add API to get memory mapping
2012-03-14 2:07 ` [Qemu-devel] [RFC][PATCH 05/14 v9] Add API to get memory mapping Wen Congyang
2012-03-16 3:52 ` HATAYAMA Daisuke
@ 2012-03-16 6:38 ` HATAYAMA Daisuke
2012-03-16 6:59 ` Wen Congyang
1 sibling, 1 reply; 40+ messages in thread
From: HATAYAMA Daisuke @ 2012-03-16 6:38 UTC (permalink / raw)
To: wency; +Cc: jan.kiszka, anderson, qemu-devel, eblake, lcapitulino
From: Wen Congyang <wency@cn.fujitsu.com>
Subject: [RFC][PATCH 05/14 v9] Add API to get memory mapping
Date: Wed, 14 Mar 2012 10:07:48 +0800
> Add API to get all virtual address and physical address mapping.
> If the guest doesn't use paging, the virtual address is equal to the physical
> address. The virtual address and physical address mapping is for gdb users, and
> it does not include memory that is not referenced by the page table. So if
> you want to use crash to analyze the vmcore, please do not specify the -p option.
It's necessary to state explicitly why the -p option is not the default:
a guest machine in a catastrophic state can have corrupted memory, which
we cannot trust.
Thanks.
HATAYAMA, Daisuke
* Re: [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
2012-03-16 3:23 ` HATAYAMA Daisuke
@ 2012-03-16 6:41 ` Wen Congyang
0 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-16 6:41 UTC (permalink / raw)
To: HATAYAMA Daisuke; +Cc: jan.kiszka, anderson, qemu-devel, eblake, lcapitulino
At 03/16/2012 11:23 AM, HATAYAMA Daisuke Wrote:
> From: Wen Congyang <wency@cn.fujitsu.com>
> Subject: [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
> Date: Wed, 14 Mar 2012 10:11:35 +0800
>
>> +/*
>> + * QEMU dump
>> + *
>> + * Copyright Fujitsu, Corp. 2011
>> + *
>
> Now 2012.
Oh, I forgot to update it.
>
>> + /*
>> + * calculate phdr_num
>> + *
>> + * the type of phdr->num is uint16_t, so we should avoid overflow
>
> e_phnum is correct.
Yes
>
>> + */
>> + s->phdr_num = 1; /* PT_NOTE */
>> + if (s->list.num < (1 << 16) - 2) {
>
> s->list.num < UINT16_MAX is better.
>
>> + s->phdr_num += s->list.num;
>> + s->have_section = false;
>> + } else {
>> + s->have_section = true;
>> + s->phdr_num = PN_XNUM;
>> +
>> + /* the type of shdr->sh_info is uint32_t, so we should avoid overflow */
>> + if (s->list.num > (1ULL << 32) - 2) {
>
> s->list.num < UINT32_MAX is better.
>
>> + s->sh_info = 0xffffffff;
>
> UINT32_MAX is better. Is it rough around here?
>
>> + } else {
>> + s->sh_info += s->list.num;
>> + }
>> + }
>
> Now orders of processings in positive and negative cases for e_phnum
> and sh_info are different. It's better to make them sorted in the same
> order.
>
> if (phdr_num not overflow?) {
> not overflow case;
> } else {
> overflow case;
> if (sh_info not overflow?) {
> not overflow case;
> } else {
> overflow case;
> }
> }
>
> is better.
OK
Thanks
Wen Congyang
>
> Thanks.
> HATAYAMA, Daisuke
>
>
* Re: [Qemu-devel] [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status to core file
2012-03-16 1:48 ` HATAYAMA Daisuke
@ 2012-03-16 6:50 ` Wen Congyang
2012-03-19 1:09 ` HATAYAMA Daisuke
0 siblings, 1 reply; 40+ messages in thread
From: Wen Congyang @ 2012-03-16 6:50 UTC (permalink / raw)
To: HATAYAMA Daisuke; +Cc: jan.kiszka, anderson, qemu-devel, eblake, lcapitulino
At 03/16/2012 09:48 AM, HATAYAMA Daisuke Wrote:
> From: Wen Congyang <wency@cn.fujitsu.com>
> Subject: [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status to core file
> Date: Wed, 14 Mar 2012 10:09:26 +0800
>
>> + memset(note, 0, note_size);
>> + if (type == 0) {
>> + note32 = note;
>> + note32->n_namesz = cpu_to_le32(name_size);
>> + note32->n_descsz = cpu_to_le32(descsz);
>> + note32->n_type = 0;
>> + } else {
>> + note64 = note;
>> + note64->n_namesz = cpu_to_le32(name_size);
>> + note64->n_descsz = cpu_to_le32(descsz);
>> + note64->n_type = 0;
>> + }
>
> Why not give new type for this note information an explicit name?
> Like NT_QEMUCPUSTATE? There might be another type in the future. This
> way there's also a merit that we can know all the existing notes
> relevant to qemu dump by looking at the names in a header file.
Hmm, how do I add a new type? Does someone manage this?
Thanks
Wen Congyang
>
> Thanks.
> HATAYAMA, Daisuke
>
>
* Re: [Qemu-devel] [RFC][PATCH 05/14 v9] Add API to get memory mapping
2012-03-16 3:52 ` HATAYAMA Daisuke
@ 2012-03-16 6:50 ` Wen Congyang
0 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-16 6:50 UTC (permalink / raw)
To: HATAYAMA Daisuke; +Cc: jan.kiszka, anderson, qemu-devel, eblake, lcapitulino
At 03/16/2012 11:52 AM, HATAYAMA Daisuke Wrote:
> From: Wen Congyang <wency@cn.fujitsu.com>
> Subject: [RFC][PATCH 05/14 v9] Add API to get memory mapping
> Date: Wed, 14 Mar 2012 10:07:48 +0800
>
>> }
>> +
>> +int qemu_get_guest_memory_mapping(MemoryMappingList *list)
>> +{
>> + CPUState *env;
>> + RAMBlock *block;
>> + ram_addr_t offset, length;
>> + int ret;
>> + bool paging_mode;
>> +
>> +#if defined(CONFIG_HAVE_GET_MEMORY_MAPPING)
>> + paging_mode = cpu_paging_enabled(first_cpu);
>> + if (paging_mode) {
>> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
>> + ret = cpu_get_memory_mapping(list, env);
>> + if (ret < 0) {
>> + return -1;
>> + }
>> + }
>> + return 0;
>> + }
>> +#else
>> + return -2;
>> +#endif
>
> Is it better to define the below somewhere else?
>
> #ifndef CONFIG_HAVE_GET_MEMORY_MAPPING
> static inline int qemu_get_guest_memory_mapping(MemoryMappingList *list)
> {
> return -2;
> }
> #endif
Yes
Thanks
Wen Congyang
>
> Thanks.
> HATAYAMA, Daisuke
>
>
* Re: [Qemu-devel] [RFC][PATCH 05/14 v9] Add API to get memory mapping
2012-03-16 6:38 ` HATAYAMA Daisuke
@ 2012-03-16 6:59 ` Wen Congyang
0 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-16 6:59 UTC (permalink / raw)
To: HATAYAMA Daisuke; +Cc: jan.kiszka, anderson, qemu-devel, eblake, lcapitulino
At 03/16/2012 02:38 PM, HATAYAMA Daisuke Wrote:
> From: Wen Congyang <wency@cn.fujitsu.com>
> Subject: [RFC][PATCH 05/14 v9] Add API to get memory mapping
> Date: Wed, 14 Mar 2012 10:07:48 +0800
>
>> Add API to get all virtual address and physical address mapping.
>> If the guest doesn't use paging, the virtual address is equal to the physical
>> address. The virtual address and physical address mapping is for gdb users, and
>> it does not include memory that is not referenced by the page table. So if
>> you want to use crash to analyze the vmcore, please do not specify the -p option.
>
> It's necessary to write the reason why the -p option is not default
> explicitly: guest machine in a catastrophic state can have corrupted
> memory, which we cannot trust.
Yes
Thanks
Wen Congyang
>
> Thanks.
> HATAYAMA, Daisuke
>
>
* Re: [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
2012-03-14 17:18 ` Luiz Capitulino
2012-03-15 2:29 ` Wen Congyang
2012-03-15 14:25 ` Luiz Capitulino
@ 2012-03-16 10:13 ` Wen Congyang
2012-03-19 2:28 ` Wen Congyang
3 siblings, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-16 10:13 UTC (permalink / raw)
To: Luiz Capitulino
Cc: Jan Kiszka, HATAYAMA Daisuke, Dave Anderson, qemu-devel,
Eric Blake
At 03/15/2012 01:18 AM, Luiz Capitulino Wrote:
> On Wed, 14 Mar 2012 10:11:35 +0800
> Wen Congyang <wency@cn.fujitsu.com> wrote:
>
>> The command's usage:
>> dump [-p] file
>> file should start with "file:" (the file's path) or "fd:" (the fd's name).
>>
>> Note:
>> 1. If you want to use gdb to analyse the core, please specify the -p option.
>> 2. This command doesn't support an fd that is associated with a pipe,
>> socket, or FIFO (lseek will fail with such an fd).
>>
>> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
>> ---
>> Makefile.target | 2 +-
>> dump.c | 714 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>> elf.h | 5 +
>> hmp-commands.hx | 21 ++
>> hmp.c | 10 +
>> hmp.h | 1 +
>> qapi-schema.json | 14 +
>> qmp-commands.hx | 34 +++
>> 8 files changed, 800 insertions(+), 1 deletions(-)
>> create mode 100644 dump.c
>>
>> diff --git a/Makefile.target b/Makefile.target
>> index c81c4fa..287fbe7 100644
>> --- a/Makefile.target
>> +++ b/Makefile.target
>> @@ -213,7 +213,7 @@ obj-$(CONFIG_NO_KVM) += kvm-stub.o
>> obj-$(CONFIG_VGA) += vga.o
>> obj-y += memory.o savevm.o
>> obj-y += memory_mapping.o
>> -obj-$(CONFIG_HAVE_CORE_DUMP) += arch_dump.o
>> +obj-$(CONFIG_HAVE_CORE_DUMP) += arch_dump.o dump.o
>> LIBS+=-lz
>>
>> obj-i386-$(CONFIG_KVM) += hyperv.o
>> diff --git a/dump.c b/dump.c
>> new file mode 100644
>> index 0000000..42e1681
>> --- /dev/null
>> +++ b/dump.c
>> @@ -0,0 +1,714 @@
>> +/*
>> + * QEMU dump
>> + *
>> + * Copyright Fujitsu, Corp. 2011
>> + *
>> + * Authors:
>> + * Wen Congyang <wency@cn.fujitsu.com>
>> + *
>> + * This work is licensed under the terms of the GNU GPL, version 2. See
>> + * the COPYING file in the top-level directory.
>> + *
>> + */
>> +
>> +#include "qemu-common.h"
>> +#include <unistd.h>
>> +#include "elf.h"
>> +#include <sys/procfs.h>
>> +#include <glib.h>
>> +#include "cpu.h"
>> +#include "cpu-all.h"
>> +#include "targphys.h"
>> +#include "monitor.h"
>> +#include "kvm.h"
>> +#include "dump.h"
>> +#include "sysemu.h"
>> +#include "bswap.h"
>> +#include "memory_mapping.h"
>> +#include "error.h"
>> +#include "qmp-commands.h"
>> +#include "gdbstub.h"
>> +
>> +static inline uint16_t cpu_convert_to_target16(uint16_t val, int endian)
>> +{
>> + if (endian == ELFDATA2LSB) {
>> + val = cpu_to_le16(val);
>> + } else {
>> + val = cpu_to_be16(val);
>> + }
>> +
>> + return val;
>> +}
>> +
>> +static inline uint32_t cpu_convert_to_target32(uint32_t val, int endian)
>> +{
>> + if (endian == ELFDATA2LSB) {
>> + val = cpu_to_le32(val);
>> + } else {
>> + val = cpu_to_be32(val);
>> + }
>> +
>> + return val;
>> +}
>> +
>> +static inline uint64_t cpu_convert_to_target64(uint64_t val, int endian)
>> +{
>> + if (endian == ELFDATA2LSB) {
>> + val = cpu_to_le64(val);
>> + } else {
>> + val = cpu_to_be64(val);
>> + }
>> +
>> + return val;
>> +}
>> +
>> +enum {
>> + DUMP_STATE_ERROR,
>> + DUMP_STATE_SETUP,
>> + DUMP_STATE_CANCELLED,
>> + DUMP_STATE_ACTIVE,
>> + DUMP_STATE_COMPLETED,
>> +};
>> +
>> +typedef struct DumpState {
>> + ArchDumpInfo dump_info;
>> + MemoryMappingList list;
>> + uint16_t phdr_num;
>> + uint32_t sh_info;
>> + bool have_section;
>> + int state;
>> + bool resume;
>> + char *error;
>> + target_phys_addr_t memory_offset;
>> + write_core_dump_function f;
>> + void (*cleanup)(void *opaque);
>> + void *opaque;
>> +} DumpState;
>> +
>> +static DumpState *dump_get_current(void)
>> +{
>> + static DumpState current_dump = {
>> + .state = DUMP_STATE_SETUP,
>> + };
>> +
>> + return &current_dump;
>> +}
>
> You just dropped a few asynchronous bits and resent this as a synchronous
> command, leaving all the asynchronous infrastructure in. This is bad, as the
> command is more complex than it should be and doesn't make full use of the
> added infrastructure.
>
> For example, does the synchronous version really use DumpState? If it doesn't,
> let's just drop it and everything else that is not necessary.
I use this struct to avoid passing too many parameters...
I will try to make it simple and clean.
Thanks
Wen Congyang
>
> *However*, note that while it's fine with me to have this as a synchronous
> command we need a few more ACKs (from libvirt and Anthony and/or Jan). So, I
> wouldn't go too far on making changes before we get those ACKs.
>
>> +
>> +static int dump_cleanup(DumpState *s)
>> +{
>> + int ret = 0;
>> +
>> + memory_mapping_list_free(&s->list);
>> + s->cleanup(s->opaque);
>> + if (s->resume) {
>> + vm_start();
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static void dump_error(DumpState *s, const char *reason)
>> +{
>> + s->state = DUMP_STATE_ERROR;
>> + s->error = g_strdup(reason);
>> + dump_cleanup(s);
>> +}
>> +
>> +static int write_elf64_header(DumpState *s)
>> +{
>> + Elf64_Ehdr elf_header;
>> + int ret;
>> + int endian = s->dump_info.d_endian;
>> +
>> + memset(&elf_header, 0, sizeof(Elf64_Ehdr));
>> + memcpy(&elf_header, ELFMAG, 4);
>> + elf_header.e_ident[EI_CLASS] = ELFCLASS64;
>> + elf_header.e_ident[EI_DATA] = s->dump_info.d_endian;
>> + elf_header.e_ident[EI_VERSION] = EV_CURRENT;
>> + elf_header.e_type = cpu_convert_to_target16(ET_CORE, endian);
>> + elf_header.e_machine = cpu_convert_to_target16(s->dump_info.d_machine,
>> + endian);
>> + elf_header.e_version = cpu_convert_to_target32(EV_CURRENT, endian);
>> + elf_header.e_ehsize = cpu_convert_to_target16(sizeof(elf_header), endian);
>> + elf_header.e_phoff = cpu_convert_to_target64(sizeof(Elf64_Ehdr), endian);
>> + elf_header.e_phentsize = cpu_convert_to_target16(sizeof(Elf64_Phdr),
>> + endian);
>> + elf_header.e_phnum = cpu_convert_to_target16(s->phdr_num, endian);
>> + if (s->have_section) {
>> + uint64_t shoff = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr) * s->sh_info;
>> +
>> + elf_header.e_shoff = cpu_convert_to_target64(shoff, endian);
>> + elf_header.e_shentsize = cpu_convert_to_target16(sizeof(Elf64_Shdr),
>> + endian);
>> + elf_header.e_shnum = cpu_convert_to_target16(1, endian);
>> + }
>> +
>> + ret = s->f(0, &elf_header, sizeof(elf_header), s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write elf header.\n");
>> + return -1;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int write_elf32_header(DumpState *s)
>> +{
>> + Elf32_Ehdr elf_header;
>> + int ret;
>> + int endian = s->dump_info.d_endian;
>> +
>> + memset(&elf_header, 0, sizeof(Elf32_Ehdr));
>> + memcpy(&elf_header, ELFMAG, 4);
>> + elf_header.e_ident[EI_CLASS] = ELFCLASS32;
>> + elf_header.e_ident[EI_DATA] = endian;
>> + elf_header.e_ident[EI_VERSION] = EV_CURRENT;
>> + elf_header.e_type = cpu_convert_to_target16(ET_CORE, endian);
>> + elf_header.e_machine = cpu_convert_to_target16(s->dump_info.d_machine,
>> + endian);
>> + elf_header.e_version = cpu_convert_to_target32(EV_CURRENT, endian);
>> + elf_header.e_ehsize = cpu_convert_to_target16(sizeof(elf_header), endian);
>> + elf_header.e_phoff = cpu_convert_to_target32(sizeof(Elf32_Ehdr), endian);
>> + elf_header.e_phentsize = cpu_convert_to_target16(sizeof(Elf32_Phdr),
>> + endian);
>> + elf_header.e_phnum = cpu_convert_to_target16(s->phdr_num, endian);
>> + if (s->have_section) {
>> + uint32_t shoff = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr) * s->sh_info;
>> +
>> + elf_header.e_shoff = cpu_convert_to_target32(shoff, endian);
>> + elf_header.e_shentsize = cpu_convert_to_target16(sizeof(Elf32_Shdr),
>> + endian);
>> + elf_header.e_shnum = cpu_convert_to_target16(1, endian);
>> + }
>> +
>> + ret = s->f(0, &elf_header, sizeof(elf_header), s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write elf header.\n");
>> + return -1;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int write_elf64_load(DumpState *s, MemoryMapping *memory_mapping,
>> + int phdr_index, target_phys_addr_t offset)
>> +{
>> + Elf64_Phdr phdr;
>> + off_t phdr_offset;
>> + int ret;
>> + int endian = s->dump_info.d_endian;
>> +
>> + memset(&phdr, 0, sizeof(Elf64_Phdr));
>> + phdr.p_type = cpu_convert_to_target32(PT_LOAD, endian);
>> + phdr.p_offset = cpu_convert_to_target64(offset, endian);
>> + phdr.p_paddr = cpu_convert_to_target64(memory_mapping->phys_addr, endian);
>> + if (offset == -1) {
>> + phdr.p_filesz = 0;
>> + } else {
>> + phdr.p_filesz = cpu_convert_to_target64(memory_mapping->length, endian);
>> + }
>> + phdr.p_memsz = cpu_convert_to_target64(memory_mapping->length, endian);
>> + phdr.p_vaddr = cpu_convert_to_target64(memory_mapping->virt_addr, endian);
>> +
>> + phdr_offset = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr)*phdr_index;
>> + ret = s->f(phdr_offset, &phdr, sizeof(Elf64_Phdr), s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write program header table.\n");
>> + return -1;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int write_elf32_load(DumpState *s, MemoryMapping *memory_mapping,
>> + int phdr_index, target_phys_addr_t offset)
>> +{
>> + Elf32_Phdr phdr;
>> + off_t phdr_offset;
>> + int ret;
>> + int endian = s->dump_info.d_endian;
>> +
>> + memset(&phdr, 0, sizeof(Elf32_Phdr));
>> + phdr.p_type = cpu_convert_to_target32(PT_LOAD, endian);
>> + phdr.p_offset = cpu_convert_to_target32(offset, endian);
>> + phdr.p_paddr = cpu_convert_to_target32(memory_mapping->phys_addr, endian);
>> + if (offset == -1) {
>> + phdr.p_filesz = 0;
>> + } else {
>> + phdr.p_filesz = cpu_convert_to_target32(memory_mapping->length, endian);
>> + }
>> + phdr.p_memsz = cpu_convert_to_target32(memory_mapping->length, endian);
>> + phdr.p_vaddr = cpu_convert_to_target32(memory_mapping->virt_addr, endian);
>> +
>> + phdr_offset = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr)*phdr_index;
>> + ret = s->f(phdr_offset, &phdr, sizeof(Elf32_Phdr), s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write program header table.\n");
>> + return -1;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int write_elf64_notes(DumpState *s, int phdr_index,
>> + target_phys_addr_t *offset)
>> +{
>> + CPUState *env;
>> + int ret;
>> + target_phys_addr_t begin = *offset;
>> + Elf64_Phdr phdr;
>> + off_t phdr_offset;
>> + int id;
>> + int endian = s->dump_info.d_endian;
>> +
>> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
>> + id = gdb_id(env);
>> + ret = cpu_write_elf64_note(s->f, env, id, offset, s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write elf notes.\n");
>> + return -1;
>> + }
>> + }
>> +
>> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
>> + ret = cpu_write_elf64_qemunote(s->f, env, offset, s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write CPU status.\n");
>> + return -1;
>> + }
>> + }
>> +
>> + memset(&phdr, 0, sizeof(Elf64_Phdr));
>> + phdr.p_type = cpu_convert_to_target32(PT_NOTE, endian);
>> + phdr.p_offset = cpu_convert_to_target64(begin, endian);
>> + phdr.p_paddr = 0;
>> + phdr.p_filesz = cpu_convert_to_target64(*offset - begin, endian);
>> + phdr.p_memsz = cpu_convert_to_target64(*offset - begin, endian);
>> + phdr.p_vaddr = 0;
>> +
>> + phdr_offset = sizeof(Elf64_Ehdr);
>> + ret = s->f(phdr_offset, &phdr, sizeof(Elf64_Phdr), s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write program header table.\n");
>> + return -1;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int write_elf32_notes(DumpState *s, int phdr_index,
>> + target_phys_addr_t *offset)
>> +{
>> + CPUState *env;
>> + int ret;
>> + target_phys_addr_t begin = *offset;
>> + Elf32_Phdr phdr;
>> + off_t phdr_offset;
>> + int id;
>> + int endian = s->dump_info.d_endian;
>> +
>> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
>> + id = gdb_id(env);
>> + ret = cpu_write_elf32_note(s->f, env, id, offset, s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write elf notes.\n");
>> + return -1;
>> + }
>> + }
>> +
>> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
>> + ret = cpu_write_elf32_qemunote(s->f, env, offset, s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write CPU status.\n");
>> + return -1;
>> + }
>> + }
>> +
>> + memset(&phdr, 0, sizeof(Elf32_Phdr));
>> + phdr.p_type = cpu_convert_to_target32(PT_NOTE, endian);
>> + phdr.p_offset = cpu_convert_to_target32(begin, endian);
>> + phdr.p_paddr = 0;
>> + phdr.p_filesz = cpu_convert_to_target32(*offset - begin, endian);
>> + phdr.p_memsz = cpu_convert_to_target32(*offset - begin, endian);
>> + phdr.p_vaddr = 0;
>> +
>> + phdr_offset = sizeof(Elf32_Ehdr);
>> + ret = s->f(phdr_offset, &phdr, sizeof(Elf32_Phdr), s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write program header table.\n");
>> + return -1;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int write_elf_section(DumpState *s, target_phys_addr_t *offset, int type)
>> +{
>> + Elf32_Shdr shdr32;
>> + Elf64_Shdr shdr64;
>> + int endian = s->dump_info.d_endian;
>> + int shdr_size;
>> + void *shdr;
>> + int ret;
>> +
>> + if (type == 0) {
>> + shdr_size = sizeof(Elf32_Shdr);
>> + memset(&shdr32, 0, shdr_size);
>> + shdr32.sh_info = cpu_convert_to_target32(s->sh_info, endian);
>> + shdr = &shdr32;
>> + } else {
>> + shdr_size = sizeof(Elf64_Shdr);
>> + memset(&shdr64, 0, shdr_size);
>> + shdr64.sh_info = cpu_convert_to_target32(s->sh_info, endian);
>> + shdr = &shdr64;
>> + }
>> +
>> + ret = s->f(*offset, &shdr, shdr_size, s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to write section header table.\n");
>> + return -1;
>> + }
>> +
>> + *offset += shdr_size;
>> + return 0;
>> +}
>> +
>> +static int write_data(DumpState *s, void *buf, int length,
>> + target_phys_addr_t *offset)
>> +{
>> + int ret;
>> +
>> + ret = s->f(*offset, buf, length, s->opaque);
>> + if (ret < 0) {
>> + dump_error(s, "dump: failed to save memory.\n");
>> + return -1;
>> + }
>> +
>> + *offset += length;
>> + return 0;
>> +}
>> +
>> +/* write the memory to vmcore. 1 page per I/O. */
>> +static int write_memory(DumpState *s, RAMBlock *block,
>> + target_phys_addr_t *offset)
>> +{
>> + int i, ret;
>> +
>> + for (i = 0; i < block->length / TARGET_PAGE_SIZE; i++) {
>> + ret = write_data(s, block->host + i * TARGET_PAGE_SIZE,
>> + TARGET_PAGE_SIZE, offset);
>> + if (ret < 0) {
>> + return -1;
>> + }
>> + }
>> +
>> + if ((block->length % TARGET_PAGE_SIZE) != 0) {
>> + ret = write_data(s, block->host + i * TARGET_PAGE_SIZE,
>> + block->length % TARGET_PAGE_SIZE, offset);
>> + if (ret < 0) {
>> + return -1;
>> + }
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +/* get the memory's offset in the vmcore */
>> +static target_phys_addr_t get_offset(target_phys_addr_t phys_addr,
>> + target_phys_addr_t memory_offset)
>> +{
>> + RAMBlock *block;
>> + target_phys_addr_t offset = memory_offset;
>> +
>> + QLIST_FOREACH(block, &ram_list.blocks, next) {
>> + if (phys_addr >= block->offset &&
>> + phys_addr < block->offset + block->length) {
>> + return phys_addr - block->offset + offset;
>> + }
>> + offset += block->length;
>> + }
>> +
>> + return -1;
>> +}
>> +
>> +/* write elf header, PT_NOTE and elf note to vmcore. */
>> +static int dump_begin(DumpState *s)
>> +{
>> + target_phys_addr_t offset;
>> + int ret;
>> +
>> + s->state = DUMP_STATE_ACTIVE;
>> +
>> + /*
>> + * the vmcore's format is:
>> + * --------------
>> + * | elf header |
>> + * --------------
>> + * | PT_NOTE |
>> + * --------------
>> + * | PT_LOAD |
>> + * --------------
>> + * | ...... |
>> + * --------------
>> + * | PT_LOAD |
>> + * --------------
>> + * | sec_hdr |
>> + * --------------
>> + * | elf note |
>> + * --------------
>> + * | memory |
>> + * --------------
>> + *
>> + * we only know where the memory is saved after we write elf note into
>> + * vmcore.
>> + */
>> +
>> + /* write elf header to vmcore */
>> + if (s->dump_info.d_class == ELFCLASS64) {
>> + ret = write_elf64_header(s);
>> + } else {
>> + ret = write_elf32_header(s);
>> + }
>> + if (ret < 0) {
>> + return -1;
>> + }
>> +
>> + /* write elf section and notes to vmcore */
>> + if (s->dump_info.d_class == ELFCLASS64) {
>> + if (s->have_section) {
>> + offset = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr)*s->sh_info;
>> + if (write_elf_section(s, &offset, 1) < 0) {
>> + return -1;
>> + }
>> + } else {
>> + offset = sizeof(Elf64_Ehdr) + sizeof(Elf64_Phdr)*s->phdr_num;
>> + }
>> + ret = write_elf64_notes(s, 0, &offset);
>> + } else {
>> + if (s->have_section) {
>> + offset = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr)*s->sh_info;
>> + if (write_elf_section(s, &offset, 0) < 0) {
>> + return -1;
>> + }
>> + } else {
>> + offset = sizeof(Elf32_Ehdr) + sizeof(Elf32_Phdr)*s->phdr_num;
>> + }
>> + ret = write_elf32_notes(s, 0, &offset);
>> + }
>> +
>> + if (ret < 0) {
>> + return -1;
>> + }
>> +
>> + s->memory_offset = offset;
>> + return 0;
>> +}
>> +
>> +/* write PT_LOAD to vmcore */
>> +static int dump_completed(DumpState *s)
>> +{
>> + target_phys_addr_t offset;
>> + MemoryMapping *memory_mapping;
>> + int phdr_index = 1, ret;
>> +
>> + QTAILQ_FOREACH(memory_mapping, &s->list.head, next) {
>> + offset = get_offset(memory_mapping->phys_addr, s->memory_offset);
>> + if (s->dump_info.d_class == ELFCLASS64) {
>> + ret = write_elf64_load(s, memory_mapping, phdr_index++, offset);
>> + } else {
>> + ret = write_elf32_load(s, memory_mapping, phdr_index++, offset);
>> + }
>> + if (ret < 0) {
>> + return -1;
>> + }
>> + }
>> +
>> + s->state = DUMP_STATE_COMPLETED;
>> + dump_cleanup(s);
>> + return 0;
>> +}
>> +
>> +/* write all memory to vmcore */
>> +static int dump_iterate(DumpState *s)
>> +{
>> + RAMBlock *block;
>> + target_phys_addr_t offset = s->memory_offset;
>> + int ret;
>> +
>> + /* write all memory to vmcore */
>> + QLIST_FOREACH(block, &ram_list.blocks, next) {
>> + ret = write_memory(s, block, &offset);
>> + if (ret < 0) {
>> + return -1;
>> + }
>> + }
>> +
>> + return dump_completed(s);
>> +}
>> +
>> +static int create_vmcore(DumpState *s)
>> +{
>> + int ret;
>> +
>> + ret = dump_begin(s);
>> + if (ret < 0) {
>> + return -1;
>> + }
>> +
>> + ret = dump_iterate(s);
>> + if (ret < 0) {
>> + return -1;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static DumpState *dump_init(bool paging, Error **errp)
>> +{
>> + CPUState *env;
>> + DumpState *s = dump_get_current();
>> + int ret;
>> +
>> + if (runstate_is_running()) {
>> + vm_stop(RUN_STATE_PAUSED);
>> + s->resume = true;
>
> Hmm, you actually stop the VM. Seems obvious now, but when people talked about
> making this asynchronous I automatically assumed that what we didn't want was
> having the global mutex held for too much time (ie. while this command was
> running).
>
> The only disadvantage of having this as a synchronous command is that libvirt
> won't be able to cancel it and won't be able to run other commands in parallel.
> Doesn't seem that serious to me.
>
> Btw, RUN_STATE_PAUSED is not a good one. Doesn't matter that much, as this
> is unlikely to be visible, but you should use RUN_STATE_SAVE_VM or
> RUN_STATE_DEBUG.
>
>> + } else {
>> + s->resume = false;
>> + }
>> + s->state = DUMP_STATE_SETUP;
>> + if (s->error) {
>> + g_free(s->error);
>> + s->error = NULL;
>> + }
>> +
>> + /*
>> + * get dump info: endian, class and architecture.
>> + * If the target architecture is not supported, cpu_get_dump_info() will
>> + * return -1.
>> + *
>> + * if we use kvm, we should synchronize the registers before we get dump
>> + * info.
>> + */
>> + for (env = first_cpu; env != NULL; env = env->next_cpu) {
>> + cpu_synchronize_state(env);
>> + }
>> +
>> + ret = cpu_get_dump_info(&s->dump_info);
>> + if (ret < 0) {
>> + error_set(errp, QERR_UNSUPPORTED);
>
> This will leave the VM paused.
>
>> + return NULL;
>> + }
>> +
>> + /* get memory mapping */
>> + memory_mapping_list_init(&s->list);
>> + if (paging) {
>> + qemu_get_guest_memory_mapping(&s->list);
>> + } else {
>> + qemu_get_guest_simple_memory_mapping(&s->list);
>> + }
>> +
>> + /*
>> + * calculate phdr_num
>> + *
>> + * the type of phdr->num is uint16_t, so we should avoid overflow
>> + */
>> + s->phdr_num = 1; /* PT_NOTE */
>> + if (s->list.num < (1 << 16) - 2) {
>> + s->phdr_num += s->list.num;
>> + s->have_section = false;
>> + } else {
>> + s->have_section = true;
>> + s->phdr_num = PN_XNUM;
>> +
>> + /* the type of shdr->sh_info is uint32_t, so we should avoid overflow */
>> + if (s->list.num > (1ULL << 32) - 2) {
>> + s->sh_info = 0xffffffff;
>> + } else {
>> + s->sh_info += s->list.num;
>> + }
>> + }
>> +
>> + return s;
>> +}
>> +
>> +static int fd_write_vmcore(target_phys_addr_t offset, void *buf, size_t size,
>> + void *opaque)
>> +{
>> + int fd = (int)(intptr_t)opaque;
>> + int ret;
>> +
>> + ret = lseek(fd, offset, SEEK_SET);
>> + if (ret < 0) {
>> + return -1;
>> + }
>> +
>> + ret = write(fd, buf, size);
>> + if (ret != size) {
>> + return -1;
>> + }
>
> I think you should use send_all() instead of plain write().
>
>> +
>> + return 0;
>> +}
>> +
>> +static void fd_cleanup(void *opaque)
>> +{
>> + int fd = (int)(intptr_t)opaque;
>> +
>> + if (fd != -1) {
>> + close(fd);
>> + }
>> +}
>> +
>> +static DumpState *dump_init_fd(int fd, bool paging, Error **errp)
>> +{
>> + DumpState *s = dump_init(paging, errp);
>> +
>> + if (s == NULL) {
>> + return NULL;
>> + }
>> +
>> + s->f = fd_write_vmcore;
>> + s->cleanup = fd_cleanup;
>> + s->opaque = (void *)(intptr_t)fd;
>
> Do we really need all these indirections?
>
>> +
>> + return s;
>> +}
>> +
>> +void qmp_dump(bool paging, const char *file, Error **errp)
>> +{
>> + const char *p;
>> + int fd = -1;
>> + DumpState *s;
>> +
>> +#if !defined(WIN32)
>> + if (strstart(file, "fd:", &p)) {
>> + fd = qemu_get_fd(p);
>
> qemu_get_fd() won't be merged, you should use monitor_get_fd(cur_mon, p);
>
>> + if (fd == -1) {
>> + error_set(errp, QERR_FD_NOT_FOUND, p);
>> + return;
>> + }
>> + }
>> +#endif
>> +
>> + if (strstart(file, "file:", &p)) {
>> + fd = open(p, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR);
>
> This is minor, but I'd use qemu_open() here.
>
>> + if (fd < 0) {
>> + error_set(errp, QERR_OPEN_FILE_FAILED, p);
>> + return;
>> + }
>> + }
>> +
>> + if (fd == -1) {
>> + error_set(errp, QERR_INVALID_PARAMETER, "file");
>> + return;
>> + }
>> +
>> + s = dump_init_fd(fd, paging, errp);
>> + if (!s) {
>> + return;
>> + }
>> +
>> + if (create_vmcore(s) < 0) {
>> + error_set(errp, QERR_IO_ERROR);
>> + }
>> +}
>> diff --git a/elf.h b/elf.h
>> index 2e05d34..6a10657 100644
>> --- a/elf.h
>> +++ b/elf.h
>> @@ -1000,6 +1000,11 @@ typedef struct elf64_sym {
>>
>> #define EI_NIDENT 16
>>
>> +/* Special value for e_phnum. This indicates that the real number of
>> + program headers is too large to fit into e_phnum. Instead the real
>> + value is in the field sh_info of section 0. */
>> +#define PN_XNUM 0xffff
>> +
>> typedef struct elf32_hdr{
>> unsigned char e_ident[EI_NIDENT];
>> Elf32_Half e_type;
>> diff --git a/hmp-commands.hx b/hmp-commands.hx
>> index 6980214..d4cf2e5 100644
>> --- a/hmp-commands.hx
>> +++ b/hmp-commands.hx
>> @@ -880,6 +880,27 @@ server will ask the spice/vnc client to automatically reconnect using the
>> new parameters (if specified) once the vm migration finished successfully.
>> ETEXI
>>
>> +#if defined(CONFIG_HAVE_CORE_DUMP)
>> + {
>> + .name = "dump",
>> + .args_type = "paging:-p,file:s",
>> + .params = "[-p] file",
>> + .help = "dump to file",
>> + .user_print = monitor_user_noop,
>> + .mhandler.cmd = hmp_dump,
>> + },
>> +
>> +
>> +STEXI
>> +@item dump [-p] @var{file}
>> +@findex dump
>> +Dump to @var{file}. The file can be processed with crash or gdb.
>> + file: destination file(started with "file:") or destination file descriptor
>> + (started with "fd:")
>> + paging: do paging to get guest's memory mapping
>> +ETEXI
>> +#endif
>> +
>> {
>> .name = "snapshot_blkdev",
>> .args_type = "reuse:-n,device:B,snapshot-file:s?,format:s?",
>> diff --git a/hmp.c b/hmp.c
>> index 290c43d..e13b793 100644
>> --- a/hmp.c
>> +++ b/hmp.c
>> @@ -860,3 +860,13 @@ void hmp_block_job_cancel(Monitor *mon, const QDict *qdict)
>>
>> hmp_handle_error(mon, &error);
>> }
>> +
>> +void hmp_dump(Monitor *mon, const QDict *qdict)
>> +{
>> + Error *errp = NULL;
>> + int paging = qdict_get_try_bool(qdict, "paging", 0);
>> + const char *file = qdict_get_str(qdict, "file");
>> +
>> + qmp_dump(!!paging, file, &errp);
>
> Why the double negation on 'paging'?
>
>> + hmp_handle_error(mon, &errp);
>> +}
>> diff --git a/hmp.h b/hmp.h
>> index 5409464..b055e50 100644
>> --- a/hmp.h
>> +++ b/hmp.h
>> @@ -59,5 +59,6 @@ void hmp_block_set_io_throttle(Monitor *mon, const QDict *qdict);
>> void hmp_block_stream(Monitor *mon, const QDict *qdict);
>> void hmp_block_job_set_speed(Monitor *mon, const QDict *qdict);
>> void hmp_block_job_cancel(Monitor *mon, const QDict *qdict);
>> +void hmp_dump(Monitor *mon, const QDict *qdict);
>>
>> #endif
>> diff --git a/qapi-schema.json b/qapi-schema.json
>> index 04fa84f..81b8c7c 100644
>> --- a/qapi-schema.json
>> +++ b/qapi-schema.json
>> @@ -1663,3 +1663,17 @@
>> { 'command': 'qom-list-types',
>> 'data': { '*implements': 'str', '*abstract': 'bool' },
>> 'returns': [ 'ObjectTypeInfo' ] }
>> +
>> +##
>> +# @dump
>
> 'dump' is too generic, please call this dump-guest-memory-vmcore or something
> more descriptive.
>
>> +#
>> +# Dump guest's memory to vmcore.
>> +#
>> +# @paging: if true, do paging to get guest's memory mapping
>> +# @file: the filename or file descriptor of the vmcore.
>
> 'file' is not a good name because it can also dump to an fd, maybe 'protocol'?
>
>> +#
>> +# Returns: nothing on success
>> +#
>> +# Since: 1.1
>> +##
>> +{ 'command': 'dump', 'data': { 'paging': 'bool', 'file': 'str' } }
>> diff --git a/qmp-commands.hx b/qmp-commands.hx
>> index dfe8a5b..9e39bd9 100644
>> --- a/qmp-commands.hx
>> +++ b/qmp-commands.hx
>> @@ -586,6 +586,40 @@ Example:
>>
>> EQMP
>>
>> +#if defined(CONFIG_HAVE_CORE_DUMP)
>> + {
>> + .name = "dump",
>> + .args_type = "paging:-p,file:s",
>> + .params = "[-p] file",
>> + .help = "dump to file",
>> + .user_print = monitor_user_noop,
>> + .mhandler.cmd_new = qmp_marshal_input_dump,
>> + },
>> +
>> +SQMP
>> +dump
>> +
>> +
>> +Dump to file. The file can be processed with crash or gdb.
>> +
>> +Arguments:
>> +
>> +- "paging": do paging to get guest's memory mapping (json-bool)
>> +- "file": destination file(started with "file:") or destination file descriptor
>> + (started with "fd:") (json-string)
>> +
>> +Example:
>> +
>> +-> { "execute": "dump", "arguments": { "file": "fd:dump" } }
>> +<- { "return": {} }
>> +
>> +Notes:
>> +
>> +(1) All boolean arguments default to false
>> +
>> +EQMP
>> +#endif
>> +
>> {
>> .name = "netdev_add",
>> .args_type = "netdev:O",
>
>
* Re: [Qemu-devel] [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status to core file
2012-03-16 6:50 ` Wen Congyang
@ 2012-03-19 1:09 ` HATAYAMA Daisuke
0 siblings, 0 replies; 40+ messages in thread
From: HATAYAMA Daisuke @ 2012-03-19 1:09 UTC (permalink / raw)
To: wency; +Cc: jan.kiszka, anderson, qemu-devel, eblake, lcapitulino
From: Wen Congyang <wency@cn.fujitsu.com>
Subject: Re: [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status to core file
Date: Fri, 16 Mar 2012 14:50:06 +0800
> At 03/16/2012 09:48 AM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang <wency@cn.fujitsu.com>
>> Subject: [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status to core file
>> Date: Wed, 14 Mar 2012 10:09:26 +0800
>>
>>> + memset(note, 0, note_size);
>>> + if (type == 0) {
>>> + note32 = note;
>>> + note32->n_namesz = cpu_to_le32(name_size);
>>> + note32->n_descsz = cpu_to_le32(descsz);
>>> + note32->n_type = 0;
>>> + } else {
>>> + note64 = note;
>>> + note64->n_namesz = cpu_to_le32(name_size);
>>> + note64->n_descsz = cpu_to_le32(descsz);
>>> + note64->n_type = 0;
>>> + }
>>
>> Why not give the new type for this note information an explicit name,
>> like NT_QEMUCPUSTATE? There might be another type in the future. This
>> way there's also the merit that we can see all the existing notes
>> relevant to qemu dump by looking at the names in a header file.
>
> Hmm, how do we add a new type? Does someone manage this?
>
Sorry. I overlooked this.
For the first question, just prepare a name like NT_QEMUCPUSTATE, and
put it in elf.h.
For the second question, we will use it, and anyone who later finds other
information worth recording as a note in qemu can extend the note
information. At least, crash needs to use the CPU state information,
and Jan says he wants to use this information in his gdb extension.
Also, you've introduced the new note name "QEMU". The same type under a
different name has a different meaning, so, in theory, you don't have to
worry about the new type colliding with anything else; they belong to
different namespaces.
But, in reality, looking at elfcore_grok_note() in gdb, which reads the note
information of a core file, it doesn't check the "CORE" name explicitly. It
treats everything as belonging to the "CORE" namespace unless it belongs to
some other namespace. And in the "CORE" namespace, indexing starts from 1
(NT_PRSTATUS); there is no type 0 in "CORE".
/* Values of note segment descriptor types for core files. */
#define NT_PRSTATUS 1 /* Contains copy of prstatus struct */
#define NT_FPREGSET 2 /* Contains copy of fpregset struct */
#define NT_PRPSINFO 3 /* Contains copy of prpsinfo struct */
#define NT_TASKSTRUCT 4 /* Contains copy of task struct */
#define NT_AUXV 6 /* Contains copy of Elfxx_auxv_t */
#define NT_PRXFPREG 0x46e62b7f /* Contains a user_xfpregs_struct; */
It appears to me that type 0 is reserved in order to avoid conflicts.
So, you don't have to fix gdb for now as long as you introduce only
NT_QEMUCPUSTATE and index it with type 0.
Considering the name "QEMU", the QEMU prefix in NT_QEMUCPUSTATE might be
redundant.
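As a rough sketch (the value and its placement are only illustrative, not part
of the posted series), that could look like:

    /* elf.h: note type used by the QEMU CPU state note */
    #define NT_QEMUCPUSTATE 0

    /* target-i386/arch_dump.c: use the named constant instead of a bare 0 */
    if (type == 0) {
        note32 = note;
        note32->n_namesz = cpu_to_le32(name_size);
        note32->n_descsz = cpu_to_le32(descsz);
        note32->n_type = cpu_to_le32(NT_QEMUCPUSTATE);
    } else {
        note64 = note;
        note64->n_namesz = cpu_to_le32(name_size);
        note64->n_descsz = cpu_to_le32(descsz);
        note64->n_type = cpu_to_le32(NT_QEMUCPUSTATE);
    }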
Thanks.
HATAYAMA, Daisuke
* Re: [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
2012-03-14 17:18 ` Luiz Capitulino
` (2 preceding siblings ...)
2012-03-16 10:13 ` Wen Congyang
@ 2012-03-19 2:28 ` Wen Congyang
2012-03-19 8:31 ` Wen Congyang
2012-03-19 13:16 ` Luiz Capitulino
3 siblings, 2 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-19 2:28 UTC (permalink / raw)
To: Luiz Capitulino
Cc: Jan Kiszka, HATAYAMA Daisuke, Dave Anderson, qemu-devel,
Eric Blake
At 03/15/2012 01:18 AM, Luiz Capitulino Wrote:
> On Wed, 14 Mar 2012 10:11:35 +0800
> Wen Congyang <wency@cn.fujitsu.com> wrote:
>
>> The command's usage:
>> dump [-p] file
>> file should start with "file:" (the file's path) or "fd:" (the fd's name).
>>
>> Note:
>> 1. If you want to use gdb to analyse the core, please specify the -p option.
>> 2. This command doesn't support an fd that is associated with a pipe,
>> socket, or FIFO (lseek will fail with such an fd).
>>
>> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
<cut>
>> +
>> +static DumpState *dump_init(bool paging, Error **errp)
>> +{
>> + CPUState *env;
>> + DumpState *s = dump_get_current();
>> + int ret;
>> +
>> + if (runstate_is_running()) {
>> + vm_stop(RUN_STATE_PAUSED);
>> + s->resume = true;
>
> Hmm, you actually stop the VM. Seems obvious now, but when people talked about
> making this asynchronous I automatically assumed that what we didn't want was
> having the global mutex held for too much time (ie. while this command was
> running).
Yes. In the earlier version, I added a vm state change handler: if the vm was
resumed by the user, the qemu dump was automatically cancelled.
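A minimal sketch of that earlier approach, assuming the
qemu_add_vm_change_state_handler() API of the time (the handler name is
illustrative):

    static void dump_vm_state_change(void *opaque, int running, RunState state)
    {
        DumpState *s = opaque;

        /* the user resumed the guest while dumping: cancel the dump */
        if (running && s->state == DUMP_STATE_ACTIVE) {
            s->state = DUMP_STATE_CANCELLED;
        }
    }

    /* registered once, e.g. in dump_init():
     *   qemu_add_vm_change_state_handler(dump_vm_state_change, s);
     */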
>
> The only disadvantage of having this as a synchronous command is that libvirt
> won't be able to cancel it and won't be able to run other commands in parallel.
> Doesn't seem that serious to me.
>
> Btw, RUN_STATE_PAUSED is not a good one. Doesn't matter that much, as this
> is unlikely to be visible, but you should use RUN_STATE_SAVE_VM or
> RUN_STATE_DEBUG.
OK, I will use RUN_STATE_SAVE_VM.
>
>> + } else {
<cut>
>> + ret = cpu_get_dump_info(&s->dump_info);
>> + if (ret < 0) {
>> + error_set(errp, QERR_UNSUPPORTED);
>
> This will leave the VM paused.
Hmm, in which function is the vm paused?
>
>> + return NULL;
<cut>
>> + ret = write(fd, buf, size);
>> + if (ret != size) {
>> + return -1;
>> + }
>
> I think you should use send_all() instead of plain write().
OK, I will use qemu_write_full(), which you mentioned in another mail.
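For reference, a sketch of how fd_write_vmcore() might then look, assuming
qemu_write_full() retries short writes and EINTR as usual:

    static int fd_write_vmcore(target_phys_addr_t offset, void *buf, size_t size,
                               void *opaque)
    {
        int fd = (int)(intptr_t)opaque;

        if (lseek(fd, offset, SEEK_SET) < 0) {
            return -1;
        }

        /* qemu_write_full() loops over short writes and EINTR */
        if (qemu_write_full(fd, buf, size) != size) {
            return -1;
        }

        return 0;
    }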
>
>> +
>> + return 0;
>> +}
<cut>
>> +
>> + s->f = fd_write_vmcore;
>> + s->cleanup = fd_cleanup;
>> + s->opaque = (void *)(intptr_t)fd;
>
> Do we really need all these indirections?
At 02/15/2012 01:31 AM, Jan Kiszka Wrote:
> Is writing to file descriptor generic enough? What if we want to dump
> via QMP, letting the receiver side decide about where to write it?
So I use these indirections.
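The indirection means another backend only has to plug in its own write and
cleanup callbacks. As a purely hypothetical example (not part of this series),
a backend that writes the vmcore into a caller-provided memory buffer instead
of an fd:

    typedef struct BufWriter {
        uint8_t *data;
        size_t size;
    } BufWriter;

    static int buf_write_vmcore(target_phys_addr_t offset, void *buf, size_t size,
                                void *opaque)
    {
        BufWriter *w = opaque;

        if (offset + size > w->size) {
            return -1;
        }
        memcpy(w->data + offset, buf, size);
        return 0;
    }

    static void buf_cleanup(void *opaque)
    {
        /* nothing to close; the buffer's owner frees it */
    }

    /* wired up exactly like the fd backend:
     *   s->f = buf_write_vmcore;
     *   s->cleanup = buf_cleanup;
     *   s->opaque = writer;
     */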
>
>> +
>> + return s;
>> +}
>> +
>> +void qmp_dump(bool paging, const char *file, Error **errp)
>> +{
>> + const char *p;
>> + int fd = -1;
>> + DumpState *s;
>> +
>> +#if !defined(WIN32)
>> + if (strstart(file, "fd:", &p)) {
>> + fd = qemu_get_fd(p);
>
> qemu_get_fd() won't be merged, you should use monitor_get_fd(cur_mon, p);
OK
>
>> + if (fd == -1) {
>> + error_set(errp, QERR_FD_NOT_FOUND, p);
>> + return;
>> + }
>> + }
>> +#endif
>> +
>> + if (strstart(file, "file:", &p)) {
>> + fd = open(p, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR);
>
> This is minor, but I'd use qemu_open() here.
OK
>
>> + if (fd < 0) {
<cut>
>> +
>> + qmp_dump(!!paging, file, &errp);
>
> Why the double negation on 'paging'?
OK, I will remove the double negation.
>
>> + hmp_handle_error(mon, &errp);
<cut>
>> +
>> +##
>> +# @dump
>
> 'dump' is too generic, please call this dump-guest-memory-vmcore or something
> more descriptive.
Hmm, dump-guest-memory-vmcore is too long. What about dump-guest-memory or
dump-memory?
>
>> +#
>> +# Dump guest's memory to vmcore.
>> +#
>> +# @paging: if true, do paging to get guest's memory mapping
>> +# @file: the filename or file descriptor of the vmcore.
>
> 'file' is not a good name because it can also dump to an fd, maybe 'protocol'?
OK
Thanks for your review.
Wen Congyang
* Re: [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
2012-03-19 2:28 ` Wen Congyang
@ 2012-03-19 8:31 ` Wen Congyang
2012-03-19 13:16 ` Luiz Capitulino
1 sibling, 0 replies; 40+ messages in thread
From: Wen Congyang @ 2012-03-19 8:31 UTC (permalink / raw)
To: Luiz Capitulino
Cc: Jan Kiszka, HATAYAMA Daisuke, Dave Anderson, qemu-devel,
Eric Blake
At 03/19/2012 10:28 AM, Wen Congyang Wrote:
> At 03/15/2012 01:18 AM, Luiz Capitulino Wrote:
>> On Wed, 14 Mar 2012 10:11:35 +0800
>> Wen Congyang <wency@cn.fujitsu.com> wrote:
>>
>>> The command's usage:
>>> dump [-p] file
>>> file should start with "file:" (the file's path) or "fd:" (the fd's name).
>>>
>>> Note:
>>> 1. If you want to use gdb to analyse the core, please specify the -p option.
>>> 2. This command doesn't support an fd that is associated with a pipe,
>>> socket, or FIFO (lseek will fail with such an fd).
>>>
>>> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
>
> <cut>
>
>>> +
>>> +static DumpState *dump_init(bool paging, Error **errp)
>>> +{
>>> + CPUState *env;
>>> + DumpState *s = dump_get_current();
>>> + int ret;
>>> +
>>> + if (runstate_is_running()) {
>>> + vm_stop(RUN_STATE_PAUSED);
>>> + s->resume = true;
>>
>> Hmm, you actually stop the VM. Seems obvious now, but when people talked about
>> making this asynchronous I automatically assumed that what we didn't want was
>> having the global mutex held for too much time (ie. while this command was
>> running).
>
> Yes. In the earlier version, I added a vm state change handler: if the vm was
> resumed by the user, the qemu dump was automatically cancelled.
>
>>
>> The only disadvantage of having this as a synchronous command is that libvirt
>> won't be able to cancel it and won't be able to run other commands in parallel.
>> Doesn't seem that serious to me.
>>
>> Btw, RUN_STATE_PAUSED is not a good one. Doesn't matter that much, as this
>> is unlikely to be visible, but you should use RUN_STATE_SAVE_VM or
>> RUN_STATE_DEBUG.
>
> OK, I will use RUN_STATE_SAVE_VM.
>
>>
>>> + } else {
>
> <cut>
>
>>> + ret = cpu_get_dump_info(&s->dump_info);
>>> + if (ret < 0) {
>>> + error_set(errp, QERR_UNSUPPORTED);
>>
>> This will leave the VM paused.
>
> Hmm, in which function is the vm paused?
Sorry, I misunderstood. I forgot to resume the vm before dump_init() returns.
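In other words, the error path needs to undo the vm_stop() before returning;
roughly (a sketch, not the final patch):

    ret = cpu_get_dump_info(&s->dump_info);
    if (ret < 0) {
        error_set(errp, QERR_UNSUPPORTED);
        /* don't leave the guest stopped on the error path */
        if (s->resume) {
            vm_start();
        }
        return NULL;
    }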
Thanks
Wen Congyang
>
>>
>>> + return NULL;
>
> <cut>
>
>>> + ret = write(fd, buf, size);
>>> + if (ret != size) {
>>> + return -1;
>>> + }
>>
>> I think you should use send_all() instead of plain write().
>
> OK, I will use qemu_write_full(), which you mentioned in another mail.
>
>>
>>> +
>>> + return 0;
>>> +}
>
> <cut>
>
>>> +
>>> + s->f = fd_write_vmcore;
>>> + s->cleanup = fd_cleanup;
>>> + s->opaque = (void *)(intptr_t)fd;
>>
>> Do we really need all these indirections?
>
> At 02/15/2012 01:31 AM, Jan Kiszka Wrote:
>> Is writing to file descriptor generic enough? What if we want to dump
>> via QMP, letting the receiver side decide about where to write it?
>
> So I use these indirections.
>
>>
>>> +
>>> + return s;
>>> +}
>>> +
>>> +void qmp_dump(bool paging, const char *file, Error **errp)
>>> +{
>>> + const char *p;
>>> + int fd = -1;
>>> + DumpState *s;
>>> +
>>> +#if !defined(WIN32)
>>> + if (strstart(file, "fd:", &p)) {
>>> + fd = qemu_get_fd(p);
>>
>> qemu_get_fd() won't be merged, you should use monitor_get_fd(cur_mon, p);
>
> OK
>
>>
>>> + if (fd == -1) {
>>> + error_set(errp, QERR_FD_NOT_FOUND, p);
>>> + return;
>>> + }
>>> + }
>>> +#endif
>>> +
>>> + if (strstart(file, "file:", &p)) {
>>> + fd = open(p, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR);
>>
>> This is minor, but I'd use qemu_open() here.
>
> OK
>
>>
>>> + if (fd < 0) {
>
> <cut>
>
>>> +
>>> + qmp_dump(!!paging, file, &errp);
>>
>> Why the double negation on 'paging'?
>
> OK, I will remove the double negation.
>
>>
>>> + hmp_handle_error(mon, &errp);
>
> <cut>
>
>>> +
>>> +##
>>> +# @dump
>>
>> 'dump' is too generic, please call this dump-guest-memory-vmcore or something
>> more descriptive.
>
> Hmm, dump-guest-memory-vmcore is too long. What about dump-guest-memory or
> dump-memory?
>
>>
>>> +#
>>> +# Dump guest's memory to vmcore.
>>> +#
>>> +# @paging: if true, do paging to get guest's memory mapping
>>> +# @file: the filename or file descriptor of the vmcore.
>>
>> 'file' is not a good name because it can also dump to an fd, maybe 'protocol'?
>
> OK
>
> Thanks for your review.
> Wen Congyang
>
>
* Re: [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory
2012-03-19 2:28 ` Wen Congyang
2012-03-19 8:31 ` Wen Congyang
@ 2012-03-19 13:16 ` Luiz Capitulino
1 sibling, 0 replies; 40+ messages in thread
From: Luiz Capitulino @ 2012-03-19 13:16 UTC (permalink / raw)
To: Wen Congyang
Cc: Jan Kiszka, HATAYAMA Daisuke, Dave Anderson, qemu-devel,
Eric Blake
On Mon, 19 Mar 2012 10:28:17 +0800
Wen Congyang <wency@cn.fujitsu.com> wrote:
> > 'dump' is too generic, please call this dump-guest-memory-vmcore or something
> > more descriptive.
>
> Hmm, dump-guest-memory-vmcore is too long. What about dump-guest-memory or
> dump-memory?
dump-guest-memory is acceptable, but we don't have a problem with long names.
We prefer a long but descriptive name to a short and ambiguous one.
Thread overview: 40+ messages (end of thread; newest: 2012-03-19 13:19 UTC)
2012-03-14 2:03 [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Wen Congyang
2012-03-14 2:05 ` [Qemu-devel] [RFC][PATCH 01/14 v9] Add API to create memory mapping list Wen Congyang
2012-03-14 2:06 ` [Qemu-devel] [RFC][PATCH 02/14 v9] Add API to check whether a physical address is I/O address Wen Congyang
2012-03-14 9:18 ` [Qemu-devel] [RESEND][PATCH " Wen Congyang
2012-03-14 2:06 ` [Qemu-devel] [RFC][PATCH 03/14 v9] implement cpu_get_memory_mapping() Wen Congyang
2012-03-14 2:07 ` [Qemu-devel] [RFC][PATCH 04/14 v9] Add API to check whether paging mode is enabled Wen Congyang
2012-03-14 2:07 ` [Qemu-devel] [RFC][PATCH 05/14 v9] Add API to get memory mapping Wen Congyang
2012-03-16 3:52 ` HATAYAMA Daisuke
2012-03-16 6:50 ` Wen Congyang
2012-03-16 6:38 ` HATAYAMA Daisuke
2012-03-16 6:59 ` Wen Congyang
2012-03-14 2:08 ` [Qemu-devel] [RFC][PATCH 06/14 v9] Add API to get memory mapping without do paging Wen Congyang
2012-03-14 2:08 ` [Qemu-devel] [RFC][PATCH 07/14 v9] target-i386: Add API to write elf notes to core file Wen Congyang
2012-03-16 1:17 ` HATAYAMA Daisuke
2012-03-14 2:09 ` [Qemu-devel] [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status " Wen Congyang
2012-03-16 1:48 ` HATAYAMA Daisuke
2012-03-16 6:50 ` Wen Congyang
2012-03-19 1:09 ` HATAYAMA Daisuke
2012-03-14 2:09 ` [Qemu-devel] [RFC][PATCH 09/14 v9] target-i386: add API to get dump info Wen Congyang
2012-03-14 2:10 ` [Qemu-devel] [RFC][PATCH 10/14 v9] make gdb_id() generally avialable Wen Congyang
2012-03-14 2:11 ` [Qemu-devel] [RFC][PATCH 11/14 v9] introduce a new monitor command 'dump' to dump guest's memory Wen Congyang
2012-03-14 17:18 ` Luiz Capitulino
2012-03-15 2:29 ` Wen Congyang
2012-03-15 14:25 ` Luiz Capitulino
2012-03-16 10:13 ` Wen Congyang
2012-03-19 2:28 ` Wen Congyang
2012-03-19 8:31 ` Wen Congyang
2012-03-19 13:16 ` Luiz Capitulino
2012-03-16 3:23 ` HATAYAMA Daisuke
2012-03-16 6:41 ` Wen Congyang
2012-03-14 2:12 ` [Qemu-devel] [RFC][PATCH 12/14 v9] support to cancel the current dumping Wen Congyang
2012-03-14 17:19 ` Luiz Capitulino
2012-03-14 2:13 ` [Qemu-devel] [RFC][PATCH 13/14 v9] support to query dumping status Wen Congyang
2012-03-14 17:19 ` Luiz Capitulino
2012-03-14 2:13 ` [Qemu-devel] [RFC][PATCH 14/14 v9] allow user to dump a fraction of the memory Wen Congyang
2012-03-14 17:20 ` Luiz Capitulino
2012-03-14 17:26 ` [Qemu-devel] [RFC][PATCH 00/14 v9] introducing a new, dedicated memory dump mechanism Luiz Capitulino
2012-03-14 17:37 ` Eric Blake
2012-03-14 17:49 ` Anthony Liguori
2012-03-14 18:03 ` Luiz Capitulino