* [Qemu-devel] [PATCH 0/5] Replace tcg memory functions
From: Glauber Costa @ 2008-12-17 20:46 UTC
To: qemu-devel; +Cc: Ian.Jackson, avi, kvm, stefano.stabellini

Hi,

This series replaces some of the tcg memory handling functions, like
cpu_physical_memory_rw and cpu_register_physical_memory, with kvm
versions.

I believe this approach pays off because it reduces the dependency that
kvm, xen, etc. have on the tcg part of qemu. My mid-term goal is to be
able to easily compile tcg out.

Right now I'm using constructions that depend on kvm_enabled() to test
for the presence/absence of kvm support, but the goal is to bring
QEMUAccel back in the near future, using these hooks instead of the
lower-level ones I was proposing a while ago.

I hope you like it.

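For illustration only, not code from this series: the QEMUAccel-style
direction the cover letter mentions could look roughly like the sketch
below, where each accelerator (tcg, kvm, xen, ...) fills in a hook table
once at startup instead of the code testing kvm_enabled() on every call.
The struct name and fields here are hypothetical.

    /* Hypothetical sketch: an accelerator hook table selected once at
     * startup, so the generic code dispatches through function pointers
     * rather than checking kvm_enabled() at each call site. */
    typedef struct QEMUAccelOps {
        void (*physical_memory_rw)(target_phys_addr_t addr, uint8_t *buf,
                                   int len, int is_write);
        void (*register_physical_memory)(target_phys_addr_t start_addr,
                                         ram_addr_t size,
                                         ram_addr_t phys_offset);
    } QEMUAccelOps;

    static const QEMUAccelOps *accel_ops;   /* chosen once at startup */

    void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                                int len, int is_write)
    {
        /* no per-call accelerator test; the hook already points at the
         * kvm, tcg or xen implementation */
        accel_ops->physical_memory_rw(addr, buf, len, is_write);
    }
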
* [Qemu-devel] [PATCH 1/5] re-register whole area upon lfb unmap.
From: Glauber Costa @ 2008-12-17 20:46 UTC
To: qemu-devel; +Cc: Ian.Jackson, avi, kvm, stefano.stabellini

set phys_offset correctly for the whole vga area when unmapping linear vram
(for vga optimization). We first register the old pieces as unassigned
memory, to make things easier for kvm (and possibly other slot based
implementations in the future). Replacing the region directly would
make the slot management significantly more complex.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 hw/cirrus_vga.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/hw/cirrus_vga.c b/hw/cirrus_vga.c
index 83c5f40..6e81906 100644
--- a/hw/cirrus_vga.c
+++ b/hw/cirrus_vga.c
@@ -2658,8 +2658,10 @@ static void map_linear_vram(CirrusVGAState *s)
         vga_dirty_log_start((VGAState *)s);
     }
     else {
-        cpu_register_physical_memory(isa_mem_base + 0xa0000, 0x8000, s->vga_io_memory);
-        cpu_register_physical_memory(isa_mem_base + 0xa8000, 0x8000, s->vga_io_memory);
+        cpu_register_physical_memory(isa_mem_base + 0xa0000, 0x8000, IO_MEM_UNASSIGNED);
+        cpu_register_physical_memory(isa_mem_base + 0xa8000, 0x8000, IO_MEM_UNASSIGNED);
+
+        cpu_register_physical_memory(isa_mem_base + 0xa0000, 0x20000, s->vga_io_memory);
     }
 
 }
-- 
1.5.6.5

* [Qemu-devel] [PATCH 2/5] isolate io handling routing
From: Glauber Costa @ 2008-12-17 20:46 UTC
To: qemu-devel; +Cc: Ian.Jackson, avi, kvm, stefano.stabellini

introduce cpu_physical_memory_do_io, which handles
the mmio part of cpu_physical_memory_rw. KVM can use
it to do mmio, since mmio is essentially the same for
both KVM and tcg.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 cpu-all.h |    2 +
 exec.c    |   89 ++++++++++++++++++++++++++++++++++--------------------------
 2 files changed, 52 insertions(+), 39 deletions(-)

diff --git a/cpu-all.h b/cpu-all.h
index 648264c..d46da05 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -910,6 +910,8 @@ int cpu_register_io_memory(int io_index,
 CPUWriteMemoryFunc **cpu_get_io_memory_write(int io_index);
 CPUReadMemoryFunc **cpu_get_io_memory_read(int io_index);
 
+int cpu_physical_memory_do_io(target_phys_addr_t addr, uint8_t *buf, int l,
+                              int is_write, unsigned long pd);
 void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                             int len, int is_write);
 static inline void cpu_physical_memory_read(target_phys_addr_t addr,
diff --git a/exec.c b/exec.c
index 44f6a42..04eadfe 100644
--- a/exec.c
+++ b/exec.c
@@ -2891,12 +2891,58 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
 }
 
 #else
+int cpu_physical_memory_do_io(target_phys_addr_t addr, uint8_t *buf, int l, int is_write, unsigned long pd)
+{
+    int io_index;
+    uint32_t val;
+
+    io_index = (pd >> IO_MEM_SHIFT) & (IO_MEM_NB_ENTRIES - 1);
+    if (is_write) {
+        /* XXX: could force cpu_single_env to NULL to avoid
+           potential bugs */
+        if (l >= 4 && ((addr & 3) == 0)) {
+            /* 32 bit write access */
+            val = ldl_p(buf);
+            io_mem_write[io_index][2](io_mem_opaque[io_index], addr, val);
+            l = 4;
+        } else if (l >= 2 && ((addr & 1) == 0)) {
+            /* 16 bit write access */
+            val = lduw_p(buf);
+            io_mem_write[io_index][1](io_mem_opaque[io_index], addr, val);
+            l = 2;
+        } else {
+            /* 8 bit write access */
+            val = ldub_p(buf);
+            io_mem_write[io_index][0](io_mem_opaque[io_index], addr, val);
+            l = 1;
+        }
+
+    } else {
+        if (l >= 4 && ((addr & 3) == 0)) {
+            /* 32 bit read access */
+            val = io_mem_read[io_index][2](io_mem_opaque[io_index], addr);
+            stl_p(buf, val);
+            l = 4;
+        } else if (l >= 2 && ((addr & 1) == 0)) {
+            /* 16 bit read access */
+            val = io_mem_read[io_index][1](io_mem_opaque[io_index], addr);
+            stw_p(buf, val);
+            l = 2;
+        } else {
+            /* 8 bit read access */
+            val = io_mem_read[io_index][0](io_mem_opaque[io_index], addr);
+            stb_p(buf, val);
+            l = 1;
+        }
+    }
+    return l;
+}
+
 void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                             int len, int is_write)
 {
-    int l, io_index;
+    int l;
     uint8_t *ptr;
-    uint32_t val;
     target_phys_addr_t page;
     unsigned long pd;
     PhysPageDesc *p;
@@ -2915,27 +2961,9 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
 
         if (is_write) {
             if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
-                io_index = (pd >> IO_MEM_SHIFT) & (IO_MEM_NB_ENTRIES - 1);
                 if (p)
                     addr = (addr & ~TARGET_PAGE_MASK) + p->region_offset;
-                /* XXX: could force cpu_single_env to NULL to avoid
-                   potential bugs */
-                if (l >= 4 && ((addr & 3) == 0)) {
-                    /* 32 bit write access */
-                    val = ldl_p(buf);
-                    io_mem_write[io_index][2](io_mem_opaque[io_index], addr, val);
-                    l = 4;
-                } else if (l >= 2 && ((addr & 1) == 0)) {
-                    /* 16 bit write access */
-                    val = lduw_p(buf);
-                    io_mem_write[io_index][1](io_mem_opaque[io_index], addr, val);
-                    l = 2;
-                } else {
-                    /* 8 bit write access */
-                    val = ldub_p(buf);
-                    io_mem_write[io_index][0](io_mem_opaque[io_index], addr, val);
-                    l = 1;
-                }
+                l = cpu_physical_memory_do_io(addr, buf, len, is_write, pd);
             } else {
                 unsigned long addr1;
                 addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
@@ -2953,26 +2981,9 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
         } else {
             if ((pd & ~TARGET_PAGE_MASK) > IO_MEM_ROM &&
                 !(pd & IO_MEM_ROMD)) {
-                /* I/O case */
-                io_index = (pd >> IO_MEM_SHIFT) & (IO_MEM_NB_ENTRIES - 1);
                 if (p)
                     addr = (addr & ~TARGET_PAGE_MASK) + p->region_offset;
-                if (l >= 4 && ((addr & 3) == 0)) {
-                    /* 32 bit read access */
-                    val = io_mem_read[io_index][2](io_mem_opaque[io_index], addr);
-                    stl_p(buf, val);
-                    l = 4;
-                } else if (l >= 2 && ((addr & 1) == 0)) {
-                    /* 16 bit read access */
-                    val = io_mem_read[io_index][1](io_mem_opaque[io_index], addr);
-                    stw_p(buf, val);
-                    l = 2;
-                } else {
-                    /* 8 bit read access */
-                    val = io_mem_read[io_index][0](io_mem_opaque[io_index], addr);
-                    stb_p(buf, val);
-                    l = 1;
-                }
+                l = cpu_physical_memory_do_io(addr, buf, len, is_write, pd);
             } else {
                 /* RAM case */
                 ptr = phys_ram_base + (pd & TARGET_PAGE_MASK) +
-- 
1.5.6.5

* [Qemu-devel] [PATCH 3/5] replace cpu_physical_memory_rw
From: Glauber Costa @ 2008-12-17 20:47 UTC
To: qemu-devel; +Cc: Ian.Jackson, avi, kvm, stefano.stabellini

This patch introduces a kvm version of cpu_physical_memory_rw.
The main motivation is to bypass the tcg version, which contains
tcg-specific code, as well as data structures not used by kvm,
such as l1_phys_map.

In this patch, I'm using a runtime selection of which function
to call, but the mid-term goal is to use function pointers in
a way very close to how QEMUAccel used to work.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 exec.c    |   13 +++++++++++--
 kvm-all.c |   39 +++++++++++++++++++++++++++++++++++----
 kvm.h     |    2 ++
 3 files changed, 48 insertions(+), 6 deletions(-)

diff --git a/exec.c b/exec.c
index 04eadfe..d5c88b1 100644
--- a/exec.c
+++ b/exec.c
@@ -2938,8 +2938,8 @@ int cpu_physical_memory_do_io(target_phys_addr_t addr, uint8_t *buf, int l, int
     return l;
 }
 
-void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
-                            int len, int is_write)
+static void tcg_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
+                                   int len, int is_write)
 {
     int l;
     uint8_t *ptr;
@@ -2997,6 +2997,15 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
     }
 }
 
+void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
+                            int len, int is_write)
+{
+    if (kvm_enabled())
+        kvm_cpu_physical_memory_rw(addr, buf, len, is_write);
+    else
+        tcg_physical_memory_rw(addr, buf, len, is_write);
+}
+
 /* used for ROM loading : can write in RAM and ROM */
 void cpu_physical_memory_write_rom(target_phys_addr_t addr,
                                    const uint8_t *buf, int len)
diff --git a/kvm-all.c b/kvm-all.c
index 11034df..15eb326 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -540,7 +540,7 @@ void kvm_set_phys_mem(target_phys_addr_t start_addr,
 
     mem = kvm_lookup_slot(s, start_addr);
     if (mem) {
-        if ((flags == IO_MEM_UNASSIGNED) || (flags >= TLB_MMIO)) {
+        if ((flags == IO_MEM_UNASSIGNED)) {
             mem->memory_size = 0;
             mem->start_addr = start_addr;
             mem->phys_offset = 0;
@@ -559,6 +559,8 @@ void kvm_set_phys_mem(target_phys_addr_t start_addr,
             mem->phys_offset)
             return;
 
+        if ((phys_offset & ~TARGET_PAGE_MASK) != 0)
+            return;
         /* unregister whole slot */
         memcpy(&slot, mem, sizeof(slot));
         mem->memory_size = 0;
@@ -588,16 +590,21 @@ void kvm_set_phys_mem(target_phys_addr_t start_addr,
         }
     }
 
     /* KVM does not need to know about this memory */
-    if (flags >= IO_MEM_UNASSIGNED)
+    if (flags == IO_MEM_UNASSIGNED)
        return;
 
-    mem = kvm_alloc_slot(s);
+    if (!mem)
+        mem = kvm_alloc_slot(s);
     mem->memory_size = size;
     mem->start_addr = start_addr;
     mem->phys_offset = phys_offset;
     mem->flags = 0;
 
-    kvm_set_user_memory_region(s, mem);
+    if (flags < TLB_MMIO)
+        kvm_set_user_memory_region(s, mem);
+    else{
+        mem->phys_offset = flags;
+    }
     /* FIXME deal with errors */
 }
@@ -663,3 +670,27 @@ int kvm_has_sync_mmu(void)
 
     return 0;
 }
+
+void kvm_cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
+                                int len, int is_write)
+{
+    KVMSlot *mem;
+    KVMState *s = kvm_state;
+    int l;
+
+    mem = kvm_lookup_slot(s, addr);
+    if (!mem)
+        return;
+
+    if ((mem->phys_offset & ~TARGET_PAGE_MASK) >= TLB_MMIO) {
+        l = 0;
+        while (len > l)
+            l += cpu_physical_memory_do_io(addr + l, buf + l, len - l, is_write, mem->phys_offset);
+    } else {
+        uint8_t *uaddr = phys_ram_base + mem->phys_offset + (addr - mem->start_addr);
+        if (!is_write)
+            memcpy(buf, uaddr, len);
+        else
+            memcpy(uaddr, buf, len);
+    }
+}
diff --git a/kvm.h b/kvm.h
index efce145..e3e9ca0 100644
--- a/kvm.h
+++ b/kvm.h
@@ -49,6 +49,8 @@ int kvm_has_sync_mmu(void);
 int kvm_coalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
 int kvm_uncoalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
 
+void kvm_cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
+                                int len, int is_write);
 /* internal API */
 
 struct KVMState;
-- 
1.5.6.5

* [Qemu-devel] [PATCH 4/5] hook cpu_register_physical_mem
From: Glauber Costa @ 2008-12-17 20:47 UTC
To: qemu-devel; +Cc: Ian.Jackson, avi, kvm, stefano.stabellini

Now that we have our own memory read/write function, we no longer
depend on tcg's data structures. So, instead of filling them up,
bypass them altogether by using kvm_set_phys_mem alone.

To do that, we now have to provide our own way to get page information
for a given address (kvm_get_physical_page_desc).

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 exec.c    |   44 +++++++++++++++++++++++++++++---------------
 kvm-all.c |   13 +++++++++++++
 kvm.h     |    2 ++
 3 files changed, 44 insertions(+), 15 deletions(-)

diff --git a/exec.c b/exec.c
index d5c88b1..fe0591b 100644
--- a/exec.c
+++ b/exec.c
@@ -2240,14 +2240,8 @@ static void *subpage_init (target_phys_addr_t base, ram_addr_t *phys,
     } \
 } while (0)
 
-/* register physical memory. 'size' must be a multiple of the target
-   page size. If (phys_offset & ~TARGET_PAGE_MASK) != 0, then it is an
-   io memory page. The address used when calling the IO function is
-   the offset from the start of the region, plus region_offset. Both
-   start_region and regon_offset are rounded down to a page boundary
-   before calculating this offset. This should not be a problem unless
-   the low bits of start_addr and region_offset differ. */
-void cpu_register_physical_memory_offset(target_phys_addr_t start_addr,
+
+static void tcg_register_physical_memory_offset(target_phys_addr_t start_addr,
                                          ram_addr_t size,
                                          ram_addr_t phys_offset,
                                          ram_addr_t region_offset)
@@ -2265,8 +2259,6 @@ void cpu_register_physical_memory_offset(target_phys_addr_t start_addr,
         kqemu_set_phys_mem(start_addr, size, phys_offset);
     }
 #endif
-    if (kvm_enabled())
-        kvm_set_phys_mem(start_addr, size, phys_offset);
 
     region_offset &= TARGET_PAGE_MASK;
     size = (size + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK;
@@ -2333,15 +2325,37 @@ void cpu_register_physical_memory_offset(target_phys_addr_t start_addr,
     }
 }
 
+/* register physical memory. 'size' must be a multiple of the target
+   page size. If (phys_offset & ~TARGET_PAGE_MASK) != 0, then it is an
+   io memory page. The address used when calling the IO function is
+   the offset from the start of the region, plus region_offset. Both
+   start_region and regon_offset are rounded down to a page boundary
+   before calculating this offset. This should not be a problem unless
+   the low bits of start_addr and region_offset differ. */
+void cpu_register_physical_memory_offset(target_phys_addr_t start_addr,
+                                         ram_addr_t size,
+                                         ram_addr_t phys_offset,
+                                         ram_addr_t region_offset)
+{
+    if (kvm_enabled())
+        kvm_set_phys_mem(start_addr, size, phys_offset);
+    else
+        tcg_register_physical_memory_offset(start_addr, size, phys_offset, region_offset);
+}
+
 /* XXX: temporary until new memory mapping API */
 ram_addr_t cpu_get_physical_page_desc(target_phys_addr_t addr)
 {
-    PhysPageDesc *p;
+    if (kvm_enabled()) {
+        return kvm_get_physical_page_desc(addr);
+    } else {
+        PhysPageDesc *p;
 
-    p = phys_page_find(addr >> TARGET_PAGE_BITS);
-    if (!p)
-        return IO_MEM_UNASSIGNED;
-    return p->phys_offset;
+        p = phys_page_find(addr >> TARGET_PAGE_BITS);
+        if (!p)
+            return IO_MEM_UNASSIGNED;
+        return p->phys_offset;
+    }
 }
 
 void qemu_register_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size)
diff --git a/kvm-all.c b/kvm-all.c
index 15eb326..b432e14 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -694,3 +694,16 @@ void kvm_cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
             memcpy(uaddr, buf, len);
     }
 }
+
+ram_addr_t kvm_get_physical_page_desc(target_phys_addr_t addr)
+{
+
+    KVMSlot *mem;
+    KVMState *s = kvm_state;
+    mem = kvm_lookup_slot(s, addr);
+
+    if (!mem)
+        return IO_MEM_UNASSIGNED;
+    else
+        return (addr - mem->start_addr) + mem->phys_offset;
+}
diff --git a/kvm.h b/kvm.h
index e3e9ca0..776cfcf 100644
--- a/kvm.h
+++ b/kvm.h
@@ -51,6 +51,8 @@ int kvm_uncoalesce_mmio_region(target_phys_addr_t start, ram_addr_t size);
 
 void kvm_cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                                 int len, int is_write);
+
+ram_addr_t kvm_get_physical_page_desc(target_phys_addr_t addr);
 /* internal API */
 
 struct KVMState;
-- 
1.5.6.5

* [Qemu-devel] [PATCH 5/5] cache slot lookup
From: Glauber Costa @ 2008-12-17 20:47 UTC
To: qemu-devel; +Cc: Ian.Jackson, avi, kvm, stefano.stabellini

record slot used in last lookup. For the common mmio case,
we'll usually access the same memory slot repeatedly.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 kvm-all.c |   11 ++++++++++-
 1 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index b432e14..51fc3ed 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -75,16 +75,25 @@ static KVMSlot *kvm_alloc_slot(KVMState *s)
     return NULL;
 }
 
+static KVMSlot *last_slot = NULL;
+
 static KVMSlot *kvm_lookup_slot(KVMState *s, target_phys_addr_t start_addr)
 {
     int i;
 
+
+    if (last_slot && (start_addr >= last_slot->start_addr &&
+        start_addr < (last_slot->start_addr + last_slot->memory_size)))
+        return last_slot;
+
     for (i = 0; i < ARRAY_SIZE(s->slots); i++) {
         KVMSlot *mem = &s->slots[i];
 
         if (start_addr >= mem->start_addr &&
-            start_addr < (mem->start_addr + mem->memory_size))
+            start_addr < (mem->start_addr + mem->memory_size)) {
+            last_slot = mem;
             return mem;
+        }
     }
 
     return NULL;
-- 
1.5.6.5

* [Qemu-devel] Re: [PATCH 5/5] cache slot lookup
From: Avi Kivity @ 2008-12-18 9:41 UTC
To: Glauber Costa; +Cc: Ian.Jackson, qemu-devel, kvm, stefano.stabellini

Glauber Costa wrote:
> record slot used in last lookup. For the common mmio case,
> we'll usually access the same memory slot repeatedly.
>
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -75,16 +75,25 @@ static KVMSlot *kvm_alloc_slot(KVMState *s)
>      return NULL;
>  }
>
> +static KVMSlot *last_slot = NULL;
> +
>  static KVMSlot *kvm_lookup_slot(KVMState *s, target_phys_addr_t start_addr)
>  {
>      int i;
>
> +
> +    if (last_slot && (start_addr >= last_slot->start_addr &&
> +        start_addr < (last_slot->start_addr + last_slot->memory_size)))
> +        return last_slot;
> +
>      for (i = 0; i < ARRAY_SIZE(s->slots); i++) {
>          KVMSlot *mem = &s->slots[i];
>
>          if (start_addr >= mem->start_addr &&
> -            start_addr < (mem->start_addr + mem->memory_size))
> +            start_addr < (mem->start_addr + mem->memory_size)) {
> +            last_slot = mem;
>              return mem;
> +        }
>      }
>

This wasn't introduced by this patch, but the comparison is broken on
i386 hosts, where target_phys_addr_t is 32 bits wide. mem->start_addr +
mem->memory_size can overflow (this in fact happens for the bios slot at
4G-128K).

-- 
error compiling committee.c: too many arguments to function

* [Qemu-devel] Re: [PATCH 5/5] cache slot lookup
From: Glauber Costa @ 2008-12-18 10:48 UTC
To: Avi Kivity; +Cc: Ian.Jackson, qemu-devel, kvm, stefano.stabellini

On Thu, Dec 18, 2008 at 11:41:24AM +0200, Avi Kivity wrote:
> Glauber Costa wrote:
>> record slot used in last lookup. For the common mmio case,
>> we'll usually access the same memory slot repeatedly.
>>
>> --- a/kvm-all.c
>> +++ b/kvm-all.c
>> @@ -75,16 +75,25 @@ static KVMSlot *kvm_alloc_slot(KVMState *s)
>>      return NULL;
>>  }
>>
>> +static KVMSlot *last_slot = NULL;
>> +
>>  static KVMSlot *kvm_lookup_slot(KVMState *s, target_phys_addr_t start_addr)
>>  {
>>      int i;
>>
>> +
>> +    if (last_slot && (start_addr >= last_slot->start_addr &&
>> +        start_addr < (last_slot->start_addr + last_slot->memory_size)))
>> +        return last_slot;
>> +
>>      for (i = 0; i < ARRAY_SIZE(s->slots); i++) {
>>          KVMSlot *mem = &s->slots[i];
>>
>>          if (start_addr >= mem->start_addr &&
>> -            start_addr < (mem->start_addr + mem->memory_size))
>> +            start_addr < (mem->start_addr + mem->memory_size)) {
>> +            last_slot = mem;
>>              return mem;
>> +        }
>>      }
>>
>
> This wasn't introduced by this patch, but the comparison is broken on
> i386 hosts, where target_phys_addr_t is 32 bits wide. mem->start_addr +
> mem->memory_size can overflow (this in fact happens for the bios slot at
> 4G-128K)

AFAIK, the assumption is that kvm will always be qemu-system-x86_64, due to
migration issues. Then, _target_ phys_addr_t is always 64 bits wide.
If it's not the case, then this is really a problem.

* Re: [Qemu-devel] Re: [PATCH 5/5] cache slot lookup
From: Daniel P. Berrange @ 2008-12-18 11:00 UTC
To: qemu-devel; +Cc: Ian.Jackson, Avi Kivity, kvm, stefano.stabellini

On Thu, Dec 18, 2008 at 08:48:50AM -0200, Glauber Costa wrote:
> On Thu, Dec 18, 2008 at 11:41:24AM +0200, Avi Kivity wrote:
> > Glauber Costa wrote:
> >> record slot used in last lookup. For the common mmio case,
> >> we'll usually access the same memory slot repeatedly.
> >>
> >> --- a/kvm-all.c
> >> +++ b/kvm-all.c
> >> @@ -75,16 +75,25 @@ static KVMSlot *kvm_alloc_slot(KVMState *s)
> >>      return NULL;
> >>  }
> >>
> >> +static KVMSlot *last_slot = NULL;
> >> +
> >>  static KVMSlot *kvm_lookup_slot(KVMState *s, target_phys_addr_t start_addr)
> >>  {
> >>      int i;
> >>
> >> +
> >> +    if (last_slot && (start_addr >= last_slot->start_addr &&
> >> +        start_addr < (last_slot->start_addr + last_slot->memory_size)))
> >> +        return last_slot;
> >> +
> >>      for (i = 0; i < ARRAY_SIZE(s->slots); i++) {
> >>          KVMSlot *mem = &s->slots[i];
> >>
> >>          if (start_addr >= mem->start_addr &&
> >> -            start_addr < (mem->start_addr + mem->memory_size))
> >> +            start_addr < (mem->start_addr + mem->memory_size)) {
> >> +            last_slot = mem;
> >>              return mem;
> >> +        }
> >>      }
> >>
> >
> > This wasn't introduced by this patch, but the comparison is broken on
> > i386 hosts, where target_phys_addr_t is 32 bits wide. mem->start_addr +
> > mem->memory_size can overflow (this in fact happens for the bios slot at
> > 4G-128K)
>
> AFAIK, the assumption is that kvm will always be qemu-system-x86_64, due to
> migration issues. Then, _target_ phys_addr_t is always 64 bits wide.
> If it's not the case, then this is really a problem.

Migration compatibility is a problem for mgmt apps to solve & merely
saying to use qemu-system-x86_64 everywhere is a tiny insignificant
piece of the problem. AFAIK, current Fedora packages build a 32-bit
'qemu' for KVM on i386 and a 'qemu-system-x86_64' for x86_64, which
happens to be called 'qemu-kvm' on both, to avoid clash with the base
QEMU binary names.

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org        -o-        http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

* Re: [Qemu-devel] Re: [PATCH 5/5] cache slot lookup
From: Glauber Costa @ 2008-12-18 11:07 UTC
To: Daniel P. Berrange, qemu-devel
Cc: Ian.Jackson, Avi Kivity, kvm, stefano.stabellini

> Migration compatibility is a problem for mgmt apps to solve & merely
> saying to use qemu-system-x86_64 everywhere is a tiny insignificant
> piece of the problem. AFAIK, current Fedora packages build a 32-bit
> 'qemu' for KVM on i386 and a 'qemu-system-x86_64' for x86_64, which
> happens to be called 'qemu-kvm' on both, to avoid clash with the base
> QEMU binary names.
>

To the best of my knowledge, Fedora builds a 64-bit qemu for both targets
too, as does upstream.

-- 
Glauber Costa.
"Free as in Freedom"
http://glommer.net

"The less confident you are,
the more serious you have to act."

* [Qemu-devel] Re: [PATCH 5/5] cache slot lookup
From: Avi Kivity @ 2008-12-18 11:24 UTC
To: Glauber Costa; +Cc: Ian.Jackson, qemu-devel, kvm, stefano.stabellini

Glauber Costa wrote:
>> This wasn't introduced by this patch, but the comparison is broken on
>> i386 hosts, where target_phys_addr_t is 32 bits wide. mem->start_addr +
>> mem->memory_size can overflow (this in fact happens for the bios slot at
>> 4G-128K)
>>
> AFAIK, the assumption is that kvm will always be qemu-system-x86_64, due to
> migration issues.

That's an incorrect assumption. Users are free to build any qemu variant
they like. 32-bit qemu ought to work.

> Then, _target_ phys_addr_t is always 64 bits wide.
>

It is not. On a 32-bit host, qemu-system-x86_64's target_phys_addr_t is
32 bits wide.

> If it's not the case, then this is really a problem.
>

It isn't, so it is. I hacked around it in kvm-userspace.

-- 
error compiling committee.c: too many arguments to function

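One way to make the slot range check immune to the wrap-around Avi
describes is to compare the offset into the slot against the slot size,
rather than computing an end address. The helper below is only an
illustrative sketch of that idea; the name slot_contains is made up, and
this is not code from the thread or from kvm-userspace.

    /* Sketch only: overflow-safe slot containment test. It avoids
     * computing mem->start_addr + mem->memory_size, which can wrap when
     * target_phys_addr_t is 32 bits wide (e.g. a BIOS slot ending at 4G). */
    static int slot_contains(const KVMSlot *mem, target_phys_addr_t start_addr)
    {
        return start_addr >= mem->start_addr &&
               start_addr - mem->start_addr < mem->memory_size;
    }

Both kvm_lookup_slot and the cached last_slot test from patch 5 could use
such a predicate without otherwise changing their logic.
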
* [Qemu-devel] Re: [PATCH 3/5] replace cpu_physical_memory_rw
From: Anthony Liguori @ 2008-12-17 20:56 UTC
To: Glauber Costa; +Cc: stefano.stabellini, Ian.Jackson, qemu-devel, kvm, avi

Glauber Costa wrote:
> This patch introduces a kvm version of cpu_physical_memory_rw.
> The main motivation is to bypass the tcg version, which contains
> tcg-specific code, as well as data structures not used by kvm,
> such as l1_phys_map.
>
> In this patch, I'm using a runtime selection of which function
> to call, but the mid-term goal is to use function pointers in
> a way very close to how QEMUAccel used to work.
>
> Signed-off-by: Glauber Costa <glommer@redhat.com>
> ---
>  exec.c    |   13 +++++++++++--
>  kvm-all.c |   39 +++++++++++++++++++++++++++++++++++----
>  kvm.h     |    2 ++
>  3 files changed, 48 insertions(+), 6 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index 04eadfe..d5c88b1 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2938,8 +2938,8 @@ int cpu_physical_memory_do_io(target_phys_addr_t addr, uint8_t *buf, int l, int
>
> +
> +void kvm_cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
> +                                int len, int is_write)
> +{
> +    KVMSlot *mem;
> +    KVMState *s = kvm_state;
> +    int l;
> +
> +    mem = kvm_lookup_slot(s, addr);
> +    if (!mem)
> +        return;
> +
> +    if ((mem->phys_offset & ~TARGET_PAGE_MASK) >= TLB_MMIO) {
> +        l = 0;
> +        while (len > l)
> +            l += cpu_physical_memory_do_io(addr + l, buf + l, len - l, is_write, mem->phys_offset);
> +    } else {
> +        uint8_t *uaddr = phys_ram_base + mem->phys_offset + (addr - mem->start_addr);
> +        if (!is_write)
> +            memcpy(buf, uaddr, len);
> +        else
> +            memcpy(uaddr, buf, len);
> +    }
> +}
>

I think this is a bit optimistic. It assumes addr..len fits entirely
within a slot. That's not necessarily the case though. I think you
should probably limit len to whatever is left in the slot, then if
necessary, (tail) recursively call kvm_cpu_physical_memory_rw.

Regards,

Anthony Liguori

* [Qemu-devel] Re: [PATCH 3/5] replace cpu_physical_memory_rw
From: Glauber Costa @ 2008-12-17 21:00 UTC
To: Anthony Liguori; +Cc: stefano.stabellini, Ian.Jackson, qemu-devel, kvm, avi

On Wed, Dec 17, 2008 at 02:56:05PM -0600, Anthony Liguori wrote:
> Glauber Costa wrote:
>> This patch introduces a kvm version of cpu_physical_memory_rw.
>> The main motivation is to bypass the tcg version, which contains
>> tcg-specific code, as well as data structures not used by kvm,
>> such as l1_phys_map.
>>
>> In this patch, I'm using a runtime selection of which function
>> to call, but the mid-term goal is to use function pointers in
>> a way very close to how QEMUAccel used to work.
>>
>> Signed-off-by: Glauber Costa <glommer@redhat.com>
>> ---
>>  exec.c    |   13 +++++++++++--
>>  kvm-all.c |   39 +++++++++++++++++++++++++++++++++++----
>>  kvm.h     |    2 ++
>>  3 files changed, 48 insertions(+), 6 deletions(-)
>>
>> diff --git a/exec.c b/exec.c
>> index 04eadfe..d5c88b1 100644
>> --- a/exec.c
>> +++ b/exec.c
>> @@ -2938,8 +2938,8 @@ int cpu_physical_memory_do_io(target_phys_addr_t addr, uint8_t *buf, int l, int
>>
>> +
>> +void kvm_cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>> +                                int len, int is_write)
>> +{
>> +    KVMSlot *mem;
>> +    KVMState *s = kvm_state;
>> +    int l;
>> +
>> +    mem = kvm_lookup_slot(s, addr);
>> +    if (!mem)
>> +        return;
>> +
>> +    if ((mem->phys_offset & ~TARGET_PAGE_MASK) >= TLB_MMIO) {
>> +        l = 0;
>> +        while (len > l)
>> +            l += cpu_physical_memory_do_io(addr + l, buf + l, len - l, is_write, mem->phys_offset);
>> +    } else {
>> +        uint8_t *uaddr = phys_ram_base + mem->phys_offset + (addr - mem->start_addr);
>> +        if (!is_write)
>> +            memcpy(buf, uaddr, len);
>> +        else
>> +            memcpy(uaddr, buf, len);
>> +    }
>> +}
>>
>
> I think this is a bit optimistic. It assumes addr..len fits entirely
> within a slot. That's not necessarily the case though. I think you
> should probably limit len to whatever is left in the slot, then if
> necessary, (tail) recursively call kvm_cpu_physical_memory_rw.

It makes sense.

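A rough sketch of the chunking Anthony suggests, with the tail call
written as a loop; this is an illustration only, not the revision that
was eventually posted, and the overflow-safe in_slot computation is an
extra assumption layered on top of the slot fields used in the series.

    /* Sketch of the suggested fix: clamp each pass to what is left in the
     * current slot, then continue with the remainder that spills into the
     * next slot (the tail recursion is expressed as a loop). */
    void kvm_cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                                    int len, int is_write)
    {
        while (len > 0) {
            KVMSlot *mem = kvm_lookup_slot(kvm_state, addr);
            if (!mem)
                return;

            /* bytes remaining in this slot, computed without overflow */
            target_phys_addr_t in_slot = mem->memory_size
                                         - (addr - mem->start_addr);
            int chunk = len;
            if ((target_phys_addr_t)chunk > in_slot)
                chunk = (int)in_slot;

            if ((mem->phys_offset & ~TARGET_PAGE_MASK) >= TLB_MMIO) {
                int done = 0;
                while (chunk > done)
                    done += cpu_physical_memory_do_io(addr + done, buf + done,
                                                      chunk - done, is_write,
                                                      mem->phys_offset);
            } else {
                uint8_t *uaddr = phys_ram_base + mem->phys_offset
                                 + (addr - mem->start_addr);
                if (is_write)
                    memcpy(uaddr, buf, chunk);
                else
                    memcpy(buf, uaddr, chunk);
            }

            addr += chunk;
            buf  += chunk;
            len  -= chunk;
        }
    }
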
* [Qemu-devel] Re: [PATCH 2/5] isolate io handling routing
From: Anthony Liguori @ 2008-12-17 20:54 UTC
To: Glauber Costa; +Cc: stefano.stabellini, Ian.Jackson, qemu-devel, kvm, avi

Glauber Costa wrote:
> introduce cpu_physical_memory_do_io, which handles
> the mmio part of cpu_physical_memory_rw. KVM can use
> it to do mmio, since mmio is essentially the same for
> both KVM and tcg.
>
> Signed-off-by: Glauber Costa <glommer@redhat.com>
> ---
>  cpu-all.h |    2 +
>  exec.c    |   89 ++++++++++++++++++++++++++++++++++--------------------------
>  2 files changed, 52 insertions(+), 39 deletions(-)
>
> diff --git a/cpu-all.h b/cpu-all.h
> index 648264c..d46da05 100644
> --- a/cpu-all.h
> +++ b/cpu-all.h
> @@ -910,6 +910,8 @@ int cpu_register_io_memory(int io_index,
>  CPUWriteMemoryFunc **cpu_get_io_memory_write(int io_index);
>  CPUReadMemoryFunc **cpu_get_io_memory_read(int io_index);
>
> +int cpu_physical_memory_do_io(target_phys_addr_t addr, uint8_t *buf, int l,
> +                              int is_write, unsigned long pd);
>  void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>                              int len, int is_write);
>  static inline void cpu_physical_memory_read(target_phys_addr_t addr,
> diff --git a/exec.c b/exec.c
> index 44f6a42..04eadfe 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2891,12 +2891,58 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>  }
>
>  #else
> +int cpu_physical_memory_do_io(target_phys_addr_t addr, uint8_t *buf, int l, int is_write, unsigned long pd)
> +{
> +    int io_index;
> +    uint32_t val;
> +
> +    io_index = (pd >> IO_MEM_SHIFT) & (IO_MEM_NB_ENTRIES - 1);
> +    if (is_write) {
> +        /* XXX: could force cpu_single_env to NULL to avoid

That's some big indenting :-)

Regards,

Anthony Liguori

* [Qemu-devel] Re: [PATCH 1/5] re-register whole area upon lfb unmap.
From: Anthony Liguori @ 2008-12-17 20:53 UTC
To: Glauber Costa; +Cc: stefano.stabellini, Ian.Jackson, qemu-devel, kvm, avi

Glauber Costa wrote:
> set phys_offset correctly for the whole vga area when unmapping linear vram
> (for vga optimization). We first register the old pieces as unassigned
> memory, to make things easier for kvm (and possibly other slot based
> implementations in the future). Replacing the region directly would
> make the slot management significantly more complex.
>

This change worries me because it involves explicitly unassigning slots
and then assigning a new, bigger slot. This is not necessary for TCG.
It suggests to me that there's a bug in the kvm slot code and that we're
changing QEMU to work around it.

That means there may be other places in the code that are completely
valid, but exercise this bug.

Or is this purely an optimization?

Regards,

Anthony Liguori

> Signed-off-by: Glauber Costa <glommer@redhat.com>
> ---
>  hw/cirrus_vga.c |    6 ++++--
>  1 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/hw/cirrus_vga.c b/hw/cirrus_vga.c
> index 83c5f40..6e81906 100644
> --- a/hw/cirrus_vga.c
> +++ b/hw/cirrus_vga.c
> @@ -2658,8 +2658,10 @@ static void map_linear_vram(CirrusVGAState *s)
>          vga_dirty_log_start((VGAState *)s);
>      }
>      else {
> -        cpu_register_physical_memory(isa_mem_base + 0xa0000, 0x8000, s->vga_io_memory);
> -        cpu_register_physical_memory(isa_mem_base + 0xa8000, 0x8000, s->vga_io_memory);
> +        cpu_register_physical_memory(isa_mem_base + 0xa0000, 0x8000, IO_MEM_UNASSIGNED);
> +        cpu_register_physical_memory(isa_mem_base + 0xa8000, 0x8000, IO_MEM_UNASSIGNED);
> +
> +        cpu_register_physical_memory(isa_mem_base + 0xa0000, 0x20000, s->vga_io_memory);
>      }
>
>  }

* Re: [Qemu-devel] Re: [PATCH 1/5] re-register whole area upon lfb unmap.
From: andrzej zaborowski @ 2008-12-25 20:08 UTC
To: qemu-devel; +Cc: Glauber Costa, Ian.Jackson, avi, kvm, stefano.stabellini

2008/12/17 Anthony Liguori <anthony@codemonkey.ws>:
> Glauber Costa wrote:
>> set phys_offset correctly for the whole vga area when unmapping linear
>> vram
>> (for vga optimization). We first register the old pieces as unassigned
>> memory, to make things easier for kvm (and possibly other slot based
>> implementations in the future). Replacing the region directly would
>> make the slot management significantly more complex.
>>
>
> This change worries me because it involves explicitly unassigning slots and
> then assigning a new, bigger slot. This is not necessary for TCG. It
> suggests to me that there's a bug in the kvm slot code and that we're
> changing QEMU to work around it.
>
> That means there may be other places in the code that are completely
> valid, but exercise this bug.
>
> Or is this purely an optimization?

It also changes the semantics because IO callbacks are now passed offsets
from region starts instead of absolute addresses. I'm not able to tell
whether the change is for better or worse, though.

Cheers

* Re: [Qemu-devel] [PATCH 0/5] Replace tcg memory functions
From: Ian Jackson @ 2008-12-22 17:10 UTC
To: qemu-devel; +Cc: Ian.Jackson, avi, kvm, stefano.stabellini

Glauber Costa writes ("[Qemu-devel] [PATCH 0/5] Replace tcg memory functions"):
> This series replaces some of the tcg memory handling functions, like
> cpu_physical_memory_rw and cpu_register_physical_memory, with kvm
> versions.
>
> I believe this approach pays off because it reduces the dependency that
> kvm, xen, etc. have on the tcg part of qemu. My mid-term goal is to be
> able to easily compile tcg out.

I don't have an objection to this patch series even if it may mean I
have to do some additional fixups.

But I did want to point out a misapprehension in Glauber Costa's
comments. The qemu-xen tree does not use the TCG parts of qemu at
all; we don't even compile them in. We provide our own implementation
of cpu_physical_memory_rw and so on.

Ian.

* Re: [Qemu-devel] [PATCH 0/5] Replace tcg memory functions
From: Glauber Costa @ 2008-12-22 17:18 UTC
To: Ian Jackson; +Cc: stefano.stabellini, qemu-devel, kvm, avi

> I don't have an objection to this patch series even if it may mean I
> have to do some additional fixups.
>
> But I did want to point out a misapprehension in Glauber Costa's
> comments. The qemu-xen tree does not use the TCG parts of qemu at
> all; we don't even compile them in. We provide our own implementation
> of cpu_physical_memory_rw and so on.
>

I never stated that you depend on that. We (kvm) don't depend on
that either, but nevertheless, have to live with it. An alternative,
of course, is to heavily patch your qemu tree, providing your own
implementation of this, and everything else. The aim of this patch
series is _exactly_ to provide a way for us (kvm, xen, etc) to do
that in a clean way, without deviating from upstream, preferably
merging your code back.

-- 
Glauber Costa.
"Free as in Freedom"
http://glommer.net

"The less confident you are,
the more serious you have to act."

* Re: [Qemu-devel] [PATCH 0/5] Replace tcg memory functions
From: Ian Jackson @ 2008-12-22 17:22 UTC
To: Glauber Costa; +Cc: stefano.stabellini, qemu-devel, kvm, avi

Glauber Costa writes ("Re: [Qemu-devel] [PATCH 0/5] Replace tcg memory functions"):
> I never stated that you depend on that. We (kvm) don't depend on
> that either, but nevertheless, have to live with it. An alternative,
> of course, is to heavily patch your qemu tree, providing your own
> implementation of this, and everything else. The aim of this patch
> series is _exactly_ to provide a way for us (kvm, xen, etc) to do
> that in a clean way, without deviating from upstream, preferably
> merging your code back.

Right, absolutely. Thanks,

Ian.

Thread overview: 19+ messages (newest: 2008-12-25 20:08 UTC)

2008-12-17 20:46 [Qemu-devel] [PATCH 0/5] Replace tcg memory functions Glauber Costa
2008-12-17 20:46 ` [Qemu-devel] [PATCH 1/5] re-register whole area upon lfb unmap Glauber Costa
2008-12-17 20:46 ` [Qemu-devel] [PATCH 2/5] isolate io handling routing Glauber Costa
2008-12-17 20:47 ` [Qemu-devel] [PATCH 3/5] replace cpu_physical_memory_rw Glauber Costa
2008-12-17 20:47 ` [Qemu-devel] [PATCH 4/5] hook cpu_register_physical_mem Glauber Costa
2008-12-17 20:47 ` [Qemu-devel] [PATCH 5/5] cache slot lookup Glauber Costa
2008-12-18  9:41 ` [Qemu-devel] Re: [PATCH 5/5] cache slot lookup Avi Kivity
2008-12-18 10:48 ` [Qemu-devel] Re: [PATCH 5/5] cache slot lookup Glauber Costa
2008-12-18 11:00 ` Re: [Qemu-devel] Re: [PATCH 5/5] cache slot lookup Daniel P. Berrange
2008-12-18 11:07 ` Re: [Qemu-devel] Re: [PATCH 5/5] cache slot lookup Glauber Costa
2008-12-18 11:24 ` [Qemu-devel] Re: [PATCH 5/5] cache slot lookup Avi Kivity
2008-12-17 20:56 ` [Qemu-devel] Re: [PATCH 3/5] replace cpu_physical_memory_rw Anthony Liguori
2008-12-17 21:00 ` [Qemu-devel] Re: [PATCH 3/5] replace cpu_physical_memory_rw Glauber Costa
2008-12-17 20:54 ` [Qemu-devel] Re: [PATCH 2/5] isolate io handling routing Anthony Liguori
2008-12-17 20:53 ` [Qemu-devel] Re: [PATCH 1/5] re-register whole area upon lfb unmap Anthony Liguori
2008-12-25 20:08 ` Re: [Qemu-devel] Re: [PATCH 1/5] re-register whole area upon lfb unmap andrzej zaborowski
2008-12-22 17:10 ` [Qemu-devel] [PATCH 0/5] Replace tcg memory functions Ian Jackson
2008-12-22 17:18 ` Re: [Qemu-devel] [PATCH 0/5] Replace tcg memory functions Glauber Costa
2008-12-22 17:22 ` Re: [Qemu-devel] [PATCH 0/5] Replace tcg memory functions Ian Jackson