From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anthony Liguori
Subject: Re: [PATCH 3/5] replace cpu_physical_memory_rw
Date: Wed, 17 Dec 2008 14:56:05 -0600
Message-ID: <49496765.8000404@codemonkey.ws>
References: <1229546822-11972-1-git-send-email-glommer@redhat.com>
 <1229546822-11972-2-git-send-email-glommer@redhat.com>
 <1229546822-11972-3-git-send-email-glommer@redhat.com>
 <1229546822-11972-4-git-send-email-glommer@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, avi@redhat.com,
 stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com
To: Glauber Costa
Return-path:
Received: from qw-out-2122.google.com ([74.125.92.27]:8460 "EHLO
 qw-out-2122.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with
 ESMTP id S1751241AbYLQU4O (ORCPT ); Wed, 17 Dec 2008 15:56:14 -0500
Received: by qw-out-2122.google.com with SMTP id 3so20766qwe.37 for ;
 Wed, 17 Dec 2008 12:56:12 -0800 (PST)
In-Reply-To: <1229546822-11972-4-git-send-email-glommer@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

Glauber Costa wrote:
> This patch introduces a kvm version of cpu_physical_memory_rw.
> The main motivation is to bypass the tcg version, which contains
> tcg-specific code, as well as data structures not used by kvm,
> such as l1_phys_map.
>
> In this patch, I'm using a runtime selection of which function
> to call, but the mid-term goal is to use function pointers in
> a way very close to how QEMUAccel used to work.
>
> Signed-off-by: Glauber Costa
> ---
>  exec.c    |   13 +++++++++++--
>  kvm-all.c |   39 +++++++++++++++++++++++++++++++++++----
>  kvm.h     |    2 ++
>  3 files changed, 48 insertions(+), 6 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index 04eadfe..d5c88b1 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2938,8 +2938,8 @@ int cpu_physical_memory_do_io(target_phys_addr_t addr, uint8_t *buf, int l, int
>
> +
> +void kvm_cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
> +                                int len, int is_write)
> +{
> +    KVMSlot *mem;
> +    KVMState *s = kvm_state;
> +    int l;
> +
> +    mem = kvm_lookup_slot(s, addr);
> +    if (!mem)
> +        return;
> +
> +    if ((mem->phys_offset & ~TARGET_PAGE_MASK) >= TLB_MMIO) {
> +        l = 0;
> +        while (len > l)
> +            l += cpu_physical_memory_do_io(addr + l, buf + l, len - l,
> +                                           is_write, mem->phys_offset);
> +    } else {
> +        uint8_t *uaddr = phys_ram_base + mem->phys_offset +
> +                         (addr - mem->start_addr);
> +        if (!is_write)
> +            memcpy(buf, uaddr, len);
> +        else
> +            memcpy(uaddr, buf, len);
> +    }
> +}

I think this is a bit optimistic.  It assumes addr..addr+len fits
entirely within a single slot, which is not necessarily the case.  I
think you should limit len to whatever is left in the slot, then, if
necessary, (tail) recursively call kvm_cpu_physical_memory_rw for the
remainder.

Regards,

Anthony Liguori
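
The clamp-and-recurse shape being suggested can be sketched with a
self-contained toy model.  Slot, lookup_slot, the two-slot layout, and
phys_memory_rw below are illustrative stand-ins for the real
KVMSlot/kvm_lookup_slot machinery, not the actual QEMU types:

```c
#include <stdint.h>
#include <string.h>

/* Toy model: two contiguous 16-byte "slots" backing a 32-byte
 * guest-physical address space. */
typedef struct {
    uint64_t start_addr;   /* guest-physical base of the slot */
    uint64_t len;          /* slot size in bytes */
    uint8_t *ram;          /* host backing memory for the slot */
} Slot;

static uint8_t ram0[16], ram1[16];
static Slot slots[2] = {
    { 0,  16, ram0 },
    { 16, 16, ram1 },
};

static Slot *lookup_slot(uint64_t addr)
{
    for (int i = 0; i < 2; i++) {
        if (addr >= slots[i].start_addr &&
            addr < slots[i].start_addr + slots[i].len)
            return &slots[i];
    }
    return NULL;
}

/* Clamp each access to what remains of the containing slot, do the
 * copy for that chunk, then tail-recurse for whatever spills over
 * into the next slot. */
static void phys_memory_rw(uint64_t addr, uint8_t *buf, int len,
                           int is_write)
{
    Slot *mem = lookup_slot(addr);
    if (!mem)
        return;

    uint64_t in_slot = mem->start_addr + mem->len - addr;
    int l = len < (int)in_slot ? len : (int)in_slot;

    uint8_t *uaddr = mem->ram + (addr - mem->start_addr);
    if (is_write)
        memcpy(uaddr, buf, l);
    else
        memcpy(buf, uaddr, l);

    if (len > l)
        phys_memory_rw(addr + l, buf + l, len - l, is_write);
}
```

A write of 24 bytes starting at guest address 8 then lands partly in
the first slot (bytes 8..15) and partly in the second (bytes 0..15),
instead of silently overrunning the first slot's backing memory.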