From: Ian Molton
To: qemu-devel@nongnu.org
Date: Mon, 22 Feb 2010 13:59:30 +0000
Message-ID: <4B828DC2.3000609@collabora.co.uk>
Subject: [Qemu-devel] Address translation - virt->phys->ram

Hi folks,

I've been updating some old patches which make use of a function to
translate guest virtual addresses into pointers into guest RAM.

As I understand it, qemu has guest virtual and guest physical
addresses, the latter of which map somehow onto host RAM addresses.
The function this code had been using appears not to work under kvm,
which leads me to think that qemu doesn't emulate the MMU (or at
least not in the same manner) when it is using kvm as opposed to pure
emulation. If I turn off kvm, the patch works, albeit slowly. If I
enable it, the code takes the path which scans for the magic value
(below).

Is there a 'proper' way to translate guest virtual addresses into
host RAM addresses?

Here is the code:

static /*inline*/ void *get_phys_mem_addr(CPUState *env, target_ulong addr)
{
    int mmu_idx, index;
    int i;

    index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
    mmu_idx = cpu_mmu_index(env);

    /* Slow path: the page is not in qemu's software TLB. */
    if (__builtin_expect(env->tlb_table[mmu_idx][index].addr_code !=
                         (addr & TARGET_PAGE_MASK), 0)) {
        target_ulong ret = cpu_get_phys_page_debug(env, addr);

        if (ret == -1) {
            fprintf(stderr, "not in phys mem " TARGET_FMT_lx
                    " (" TARGET_FMT_lx " " TARGET_FMT_lx ")\n",
                    addr, env->tlb_table[mmu_idx][index].addr_code,
                    addr & TARGET_PAGE_MASK);
            fprintf(stderr, "cpu_x86_handle_mmu_fault = %d\n",
                    cpu_x86_handle_mmu_fault(env, addr, 0, mmu_idx, 1));
            return NULL;
        }

        if (ret + TARGET_PAGE_SIZE <= ram_size) {
            return qemu_get_ram_ptr(ret + (((target_ulong) addr) &
                                           (TARGET_PAGE_SIZE - 1)));
        }

        fprintf(stderr, "cpu_get_phys_page_debug(env, " TARGET_FMT_lx
                ") == " TARGET_FMT_lx "\n", addr, ret);
        fprintf(stderr, "ram_size = " TARGET_FMT_lx "\n",
                (target_ulong) ram_size);

        /* Scan guest RAM for the magic marker the guest wrote. */
        for (i = 0; i < ram_size - 10; i++) {
            char *ptr = qemu_get_ram_ptr(i);
            if (!strncmp("magic_string", ptr, 10)) {
                fprintf(stderr, "found magic_string at: %x %p\n", i, ptr);
                break;
            }
        }
        return qemu_get_ram_ptr(i - 128); /* Evil horrible hack */
    }

    /* Fast path: use the host offset recorded in the TLB entry. */
    return (void *) addr + env->tlb_table[mmu_idx][index].addend;
}
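
For comparison, the fallback I've been toying with is to copy the data
out rather than taking a direct pointer, via cpu_memory_rw_debug(),
which walks the guest page tables instead of qemu's software TLB. This
is just a sketch - the helper name is mine, and I'm assuming (perhaps
wrongly) that cpu_memory_rw_debug() behaves the same with and without
kvm:

/* Hypothetical copy-based alternative: go virt -> phys -> RAM through
 * the guest page tables (cpu_memory_rw_debug) instead of handing back
 * a pointer into guest RAM. Returns 0 on success, -1 if the range
 * isn't mapped. */
static int copy_from_guest_virt(CPUState *env, target_ulong addr,
                                void *buf, int len)
{
    return cpu_memory_rw_debug(env, addr, (uint8_t *) buf, len, 0);
}

/* e.g. pulling the guest's marker out for inspection:
 *
 *     char magic[12];
 *     if (copy_from_guest_virt(env, addr, magic, sizeof(magic)) == 0)
 *         fprintf(stderr, "read: %.12s\n", magic);
 */

The obvious downside is the extra copy on every access, which is
exactly what returning a pointer was meant to avoid - hence the
question above.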