From: Blue Swirl
Date: Sun, 26 Aug 2012 17:45:59 +0000
Subject: Re: [Qemu-devel] Get host virtual address corresponding to guest physical address?
To: 陳韋任 (Wei-Ren Chen)
Cc: Peter Maydell, qemu-devel@nongnu.org

On Sat, Aug 25, 2012 at 1:17 PM, 陳韋任 (Wei-Ren Chen) wrote:
> On Sat, Aug 25, 2012 at 11:56:13AM +0100, Peter Maydell wrote:
>> On 24 August 2012 04:14, 陳韋任 (Wei-Ren Chen) wrote:
>> > I would like to know if there is a function in QEMU which converts
>> > a guest physical address into the corresponding host virtual address.
>>
>> So the question is, what do you want to do with the host virtual
>> address when you've got it? cpu_physical_memory_map() is really intended
>> (as Blue says) for the case where you have a bit of host code that wants
>> to write a chunk of data and doesn't want to do a sequence of
>> cpu_physical_memory_read()/_write() calls.
>> Instead you _map() the memory,
>> write to it and then _unmap() it.
>
> We want to let the host MMU hardware do what softmmu does. As a prototype
> (x86 guest on x86_64 host), we want to do the following:
>
> 1. Get guest page table entries (GVA -> GPA).
>
> 2. Get the corresponding HVA.
>
> 3. Then we use /dev/mem (with the host cr3) to find out the HPA.
>
> 4. We insert the GVA -> HPA mapping into the host page table through /dev/mem;
>    we have already moved QEMU above 4G to make way for the guest.
>
> So we don't write data into the host virtual address.

I don't think this GVA to HPA mapping function will help. I'd use the
memory API to construct the GPA-HVA mappings after board init. The
GVA-GPA mappings need to be gathered from the guest MMU tables when the
MMU is enabled. Then the page tables need to be tracked, and any changes
to either the guest MMU setup/tables or the guest physical memory space
must propagate to the host memory maps.

>
>> Note that not all guest physical addresses have a meaningful host
>> virtual address -- in particular memory mapped devices won't.
>
> I guess in our case, we don't touch MMIO?
>
>> > 1. I am running an x86 guest on an x86_64 host and using the code below
>> > to get the host virtual address; I am not sure what the value of len
>> > should be.
>>
>> The length should be the length of the area of memory you want to
>> either read or write from.
>
> Actually I want to know where guest pages are mapped in the host virtual
> address space. The GPA we get from step 1 points to a guest page table, and
> we want to know its corresponding HVA.
>
>> > static inline void *gpa2hva(target_phys_addr_t addr)
>> > {
>> >     target_phys_addr_t len = 4;
>> >     return cpu_physical_memory_map(addr, &len, 0);
>> > }
>>
>> If you try this on a memory mapped device address then the first
>> time round it will give you back the address of a "bounce buffer",
>> ie a bit of temporary RAM you can read/write and which unmap will
>> then actually feed to the device's read/write functions.
>> Since you
>> never call unmap, this means that anybody else who tries to use
>> cpu_physical_memory_map() on a device from now on will get back
>> NULL (meaning resource exhaustion, because the bounce buffer is in
>> use).
>
> You mean if I call cpu_physical_memory_map with a guest MMIO (physical)
> address, the first time it'll return the address of a buffer that I can write
> data into, and the second time it'll return NULL since I didn't call
> cpu_physical_memory_unmap to flush the buffer. Do I understand you correctly?
> Hmm, I think we don't have that issue in our use case... What do you
> think?
>
> Regards,
> chenwj
>
> --
> Wei-Ren Chen (陳韋任)
> Computer Systems Lab, Institute of Information Science,
> Academia Sinica, Taiwan (R.O.C.)
> Tel: 886-2-2788-3799 #1667
> Homepage: http://people.cs.nctu.edu.tw/~chenwj