From mboxrd@z Thu Jan  1 00:00:00 1970
From: Benjamin Herrenschmidt
Subject: Re: [PATCH 12/13] kvm/powerpc: Accelerate H_PUT_TCE by implementing it in real mode
Date: Tue, 17 May 2011 19:35:45 +1000
Message-ID: <1305624945.2781.21.camel@pasglop>
References: <20110511103443.GA2837@brick.ozlabs.ibm.com>
 <20110511104615.GM2837@brick.ozlabs.ibm.com>
 <1305623510.2781.20.camel@pasglop>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
Cc: Paul Mackerras, linuxppc-dev@ozlabs.org, kvm@vger.kernel.org
To: Alexander Graf
Return-path: Received: from gate.crashing.org ([63.228.1.57]:37856 "EHLO gate.crashing.org"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1753626Ab1EQJf7
 (ORCPT ); Tue, 17 May 2011 05:35:59 -0400
In-Reply-To:
Sender: kvm-owner@vger.kernel.org
List-ID:

On Tue, 2011-05-17 at 11:31 +0200, Alexander Graf wrote:
> On 17.05.2011, at 11:11, Benjamin Herrenschmidt wrote:
> 
> > On Tue, 2011-05-17 at 10:01 +0200, Alexander Graf wrote:
> >> I'm not sure I fully understand how this is supposed to work. If the
> >> tables are kept inside the kernel, how does userspace get to know
> >> where to DMA to?
> > 
> > The guest gets a dma range from the device-tree which is the range of
> > device-side dma addresses it can use that correspond to the table.
> > 
> > The guest kernel uses the normal linux iommu space allocator to allocate
> > space in that region and uses H_PUT_TCE to populate the corresponding
> > table entries.
> > 
> > This is the same interface that is used for "real" iommu's with PCI
> > devices btw.
> 
> I'm still slightly puzzled here :). IIUC the main point of an IOMMU is
> for the kernel to change where device accesses actually go to. So device
> DMAs address A, goes through the IOMMU, in reality accesses address B.

Right :-)

> Now, how do we tell the devices implemented in qemu that they're supposed
> to DMA to address B instead of A if the mapping table is kept in-kernel?
Oh, because qemu mmaps the table :-)

Cheers,
Ben.