From: Benjamin Herrenschmidt
Subject: Re: [PATCH 12/13] kvm/powerpc: Accelerate H_PUT_TCE by implementing it in real mode
Date: Tue, 17 May 2011 19:11:50 +1000
Message-ID: <1305623510.2781.20.camel@pasglop>
References: <20110511103443.GA2837@brick.ozlabs.ibm.com> <20110511104615.GM2837@brick.ozlabs.ibm.com>
To: Alexander Graf
Cc: Paul Mackerras, linuxppc-dev@ozlabs.org, kvm@vger.kernel.org

On Tue, 2011-05-17 at 10:01 +0200, Alexander Graf wrote:
> I'm not sure I fully understand how this is supposed to work. If the
> tables are kept inside the kernel, how does userspace get to know
> where to DMA to?

The guest gets a DMA range from the device-tree, which is the range of
device-side DMA addresses it can use that correspond to the table. The
guest kernel uses the normal Linux IOMMU space allocator to allocate
space in that region, and uses H_PUT_TCE to populate the corresponding
table entries.

This is the same interface that is used for "real" IOMMUs with PCI
devices, btw.

Cheers,
Ben.
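For the curious, here is a minimal sketch of that guest-side path. It is
illustrative only, not the actual kernel code: it assumes 4K IOMMU pages,
an ioba already allocated out of the device-tree advertised window, and
the helper name is made up. The real guest code is tce_build_pSeriesLP()
in arch/powerpc/platforms/pseries/iommu.c, which reaches the same
H_PUT_TCE hcall through plpar_hcall_norets():

	#include <asm/hvcall.h>	/* H_PUT_TCE, plpar_hcall_norets() */
	#include <asm/page.h>	/* PAGE_MASK */
	#include <asm/tce.h>	/* TCE_PCI_READ, TCE_PCI_WRITE */

	/*
	 * Map one 4K page at I/O bus address 'ioba' inside the window of
	 * the TCE table identified by 'liobn'.  'pa' is the physical
	 * address of the page to map; the low bits of a TCE carry the
	 * read/write permission bits, the rest is the real page address.
	 */
	static long h_put_tce_map_page(unsigned long liobn,
				       unsigned long ioba,
				       unsigned long pa)
	{
		unsigned long tce = (pa & PAGE_MASK) |
				    TCE_PCI_READ | TCE_PCI_WRITE;

		/* Returns H_SUCCESS (0) once the entry is written. */
		return plpar_hcall_norets(H_PUT_TCE, liobn, ioba, tce);
	}

The point of the patch under discussion is that the hypervisor side of
this call can be handled entirely in real mode, without exiting to the
kernel's virtual-mode handlers, which is what makes it cheap.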