Subject: Re: [PATCH 12/13] kvm/powerpc: Accelerate H_PUT_TCE by implementing it in real mode
From: Benjamin Herrenschmidt
To: Alexander Graf
Cc: linuxppc-dev@ozlabs.org, Paul Mackerras, kvm@vger.kernel.org
Date: Tue, 17 May 2011 19:11:50 +1000
Message-ID: <1305623510.2781.20.camel@pasglop>
References: <20110511103443.GA2837@brick.ozlabs.ibm.com> <20110511104615.GM2837@brick.ozlabs.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

On Tue, 2011-05-17 at 10:01 +0200, Alexander Graf wrote:
> I'm not sure I fully understand how this is supposed to work. If the
> tables are kept inside the kernel, how does userspace get to know
> where to DMA to?

The guest gets a DMA range from the device-tree, which is the range of
device-side DMA addresses it can use that correspond to the table. The
guest kernel uses the normal Linux IOMMU space allocator to allocate
space in that region, and uses H_PUT_TCE to populate the corresponding
table entries.

This is the same interface that is used for "real" IOMMUs with PCI
devices, btw.

Cheers,
Ben.
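
Concretely, the per-entry guest path boils down to something like the
sketch below, loosely modeled on the existing pseries IOMMU code in
arch/powerpc/platforms/pseries/iommu.c. The helper name and its
arguments are illustrative assumptions made for this sketch, not part
of the patch; only the H_PUT_TCE hcall and the TCE bit definitions come
from the kernel headers.

#include <linux/types.h>
#include <asm/hvcall.h>	/* H_PUT_TCE, plpar_hcall_norets() */
#include <asm/tce.h>	/* TCE_SHIFT, TCE_RPN_SHIFT, TCE_PCI_* */

/*
 * Map one 4K guest page into the DMA window. liobn identifies the TCE
 * table (it comes from the "ibm,dma-window" device-tree property),
 * ioba is the device-side DMA address handed out by the IOMMU space
 * allocator, and pa is the guest-physical address of the page backing
 * the buffer.
 */
static long map_one_tce(unsigned long liobn, unsigned long ioba,
			unsigned long pa, bool writable)
{
	unsigned long tce = (pa >> TCE_SHIFT) << TCE_RPN_SHIFT;

	tce |= TCE_PCI_READ;
	if (writable)
		tce |= TCE_PCI_WRITE;

	/* One hcall per 4K entry; this is the hot path the patch
	 * handles in real mode instead of exiting to userspace. */
	return plpar_hcall_norets(H_PUT_TCE, liobn, ioba, tce);
}

The guest would call this once per page of a DMA buffer, with ioba
advancing in 4K steps through the region the allocator carved out of
the window advertised in the device-tree.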