Message-ID: <4C3E2C2E.70507@codemonkey.ws>
Date: Wed, 14 Jul 2010 16:29:18 -0500
From: Anthony Liguori
Subject: Re: [Qemu-devel] Re: [RFC PATCH 4/7] ide: IOMMU support
References: <1279086307-9596-1-git-send-email-eduard.munteanu@linux360.ro>
 <201007141453.06131.paul@codesourcery.com>
 <20100714183343.GB23755@8bytes.org>
 <201007142113.44913.paul@codesourcery.com>
In-Reply-To: <201007142113.44913.paul@codesourcery.com>
List-Id: qemu-devel.nongnu.org
To: Paul Brook
Cc: Joerg Roedel, Eduard - Gabriel Munteanu, qemu-devel@nongnu.org,
 kvm@vger.kernel.org, avi@redhat.com

On 07/14/2010 03:13 PM, Paul Brook wrote:
>> On Wed, Jul 14, 2010 at 02:53:03PM +0100, Paul Brook wrote:
>>>> Memory accesses must go through the IOMMU layer.
>>>
>>> No. Devices should not know or care whether an IOMMU is present.
>>
>> There are real devices that care very much about an IOMMU. Basically
>> all devices supporting ATS care about that. So I don't see a problem
>> if the device emulation code of qemu also cares about present IOMMUs.
>>
>>> You should be adding a DeviceState argument to
>>> cpu_physical_memory_{rw,map}. This should then handle IOMMU
>>> translation transparently.
>>
>> That's not a good idea imho. With an IOMMU the device no longer
>> accesses cpu physical memory. It accesses device virtual memory. Using
>> cpu_physical_memory* functions in device code becomes misleading when
>> the device virtual address space differs from cpu physical.
>
> Well, ok, the function name needs fixing too. However I think the only
> thing missing from the current API is that it does not provide a way to
> determine which device is performing the access.

I agree with Paul.

The right approach IMHO is to convert devices to use bus-specific
functions to access memory.  The bus-specific functions should have a
device argument as the first parameter.

For PCI-based IOMMUs, the implementation exists solely within the PCI
bus.  For platforms (like SPARC) that have lower-level IOMMUs, we would
probably need to introduce a sysbus memory access layer and then provide
a hook to implement an IOMMU there.

> Depending on how we decide to handle IOMMU invalidation, it may also be
> necessary to augment the memory_map API to allow the system to request
> a mapping be revoked. However this issue is not specific to the IOMMU
> implementation. Such bugs are already present on any system that allows
> dynamic reconfiguration of the address space, e.g. by changing PCI BARs.

That's why the memory_map API today does not allow mappings to persist
after trips back to the main loop.

Regards,

Anthony Liguori
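For concreteness, a minimal sketch of the kind of bus-specific access
function proposed above. The names pci_memory_rw and the
bus->iommu_translate hook are purely illustrative assumptions, not code
that exists in the tree; only cpu_physical_memory_rw is the current API.

    /* Hypothetical sketch: a PCI-level DMA helper that carries the device
     * identity, so the bus (and any IOMMU in its bridge) can translate the
     * address before the access reaches system memory. */
    void pci_memory_rw(PCIDevice *dev, target_phys_addr_t addr,
                       uint8_t *buf, int len, int is_write)
    {
        PCIBus *bus = dev->bus;            /* bus the device sits on */
        target_phys_addr_t phys = addr;

        /* If an IOMMU registered a translate hook on this bus, apply it;
         * with no hook the mapping stays flat (1:1). */
        if (bus->iommu_translate &&
            bus->iommu_translate(bus, dev, addr, &phys) < 0) {
            return;  /* translation fault; a real helper would report it */
        }

        cpu_physical_memory_rw(phys, buf, len, is_write);
    }

A device would then call pci_memory_rw(dev, ...) instead of
cpu_physical_memory_rw(...), and never needs to know whether an IOMMU is
present.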
>> So different functions for devices make a lot of sense here. Another
>> reason for separate functions is that we can extend them later to
>> support emulation of ATS devices.
>
> I disagree. ATS should be an independent feature, and is inherently bus
> specific. As usual the PCI spec is not publicly available, but based on
> the AMD IOMMU docs I'd say that ATS is completely independent of memory
> accesses - the convention being that you trust an ATS-capable device to
> DTRT, and configure the bus IOMMU to apply a flat mapping for accesses
> from such devices.
>
>>> You also need to accommodate the case where multiple IOMMUs are
>>> present.
>>
>> This, indeed, is something transparent to the device. This should be
>> handled inside the iommu emulation code.
>
> I think you've got the abstraction boundaries all wrong.
>
> A device performs a memory access on its local bus. It has no knowledge
> of how that access is routed to its destination. The device should not
> be aware of any IOMMUs, in the same way that it doesn't know whether it
> happens to be accessing RAM or memory-mapped peripherals on another
> device.
>
> Each IOMMU is fundamentally part of a bus bridge, for example the
> bridge between a PCI bus and the system bus. It provides an address
> mapping from one bus to another.
>
> There should be no direct interaction between an IOMMU and a device
> (ignoring ATS, which is effectively a separate data channel). Everything
> should be done via the cpu_physical_memory_* code. Likewise, on a system
> with multiple nested IOMMUs there should be no direct interaction
> between these. cpu_physical_memory_* should walk the device/bus tree to
> determine where the access terminates, applying mappings appropriately.
>
> Paul
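To make the routing idea concrete, a minimal sketch under assumed names:
BusBridge, its translate callback and resolve_dma_address are illustrative
types invented here, not existing QEMU structures. Each bridge applies its
own mapping as the address walks up towards the system bus, so the device
never interacts with any IOMMU directly.

    #include <stdint.h>

    typedef struct BusBridge BusBridge;
    typedef struct DeviceState DeviceState;   /* opaque here */

    struct BusBridge {
        BusBridge *parent;   /* next bridge towards the system bus, NULL at top */
        /* Bridge-specific mapping, e.g. an IOMMU lookup; NULL means 1:1. */
        int (*translate)(BusBridge *br, DeviceState *dev,
                         uint64_t addr, uint64_t *out);
    };

    /* Resolve a device-visible address to a system physical address by
     * walking the bridge chain; nested IOMMUs compose naturally because
     * each bridge only knows about its own mapping. */
    static int resolve_dma_address(BusBridge *br, DeviceState *dev,
                                   uint64_t addr, uint64_t *phys)
    {
        for (; br != NULL; br = br->parent) {
            if (br->translate && br->translate(br, dev, addr, &addr) < 0) {
                return -1;   /* translation fault somewhere on the path */
            }
        }
        *phys = addr;        /* what remains is a system physical address */
        return 0;
    }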