From: Gordan Bobic
Subject: Re: Multi-bridged PCIe devices (Was: Re: iommuu/vt-d issues with LSI MegaSAS (PERC5i))
Date: Wed, 11 Sep 2013 12:44:47 +0100
Message-ID: <8c1b11eec8ed133e55222fa57b4c08a8@mail.shatteredsilicon.net>
To: xen-devel@lists.xenproject.org
List-Id: xen-devel@lists.xenproject.org

This got me thinking - if the problem is a broken IOMMU implementation,
is the IOMMU _actually_ required for PCI passthrough to HVM guests if
all the memory holes and BARs are made exactly the same in dom0 and
domU? If vBAR=pBAR, then surely there is no memory range remapping to
be done anyway - which means there is no need for the strict IOMMU
requirements (over and above the requirements and caveats of PV domUs).
In turn, this would enable PCI passthrough (incl. secondary VGA, unless
I am very much mistaken) to HVM guests while running Xen with iommu=0.

It shifts the design from virtualization to partitioning, which I see
as having obvious advantages and no disadvantages (e.g. VM migration
doesn't work with PCI passthrough anyway).

The reason I am mentioning this is that I'm working on a
vhole=phole + vBAR=pBAR patch set anyway, and this would be a neat
logical extension that would help me work around yet more problems on
what appears to be a fairly common hardware implementation bug. A rough
sketch of the kind of BAR-matching check I have in mind is at the
bottom of this mail, below the quoted thread.

Gordan

On Wed, 11 Sep 2013 12:25:18 +0100, Gordan Bobic wrote:
> It looks like I'm definitely not the first person to hit this problem:
>
> http://www.gossamer-threads.com/lists/xen/users/168557?do=post_view_threaded#168557
>
> No responses or workarounds were suggested back then. :(
>
> Gordan
>
> On Wed, 11 Sep 2013 12:05:35 +0100, Gordan Bobic wrote:
>> I found this:
>>
>> http://lists.xen.org/archives/html/xen-devel/2010-06/msg00093.html
>>
>> while looking for a solution to a similar problem. I am facing a
>> similar issue with LSI (8408E, 3081E-R) and Adaptec (31605) SAS
>> cards. Was there ever a proper, more general fix or workaround for
>> this issue?
>>
>> These SAS cards experience these problems in dom0. When running a
>> vanilla kernel on bare metal, they work OK without intel_iommu set.
>> As soon as I set intel_iommu, the same thing happens (on bare metal,
>> not dom0).
>>
>> Clearly there is something badly broken with multiple layers of
>> bridges when it comes to the IOMMU in my setup (Intel 5520 PCIe root
>> hub -> NF200 bridge -> Intel 80333 bridge -> SAS controller).
>>
>> I tried iommu=dom0-passthrough and it doesn't appear to have helped.
>>
>> I am not seeing similar problems with other PCIe devices that are
>> also, in theory, doing DMA (e.g. GPUs), but the LSI and Adaptec
>> controllers appear to be affected for some reason.
>>
>> Is there anything else I could try/do to make this work?
>> Gordan
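
Purely as an illustration of the sort of check I mean - none of this is
part of the patch set, and the device address (0000:07:00.0) below is
made up - something along these lines in dom0 would dump the physical
BAR ranges that the domU would then need to see at identical addresses
for vBAR=pBAR to hold:

#!/usr/bin/env python
# Rough sketch only: dump the physical resource (BAR) ranges of a PCI
# device from sysfs in dom0, so they can be compared against what the
# guest sees. vBAR=pBAR only holds if the addresses are identical.
# Resources 0-5 are the BARs, 6 is the expansion ROM.
# The default BDF (0000:07:00.0) is just an example - pass the real one.

import sys

def read_resources(bdf):
    """Return (index, start, end, flags) for each implemented resource."""
    resources = []
    with open("/sys/bus/pci/devices/%s/resource" % bdf) as f:
        for i, line in enumerate(f):
            start, end, flags = [int(x, 16) for x in line.split()]
            if end > start:  # unimplemented entries read as all zeroes
                resources.append((i, start, end, flags))
    return resources

if __name__ == "__main__":
    bdf = sys.argv[1] if len(sys.argv) > 1 else "0000:07:00.0"
    for i, start, end, flags in read_resources(bdf):
        print("resource %d: 0x%016x - 0x%016x (flags 0x%x)"
              % (i, start, end, flags))

Run that against the device you intend to pass through and compare the
ranges with what lspci -v reports inside the guest; if they differ,
remapping (and hence the IOMMU) is back in the picture.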