From: Gordan Bobic
Subject: Multi-bridged PCIe devices (Was: Re: iommuu/vt-d issues with LSI MegaSAS (PERC5i))
Date: Wed, 11 Sep 2013 12:05:35 +0100
To: xen-devel@lists.xenproject.org
List-Id: xen-devel@lists.xenproject.org

I found this:

http://lists.xen.org/archives/html/xen-devel/2010-06/msg00093.html

while looking for a solution to a similar problem. I am facing a similar
issue with LSI (8408E, 3081E-R) and Adaptec (31605) SAS cards. Was there
ever a proper, more general fix or workaround for this issue?

These SAS cards experience these problems in dom0. When running a vanilla
kernel on bare metal, they work fine without intel_iommu set. As soon as I
set intel_iommu, the same problem appears (on bare metal, not dom0).

Clearly something is badly broken with multiple layers of bridges when it
comes to the IOMMU in my setup (Intel 5520 PCIe root hub -> NF200 bridge ->
Intel 80333 bridge -> SAS controller).

I tried iommu=dom0-passthrough and it doesn't appear to have helped.

I am not seeing similar problems with other PCIe devices that are also, in
theory, doing DMA (e.g. GPUs), but the LSI and Adaptec controllers appear
to be affected for some reason.

Is there anything else I could try/do to make this work?

Gordan
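
For completeness, the boot-parameter combinations in question look roughly
like this (a sketch only -- kernel/initrd filenames, root device and menu
titles are placeholders, not my actual config):

```
# Bare metal, vanilla kernel -- works until intel_iommu is enabled:
title Linux (bare metal)
    root (hd0,0)
    kernel /vmlinuz ro root=/dev/sda1 intel_iommu=on
    initrd /initrd.img

# Xen -- note the iommu= option goes on the hypervisor line,
# not the dom0 kernel line:
title Xen
    root (hd0,0)
    kernel /xen.gz iommu=dom0-passthrough
    module /vmlinuz ro root=/dev/sda1
    module /initrd.img
```

With intel_iommu off (bare metal) everything works; with it on, or under
Xen with or without dom0-passthrough, the SAS controllers misbehave.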