From: Andrew Cooper
Subject: Re: Powerdown problem on XEN | ACPI S5
Date: Wed, 14 Aug 2013 22:11:09 +0100
Message-ID: <520BF26D.2080901@citrix.com>
In-Reply-To: <520BEE8D.3060704@web2web.at>
References: <520B4465.6000600@web2web.at>
 <520B784F02000078000EBD42@nat28.tlf.novell.com>
 <520B8B8E.5020504@web2web.at> <520B8D63.6040903@citrix.com>
 <520BB7B7.1000108@web2web.at> <520BBEC3.6060004@citrix.com>
 <520BCF0B.4010106@web2web.at> <520BD621.1040701@web2web.at>
 <520BD80B.4060101@citrix.com> <520BDCE4.3040803@web2web.at>
 <520BE5FA.8040306@citrix.com> <520BE76A.9010905@web2web.at>
 <520BEAC9.107@citrix.com> <520BEE8D.3060704@web2web.at>
To: Atom2
Cc: xen-devel, Ian Campbell, Jan Beulich
List-Id: xen-devel@lists.xenproject.org

On 14/08/13 21:54, Atom2 wrote:
> Andrew,
> that does not sound too promising at the moment. Is there anything
> else I could provide to come to a resolution, given that I/O
> virtualization is what the system is supposed to do?
>
> You mention ACPI tables - I have no clue how to provide those, but I
> am more than happy to do what I can.
>
> Ian (Campbell) originally diverted me to Jan (Beulich), who seemed to
> have an idea before you thankfully jumped in. Jan's idea seemed to
> also revolve around ACPI - although not tables, but registers:
>
> quote from Jan Beulich:
> > It would be particularly interesting to know whether perhaps
> > some of the ACPI registers live in memory space on that
> > system - I already have a patch queued up (but not submitted
> > yet) that fixes problems in that case.
>
> @Jan: would it make sense to go down that route?
>
> Thanks again to everybody for their help so far.

I would first start with the suggestion from Ben & Konrad, especially
as this smells like the same kind of breakage. If that fails, you can
get at the ACPI tables using `acpidump`, which looks to live in the
pmtools package on Gentoo.
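Something along these lines should do it (untested from here, and the
exact Gentoo packaging of acpixtract/iasl is a guess - they come from
the same ACPICA tooling):

  # dump every ACPI table the firmware publishes (needs root)
  acpidump > acpidump.out

  # split the dump into one file per table; dmar.dat is the
  # interesting one, as the DMAR table describes the VT-d units
  acpixtract -a acpidump.out

  # optional: disassemble a table into something human-readable
  iasl -d dmar.dat

Just attaching the raw acpidump.out to the list is fine, though.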
~Andrew

>
>
> On 14.08.13 22:38, Andrew Cooper wrote:
>> On 14/08/13 21:24, Atom2 wrote:
>>> On 14.08.13 22:18, Andrew Cooper wrote:
>>> [...]
>>>>
>>>> Ok thanks.
>>>>
>>>> Do you mind confirming whether S5 works with "iommu=off" on the Xen
>>>> command line?
>>>>
>>>> ~Andrew
>>>>
>>> Yes, that works: the system powers off after issuing
>>>   shutdown -h now
>>> from the dom0.
>>
>> So it is certainly an iommu interaction issue, which was sadly suspected
>> given that we have seen similar problems in the past.
>>
>> Curiously, there are two IOMMUs on the system. I am not familiar enough
>> with Cougar Point chipsets to know how they are laid out. Perhaps the
>> ACPI tables might have more information.
> I'm just curious, but where did you see two IOMMUs?

(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed

I added the iommu index into that loop because we found multi-socket
servers with multiple iommus, but I was not expecting to see two
iommus on a single-socket workstation chipset.

>>
>> ACPI interaction with Xen is tricky at the best of times. Xen has no
>> AML interpreter, so it relies on Linux in dom0 to do most of the ACPI
>> legwork.
>>
>> My best guess at the moment is that something in the ACPI code for S5
>> is turning off enough of the PCH that Xen can no longer talk to one
>> of the IOMMUs.
>>
>> ~Andrew
>>
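P.S. As a quick cross-check on the two-iommu question: each
"Intel VT-d iommu <n>" line above corresponds to one DRHD unit in the
DMAR table, and a reasonably recent dom0 kernel exposes the raw table
in sysfs (assuming your kernel was built with ACPI table support):

  # one DRHD structure per VT-d engine; needs root
  hexdump -C /sys/firmware/acpi/tables/DMAR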