From mboxrd@z Thu Jan 1 00:00:00 1970
From: Laurent Vivier
To: David Gibson, qemu-ppc@nongnu.org
Cc: qemu-devel@nongnu.org, benh@kernel.crashing.org, thuth@redhat.com, agraf@suse.de, mst@redhat.com, aik@ozlabs.ru, mdroth@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com, bharata@linux.vnet.ibm.com, abologna@redhat.com, mpolednik@redhat.com
Subject: Re: [Qemu-devel] [RFC 2/4] spapr: Adjust placement of PCI host bridge to allow > 1TiB RAM
Date: Thu, 6 Oct 2016 09:21:56 +0200
Message-ID: <4490e7a0-439c-7f0a-d41d-3eb533ebb1c8@redhat.com>
In-Reply-To: <1475722987-18644-3-git-send-email-david@gibson.dropbear.id.au>
References: <1475722987-18644-1-git-send-email-david@gibson.dropbear.id.au> <1475722987-18644-3-git-send-email-david@gibson.dropbear.id.au>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On 06/10/2016 05:03, David Gibson wrote:
> Currently the default PCI host bridge for the 'pseries' machine type is
> constructed with its IO windows in the 1TiB..(1TiB + 64GiB) range in
> guest memory space. This means that if > 1TiB of guest RAM is specified,
> the RAM will collide with the PCI IO windows, causing serious problems.
>
> Problems won't be obvious until guest RAM goes a bit beyond 1TiB, because
> there's a little unused space at the bottom of the area reserved for PCI,
> but essentially this means that > 1TiB of RAM has never worked with the
> pseries machine type.
>
> This patch fixes this by altering the placement of PHBs on large-RAM VMs.
> Instead of always placing the first PHB at 1TiB, it is placed at the next
> 1 TiB boundary after the maximum RAM address.
>
> Technically, this changes behaviour in a migration-breaking way for
> existing machines with > 1TiB maximum memory, but since having > 1 TiB
> memory was broken anyway, this seems like a reasonable trade-off.

Perhaps you could add an SPAPR_COMPAT_XX property carrying the PHB0 base,
so that compatibility with existing machine types is not broken?

I think spapr without a PCI card (only VIO, for instance) should work with
more than 1 TiB.

On a related note, how does the SPAPR kernel manage memory? Would it be
possible to leave a hole in RAM between 1 TiB and 1 TiB + 64 GiB so that
the kernel can still register the I/O space?

Laurent
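
PS: to make the new placement rule concrete, here is a minimal,
self-contained sketch of my reading of the commit message above. It is
not the actual patch code, and the helper name is made up: the idea is
just to round the top of guest RAM up to the next 1 TiB boundary and put
the PHB0 windows there.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TIB (1ULL << 40)

/* Place the default PHB at the next 1 TiB boundary at or above the
 * highest guest RAM address, never below the historical 1 TiB base. */
static uint64_t default_phb0_base(uint64_t maxram)
{
    uint64_t base = (maxram + TIB - 1) & ~(TIB - 1);
    return base < TIB ? TIB : base;
}

int main(void)
{
    /* 256 GiB guest: PHB0 windows stay at the historical 1 TiB. */
    printf("0x%" PRIx64 "\n", default_phb0_base(256ULL << 30));
    /* 1.5 TiB guest: PHB0 windows move up to the 2 TiB boundary. */
    printf("0x%" PRIx64 "\n", default_phb0_base(3ULL << 39));
    return 0;
}

If the compatibility point above is addressed, I would expect older
machine types to keep the fixed 1 TiB base (via whatever property exposes
the PHB window base), while new machine types use the rounding shown here.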