From mboxrd@z Thu Jan  1 00:00:00 1970
From: Laurent Vivier
Message-ID: <52f34db7-0ae1-cca2-3bd2-e67d70ab2f3f@redhat.com>
Date: Thu, 6 Oct 2016 11:36:47 +0200
In-Reply-To: <1475722987-18644-3-git-send-email-david@gibson.dropbear.id.au>
References: <1475722987-18644-1-git-send-email-david@gibson.dropbear.id.au>
 <1475722987-18644-3-git-send-email-david@gibson.dropbear.id.au>
Subject: Re: [Qemu-devel] [RFC 2/4] spapr: Adjust placement of PCI host bridge to allow > 1TiB RAM
To: David Gibson, qemu-ppc@nongnu.org
Cc: qemu-devel@nongnu.org, benh@kernel.crashing.org, thuth@redhat.com,
 agraf@suse.de, mst@redhat.com, aik@ozlabs.ru, mdroth@linux.vnet.ibm.com,
 nikunj@linux.vnet.ibm.com, bharata@linux.vnet.ibm.com, abologna@redhat.com,
 mpolednik@redhat.com

On 06/10/2016 05:03, David Gibson wrote:
> Currently, the default PCI host bridge for the 'pseries' machine type is
> constructed with its IO windows in the 1 TiB..(1 TiB + 64 GiB) range of
> guest memory space. This means that if more than 1 TiB of guest RAM is
> specified, the RAM will collide with the PCI IO windows, causing serious
> problems.
> 
> Problems won't be obvious until guest RAM goes a bit beyond 1 TiB, because
> there is a little unused space at the bottom of the area reserved for PCI,
> but essentially this means that more than 1 TiB of RAM has never worked
> with the pseries machine type.
> 
> This patch fixes the problem by altering the placement of PHBs on
> large-RAM VMs. Instead of always placing the first PHB at 1 TiB, it is
> placed at the next 1 TiB boundary after the maximum RAM address.
> 
> Technically, this changes behaviour in a migration-breaking way for
> existing machines with more than 1 TiB of maximum memory, but since
> having more than 1 TiB of memory was broken anyway, this seems like a
> reasonable trade-off.
> 
> Signed-off-by: David Gibson

Reviewed-by: Laurent Vivier

> ---
>  hw/ppc/spapr.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index f6e9c2a..9f3e004 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2376,12 +2376,15 @@ static void spapr_phb_placement(sPAPRMachineState *spapr, uint32_t index,
>                                  unsigned n_dma, uint32_t *liobns, Error **errp)
>  {
>      const uint64_t base_buid = 0x800000020000000ULL;
> -    const hwaddr phb0_base = 0x10000000000ULL; /* 1 TiB */
>      const hwaddr phb_spacing = 0x1000000000ULL; /* 64 GiB */
>      const hwaddr mmio_offset = 0xa0000000; /* 2 GiB + 512 MiB */
>      const hwaddr pio_offset = 0x80000000; /* 2 GiB */
>      const uint32_t max_index = 255;
> +    const hwaddr phb0_alignment = 0x10000000000ULL; /* 1 TiB */
> 
> +    uint64_t max_hotplug_addr = spapr->hotplug_memory.base +
> +        memory_region_size(&spapr->hotplug_memory.mr);
> +    hwaddr phb0_base = QEMU_ALIGN_UP(max_hotplug_addr, phb0_alignment);
>      hwaddr phb_base;
>      int i;
> 
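For anyone who wants to sanity-check the placement arithmetic outside the
QEMU tree, here is a minimal standalone sketch. It is illustrative only:
align_up() mimics the effect of QEMU's QEMU_ALIGN_UP for power-of-two
alignments, the per-index 64 GiB stepping reflects how the function derives
phb_base from phb0_base, and the memory layout value is invented for the
example rather than taken from a real machine.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Round addr up to the next multiple of align. align must be a power of
 * two here (1 TiB is); QEMU's QEMU_ALIGN_UP also handles other values. */
static uint64_t align_up(uint64_t addr, uint64_t align)
{
    return (addr + align - 1) & ~(align - 1);
}

int main(void)
{
    const uint64_t phb0_alignment = 0x10000000000ULL; /* 1 TiB  */
    const uint64_t phb_spacing    = 0x1000000000ULL;  /* 64 GiB */

    /* Hypothetical guest: RAM plus the hotplug memory area ends at
     * 2 TiB + 1 GiB, i.e. just past the old 1 TiB PHB window. */
    uint64_t max_hotplug_addr = 0x20040000000ULL;

    /* First PHB goes at the next 1 TiB boundary above all RAM... */
    uint64_t phb0_base = align_up(max_hotplug_addr, phb0_alignment);

    /* ...and subsequent PHBs follow at 64 GiB intervals. */
    for (uint32_t index = 0; index < 3; index++) {
        uint64_t phb_base = phb0_base + index * phb_spacing;
        printf("PHB %" PRIu32 ": base 0x%011" PRIx64 "\n", index, phb_base);
    }
    return 0;
}

With these made-up numbers the first PHB lands at 3 TiB (0x30000000000),
with PHBs 1 and 2 at 0x31000000000 and 0x32000000000, all safely clear of
guest RAM, which is the point of the patch.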