From: "Michael S. Tsirkin"
To: Alex Williamson
Cc: Laszlo Ersek, Marcel Apfelbaum, Zihan Yang, qemu-devel@nongnu.org,
 Igor Mammedov, Eric Auger, Drew Jones, Wei Huang
Subject: Re: [Qemu-devel] [RFC 3/3] acpi-build: allocate mcfg for multiple host bridges
Date: Wed, 23 May 2018 00:44:22 +0300
Message-ID: <20180523004236-mutt-send-email-mst@kernel.org>
In-Reply-To: <20180522153659.2e33fbe0@w520.home>
References: <1526801333-30613-1-git-send-email-whois.zihan.yang@gmail.com>
 <1526801333-30613-4-git-send-email-whois.zihan.yang@gmail.com>
 <17a3765f-b835-2d45-e8b9-ffd4aff909f9@redhat.com>
 <20180522234410-mutt-send-email-mst@kernel.org>
 <20180522153659.2e33fbe0@w520.home>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Tue, May 22, 2018 at 03:36:59PM -0600, Alex Williamson wrote:
> On Tue, 22 May 2018 23:58:30 +0300
> "Michael S. Tsirkin" wrote:
> >
> > It's not hard to think of a use case where >256 devices are helpful,
> > for example a nested virt scenario where each device is passed on to
> > a different nested guest.
> >
> > But I think the main feature this is needed for is NUMA modeling.
> > Guests seem to assume a NUMA node per PCI root, ergo we need more
> > PCI roots.
>
> But even if we have NUMA affinity per PCI host bridge, a PCI host
> bridge does not necessarily imply a new PCIe domain.

What are you calling a PCIe domain?

> Nearly any Intel multi-socket system proves this.  Don't get me wrong,
> I'd like to see PCIe domain support and I'm surprised edk2 is so far
> from supporting it, but other than >256 hotpluggable slots, I'm having
> trouble coming up with use cases.  Maybe hotpluggable PCI root
> hierarchies are easier with separate domains?  Thanks,
>
> Alex
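
For reference on the segment vs. host-bridge distinction discussed above:
each MCFG entry describes one ECAM window for one PCI segment group (what
Linux calls a domain) plus a bus range, so a single segment can hold
several host bridges, and more than 256 buses means more segments and
therefore more entries. The sketch below is illustrative only, with
hypothetical struct and field names and made-up addresses; it mirrors the
allocation-entry layout from the PCI Firmware Specification, not QEMU's
actual structures.

/* Minimal, illustrative sketch (not QEMU code): the "Configuration Space
 * Base Address Allocation" entry that MCFG carries per ECAM window, with
 * made-up names and addresses.  One entry covers one PCI segment group
 * (the "domain") and a bus range; several host bridges can live inside a
 * single segment, while >256 buses need extra segments and extra entries.
 */
#include <stdint.h>
#include <stdio.h>

struct mcfg_allocation {
    uint64_t ecam_base;   /* base address of the ECAM MMIO window      */
    uint16_t segment;     /* PCI segment group number ("domain")       */
    uint8_t  start_bus;   /* first decoded bus number                  */
    uint8_t  end_bus;     /* last decoded bus number                   */
    uint32_t reserved;
} __attribute__((packed));

int main(void)
{
    struct mcfg_allocation entries[] = {
        /* Segment 0: one ECAM window; multiple host bridges can still
         * be described here, each owning a sub-range of buses 0-255.  */
        { 0xb0000000ULL, 0, 0x00, 0xff, 0 },
        /* Segment 1: only needed once segment 0's 256 buses run out
         * (or when true domain isolation is wanted).                  */
        { 0xe0000000ULL, 1, 0x00, 0xff, 0 },
    };

    for (size_t i = 0; i < sizeof(entries) / sizeof(entries[0]); i++) {
        printf("segment %u: buses 0x%02x-0x%02x, ECAM at 0x%llx\n",
               (unsigned)entries[i].segment, entries[i].start_bus,
               entries[i].end_bus,
               (unsigned long long)entries[i].ecam_base);
    }
    return 0;
}

Running it just prints the two windows; the point is that the "domain"
lives in the segment field of the allocation entry, not in the number of
host bridges a firmware happens to expose.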