From: Laine Stump
Date: Thu, 18 Aug 2016 17:20:28 -0400
Subject: Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
To: QEMU Developers <qemu-devel@nongnu.org>
Cc: Marcel Apfelbaum, Peter Maydell, Andrew Jones, Kevin Zhao, Gema Gomez-Solano, Andrea Bolognani, Thomas Hanson, qemu-arm
In-Reply-To: <3a31e28f-15d6-d061-abc8-264a51375f96@redhat.com>
References: <20160817161303.jdglwirs522vn2wa@kamzik.localdomain> <45cfb252-6690-f274-d641-d1d3bff29ae3@redhat.com> <3a31e28f-15d6-d061-abc8-264a51375f96@redhat.com>

On 08/18/2016 08:10 AM, Marcel Apfelbaum wrote:
> On 08/17/2016 08:00 PM, Laine Stump wrote:
>> What I'm not sure about is whether we should always auto-add an extra
>> pcie-*-root to be sure a device can be hotplugged, or if we should
>> admit that 1 available slot isn't good enough for all situations, so
>> we should instead just leave it up to the user/management to manually
>> add extra ports if they think they'll want to hotplug something later.
>
> Why not? Leaving 1 or 2 PCIe ports should be enough. On each port you
> can hotplug a switch with several downstream ports. You can continue
> nesting the switches up to a depth of 6-7, I think.

When did qemu start supporting hotplug of pcie switch ports? My 
understanding is that in real hardware the only way this is supported is 
to plug in the entire "upstream+downstream+downstream+..."
set as a single unit, since there is no mechanism for the guest kernel 
to notify the upstream port that a new downstream port has been attached 
to it (or something like that; I'm vague on the details). From the other 
end, qemu can only hotplug a single PCI device at a time, so by the time 
we get to the point of plugging in the downstream ports, the upstream 
port is already cemented in place by the guest kernel.

I think that would be a really desirable feature, though. Maybe qemu 
could queue up any downstream ports that point to an as-yet-nonexistent 
upstream-port id, and then, when the upstream port with the proper id is 
finally attached, send the right magic to the guest (similar to the way 
it allows hotplugging all non-0 functions first, then takes action when 
function 0 is hotplugged).

If that were available, then yes, it would make perfect sense for 
libvirt to simply always make sure at least one empty pcie-*-port was 
available. If you have plans to do something to support hotplugging a 
pcie-switch-* collection, then that's what I'll do.
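For reference, the switch topology being discussed can at least be 
cold-plugged today. A rough sketch of the command line (the device ids, 
chassis/slot numbers, and machine options are mine, not anything from 
the thread), using the ioh3420 root port and the xio3130 switch devices:

```shell
# Sketch only: a q35 guest started with a PCIe switch already in place.
# One root port on pcie.0, an upstream port plugged into it, and two
# downstream ports hanging off the upstream -- each downstream port is
# a hotplug-capable slot for a single endpoint.
qemu-system-x86_64 -M q35 -m 2048 \
  -device ioh3420,id=rp0,bus=pcie.0,chassis=1,slot=1 \
  -device x3130-upstream,id=up0,bus=rp0 \
  -device xio3130-downstream,id=dn0,bus=up0,chassis=2,slot=0 \
  -device xio3130-downstream,id=dn1,bus=up0,chassis=3,slot=0 \
  -device virtio-net-pci,bus=dn0
```

The open question above is whether the up0/dn0/dn1 part of this could 
ever be assembled piecemeal at runtime instead of only at startup.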
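To illustrate the function-0 analogy: a hedged sketch of the existing 
multifunction hotplug sequence at the monitor (ids and the rp0 bus name 
are made up; rp0 stands for some hotplug-capable port defined at 
startup). The proposed switch behavior would work the same way, with 
queued xio3130-downstream devices playing the role of the non-0 
functions and the x3130-upstream playing the role of function 0:

```shell
# Non-zero function first: qemu accepts it but does not yet tell the guest.
(qemu) device_add virtio-net-pci,id=f1,bus=rp0,addr=0x0.0x1,multifunction=on
# Function 0 last: this is what triggers notification of the whole slot.
(qemu) device_add virtio-net-pci,id=f0,bus=rp0,addr=0x0.0x0,multifunction=on
```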