From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Samudrala, Sridhar"
Subject: Re: [RFC PATCH net-next v2 2/2] virtio_net: Extend virtio to use VF datapath when available
Date: Mon, 22 Jan 2018 13:05:15 -0800
Message-ID:
References: <1515736720-39368-1-git-send-email-sridhar.samudrala@intel.com>
 <1515736720-39368-3-git-send-email-sridhar.samudrala@intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Cc: "Michael S. Tsirkin", Stephen Hemminger, David Miller, Netdev,
 virtualization@lists.linux-foundation.org, virtio-dev@lists.oasis-open.org,
 "Brandeburg, Jesse", Alexander Duyck, Jakub Kicinski
To: Siwei Liu

On 1/22/2018 12:27 PM, Siwei Liu wrote:
> First off, as mentioned in another thread, the model of stacking up
> virt-bond functionality over virtio seems a wrong direction to me.
> Essentially the migration process would need to carry over all guest
> side configurations previously done on the VF/PT and get them moved to
> the new device, be it virtio or VF/PT. Without the help of a new
> upper layer bond driver that enslaves virtio and VF/PT devices
> underneath, virtio will be overloaded with too many specifics of being
> a VF/PT backup in the future. I hope you're already aware of the issue
> in the longer term and move to that model as soon as possible. See more
> inline.

The idea behind this design is to provide a low-latency datapath to
virtio_net while preserving the live migration feature, without requiring
the guest admin to configure a bond between the VF and virtio_net.
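For context, a rough sketch of the manual guest-side setup this feature is
meant to avoid (all interface names here are illustrative, not from the
patch: eth0 for the virtio_net device, eth1 for the VF):

```shell
# Illustrative only: the manual active-backup bond a guest admin would
# otherwise have to configure. eth0 (virtio_net) and eth1 (VF) are
# hypothetical interface names; requires root and the bonding driver.
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down
ip link set eth0 master bond0              # virtio_net as backup slave
ip link set eth1 down
ip link set eth1 master bond0              # VF as the other slave
echo eth1 > /sys/class/net/bond0/bonding/primary   # prefer the VF datapath
ip link set eth0 up
ip link set eth1 up
ip link set bond0 up
```

With the proposed patch, virtio_net takes over this role internally, keyed
off the matching MAC address, so no such configuration is needed in the
guest.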
As this feature is enabled and configured via virtio_net, which already has
a back channel to the hypervisor, adding the functionality to virtio_net
looks like a reasonable option. Adding a new driver and a new device would
require defining a new interface and a channel between the hypervisor and
the VM; if required, we may implement that in the future.

> On Thu, Jan 11, 2018 at 9:58 PM, Sridhar Samudrala wrote:
>> This patch enables virtio_net to switch over to a VF datapath when a VF
>> netdev is present with the same MAC address. The VF datapath is only
>> used for unicast traffic. Broadcasts/multicasts go via the virtio
>> datapath so that east-west broadcasts don't use the PCI bandwidth.
> Why not make this an option/optimization rather than the only means? The
> problem of east-west broadcasts eating PCI bandwidth depends on the
> specifics of the (virtual) network setup, while some users won't want to
> lose the VF's merits such as latency. Why restrict broadcast/multicast
> xmit to virtio only, which potentially regresses performance against the
> raw VF?

I am planning to remove this option when I resubmit the patches.

>> It allows live migration of a VM with a direct-attached VF without the
>> need to set up a bond/team between a VF and a virtio_net device in the
>> guest.
>>
>> The hypervisor needs to unplug the VF device from the guest on the
>> source host and reset the MAC filter of the VF to initiate failover of
>> the datapath to virtio before starting the migration. After the
>> migration is completed, the destination hypervisor sets the MAC filter
>> on the VF and plugs it back into the guest to switch over to the VF
>> datapath.
> Is there a host side patch (planned) for this MAC filter switching
> process? As said in another thread, that simple script won't work for a
> macvtap backend.

The host side patch that enables qemu to configure this feature is
included in this patch series.
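As a rough sketch of what such a script does on the hypervisor side (every
name below is hypothetical, not from the patch series: guest name, hostdev
XML file, PF interface, VF index, MAC address, destination URI):

```shell
#!/bin/sh
# Illustrative sketch of the hypervisor-side failover sequence described
# above. All names are hypothetical; requires root on both hosts.
GUEST=guest1
VF_XML=vf.xml             # <hostdev> definition of the VF for virsh
PF=enp59s0f0              # PF netdev on the source host
VF=0                      # VF index under the PF
MAC=52:54:00:12:34:56     # MAC shared by the VF and the virtio_net device

# Source host: unplug the VF so the guest fails over to virtio, then
# clear the VF's MAC filter (how a zero MAC is treated is driver-dependent).
virsh detach-device "$GUEST" "$VF_XML" --live
ip link set "$PF" vf "$VF" mac 00:00:00:00:00:00

# Migrate the guest.
virsh migrate --live "$GUEST" qemu+ssh://desthost/system

# Destination host: restore the MAC filter and replug the VF.
ip link set "$PF" vf "$VF" mac "$MAC"
virsh attach-device "$GUEST" "$VF_XML" --live
```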
I have been testing this feature using a shell script, but I hope someone
in the libvirt community will extend 'virsh' to handle live migration when
this feature is supported.

Thanks
Sridhar