Date: Wed, 24 Jun 2015 13:05:09 +0200
From: "Michael S. Tsirkin"
To: Jason Wang
Cc: Thibaut Collet, qemu-devel, Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v3 2/2] vhost user: Add RARP injection for legacy guest
Message-ID: <20150624130113-mutt-send-email-mst@redhat.com>
In-Reply-To: <558A6AD3.2090005@redhat.com>
References: <557E8211.1050606@redhat.com>
 <20150615103617-mutt-send-email-mst@redhat.com>
 <557FB435.1030202@redhat.com>
 <557FD8C4.8050809@redhat.com>
 <5588C081.1040800@redhat.com>
 <20150623073410-mutt-send-email-mst@redhat.com>
 <558A6AD3.2090005@redhat.com>

On Wed, Jun 24, 2015 at 04:31:15PM +0800, Jason Wang wrote:
> On 06/23/2015 01:49 PM, Michael S. Tsirkin wrote:
> > On Tue, Jun 23, 2015 at 10:12:17AM +0800, Jason Wang wrote:
> > > On 06/18/2015 11:16 PM, Thibaut Collet wrote:
> > > > On Tue, Jun 16, 2015 at 10:05 AM, Jason Wang wrote:
> > > > > On 06/16/2015 03:24 PM, Thibaut Collet wrote:
> > > > > > If my understanding is correct, on a resume operation we have
> > > > > > the following callback trace:
> > > > > > 1. the virtio_pci_restore function calls the restore callbacks
> > > > > >    of all virtio devices
> > > > > > 2. virtnet_restore calls the try_fill_recv function for each
> > > > > >    virtual queue
> > > > > > 3. try_fill_recv kicks the virtual queue (through the
> > > > > >    virtqueue_kick function)
> > > > >
> > > > > Yes, but this happens only after PM resume, not migration.
> > > > > Migration is totally transparent to the guest.
> > > >
> > > > Hi Jason,
> > > >
> > > > After a deeper look at the migration code of QEMU, a resume event
> > > > is always sent when the live migration is finished.
> > > > On a live migration we have the following callback trace:
> > > > 1. The VM on the new host is set to the RUN_STATE_INMIGRATE state,
> > > >    the autostart boolean is set to 1, and the
> > > >    qemu_start_incoming_migration function is called (see the main
> > > >    function of vl.c)
> > > >    .....
> > > > 2. call of the process_incoming_migration function in
> > > >    migration/migration.c, whatever the transport used for the live
> > > >    migration (tcp:, fd:, unix:, exec:, ...)
> > > > 3. call of the process_incoming_migration_co function in
> > > >    migration/migration.c
> > > > 4. call of the vm_start function in vl.c (otherwise the migrated VM
> > > >    stays in the paused state; the autostart boolean is set to 1 by
> > > >    the main function in vl.c)
> > > > 5. call of the vm_start function, which sets the VM to the
> > > >    RUN_STATE_RUNNING state
> > > > 6. call of the qapi_event_send_resume function, which sends a
> > > >    resume event to the VM
> > >
> > > AFAIK, this function sends the resume event to the QEMU monitor, not
> > > to the VM.
> > >
> > > > So when a live migration has ended:
> > > > 1. a resume event is sent to the guest
> > > > 2. on reception of this resume event, the virtual queues are kicked
> > > >    by the guest
> > > > 3. the vhost-user backend catches this kick and can emit a RARP to
> > > >    a guest that does not support GUEST_ANNOUNCE
> > > >
> > > > This solution, like the solution based on detection of the
> > > > DRIVER_OK status suggested by Michael, allows the backend to send
> > > > the RARP to a legacy guest without involving QEMU or adding an
> > > > ioctl to vhost-user.
> > >
> > > A question here is: does the vhost-user code pass the status to the
> > > backend? If not, how can a userspace backend detect DRIVER_OK?
> >
> > Sorry, I must have been unclear.
> > The vhost core calls VHOST_NET_SET_BACKEND on DRIVER_OK.
> > Unfortunately vhost-user currently translates it to VHOST_USER_NONE.
>
> Looks like VHOST_NET_SET_BACKEND was only used for the tap backend.
>
> > As a workaround, I think kicking the ioeventfds once you get
> > VHOST_NET_SET_BACKEND will work.
>
> Maybe just an eventfd_set() in vhost_net_start(). But is this
> "workaround" elegant enough to be documented? Would it be better to do
> this explicitly with a new feature?
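
For illustration, here is a rough, untested sketch of the workaround
being discussed: kicking each queue's host notifier (the ioeventfd the
backend polls) once the backend has been started. The helper name and
its call site are hypothetical, not existing QEMU code; virtio_get_queue,
virtio_queue_get_host_notifier and event_notifier_set are the existing
QEMU helpers it relies on:

    #include "hw/virtio/virtio.h"
    #include "qemu/event_notifier.h"

    /* Hypothetical helper, sketch only: pretend the guest kicked every
     * virtqueue so a vhost-user backend sees activity and can, for
     * example, flush a pending RARP for a legacy guest. */
    static void vhost_net_kick_queues(VirtIODevice *vdev, int nvqs)
    {
        int i;

        for (i = 0; i < nvqs; i++) {
            VirtQueue *vq = virtio_get_queue(vdev, i);

            /* Writing the host notifier looks like a guest kick from
             * the backend's point of view. */
            event_notifier_set(virtio_queue_get_host_notifier(vq));
        }
    }

    /* Assumed call site, in vhost_net_start() after the backend has
     * been started successfully (two virtqueues per queue pair):
     *
     *     vhost_net_kick_queues(dev, total_queues * 2);
     */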

If you are going to do this anyway, there are a couple of other changes
we should make; in particular, decide what we want to do with the
control vq.

-- 
MST