From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:35476) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1ZeZZx-0006aN-OX for qemu-devel@nongnu.org; Tue, 22 Sep 2015 22:14:42 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1ZeZZu-000273-Ig for qemu-devel@nongnu.org; Tue, 22 Sep 2015 22:14:41 -0400
Received: from mga03.intel.com ([134.134.136.65]:5949) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1ZeZZu-000245-DR for qemu-devel@nongnu.org; Tue, 22 Sep 2015 22:14:38 -0400
Date: Wed, 23 Sep 2015 10:16:48 +0800
From: Yuanhan Liu
Message-ID: <20150923021648.GE2339@yliu-dev.sh.intel.com>
References: <1442588324-11365-1-git-send-email-yuanhan.liu@linux.intel.com> <1442588324-11365-7-git-send-email-yuanhan.liu@linux.intel.com> <560129F6.2020704@redhat.com> <20150923015741.GA2339@yliu-dev.sh.intel.com> <56020A7A.8030801@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <56020A7A.8030801@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v10 6/7] vhost-user: add multiple queue support
List-Id: 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
To: Jason Wang
Cc: Changchun Ouyang , qemu-devel@nongnu.org, Changchun.ouyang@hotmail.com, mst@redhat.com

On Wed, Sep 23, 2015 at 10:12:10AM +0800, Jason Wang wrote:
> 
> 
> On 09/23/2015 09:57 AM, Yuanhan Liu wrote:
> > On Tue, Sep 22, 2015 at 06:14:14PM +0800, Jason Wang wrote:
> >>
> > [...]
> >>> -static void net_vhost_link_down(VhostUserState *s, bool link_down)
> >>> +static void net_vhost_link_down(int queues, NetClientState *ncs[],
> >>> +                                bool link_down)
> >>>  {
> >>> -    s->nc.link_down = link_down;
> >>> +    NetClientState *nc;
> >>> +    int i;
> >>>
> >>> -    if (s->nc.peer) {
> >>> -        s->nc.peer->link_down = link_down;
> >>> -    }
> >>> +    for (i = 0; i < queues; i++) {
> >>> +        nc = ncs[i];
> >>>
> >>> -    if (s->nc.info->link_status_changed) {
> >>> -        s->nc.info->link_status_changed(&s->nc);
> >>> -    }
> >>> +        nc->link_down = link_down;
> >>> +
> >>> +        if (nc->peer) {
> >>> +            nc->peer->link_down = link_down;
> >>> +        }
> >>> +
> >>> +        if (nc->info->link_status_changed) {
> >>> +            nc->info->link_status_changed(nc);
> >>> +        }
> >>>
> >>> -    if (s->nc.peer && s->nc.peer->info->link_status_changed) {
> >>> -        s->nc.peer->info->link_status_changed(s->nc.peer);
> >>> +        if (nc->peer && nc->peer->info->link_status_changed) {
> >>> +            nc->peer->info->link_status_changed(nc->peer);
> >>> +        }
> >>>      }
> >>>  }
> >> The semantics here are a little bit different from qmp_set_link()'s.
> >> What's the reason for this? If there's no specific reason, perhaps we
> >> could just reuse qmp_set_link() (or part of it) here?
> > I did this just because I refactored the vhost_user init code, making
> > the char dev handler initialize all ncs. Hence I changed
> > net_vhost_link_down() to handle all ncs, just like what I did for
> > vhost_user_start().
> >
> > And TBH, I wasn't aware of qmp_set_link() before; after reading the
> > code, I do think we can simply invoke it instead. Thanks for the
> > info.
> >
> >> Otherwise, looks good to me.
> > Thanks for the review. May I add your Reviewed-by if I fix the above
> > issue?
> >
> > --yliu
> 
> I prefer to add it myself :)

Fair enough; I just meant to save you some effort :-)

	--yliu