From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Michael S. Tsirkin"
Subject: Re: [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency
Date: Wed, 14 Aug 2013 14:37:39 +0300
Message-ID: <20130814113739.GE5430@redhat.com>
References: <520B2B47.9040002@acm.org> <520B2B88.6020307@acm.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Asias He, kvm-devel
To: Bart Van Assche
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:37986 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1759738Ab3HNLgF (ORCPT );
	Wed, 14 Aug 2013 07:36:05 -0400
Content-Disposition: inline
In-Reply-To: <520B2B88.6020307@acm.org>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Wed, Aug 14, 2013 at 09:02:32AM +0200, Bart Van Assche wrote:
> If the TIF_NEED_RESCHED task flag is set, wake up any vhost_work_flush()
> waiters before rescheduling instead of after rescheduling.
>
> Signed-off-by: Bart Van Assche
> Cc: Michael S. Tsirkin
> Cc: Asias He

Why exactly? It's not like flush needs to be extra fast ...
> ---
>  drivers/vhost/vhost.c | 42 +++++++++++++++++++-----------------------
>  1 file changed, 19 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index e58cf00..e7ffc10 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -201,47 +201,43 @@ static void vhost_vq_reset(struct vhost_dev *dev,
>  static int vhost_worker(void *data)
>  {
>  	struct vhost_dev *dev = data;
> -	struct vhost_work *work = NULL;
> -	unsigned uninitialized_var(seq);
> +	struct vhost_work *work;
> +	unsigned seq;
>  	mm_segment_t oldfs = get_fs();
>
>  	set_fs(USER_DS);
>  	use_mm(dev->mm);
>
> -	for (;;) {
> +	spin_lock_irq(&dev->work_lock);
> +	while (!kthread_should_stop()) {
>  		/* mb paired w/ kthread_stop */
>  		set_current_state(TASK_INTERRUPTIBLE);
> -
> -		spin_lock_irq(&dev->work_lock);
> -		if (work) {
> -			work->done_seq = seq;
> -			if (work->flushing)
> -				wake_up_all(&work->done);
> -		}
> -
> -		if (kthread_should_stop()) {
> -			spin_unlock_irq(&dev->work_lock);
> -			__set_current_state(TASK_RUNNING);
> -			break;
> -		}
>  		if (!list_empty(&dev->work_list)) {
>  			work = list_first_entry(&dev->work_list,
>  						struct vhost_work, node);
>  			list_del_init(&work->node);
>  			seq = work->queue_seq;
> -		} else
> -			work = NULL;
> -		spin_unlock_irq(&dev->work_lock);
> +			spin_unlock_irq(&dev->work_lock);
>
> -		if (work) {
>  			__set_current_state(TASK_RUNNING);
>  			work->fn(work);
> -			if (need_resched())
> -				schedule();
> -		} else
> +
> +			spin_lock_irq(&dev->work_lock);
> +			work->done_seq = seq;
> +			if (work->flushing)
> +				wake_up_all(&work->done);
> +		}
> +		if (list_empty(&dev->work_list) || need_resched()) {
> +			spin_unlock_irq(&dev->work_lock);
>
>  			schedule();
>
> +			spin_lock_irq(&dev->work_lock);
> +		}
>  	}
> +	spin_unlock_irq(&dev->work_lock);
> +
> +	__set_current_state(TASK_RUNNING);
>  	unuse_mm(dev->mm);
>  	set_fs(oldfs);
>  	return 0;
> --
> 1.7.10.4