From: Jason Wang
To: Mike Rapoport, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 0/3] basic busy polling support for vhost_net
Date: Mon, 25 Jan 2016 11:00:05 +0800
Message-ID: <56A58FB5.8020101@redhat.com>
References: <1448951985-12385-1-git-send-email-jasowang@redhat.com>

On 01/24/2016 05:00 PM, Mike Rapoport wrote:
> Hi Jason,
>
>> Jason Wang <jasowang@redhat.com> writes:
>>
>> Hi all:
>>
>> This series tries to add basic busy polling for vhost net. The idea is
>> simple: at the end of tx/rx processing, busy poll for a while for newly
>> added tx descriptors and for incoming data on the rx socket.
> There were several concerns Michael raised on Razya's attempt to add
> polling to vhost-net ([1], [2]). Some of them seem relevant for these
> patches as well:
>
> - What happens in overcommit scenarios?

We have an optimization here: busy polling ends if more than one process
is runnable on the local cpu. This is done by checking
single_task_running() in each iteration. So in the worst case, busy
polling should be as fast as the normal case, or at most a minor
regression compared to it. You can see this from the last test result.

> - Have you checked the effect of polling on some macro benchmarks?

I'm not sure I get the question. The cover letter shows some netperf
benchmark results. What do you mean by "macro benchmarks"?

>
>> The maximum amount of time (in us) that could be spent on busy polling
>> is specified via ioctl.
> Although ioctl is definitely a more appropriate interface for letting the
> user tune polling, it's still not clear to me how the *end user* will
> interact with it and how easy it would be for him/her.

There will be a QEMU part of the code for the end user, e.g. a
vhost_poll_us parameter for tap like:

-netdev tap,id=hn0,vhost=on,vhost_poll_us=20

Thanks

> [1] http://thread.gmane.org/gmane.linux.kernel/1765593
> [2] http://thread.gmane.org/gmane.comp.emulators.kvm.devel/131343
>
> --
> Sincerely yours,
> Mike.
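
For illustration, a rough sketch (kernel-style C, not the actual patch) of
the overcommit bail-out described above; busy_poll_work_pending() is a
hypothetical stand-in for the real "new tx descriptor or rx data" check:

#include <linux/ktime.h>
#include <linux/sched.h>

static bool busy_poll_work_pending(void);	/* placeholder */

/* Busy poll for at most poll_us microseconds, but give the CPU back as
 * soon as another task becomes runnable on this CPU (overcommit) or a
 * reschedule is requested. */
static void busy_poll(unsigned long poll_us)
{
	ktime_t endtime = ktime_add_us(ktime_get(), poll_us);

	while (!busy_poll_work_pending()) {
		if (!single_task_running() || need_resched())
			break;		/* overcommit: stop polling */
		if (ktime_compare(ktime_get(), endtime) >= 0)
			break;		/* polling budget used up */
		cpu_relax();
	}
}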
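
And a sketch of the userspace side, assuming the budget is handed to vhost
through a per-virtqueue ioctl taking a struct vhost_vring_state (the
VHOST_SET_VRING_BUSYLOOP_TIMEOUT name below is an assumption about this
series' interface):

#include <sys/ioctl.h>
#include <linux/vhost.h>

/* vhost_fd is an already opened and initialized /dev/vhost-net fd. */
static int set_busyloop_timeout(int vhost_fd, unsigned int vq_index,
				unsigned int timeout_us)
{
	struct vhost_vring_state state = {
		.index = vq_index,	/* which virtqueue (rx or tx) */
		.num   = timeout_us,	/* busy-poll budget in us, 0 disables */
	};

	return ioctl(vhost_fd, VHOST_SET_VRING_BUSYLOOP_TIMEOUT, &state);
}

With the -netdev example above, QEMU would presumably call something like
this with timeout_us = 20 for each virtqueue of the tap device.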