Subject: Re: [Qemu-devel] [PATCH v5 2/4] virtio-pci: Use ioeventfd for virtqueue notify
From: Stefan Hajnoczi
Date: Tue, 25 Jan 2011 07:12:14 +0000
To: Kevin Wolf
Cc: qemu-devel@nongnu.org, Stefan Hajnoczi, "Michael S. Tsirkin"
List-Id: qemu-devel.nongnu.org
In-Reply-To: <4D3DDB99.8090001@redhat.com>

On Mon, Jan 24, 2011 at 8:05 PM, Kevin Wolf wrote:
> On 24.01.2011 20:47, Michael S. Tsirkin wrote:
>> On Mon, Jan 24, 2011 at 08:48:05PM +0100, Kevin Wolf wrote:
>>> On 24.01.2011 20:36, Michael S. Tsirkin wrote:
>>>> On Mon, Jan 24, 2011 at 07:54:20PM +0100, Kevin Wolf wrote:
>>>>> On 12.12.2010 16:02, Stefan Hajnoczi wrote:
>>>>>> Virtqueue notify is currently handled synchronously in userspace virtio.
>>>>>> This prevents the vcpu from executing guest code while hardware
>>>>>> emulation code handles the notify.
>>>>>>
>>>>>> On systems that support KVM, the ioeventfd mechanism can be used to
>>>>>> make virtqueue notify a lightweight exit by deferring hardware
>>>>>> emulation to the iothread and allowing the VM to continue execution.
>>>>>> This model is similar to how vhost receives virtqueue notifies.
>>>>>>
>>>>>> The result of this change is improved performance for userspace
>>>>>> virtio devices.  Virtio-blk throughput increases especially for
>>>>>> multithreaded scenarios and virtio-net transmit throughput
>>>>>> increases substantially.
>>>>>>
>>>>>> Some virtio devices are known to have guest drivers which expect a
>>>>>> notify to be processed synchronously and spin waiting for
>>>>>> completion.  Only enable ioeventfd for virtio-blk and virtio-net
>>>>>> for now.
>>>>>>
>>>>>> Care must be taken not to interfere with vhost-net, which uses host
>>>>>> notifiers.  If the set_host_notifier() API is used by a device,
>>>>>> virtio-pci will disable virtio-ioeventfd and let the device deal
>>>>>> with host notifiers as it wishes.
>>>>>>
>>>>>> After migration and on VM change state (running/paused)
>>>>>> virtio-ioeventfd will enable/disable itself.
>>>>>>
>>>>>>  * VIRTIO_CONFIG_S_DRIVER_OK -> enable virtio-ioeventfd
>>>>>>  * !VIRTIO_CONFIG_S_DRIVER_OK -> disable virtio-ioeventfd
>>>>>>  * virtio_pci_set_host_notifier() -> disable virtio-ioeventfd
>>>>>>  * vm_change_state(running=0) -> disable virtio-ioeventfd
>>>>>>  * vm_change_state(running=1) -> enable virtio-ioeventfd
>>>>>>
>>>>>> Signed-off-by: Stefan Hajnoczi
>>>>>
>>>>> On current git master I'm getting hangs when running iozone on a
>>>>> virtio-blk disk.  "Hang" means that it's not responsive any more
>>>>> and has 100% CPU consumption.
>>>>>
>>>>> I bisected the problem to this patch.  Any ideas?
>>>>>
>>>>> Kevin
>>>>
>>>> Does it help if you set ioeventfd=off on the command line?
>>>
>>> Yes, with ioeventfd=off it seems to work fine.
>>>
>>> Kevin
>>
>> Then it's the ioeventfd that is to blame.
>> Is it the io thread that consumes 100% CPU?
>> Or the vcpu thread?
>
> I was building with the default options, i.e. there is no IO thread.
>
> Now I'm just running the test with IO threads enabled, and so far
> everything looks good.  So I can only reproduce the problem with IO
> threads disabled.

Hrm... aio uses SIGUSR2 to force the vcpu to process aio completions
(relevant when --enable-io-thread is not used).  I will take a look at
that again and see why we're spinning without checking for ioeventfd
completion.

Stefan