Subject: Re: [Qemu-devel] [PATCH v5 2/4] virtio-pci: Use ioeventfd for virtqueue notify
From: Kevin Wolf
Date: Mon, 24 Jan 2011 21:05:45 +0100
To: "Michael S. Tsirkin"
Cc: Stefan Hajnoczi, qemu-devel@nongnu.org, Stefan Hajnoczi

On 24.01.2011 20:47, Michael S. Tsirkin wrote:
> On Mon, Jan 24, 2011 at 08:48:05PM +0100, Kevin Wolf wrote:
>> On 24.01.2011 20:36, Michael S. Tsirkin wrote:
>>> On Mon, Jan 24, 2011 at 07:54:20PM +0100, Kevin Wolf wrote:
>>>> On 12.12.2010 16:02, Stefan Hajnoczi wrote:
>>>>> Virtqueue notify is currently handled synchronously in userspace
>>>>> virtio. This prevents the vcpu from executing guest code while
>>>>> hardware emulation code handles the notify.
>>>>>
>>>>> On systems that support KVM, the ioeventfd mechanism can be used to
>>>>> make virtqueue notify a lightweight exit by deferring hardware
>>>>> emulation to the iothread and allowing the VM to continue execution.
>>>>> This model is similar to how vhost receives virtqueue notifies.
>>>>>
>>>>> The result of this change is improved performance for userspace
>>>>> virtio devices. Virtio-blk throughput increases especially for
>>>>> multithreaded scenarios, and virtio-net transmit throughput
>>>>> increases substantially.
>>>>>
>>>>> Some virtio devices are known to have guest drivers which expect a
>>>>> notify to be processed synchronously and spin waiting for
>>>>> completion. Only enable ioeventfd for virtio-blk and virtio-net for
>>>>> now.
>>>>>
>>>>> Care must be taken not to interfere with vhost-net, which uses host
>>>>> notifiers. If the set_host_notifier() API is used by a device,
>>>>> virtio-pci will disable virtio-ioeventfd and let the device deal
>>>>> with host notifiers as it wishes.
>>>>>
>>>>> After migration and on VM change state (running/paused),
>>>>> virtio-ioeventfd will enable/disable itself:
>>>>>
>>>>> * VIRTIO_CONFIG_S_DRIVER_OK -> enable virtio-ioeventfd
>>>>> * !VIRTIO_CONFIG_S_DRIVER_OK -> disable virtio-ioeventfd
>>>>> * virtio_pci_set_host_notifier() -> disable virtio-ioeventfd
>>>>> * vm_change_state(running=0) -> disable virtio-ioeventfd
>>>>> * vm_change_state(running=1) -> enable virtio-ioeventfd
>>>>>
>>>>> Signed-off-by: Stefan Hajnoczi
>>>>
>>>> On current git master I'm getting hangs when running iozone on a
>>>> virtio-blk disk. "Hang" means that it's no longer responsive and
>>>> consumes 100% CPU.
>>>>
>>>> I bisected the problem to this patch. Any ideas?
>>>>
>>>> Kevin
>>>
>>> Does it help if you set ioeventfd=off on the command line?
>>
>> Yes, with ioeventfd=off it seems to work fine.
>>
>> Kevin
>
> Then it's the ioeventfd that is to blame.
> Is it the io thread that consumes 100% CPU?
> Or the vcpu thread?

I was building with the default options, i.e. without an IO thread. I'm
now running the test with IO threads enabled, and so far everything
looks good. So I can only reproduce the problem with IO threads
disabled.

Kevin
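
For context, the "lightweight exit" described in the commit message
works as follows: a legacy virtio-pci virtqueue notify is a 16-bit PIO
write of the queue index to the VIRTIO_PCI_QUEUE_NOTIFY register
(offset 0x10 in the device's I/O BAR), and KVM's KVM_IOEVENTFD ioctl
lets the kernel complete such a write by signalling an eventfd instead
of exiting to userspace. Below is a minimal sketch of such a
registration against the raw kernel API; it is not QEMU's actual
wrapper code, "vm_fd" and "iobase" are assumed to come from the
surrounding VMM, and error handling is trimmed.

/*
 * Sketch: register a PIO ioeventfd so that a guest write of
 * "queue_index" to the virtqueue notify register is completed inside
 * the kernel by signalling an eventfd, instead of a heavyweight exit
 * to userspace. An iothread can then poll the returned fd and run the
 * virtqueue handler while the vcpu keeps executing guest code.
 */
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define VIRTIO_PCI_QUEUE_NOTIFY 0x10   /* offset in the legacy I/O BAR */

static int assign_virtqueue_ioeventfd(int vm_fd, uint64_t iobase,
                                      uint16_t queue_index)
{
    int efd = eventfd(0, 0);
    if (efd < 0) {
        return -1;
    }

    struct kvm_ioeventfd args = {
        .datamatch = queue_index,   /* trigger only for this queue */
        .addr      = iobase + VIRTIO_PCI_QUEUE_NOTIFY,
        .len       = 2,             /* the notify is a 16-bit write */
        .fd        = efd,
        .flags     = KVM_IOEVENTFD_FLAG_PIO |
                     KVM_IOEVENTFD_FLAG_DATAMATCH,
    };

    /* After this, KVM completes the guest's OUT instruction in the
     * kernel and signals efd rather than returning to userspace. */
    if (ioctl(vm_fd, KVM_IOEVENTFD, &args) < 0) {
        close(efd);
        return -1;
    }
    return efd;
}

Disabling the mechanism (e.g. for virtio_pci_set_host_notifier() or on
vm_change_state(running=0), as in the list above) would be the same
call with KVM_IOEVENTFD_FLAG_DEASSIGN added to the flags.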