From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stefan Hajnoczi
Date: Wed, 9 Nov 2016 17:13:19 +0000
Message-Id: <1478711602-12620-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [RFC 0/3] aio: experimental virtio-blk polling mode
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Karl Rister, Fam Zheng, Stefan Hajnoczi

Recent performance investigation work done by Karl Rister shows that the
guest->host notification takes around 20 us.  This is more than the
"overhead" of QEMU itself (e.g. block layer).

One way to avoid the costly exit is to use polling instead of notification.
The main drawback of polling is that it consumes CPU resources.  For polling
to improve performance, the host must have spare CPU cycles available on
physical CPUs that are not used by the guest.

This is an experimental AioContext polling implementation.  It adds a
polling callback into the event loop.  Polling functions are implemented
for the virtio-blk virtqueue guest->host kick and for Linux AIO completion.

The QEMU_AIO_POLL_MAX_NS environment variable sets the number of
nanoseconds to poll before entering the usual blocking poll(2) syscall.
Try setting this variable to the time from old request completion to new
virtqueue kick.

By default no polling is done.  QEMU_AIO_POLL_MAX_NS must be set to get
any polling!

Karl: I hope you can try this patch series with several
QEMU_AIO_POLL_MAX_NS values.  If you don't find a good value we should
double-check the tracing data to see if this experimental code can be
improved.

Stefan Hajnoczi (3):
  aio-posix: add aio_set_poll_handler()
  virtio: poll virtqueues for new buffers
  linux-aio: poll ring for completions

 aio-posix.c         | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 block/linux-aio.c   |  17 +++++++
 hw/virtio/virtio.c  |  19 ++++++++
 include/block/aio.h |  16 +++++++
 4 files changed, 185 insertions(+)

-- 
2.7.4
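
[Editor's note: for readers unfamiliar with the busy-wait-then-block pattern
described in the cover letter, here is a minimal standalone sketch in C.  It
is not the actual patch code; wait_for_event(), event_pending() and the fd
argument are hypothetical placeholders for the virtqueue kick / Linux AIO
ring checks that the series implements.]

  /*
   * Sketch: poll for up to QEMU_AIO_POLL_MAX_NS nanoseconds, then fall
   * back to a blocking poll(2) syscall if nothing became ready.
   */
  #include <poll.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <time.h>

  static int64_t get_ns(void)
  {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return ts.tv_sec * 1000000000LL + ts.tv_nsec;
  }

  /* Placeholder for a real poll handler (e.g. checking the vring or the
   * Linux AIO completion ring in userspace). */
  static bool event_pending(void)
  {
      return false;
  }

  static void wait_for_event(int fd)
  {
      const char *env = getenv("QEMU_AIO_POLL_MAX_NS");
      int64_t max_ns = env ? strtoll(env, NULL, 10) : 0;

      if (max_ns > 0) {
          int64_t deadline = get_ns() + max_ns;
          while (get_ns() < deadline) {
              if (event_pending()) {
                  return; /* polled successfully, no syscall needed */
              }
          }
      }

      /* Polling budget exhausted (or polling disabled): block in poll(2) */
      struct pollfd pfd = { .fd = fd, .events = POLLIN };
      poll(&pfd, 1, -1);
  }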