From mboxrd@z Thu Jan  1 00:00:00 1970
From: Stefan Hajnoczi
Date: Thu, 3 Jul 2014 15:51:59 +0200
Message-Id: <1404395521-11158-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PATCH 0/2] coroutine: dynamically scale pool size
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Paolo Bonzini, ming.lei@canonical.com, Stefan Hajnoczi

The coroutine pool reuses exited coroutines to make qemu_coroutine_create()
cheap.  The size of the pool is capped to prevent it from hogging memory
after a period of high coroutine activity.

Previously the max size was hardcoded to 64, but this doesn't scale with
guest size.  A guest with lots of disks can do more parallel I/O and
therefore requires a larger coroutine pool.

This series tries to solve the problem by scaling the pool size according
to the number of drives.

Ming: Please let me know if this eliminates the rt_sigprocmask system calls
you are seeing.  It should solve part of the performance regression you
have seen in qemu.git/master virtio-blk dataplane.

Stefan Hajnoczi (2):
  coroutine: make pool size dynamic
  block: bump coroutine pool size for drives

 block.c                   |  4 ++++
 include/block/coroutine.h | 11 +++++++++++
 qemu-coroutine.c          | 25 +++++++++++++++++++------
 3 files changed, 34 insertions(+), 6 deletions(-)

-- 
1.9.3
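
[Editorial note, not part of the patch series: the sketch below is a minimal,
self-contained C illustration of the scheme the cover letter describes, i.e. a
freelist of reusable objects whose cap can be raised or lowered at runtime.
The names (PoolItem, pool_adjust_max_size, pool_get, pool_put) are hypothetical
stand-ins, not QEMU's actual Coroutine API; in QEMU the pooled objects are
Coroutine structures and reuse happens inside qemu_coroutine_create().]

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a coroutine; in QEMU this would carry a stack
 * and execution context, which is what makes allocation expensive. */
typedef struct PoolItem {
    struct PoolItem *next;
} PoolItem;

static PoolItem *pool_head;        /* freelist of exited items */
static int pool_size;              /* number of items currently pooled */
static int pool_max_size = 64;     /* default cap, as before this series */

/* Grow or shrink the cap.  A block device could call this with +n when it
 * is opened and -n when it is closed, so the cap scales with the number of
 * drives instead of staying hardcoded at 64. */
static void pool_adjust_max_size(int n)
{
    pool_max_size += n;
}

/* Reuse a pooled item if one is available, otherwise allocate a fresh one.
 * Reuse is what keeps creation cheap: it skips setting up a new context
 * (and the system calls, such as rt_sigprocmask, that can go with it). */
static PoolItem *pool_get(void)
{
    PoolItem *item = pool_head;

    if (item) {
        pool_head = item->next;
        pool_size--;
        return item;
    }
    return calloc(1, sizeof(*item));
}

/* Return an exited item to the pool, or free it once the cap is reached
 * so the pool cannot hog memory after a burst of activity. */
static void pool_put(PoolItem *item)
{
    if (pool_size < pool_max_size) {
        item->next = pool_head;
        pool_head = item;
        pool_size++;
    } else {
        free(item);
    }
}

int main(void)
{
    pool_adjust_max_size(+32);     /* e.g. a new drive wants more headroom */

    PoolItem *c = pool_get();
    pool_put(c);                   /* returns to the pool, not free() */

    printf("pooled: %d, cap: %d\n", pool_size, pool_max_size);
    return 0;
}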