From: Paolo Bonzini
Date: Fri, 18 May 2012 16:18:25 +0200
Message-Id: <1337350712-29183-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [RFC PATCH 0/7] Manual writethrough cache and cache mode toggle
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, stefanha@linux.vnet.ibm.com, anthony@codemonkey.ws

This is an alternative implementation of writethrough caching.  By
always opening protocols in writethrough mode and doing flushes
manually after every write, it achieves three results:

1) it makes flipping the cache mode extremely easy;

2) it lets formats control flushes during metadata updates even in
   writethrough mode, which makes the updates more efficient;

3) it makes cache=writethrough automatically flush metadata without
   needing extra work in the formats.

(A standalone sketch of the write-then-flush scheme is appended at the
end of this mail.)

In practice, the performance result is a wash.  I measured "make -j3
vmlinux" in a 2-core guest on a 4-core host, with 2 GB of memory in
the guest and 8 GB in the host.  qemu-kvm was started with an empty
qcow2 image, a virtio disk and cache=writethrough (F16 installation +
exploded kernel tarball in the backing file).  The results are as
follows:

without patches:
    real 9m25.057s   user 12m11.091s   sys 3m48.281s
    real 9m23.429s   user 11m58.628s   sys 3m47.125s
    real 9m23.524s   user 12m2.458s    sys 3m44.722s

with patches:
    real 9m25.808s   user 12m16.543s   sys 3m50.648s
    real 9m22.711s   user 12m12.172s   sys 3m49.426s
    real 9m21.516s   user 12m18.127s   sys 3m50.762s

So 1-2% more CPU usage was measured in the guest, but that does not
make much sense for virtio with ioeventfd, so I assume it is all
within noise.

Any opinions?  Should I run any more tests, perhaps with
cache=directsync?  Does the performance of cache=writethrough matter
much, especially if we flip the default?

Thanks,

Paolo

Paolo Bonzini (7):
  block: flush in writethrough mode after writes
  block: flush in writethrough mode after snapshot operations
  savevm: flush after saving vm state
  block: do not pass BDRV_O_CACHE_WB to the protocol
  block: copy enable_write_cache in bdrv_append
  block: add bdrv_set_enable_write_cache
  ide: support enable/disable write cache
  block: do not handle writethrough in qcow2 caches

 block.c                | 20 +++++++++++++++++---
 block.h                |  1 +
 block/qcow2-cache.c    | 25 ++-----------------------
 block/qcow2-refcount.c | 12 ------------
 block/qcow2.c          |  7 ++-----
 block/qcow2.h          |  5 +----
 hw/ide/core.c          | 18 +++++++++++++++---
 savevm.c               |  2 +-
 8 files changed, 39 insertions(+), 51 deletions(-)

-- 
1.7.10.1
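
Appendix: a minimal standalone sketch of the write-then-flush scheme
described above.  The names here (Disk, disk_write, disk_flush) are
hypothetical stand-ins for illustration only, not QEMU's actual API;
the real patches hook the same logic into the block layer and track
the mode in enable_write_cache.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy stand-in for a block device; in QEMU this state lives in
     * BlockDriverState. */
    typedef struct Disk {
        bool enable_write_cache;    /* false = writethrough */
        unsigned char data[4096];   /* toy backing store */
        unsigned flush_count;       /* observable effect for the demo */
    } Disk;

    static void disk_flush(Disk *d)
    {
        /* In QEMU this would reach the protocol's flush, ultimately
         * an fdatasync() or similar. */
        d->flush_count++;
    }

    static void disk_write(Disk *d, size_t off, const void *buf, size_t len)
    {
        memcpy(d->data + off, buf, len);
        /* The core idea: writethrough is emulated by flushing after
         * every write, so the cache mode is just a bool that can be
         * flipped at runtime, with no reopen of the image. */
        if (!d->enable_write_cache) {
            disk_flush(d);
        }
    }

    int main(void)
    {
        Disk d = { .enable_write_cache = false };

        disk_write(&d, 0, "abc", 3);   /* writethrough: write + flush */
        d.enable_write_cache = true;   /* guest toggles WCE */
        disk_write(&d, 3, "def", 3);   /* writeback: no flush */

        printf("flushes: %u\n", d.flush_count);   /* prints 1 */
        return 0;
    }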