From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: [PATCH] kvm: Flush coalesced MMIO buffer periodically
Date: Mon, 25 Jan 2010 14:08:03 -0200
Message-ID: <20100125160803.GA16043@amt.cnet>
References: <1264405604-13506-1-git-send-email-sheng@linux.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Avi Kivity, kvm@vger.kernel.org
To: Sheng Yang
Content-Disposition: inline
In-Reply-To: <1264405604-13506-1-git-send-email-sheng@linux.intel.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Mon, Jan 25, 2010 at 03:46:44PM +0800, Sheng Yang wrote:
> The default behavior of coalesced MMIO is to cache writes in a buffer until:
> 1. The buffer is full.
> 2. Or we exit to QEMU for some other reason.
>
> But this can result in a very late write when:
> 1. Each MMIO write is small.
> 2. The interval between writes is large.
> 3. There is no need for input or frequent access to other devices.
>
> This issue was observed on an experimental embedded system. The test image
> simply prints "test" every second. The output under QEMU meets expectations,
> but the output under KVM is delayed by seconds.
>
> Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into
> the VGA update handler. This way, we don't need an explicit vcpu exit to
> QEMU to handle this issue.

Sheng,

Can you send this to QEMU upstream first, since the feature is present
there.
> Signed-off-by: Sheng Yang
> ---
>  cpu-all.h  |    2 ++
>  exec.c     |    6 ++++++
>  kvm-all.c  |   20 ++++++++++++++++++++
>  qemu-kvm.c |    9 +++++++--
>  qemu-kvm.h |    2 ++
>  vl.c       |    2 ++
>  6 files changed, 39 insertions(+), 2 deletions(-)
>
> diff --git a/cpu-all.h b/cpu-all.h
> index 8ed76c7..51effc0 100644
> --- a/cpu-all.h
> +++ b/cpu-all.h
> @@ -916,6 +916,8 @@ void qemu_register_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
>
>  void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
>
> +void qemu_flush_coalesced_mmio_buffer(void);
> +
>  /*******************************************/
>  /* host CPU ticks (if available) */
>
> diff --git a/exec.c b/exec.c
> index 99e88e1..40c01a1 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2424,6 +2424,12 @@ void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size)
>      kvm_uncoalesce_mmio_region(addr, size);
>  }
>
> +void qemu_flush_coalesced_mmio_buffer(void)
> +{
> +    if (kvm_enabled())
> +        kvm_flush_coalesced_mmio_buffer();
> +}
> +
>  #ifdef __linux__
>
>  #include
>
> diff --git a/kvm-all.c b/kvm-all.c
> index 0423fff..3d9fcc0 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -25,6 +25,9 @@
>  #include "hw/hw.h"
>  #include "gdbstub.h"
>  #include "kvm.h"
> +#ifndef KVM_UPSTREAM
> +#include "libkvm.h"
> +#endif
>
>  #ifdef KVM_UPSTREAM
>  /* KVM uses PAGE_SIZE in it's definition of COALESCED_MMIO_MAX */
> @@ -385,6 +388,23 @@ int kvm_uncoalesce_mmio_region(target_phys_addr_t start, ram_addr_t size)
>      return ret;
>  }
>
> +void kvm_flush_coalesced_mmio_buffer(void)
> +{
> +#ifdef KVM_CAP_COALESCED_MMIO
> +    if (kvm_state->coalesced_mmio_ring) {
> +        struct kvm_coalesced_mmio_ring *ring =
> +            kvm_state->coalesced_mmio_ring;
> +        while (ring->first != ring->last) {
> +            cpu_physical_memory_rw(ring->coalesced_mmio[ring->first].phys_addr,
> +                                   &ring->coalesced_mmio[ring->first].data[0],
> +                                   ring->coalesced_mmio[ring->first].len, 1);
> +            smp_wmb();

Tab breakage.
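For readers following the thread, the flush loop in the patch is a classic single-producer/single-consumer ring: the kernel appends entries and advances `last`, userspace replays entries and advances `first`. Here is a minimal userspace sketch of that pattern. The field names mirror `struct kvm_coalesced_mmio_ring`, but this is a standalone simulation, not the real KVM ABI: `replay_write()` and `writes_replayed` are hypothetical stand-ins for `cpu_physical_memory_rw()`, and the barrier is reduced to a comment since a single-threaded model needs none.

```c
#include <assert.h>
#include <string.h>

#define RING_MAX 8  /* real KVM sizes this from PAGE_SIZE */

struct mmio_entry {
    unsigned long phys_addr;
    unsigned char data[8];
    unsigned int len;
};

/* Simplified model of the shared ring: the kernel (producer) owns
 * 'last', userspace (consumer) owns 'first'. */
struct mmio_ring {
    unsigned int first;
    unsigned int last;
    struct mmio_entry coalesced_mmio[RING_MAX];
};

/* Stand-in for cpu_physical_memory_rw(): just count replayed writes. */
static int writes_replayed;

static void replay_write(const struct mmio_entry *e)
{
    (void)e;
    writes_replayed++;
}

/* Drain all pending entries, mirroring kvm_flush_coalesced_mmio_buffer():
 * consume the entry at 'first', then publish the new 'first' so the
 * producer may reuse that slot.  In the real code a write barrier
 * (smp_wmb) orders the data read before the index update. */
static void flush_ring(struct mmio_ring *ring)
{
    while (ring->first != ring->last) {
        replay_write(&ring->coalesced_mmio[ring->first]);
        /* smp_wmb() would go here in the kernel/QEMU code */
        ring->first = (ring->first + 1) % RING_MAX;
    }
}
```

The key design point the patch exploits is that this loop is cheap when the ring is empty (`first == last` and it returns immediately), so hooking it into a periodic handler such as the VGA update costs little.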