From: Alexander Graf <agraf@suse.de>
To: Sheng Yang <sheng@linux.intel.com>
Cc: qemu-devel@nongnu.org, Marcelo Tosatti <mtosatti@redhat.com>,
Avi Kivity <avi@redhat.com>,
kvm@vger.kernel.org
Subject: [Qemu-devel] Re: [PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically
Date: Tue, 26 Jan 2010 10:59:17 +0100 [thread overview]
Message-ID: <366919FC-E282-4A23-8D9C-595D8583C97F@suse.de> (raw)
In-Reply-To: <1264498913-23655-1-git-send-email-sheng@linux.intel.com>
On 26.01.2010, at 10:41, Sheng Yang wrote:
> The default behaviour of coalesced MMIO is to cache writes in a buffer until:
> 1. The buffer is full, or
> 2. The vcpu exits to QEMU for some other reason.
>
> But this can make writes land very late when:
> 1. Each individual MMIO write is small.
> 2. The interval between writes is long.
> 3. The guest rarely needs input or accesses other devices.
>
> This issue was observed on an experimental embedded system. The test image
> simply prints "test" every second. The output under QEMU meets expectations,
> but the output under KVM is delayed by several seconds.
>
> Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
> VGA update handler. This way, the vcpu doesn't need an explicit exit to QEMU
> to handle this issue.
>
> Signed-off-by: Sheng Yang <sheng@linux.intel.com>
> ---
> cpu-all.h | 2 ++
> exec.c | 6 ++++++
> kvm-all.c | 21 +++++++++++++--------
> kvm.h | 1 +
> vl.c | 2 ++
> 5 files changed, 24 insertions(+), 8 deletions(-)
>
> diff --git a/cpu-all.h b/cpu-all.h
> index 57b69f8..1ccc9a8 100644
> --- a/cpu-all.h
> +++ b/cpu-all.h
> @@ -915,6 +915,8 @@ void qemu_register_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
>
> void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size);
>
> +void qemu_flush_coalesced_mmio_buffer(void);
> +
> /*******************************************/
> /* host CPU ticks (if available) */
>
> diff --git a/exec.c b/exec.c
> index 1190591..6875370 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2406,6 +2406,12 @@ void qemu_unregister_coalesced_mmio(target_phys_addr_t addr, ram_addr_t size)
> kvm_uncoalesce_mmio_region(addr, size);
> }
>
> +void qemu_flush_coalesced_mmio_buffer(void)
> +{
> + if (kvm_enabled())
> + kvm_flush_coalesced_mmio_buffer();
> +}
> +
> ram_addr_t qemu_ram_alloc(ram_addr_t size)
> {
> RAMBlock *new_block;
> diff --git a/kvm-all.c b/kvm-all.c
> index 15ec38e..889fc42 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -59,6 +59,7 @@ struct KVMState
> int vmfd;
> int regs_modified;
> int coalesced_mmio;
> + struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
I guess this needs to be guarded by an #ifdef?
Alex
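
For reference, the guard being suggested would presumably look something like
the sketch below, showing only the relevant members of struct KVMState.
KVM_CAP_COALESCED_MMIO is the define exported by the kernel's linux/kvm.h;
whether the follow-up v3 patch uses exactly this form is an assumption, since
that patch is not quoted here.

    int coalesced_mmio;
#ifdef KVM_CAP_COALESCED_MMIO
    /* only declare the ring pointer when the host headers provide the type */
    struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
#endif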
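
The kvm-all.c hunk that actually drains the ring is not quoted in this reply.
As a rough sketch only, assuming the kvm_coalesced_mmio_ring layout from
linux/kvm.h (a first/last index pair followed by fixed-size kvm_coalesced_mmio
entries), kvm-all.c's file-scope kvm_state pointer, and QEMU's existing
cpu_physical_memory_write() helper, the flush amounts to replaying each
buffered write through the normal memory path:

void kvm_flush_coalesced_mmio_buffer(void)
{
    KVMState *s = kvm_state;

    if (s->coalesced_mmio_ring) {
        struct kvm_coalesced_mmio_ring *ring = s->coalesced_mmio_ring;

        while (ring->first != ring->last) {
            struct kvm_coalesced_mmio *ent = &ring->coalesced_mmio[ring->first];

            /* Replay the buffered guest write through the normal MMIO path. */
            cpu_physical_memory_write(ent->phys_addr, ent->data, ent->len);

            /* Advance the consumer index only after the write has been
             * handled, so the kernel does not reuse the slot too early. */
            ring->first = (ring->first + 1) % KVM_COALESCED_MMIO_MAX;
        }
    }
}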
Thread overview: 5+ messages
2010-01-26 9:41 [Qemu-devel] [PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically Sheng Yang
2010-01-26 9:59 ` Alexander Graf [this message]
2010-01-26 11:17 ` [Qemu-devel] " Sheng Yang
2010-01-26 11:21 ` [Qemu-devel] [PATCH v3][uqmaster] " Sheng Yang
2010-01-26 12:43 ` [Qemu-devel] " Marcelo Tosatti