From: "Jérémy Fanguède" <j.fanguede@virtualopensystems.com>
To: qemu-devel@nongnu.org
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
"Jérémy Fanguède" <j.fanguede@virtualopensystems.com>,
tech@virtualopensystems.com, kvmarm@lists.cs.columbia.edu
Subject: [Qemu-devel] [RFC 4/4] exec: Flush data cache when needed
Date: Tue, 5 May 2015 11:13:47 +0200
Message-ID: <1430817227-6278-5-git-send-email-j.fanguede@virtualopensystems.com>
In-Reply-To: <1430817227-6278-1-git-send-email-j.fanguede@virtualopensystems.com>

Flush the data cache when accesses to guest RAM memory occur, so that the guest does not observe stale data because of cache incoherency.

Signed-off-by: Jérémy Fanguède <j.fanguede@virtualopensystems.com>
---
exec.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
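
Note (illustrative, not part of the patch): kvm_arch_cache_flush_needed() is introduced by patch 2/4 of this series and is not reproduced here. The standalone sketch below only illustrates the underlying idea of cleaning the data cache over the range QEMU just touched on the VCPU's behalf. The helper name cache_flush_range, the guest_ram buffer, and the use of GCC's __builtin___clear_cache() are assumptions made for the sake of a compilable example, not the hook's actual implementation; the builtin targets coherency to the point of unification, whereas the problem this series addresses needs maintenance to the point of coherency, which is presumably why the real hook is routed through KVM.

/*
 * Illustrative sketch only -- NOT the implementation of
 * kvm_arch_cache_flush_needed() from patch 2/4.  It shows the basic
 * idea: after QEMU writes guest RAM on behalf of the VCPU, clean the
 * cache over the affected host-virtual range so a guest running with
 * caches disabled sees consistent data.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void cache_flush_range(void *start, size_t len)
{
    char *begin = start;
    char *end = begin + len;

    /* Emits the architecture's cache maintenance instructions
     * (e.g. DC CVAU / IC IVAU on AArch64) over [begin, end). */
    __builtin___clear_cache(begin, end);
}

int main(void)
{
    static uint8_t guest_ram[4096];   /* stand-in for one guest RAM page */

    /* QEMU writing guest memory on the VCPU's behalf... */
    memset(guest_ram, 0xAA, sizeof(guest_ram));

    /* ...followed by the kind of flush this patch inserts after the
     * memcpy() in address_space_rw(). */
    cache_flush_range(guest_ram, sizeof(guest_ram));

    printf("flushed %zu bytes of guest RAM\n", sizeof(guest_ram));
    return 0;
}
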
diff --git a/exec.c b/exec.c
index ae37b98..0f859a3 100644
--- a/exec.c
+++ b/exec.c
@@ -2372,6 +2372,9 @@ MemTxResult address_space_rw(AddressSpace *as, hwaddr addr, MemTxAttrs attrs,
ptr = qemu_get_ram_ptr(addr1);
memcpy(ptr, buf, l);
invalidate_and_set_dirty(addr1, l);
+ if (kvm_enabled()) {
+ kvm_arch_cache_flush_needed(addr, l, is_write);
+ }
}
} else {
if (!memory_access_is_direct(mr, is_write)) {
@@ -2408,6 +2411,9 @@ MemTxResult address_space_rw(AddressSpace *as, hwaddr addr, MemTxAttrs attrs,
} else {
/* RAM case */
ptr = qemu_get_ram_ptr(mr->ram_addr + addr1);
+ if (kvm_enabled()) {
+ kvm_arch_cache_flush_needed(addr, l, is_write);
+ }
memcpy(buf, ptr, l);
}
}
@@ -2646,6 +2652,14 @@ void *address_space_map(AddressSpace *as,
return bounce.buffer;
}
+ /* Need to be flushed only if we are reading */
+ if (!is_write) {
+ /* Don't flush if it's a cpu_physical_memory_map call */
+ if (kvm_enabled() && as != &address_space_memory) {
+ kvm_arch_cache_flush_needed(addr, l, is_write);
+ }
+ }
+
base = xlat;
raddr = memory_region_get_ram_addr(mr);
@@ -2679,6 +2693,7 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
if (buffer != bounce.buffer) {
MemoryRegion *mr;
ram_addr_t addr1;
+ hwaddr base;
mr = qemu_ram_addr_from_host(buffer, &addr1);
assert(mr != NULL);
@@ -2688,6 +2703,10 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
if (xen_enabled()) {
xen_invalidate_map_cache_entry(buffer);
}
+ if (kvm_enabled() && as != &address_space_memory) {
+ base = object_property_get_int(OBJECT(mr), "addr", NULL);
+ kvm_arch_cache_flush_needed(addr1 + base, access_len, is_write);
+ }
memory_region_unref(mr);
return;
}
--
1.9.1

Thread overview:
  2015-05-05  9:13 [Qemu-devel] [RFC 0/4] arm/arm64: KVM: Get around cache incoherency  Jérémy Fanguède
  2015-05-05  9:13 [Qemu-devel] [RFC 1/4] linux-headers update  Jérémy Fanguède
  2015-05-05  9:13 [Qemu-devel] [RFC 2/4] target-arm/kvm: Flush data cache support  Jérémy Fanguède
  2015-05-05  9:13 [Qemu-devel] [RFC 3/4] kvm-all: Pre-run cache coherency maintenance  Jérémy Fanguède
  2015-05-05  9:13 [Qemu-devel] [RFC 4/4] exec: Flush data cache when needed  Jérémy Fanguède  [this message]