From: <stefano.stabellini@eu.citrix.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xensource.com, agraf@suse.de,
Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Qemu-devel] [PATCH v2 5/5] xen: mapcache performance improvements
Date: Thu, 19 May 2011 18:35:46 +0100
Message-ID: <1305826546-19059-5-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.00.1105191830010.12963@kaball-desktop>
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Use qemu_invalidate_entry in cpu_physical_memory_unmap.

Do not lock mapcache entries in qemu_get_ram_ptr if the address falls
within the ramblock with offset == 0. We don't need to do that because
the callers of qemu_get_ram_ptr either map an entire block other than
the main ramblock, or map up to the end of a page to implement a single
read or write in the main ramblock.

Since we no longer lock mapcache entries in qemu_get_ram_ptr, we don't
need to call qemu_invalidate_entry in qemu_put_ram_ptr anymore: we can
live with the few long-lived block mappings requested by devices.

Also move the call to qemu_ram_addr_from_mapcache to the beginning of
qemu_ram_addr_from_host.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
exec.c | 28 ++++++++++------------------
1 files changed, 10 insertions(+), 18 deletions(-)
diff --git a/exec.c b/exec.c
index ff9c174..aebb23b 100644
--- a/exec.c
+++ b/exec.c
@@ -3065,9 +3065,10 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
         if (xen_mapcache_enabled()) {
             /* We need to check if the requested address is in the RAM
              * because we don't want to map the entire memory in QEMU.
+             * In that case just map until the end of the page.
              */
             if (block->offset == 0) {
-                return qemu_map_cache(addr, 0, 1);
+                return qemu_map_cache(addr, 0, 0);
             } else if (block->host == NULL) {
                 block->host = qemu_map_cache(block->offset, block->length, 1);
             }
@@ -3094,9 +3095,10 @@ void *qemu_safe_ram_ptr(ram_addr_t addr)
         if (xen_mapcache_enabled()) {
             /* We need to check if the requested address is in the RAM
              * because we don't want to map the entire memory in QEMU.
+             * In that case just map until the end of the page.
              */
             if (block->offset == 0) {
-                return qemu_map_cache(addr, 0, 1);
+                return qemu_map_cache(addr, 0, 0);
             } else if (block->host == NULL) {
                 block->host = qemu_map_cache(block->offset, block->length, 1);
             }
@@ -3139,10 +3141,6 @@ void *qemu_ram_ptr_length(target_phys_addr_t addr, target_phys_addr_t *size)
 void qemu_put_ram_ptr(void *addr)
 {
     trace_qemu_put_ram_ptr(addr);
-
-    if (xen_mapcache_enabled()) {
-        qemu_invalidate_entry(block->host);
-    }
 }
 
 int qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
@@ -3150,6 +3148,11 @@ int qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
     RAMBlock *block;
     uint8_t *host = ptr;
 
+    if (xen_mapcache_enabled()) {
+        *ram_addr = qemu_ram_addr_from_mapcache(ptr);
+        return 0;
+    }
+
     QLIST_FOREACH(block, &ram_list.blocks, next) {
         /* This case append when the block is not mapped. */
         if (block->host == NULL) {
@@ -3161,11 +3164,6 @@ int qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
         }
     }
 
-    if (xen_mapcache_enabled()) {
-        *ram_addr = qemu_ram_addr_from_mapcache(ptr);
-        return 0;
-    }
-
     return -1;
 }
@@ -4066,13 +4064,7 @@ void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
             }
         }
         if (xen_mapcache_enabled()) {
-            uint8_t *buffer1 = buffer;
-            uint8_t *end_buffer = buffer + len;
-
-            while (buffer1 < end_buffer) {
-                qemu_put_ram_ptr(buffer1);
-                buffer1 += TARGET_PAGE_SIZE;
-            }
+            qemu_invalidate_entry(buffer);
         }
         return;
     }
--
1.7.2.3