From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: npiggin@gmail.com, qemu-ppc@nongnu.org
Subject: [PATCH v2 3/3] util/cacheflush: Optimize flushing when ppc host has coherent icache
Date: Mon, 20 Jun 2022 18:48:37 -0700
Message-ID: <20220621014837.189139-4-richard.henderson@linaro.org>
In-Reply-To: <20220621014837.189139-1-richard.henderson@linaro.org>
From: Nicholas Piggin <npiggin@gmail.com>
On Linux, the AT_HWCAP bit PPC_FEATURE_ICACHE_SNOOP indicates that
the instruction cache snoops store traffic, so we can use a simplified
3-instruction flush sequence (sync; icbi; isync).
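
For illustration only (not part of the patch): a minimal user-space
sketch of the same detection via glibc's getauxval(3). The fallback
#define mirrors the value in the kernel's asm/cputable.h, and main()
is just a hypothetical driver; QEMU itself reads the bit through its
qemu_getauxval() wrapper, as the second hunk below shows.

    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/auxv.h>   /* getauxval(), AT_HWCAP */

    #ifndef PPC_FEATURE_ICACHE_SNOOP
    #define PPC_FEATURE_ICACHE_SNOOP 0x00002000  /* asm/cputable.h */
    #endif

    int main(void)
    {
        /* Nonzero iff the icache snoops stores on this host. */
        bool coherent = getauxval(AT_HWCAP) & PPC_FEATURE_ICACHE_SNOOP;
        printf("coherent icache: %s\n", coherent ? "yes" : "no");
        return 0;
    }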
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20220519141131.29839-1-npiggin@gmail.com>
[rth: update after merging cacheflush.c and cacheinfo.c]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
util/cacheflush.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/util/cacheflush.c b/util/cacheflush.c
index 01b6cb7583..2c2c73e085 100644
--- a/util/cacheflush.c
+++ b/util/cacheflush.c
@@ -117,6 +117,10 @@ static void sys_cache_info(int *isize, int *dsize)
* Architecture (+ OS) specific cache detection mechanisms.
*/
+#if defined(__powerpc__)
+static bool have_coherent_icache;
+#endif
+
#if defined(__aarch64__) && !defined(CONFIG_DARWIN)
/* Apple does not expose CTR_EL0, so we must use system interfaces. */
static uint64_t save_ctr_el0;
@@ -156,6 +160,7 @@ static void arch_cache_info(int *isize, int *dsize)
if (*dsize == 0) {
*dsize = qemu_getauxval(AT_DCACHEBSIZE);
}
+ have_coherent_icache = qemu_getauxval(AT_HWCAP) & PPC_FEATURE_ICACHE_SNOOP;
}
#else
@@ -298,8 +303,24 @@ void flush_idcache_range(uintptr_t rx, uintptr_t rw, size_t len)
void flush_idcache_range(uintptr_t rx, uintptr_t rw, size_t len)
{
uintptr_t p, b, e;
- size_t dsize = qemu_dcache_linesize;
- size_t isize = qemu_icache_linesize;
+ size_t dsize, isize;
+
+ /*
+ * Some processors have coherent caches and support a simplified
+ * flushing procedure. See
+ * POWER9 UM, 4.6.2.2 Instruction Cache Block Invalidate (icbi)
+ * https://ibm.ent.box.com/s/tmklq90ze7aj8f4n32er1mu3sy9u8k3k
+ */
+ if (have_coherent_icache) {
+ asm volatile ("sync\n\t"
+ "icbi 0,%0\n\t"
+ "isync"
+ : : "r"(rx) : "memory");
+ return;
+ }
+
+ dsize = qemu_dcache_linesize;
+ isize = qemu_icache_linesize;
b = rw & ~(dsize - 1);
e = (rw + len + dsize - 1) & ~(dsize - 1);
--
2.34.1
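
For context (not part of the patch): when PPC_FEATURE_ICACHE_SNOOP is
absent, execution falls through to the pre-existing per-cache-line
sequence that begins at the end of the hunk. Below is a self-contained
sketch of that fallback, reconstructed for illustration from the
surrounding code; the function name and the two linesize globals are
stand-ins for QEMU's internals.

    #include <stddef.h>
    #include <stdint.h>

    static size_t qemu_dcache_linesize;   /* set by arch_cache_info() */
    static size_t qemu_icache_linesize;

    static void flush_idcache_range_fallback(uintptr_t rx, uintptr_t rw,
                                             size_t len)
    {
        size_t dsize = qemu_dcache_linesize;
        size_t isize = qemu_icache_linesize;
        uintptr_t p, b, e;

        /* Write back every dcache line covering the written range. */
        b = rw & ~(dsize - 1);
        e = (rw + len + dsize - 1) & ~(dsize - 1);
        for (p = b; p < e; p += dsize) {
            asm volatile ("dcbst 0,%0" : : "r"(p) : "memory");
        }
        asm volatile ("sync" : : : "memory");

        /* Then invalidate every icache line covering the exec alias. */
        b = rx & ~(isize - 1);
        e = (rx + len + isize - 1) & ~(isize - 1);
        for (p = b; p < e; p += isize) {
            asm volatile ("icbi 0,%0" : : "r"(p) : "memory");
        }
        asm volatile ("sync" : : : "memory");
        asm volatile ("isync" : : : "memory");
    }

Against this, the coherent-icache fast path is a clear win: the dcbst
writeback loop disappears entirely, and a single icbi bracketed by
sync/isync suffices, per the POWER9 UM section cited in the comment.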