From: Sergey Fedorov <sergey.fedorov@linaro.org>
To: qemu-devel@nongnu.org
Cc: "Sergey Fedorov" <sergey.fedorov@linaro.org>,
"Peter Crosthwaite" <crosthwaite.peter@gmail.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Sergey Fedorov" <serge.fdrv@gmail.com>,
"Alex Bennée" <alex.bennee@linaro.org>,
"Richard Henderson" <rth@twiddle.net>
Subject: [Qemu-devel] [PATCH v2 3/3] cpu-exec: elide more icount code if CONFIG_USER_ONLY
Date: Tue, 29 Mar 2016 22:48:12 +0300 [thread overview]
Message-ID: <1459280892-8789-4-git-send-email-sergey.fedorov@linaro.org> (raw)
In-Reply-To: <1459280892-8789-1-git-send-email-sergey.fedorov@linaro.org>
From: Paolo Bonzini <pbonzini@redhat.com>

icount is only used by full system emulation; user-mode builds never
arm the instruction counter. Compile out cpu_exec_nocache() and the
replay exception path under CONFIG_USER_ONLY, and turn the
TB_EXIT_ICOUNT_EXPIRED case into an abort() there, since reaching it
in a user-mode build would indicate a bug.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[Alex Bennée: #ifndef replay code to match elided functions]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
---
cpu-exec.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/cpu-exec.c b/cpu-exec.c
index 44116f180859..5d1b4c90a687 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -183,6 +183,7 @@ static inline tcg_target_ulong cpu_tb_exec(CPUState *cpu, uint8_t *tb_ptr)
     return next_tb;
 }
 
+#ifndef CONFIG_USER_ONLY
 /* Execute the code without caching the generated code. An interpreter
    could be used if available. */
 static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
@@ -207,6 +208,7 @@ static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
     tb_phys_invalidate(tb, -1);
     tb_free(tb);
 }
+#endif
 
 static TranslationBlock *tb_find_physical(CPUState *cpu,
                                           target_ulong pc,
@@ -422,12 +424,14 @@ int cpu_exec(CPUState *cpu)
                     }
 #endif
                 }
+#ifndef CONFIG_USER_ONLY
             } else if (replay_has_exception()
                        && cpu->icount_decr.u16.low + cpu->icount_extra == 0) {
                 /* try to cause an exception pending in the log */
                 cpu_exec_nocache(cpu, 1, tb_find_fast(cpu), true);
                 ret = -1;
                 break;
+#endif
             }
 
             next_tb = 0; /* force lookup of first TB */
@@ -542,6 +546,9 @@ int cpu_exec(CPUState *cpu)
                 case TB_EXIT_ICOUNT_EXPIRED:
                 {
                     /* Instruction counter expired. */
+#ifdef CONFIG_USER_ONLY
+                    abort();
+#else
                     int insns_left = cpu->icount_decr.u32;
                     if (cpu->icount_extra && insns_left >= 0) {
                         /* Refill decrementer and continue execution. */
@@ -561,6 +568,7 @@ int cpu_exec(CPUState *cpu)
                         cpu_loop_exit(cpu);
                     }
                     break;
+#endif
                 }
                 default:
                     break;
--
2.7.3
Thread overview:
2016-03-29 19:48 [Qemu-devel] [PATCH v2 0/3] tcg: Misc clean-up patches from Paolo and Alex Sergey Fedorov
2016-03-29 19:48 ` [Qemu-devel] [PATCH v2 1/3] tcg: code_bitmap is not used by user-mode emulation Sergey Fedorov
2016-03-29 20:05 ` Richard Henderson
2016-03-31 13:49 ` Alex Bennée
2016-03-29 19:48 ` [Qemu-devel] [PATCH v2 2/3] tcg: reorganize tb_find_physical loop Sergey Fedorov
2016-03-29 20:26 ` Richard Henderson
2016-03-29 20:27 ` Richard Henderson
2016-03-29 19:48 ` Sergey Fedorov [this message]