From: Bin Meng <bmeng.cn@gmail.com>
To: Alistair Francis <Alistair.Francis@wdc.com>,
qemu-devel@nongnu.org, qemu-riscv@nongnu.org
Cc: Bin Meng <bin.meng@windriver.com>,
Alistair Francis <alistair.francis@wdc.com>
Subject: [RESEND PATCH v3 3/7] target/riscv: debug: Implement debug related TCGCPUOps
Date: Wed, 5 Jan 2022 11:08:40 +0800
Message-ID: <20220105030844.780642-4-bmeng.cn@gmail.com>
In-Reply-To: <20220105030844.780642-1-bmeng.cn@gmail.com>
From: Bin Meng <bin.meng@windriver.com>
Implement the .debug_excp_handler, .debug_check_breakpoint and
.debug_check_watchpoint TCGCPUOps hooks and wire them up in
riscv_tcg_ops. With these in place, the TCG core can consult the
native debug (type 2 / mcontrol) triggers to decide whether a guest
breakpoint or watchpoint hit should raise an architectural breakpoint
exception.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
(no changes since v2)
Changes in v2:
- use 0 instead of GETPC() (these hooks run on the exception-handling
  path rather than from within translated code, so there is no host
  return address to unwind)
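
As background for the check used below: both new helpers gate a
trigger match on "(ctrl >> 3) & BIT(env->priv)". Here is a minimal
standalone sketch of that privilege test, following the type 2
(mcontrol) trigger layout from the RISC-V Debug Specification. The
MC_* names are illustrative only; QEMU's debug.h has its own TYPE2_*
definitions.

  /*
   * Standalone sketch of the mcontrol privilege check (illustrative,
   * not QEMU code). mcontrol keeps per-mode enable bits at fixed
   * positions: u = bit 3, s = bit 4, m = bit 6. QEMU encodes the
   * current privilege as PRV_U = 0, PRV_S = 1, PRV_M = 3, so shifting
   * ctrl right by 3 lines those enable bits up with BIT(priv).
   */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define BIT(n)   (1UL << (n))

  #define MC_LOAD  BIT(0)  /* fire on loads */
  #define MC_STORE BIT(1)  /* fire on stores */
  #define MC_EXEC  BIT(2)  /* fire on instruction fetch */

  static bool trigger_enabled_for_priv(uint64_t ctrl, int priv)
  {
      return (ctrl >> 3) & BIT(priv);
  }

  int main(void)
  {
      /* execute trigger armed for U-mode and M-mode, but not S-mode */
      uint64_t ctrl = MC_EXEC | BIT(3) | BIT(6);

      printf("U: %d\n", trigger_enabled_for_priv(ctrl, 0)); /* 1 */
      printf("S: %d\n", trigger_enabled_for_priv(ctrl, 1)); /* 0 */
      printf("M: %d\n", trigger_enabled_for_priv(ctrl, 3)); /* 1 */
      return 0;
  }

Because the u/s/m enables sit at bit (3 + privilege encoding), the
patch can test the current privilege with a single shift and mask
instead of a switch statement.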
target/riscv/debug.h | 4 +++
target/riscv/cpu.c | 3 ++
target/riscv/debug.c | 75 ++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 82 insertions(+)
diff --git a/target/riscv/debug.h b/target/riscv/debug.h
index 0a3fda6c72..d0f63e2414 100644
--- a/target/riscv/debug.h
+++ b/target/riscv/debug.h
@@ -105,4 +105,8 @@ void tselect_csr_write(CPURISCVState *env, target_ulong val);
target_ulong tdata_csr_read(CPURISCVState *env, int tdata_index);
void tdata_csr_write(CPURISCVState *env, int tdata_index, target_ulong val);
+void riscv_cpu_debug_excp_handler(CPUState *cs);
+bool riscv_cpu_debug_check_breakpoint(CPUState *cs);
+bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp);
+
#endif /* RISCV_DEBUG_H */
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 6ef3314bce..3aa07bc019 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -705,6 +705,9 @@ static const struct TCGCPUOps riscv_tcg_ops = {
.do_interrupt = riscv_cpu_do_interrupt,
.do_transaction_failed = riscv_cpu_do_transaction_failed,
.do_unaligned_access = riscv_cpu_do_unaligned_access,
+ .debug_excp_handler = riscv_cpu_debug_excp_handler,
+ .debug_check_breakpoint = riscv_cpu_debug_check_breakpoint,
+ .debug_check_watchpoint = riscv_cpu_debug_check_watchpoint,
#endif /* !CONFIG_USER_ONLY */
};
diff --git a/target/riscv/debug.c b/target/riscv/debug.c
index 530e030007..7760c4611f 100644
--- a/target/riscv/debug.c
+++ b/target/riscv/debug.c
@@ -337,3 +337,78 @@ void tdata_csr_write(CPURISCVState *env, int tdata_index, target_ulong val)
return write_func(env, env->trigger_cur, tdata_index, val);
}
+
+void riscv_cpu_debug_excp_handler(CPUState *cs)
+{
+ RISCVCPU *cpu = RISCV_CPU(cs);
+ CPURISCVState *env = &cpu->env;
+
+ if (cs->watchpoint_hit) {
+ if (cs->watchpoint_hit->flags & BP_CPU) {
+ cs->watchpoint_hit = NULL;
+ riscv_raise_exception(env, RISCV_EXCP_BREAKPOINT, 0);
+ }
+ } else {
+ if (cpu_breakpoint_test(cs, env->pc, BP_CPU)) {
+ riscv_raise_exception(env, RISCV_EXCP_BREAKPOINT, 0);
+ }
+ }
+}
+
+bool riscv_cpu_debug_check_breakpoint(CPUState *cs)
+{
+ RISCVCPU *cpu = RISCV_CPU(cs);
+ CPURISCVState *env = &cpu->env;
+ CPUBreakpoint *bp;
+ target_ulong ctrl;
+ target_ulong pc;
+ int i;
+
+ QTAILQ_FOREACH(bp, &cs->breakpoints, entry) {
+ for (i = 0; i < TRIGGER_TYPE2_NUM; i++) {
+ ctrl = env->trigger_type2[i].mcontrol;
+ pc = env->trigger_type2[i].maddress;
+
+ if ((ctrl & TYPE2_EXEC) && (bp->pc == pc)) {
+ /* check U/S/M bit against current privilege level */
+ if ((ctrl >> 3) & BIT(env->priv)) {
+ return true;
+ }
+ }
+ }
+ }
+
+ return false;
+}
+
+bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
+{
+ RISCVCPU *cpu = RISCV_CPU(cs);
+ CPURISCVState *env = &cpu->env;
+ target_ulong ctrl;
+ target_ulong addr;
+ int flags;
+ int i;
+
+ for (i = 0; i < TRIGGER_TYPE2_NUM; i++) {
+ ctrl = env->trigger_type2[i].mcontrol;
+ addr = env->trigger_type2[i].maddress;
+ flags = 0;
+
+ if (ctrl & TYPE2_LOAD) {
+ flags |= BP_MEM_READ;
+ }
+ if (ctrl & TYPE2_STORE) {
+ flags |= BP_MEM_WRITE;
+ }
+
+ if ((wp->flags & flags) && (wp->vaddr == addr)) {
+ /* check U/S/M bit against current privilege level */
+ if ((ctrl >> 3) & BIT(env->priv)) {
+ return true;
+ }
+ }
+ }
+
+ return false;
+}
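
For completeness, a small self-contained sketch of the matching rule
riscv_cpu_debug_check_watchpoint() applies: the trigger's load/store
enables are first translated into QEMU watchpoint flags, and the
trigger claims the hit only when both the access type and the address
match. The constants mirror QEMU's BP_MEM_* values but should be read
as illustrative:

  /*
   * Illustrative reimplementation of the matching rule, outside of
   * QEMU. Only when this returns true does the core treat the BP_CPU
   * watchpoint as hit, after which riscv_cpu_debug_excp_handler()
   * raises RISCV_EXCP_BREAKPOINT.
   */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define BP_MEM_READ  0x01
  #define BP_MEM_WRITE 0x02

  #define MC_LOAD  (1 << 0)
  #define MC_STORE (1 << 1)

  static bool trigger_claims_watchpoint(uint64_t ctrl, uint64_t trig_addr,
                                        int wp_flags, uint64_t wp_addr)
  {
      int flags = 0;

      /* translate the trigger's load/store enables into QEMU flags */
      if (ctrl & MC_LOAD) {
          flags |= BP_MEM_READ;
      }
      if (ctrl & MC_STORE) {
          flags |= BP_MEM_WRITE;
      }

      /* both the access type and the address must match */
      return (wp_flags & flags) && (wp_addr == trig_addr);
  }

  int main(void)
  {
      uint64_t addr = 0x80001000;
      uint64_t ctrl = MC_STORE; /* trigger fires on stores only */

      /* a write watchpoint at the same address matches ... */
      printf("%d\n", trigger_claims_watchpoint(ctrl, addr, BP_MEM_WRITE, addr));
      /* ... but a read watchpoint at that address does not */
      printf("%d\n", trigger_claims_watchpoint(ctrl, addr, BP_MEM_READ, addr));
      return 0;
  }

Note that the privilege filter from the sketch above still applies on
top of this: even a matching access is ignored unless the trigger's
u/s/m enable for the current privilege level is set.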
--
2.25.1