* [PATCH v2 0/5] Nested virtualization fixes for QEMU
@ 2022-11-08 12:56 Anup Patel
  2022-11-08 12:56 ` [PATCH v2 1/5] target/riscv: Typo fix in sstc() predicate Anup Patel
                   ` (6 more replies)
  0 siblings, 7 replies; 19+ messages in thread
From: Anup Patel @ 2022-11-08 12:56 UTC (permalink / raw)
  To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
  Cc: Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel, Anup Patel

This series mainly includes fixes discovered while developing nested
virtualization running on QEMU.

These patches can also be found in the riscv_nested_fixes_v2 branch at:
https://github.com/avpatel/qemu.git

Changes since v1:
 - Added Alistair's Reviewed-by tags to appropriate patches
 - Added detailed comment block in PATCH4

Anup Patel (5):
  target/riscv: Typo fix in sstc() predicate
  target/riscv: Update VS timer whenever htimedelta changes
  target/riscv: Don't clear mask in riscv_cpu_update_mip() for VSTIP
  target/riscv: No need to re-start QEMU timer when timecmp ==
    UINT64_MAX
  target/riscv: Ensure opcode is saved for all relevant instructions

 target/riscv/cpu_helper.c                   |  2 --
 target/riscv/csr.c                          | 18 ++++++++++-
 target/riscv/insn_trans/trans_rva.c.inc     | 10 ++++--
 target/riscv/insn_trans/trans_rvd.c.inc     |  2 ++
 target/riscv/insn_trans/trans_rvf.c.inc     |  2 ++
 target/riscv/insn_trans/trans_rvh.c.inc     |  3 ++
 target/riscv/insn_trans/trans_rvi.c.inc     |  2 ++
 target/riscv/insn_trans/trans_rvzfh.c.inc   |  2 ++
 target/riscv/insn_trans/trans_svinval.c.inc |  3 ++
 target/riscv/time_helper.c                  | 36 ++++++++++++++++++---
 10 files changed, 70 insertions(+), 10 deletions(-)

-- 
2.34.1




* [PATCH v2 1/5] target/riscv: Typo fix in sstc() predicate
  2022-11-08 12:56 [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
@ 2022-11-08 12:56 ` Anup Patel
  2022-11-08 12:57 ` [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes Anup Patel
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Anup Patel @ 2022-11-08 12:56 UTC (permalink / raw)
  To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
  Cc: Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel, Anup Patel, Alistair Francis

We should use "&&" instead of "&" when checking the hcounteren.TM and
henvcfg.STCE bits.
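
For context, a minimal standalone illustration (not QEMU code; the variable
names and values are made up) of why a bitwise "&" is fragile where a
logical "&&" is intended: if the two checks ever yield non-zero values with
no overlapping bits, "&" evaluates to 0 even though both conditions hold.

#include <stdio.h>

int main(void)
{
    unsigned tm_ok   = 0x2;   /* "TM condition holds", some non-zero value   */
    unsigned stce_ok = 0x1;   /* "STCE condition holds", some non-zero value */

    printf("bitwise  &: %u\n", tm_ok & stce_ok);    /* 0: bits don't overlap */
    printf("logical &&: %d\n", tm_ok && stce_ok);   /* 1: both are non-zero  */
    return 0;
}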

Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 5c9a7ee287..716f9d960e 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -838,7 +838,7 @@ static RISCVException sstc(CPURISCVState *env, int csrno)
     }
 
     if (riscv_cpu_virt_enabled(env)) {
-        if (!(get_field(env->hcounteren, COUNTEREN_TM) &
+        if (!(get_field(env->hcounteren, COUNTEREN_TM) &&
               get_field(env->henvcfg, HENVCFG_STCE))) {
             return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
         }
-- 
2.34.1




* [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2022-11-08 12:56 [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
  2022-11-08 12:56 ` [PATCH v2 1/5] target/riscv: Typo fix in sstc() predicate Anup Patel
@ 2022-11-08 12:57 ` Anup Patel
  2022-12-08  3:29   ` Alistair Francis
  2022-11-08 12:57 ` [PATCH v2 3/5] target/riscv: Don't clear mask in riscv_cpu_update_mip() for VSTIP Anup Patel
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2022-11-08 12:57 UTC (permalink / raw)
  To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
  Cc: Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel, Anup Patel, Alistair Francis

The htimedelta[h] CSR has impact on the VS timer comparison so we
should call riscv_timer_write_timecmp() whenever htimedelta changes.
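
As a rough sketch of the dependency (illustrative only, not the QEMU
implementation; riscv_timer_write_timecmp() in the diff below does the real
work): VS-mode sees time as the host time plus htimedelta, so the point at
which vstimecmp is reached moves whenever htimedelta is written and the
underlying QEMU timer has to be re-armed.

#include <stdint.h>
#include <stdio.h>

/* Illustrative helper: ticks until the VS timer condition
 * (host_time + htimedelta) >= vstimecmp becomes true. */
static uint64_t vs_ticks_until_fire(uint64_t host_time, uint64_t htimedelta,
                                    uint64_t vstimecmp)
{
    uint64_t vs_time = host_time + htimedelta;

    return (vs_time >= vstimecmp) ? 0 : vstimecmp - vs_time;
}

int main(void)
{
    /* Same host time and vstimecmp, but a different htimedelta changes the
     * deadline, so a timer armed with the old value would be stale. */
    printf("%llu\n", (unsigned long long)vs_ticks_until_fire(1000, 0, 5000));
    printf("%llu\n", (unsigned long long)vs_ticks_until_fire(1000, 3000, 5000));
    return 0;
}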

Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 716f9d960e..4b1a608260 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -2722,6 +2722,8 @@ static RISCVException read_htimedelta(CPURISCVState *env, int csrno,
 static RISCVException write_htimedelta(CPURISCVState *env, int csrno,
                                        target_ulong val)
 {
+    RISCVCPU *cpu = env_archcpu(env);
+
     if (!env->rdtime_fn) {
         return RISCV_EXCP_ILLEGAL_INST;
     }
@@ -2731,6 +2733,12 @@ static RISCVException write_htimedelta(CPURISCVState *env, int csrno,
     } else {
         env->htimedelta = val;
     }
+
+    if (cpu->cfg.ext_sstc && env->rdtime_fn) {
+        riscv_timer_write_timecmp(cpu, env->vstimer, env->vstimecmp,
+                                  env->htimedelta, MIP_VSTIP);
+    }
+
     return RISCV_EXCP_NONE;
 }
 
@@ -2748,11 +2756,19 @@ static RISCVException read_htimedeltah(CPURISCVState *env, int csrno,
 static RISCVException write_htimedeltah(CPURISCVState *env, int csrno,
                                         target_ulong val)
 {
+    RISCVCPU *cpu = env_archcpu(env);
+
     if (!env->rdtime_fn) {
         return RISCV_EXCP_ILLEGAL_INST;
     }
 
     env->htimedelta = deposit64(env->htimedelta, 32, 32, (uint64_t)val);
+
+    if (cpu->cfg.ext_sstc && env->rdtime_fn) {
+        riscv_timer_write_timecmp(cpu, env->vstimer, env->vstimecmp,
+                                  env->htimedelta, MIP_VSTIP);
+    }
+
     return RISCV_EXCP_NONE;
 }
 
-- 
2.34.1




* [PATCH v2 3/5] target/riscv: Don't clear mask in riscv_cpu_update_mip() for VSTIP
  2022-11-08 12:56 [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
  2022-11-08 12:56 ` [PATCH v2 1/5] target/riscv: Typo fix in sstc() predicate Anup Patel
  2022-11-08 12:57 ` [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes Anup Patel
@ 2022-11-08 12:57 ` Anup Patel
  2022-11-08 12:57 ` [PATCH v2 4/5] target/riscv: No need to re-start QEMU timer when timecmp == UINT64_MAX Anup Patel
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Anup Patel @ 2022-11-08 12:57 UTC (permalink / raw)
  To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
  Cc: Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel, Anup Patel, Alistair Francis

Instead of clearing the mask in riscv_cpu_update_mip() for VSTIP, we
should call riscv_cpu_update_mip() with mask == 0 from time_helper.c
when updating VSTIP.
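
A simplified sketch of the calling pattern this moves to (not the actual
QEMU code; the real riscv_cpu_update_mip() also handles VSEIP, the iothread
lock and more, and the bit position below is only illustrative): VSTIP is
derived from env->vstime_irq inside the helper, so the timer code only
needs to flip vstime_irq and call the helper with an empty mask to get the
pending state re-evaluated, rather than passing MIP_VSTIP and having the
helper strip it from the mask again.

#include <stdint.h>
#include <stdio.h>

#define MIP_VSTIP (1u << 6)

struct toy_env {
    uint32_t mip;        /* directly writable pending bits          */
    int      vstime_irq; /* level of the VS timer interrupt         */
};

/* 'mask' selects which mip bits to rewrite from 'value'; VSTIP is always
 * re-derived from vstime_irq, never taken from mask/value. */
static uint32_t toy_update_mip(struct toy_env *env, uint32_t mask, uint32_t value)
{
    uint32_t vstip = env->vstime_irq ? MIP_VSTIP : 0;

    env->mip = (env->mip & ~mask) | (value & mask);
    return env->mip | vstip;    /* effective pending state seen by the core */
}

int main(void)
{
    struct toy_env env = { .mip = 0, .vstime_irq = 0 };

    env.vstime_irq = 1;                              /* VS timer expired   */
    uint32_t pending = toy_update_mip(&env, 0, 0);   /* mask == 0          */
    printf("VSTIP pending: %d\n", !!(pending & MIP_VSTIP));
    return 0;
}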

Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_helper.c  |  2 --
 target/riscv/time_helper.c | 12 ++++++++----
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 5d66246c2c..a403825e49 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -617,8 +617,6 @@ uint64_t riscv_cpu_update_mip(RISCVCPU *cpu, uint64_t mask, uint64_t value)
         vsgein = (env->hgeip & (1ULL << gein)) ? MIP_VSEIP : 0;
     }
 
-    /* No need to update mip for VSTIP */
-    mask = ((mask == MIP_VSTIP) && env->vstime_irq) ? 0 : mask;
     vstip = env->vstime_irq ? MIP_VSTIP : 0;
 
     if (!qemu_mutex_iothread_locked()) {
diff --git a/target/riscv/time_helper.c b/target/riscv/time_helper.c
index 8cce667dfd..4fb2a471a9 100644
--- a/target/riscv/time_helper.c
+++ b/target/riscv/time_helper.c
@@ -27,7 +27,7 @@ static void riscv_vstimer_cb(void *opaque)
     RISCVCPU *cpu = opaque;
     CPURISCVState *env = &cpu->env;
     env->vstime_irq = 1;
-    riscv_cpu_update_mip(cpu, MIP_VSTIP, BOOL_TO_MASK(1));
+    riscv_cpu_update_mip(cpu, 0, BOOL_TO_MASK(1));
 }
 
 static void riscv_stimer_cb(void *opaque)
@@ -57,16 +57,20 @@ void riscv_timer_write_timecmp(RISCVCPU *cpu, QEMUTimer *timer,
          */
         if (timer_irq == MIP_VSTIP) {
             env->vstime_irq = 1;
+            riscv_cpu_update_mip(cpu, 0, BOOL_TO_MASK(1));
+        } else {
+            riscv_cpu_update_mip(cpu, MIP_STIP, BOOL_TO_MASK(1));
         }
-        riscv_cpu_update_mip(cpu, timer_irq, BOOL_TO_MASK(1));
         return;
     }
 
+    /* Clear the [VS|S]TIP bit in mip */
     if (timer_irq == MIP_VSTIP) {
         env->vstime_irq = 0;
+        riscv_cpu_update_mip(cpu, 0, BOOL_TO_MASK(0));
+    } else {
+        riscv_cpu_update_mip(cpu, timer_irq, BOOL_TO_MASK(0));
     }
-    /* Clear the [V]STIP bit in mip */
-    riscv_cpu_update_mip(cpu, timer_irq, BOOL_TO_MASK(0));
 
     /* otherwise, set up the future timer interrupt */
     diff = timecmp - rtc_r;
-- 
2.34.1




* [PATCH v2 4/5] target/riscv: No need to re-start QEMU timer when timecmp == UINT64_MAX
  2022-11-08 12:56 [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
                   ` (2 preceding siblings ...)
  2022-11-08 12:57 ` [PATCH v2 3/5] target/riscv: Don't clear mask in riscv_cpu_update_mip() for VSTIP Anup Patel
@ 2022-11-08 12:57 ` Anup Patel
  2022-11-08 12:57 ` [PATCH v2 5/5] target/riscv: Ensure opcode is saved for all relevant instructions Anup Patel
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Anup Patel @ 2022-11-08 12:57 UTC (permalink / raw)
  To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
  Cc: Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel, Anup Patel, Alistair Francis

The time CSR will wrap around immediately after reaching UINT64_MAX,
so we don't need to re-start the QEMU timer when timecmp == UINT64_MAX
in riscv_timer_write_timecmp().
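
A tiny standalone demonstration of the wrap-around argument (illustrative
only): once time reaches UINT64_MAX the condition time >= timecmp is already
satisfied, and on the very next tick time wraps to 0, which is below timecmp
again, so there is no future instant worth arming a one-shot timer for.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t timecmp = UINT64_MAX;
    uint64_t time    = UINT64_MAX;               /* level already asserted */

    printf("pending now: %d\n", time >= timecmp);                /* 1 */
    time += 1;                                   /* unsigned wrap to 0     */
    printf("after tick:  %d (time=%llu)\n",
           time >= timecmp, (unsigned long long)time);           /* 0 */
    return 0;
}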

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/time_helper.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/target/riscv/time_helper.c b/target/riscv/time_helper.c
index 4fb2a471a9..b654f91af9 100644
--- a/target/riscv/time_helper.c
+++ b/target/riscv/time_helper.c
@@ -72,6 +72,30 @@ void riscv_timer_write_timecmp(RISCVCPU *cpu, QEMUTimer *timer,
         riscv_cpu_update_mip(cpu, timer_irq, BOOL_TO_MASK(0));
     }
 
+    /*
+     * Sstc specification says the following about timer interrupt:
+     * "A supervisor timer interrupt becomes pending - as reflected in
+     * the STIP bit in the mip and sip registers - whenever time contains
+     * a value greater than or equal to stimecmp, treating the values
+     * as unsigned integers. Writes to stimecmp are guaranteed to be
+     * reflected in STIP eventually, but not necessarily immediately.
+     * The interrupt remains posted until stimecmp becomes greater
+     * than time - typically as a result of writing stimecmp."
+     *
+     * When timecmp = UINT64_MAX, the time CSR will eventually reach
+     * timecmp value but on next timer tick the time CSR will wrap-around
+     * and become zero which is less than UINT64_MAX. Now, the timer
+     * interrupt behaves like a level triggered interrupt so it will
+     * become 1 when time = timecmp = UINT64_MAX and next timer tick
+     * it will become 0 again because time = 0 < timecmp = UINT64_MAX.
+     *
+     * Based on above, we don't re-start the QEMU timer when timecmp
+     * equals UINT64_MAX.
+     */
+    if (timecmp == UINT64_MAX) {
+        return;
+    }
+
     /* otherwise, set up the future timer interrupt */
     diff = timecmp - rtc_r;
     /* back to ns (note args switched in muldiv64) */
-- 
2.34.1




* [PATCH v2 5/5] target/riscv: Ensure opcode is saved for all relevant instructions
  2022-11-08 12:56 [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
                   ` (3 preceding siblings ...)
  2022-11-08 12:57 ` [PATCH v2 4/5] target/riscv: No need to re-start QEMU timer when timecmp == UINT64_MAX Anup Patel
@ 2022-11-08 12:57 ` Anup Patel
  2022-11-21  7:23   ` Alistair Francis
  2022-11-21  3:01 ` [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
  2022-11-22  1:03 ` Alistair Francis
  6 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2022-11-08 12:57 UTC (permalink / raw)
  To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
  Cc: Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel, Anup Patel

We should call decode_save_opc() for all relevant instructions which
can potentially generate a virtual instruction fault or a guest page
fault, because generating the transformed instruction upon a guest
page fault expects the opcode to be available. Without this, the
hypervisor will see the transformed instruction as zero in the htinst
CSR for guest MMIO emulation, which makes MMIO emulation in the
hypervisor slow and also breaks nested virtualization.
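
As a hedged illustration of why a zero htinst hurts (this is neither QEMU
nor Xvisor code; the helper names are made up): when the transformed
instruction is available, the hypervisor can decode the faulting MMIO
access directly, otherwise it has to fetch and transform the guest
instruction itself, which costs extra guest memory accesses and breaks
down for a nested guest.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical slow path: walk the guest page tables, read the instruction
 * at sepc and transform it by hand (stubbed out here). */
static uint32_t fetch_and_transform_guest_insn(uint64_t sepc)
{
    (void)sepc;
    return 0; /* placeholder */
}

/* Sketch of a hypervisor picking the instruction to emulate on a guest
 * page fault caused by an MMIO access. */
static uint32_t mmio_insn_for_fault(uint64_t htinst, uint64_t sepc)
{
    if (htinst != 0) {
        return (uint32_t)htinst;                  /* fast path */
    }
    return fetch_and_transform_guest_insn(sepc);  /* slow path */
}

int main(void)
{
    printf("insn=0x%x\n", mmio_insn_for_fault(0xD03021, 0x80001918));
    return 0;
}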

Fixes: a9814e3e08d2 ("target/riscv: Minimize the calls to decode_save_opc")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 target/riscv/insn_trans/trans_rva.c.inc     | 10 +++++++---
 target/riscv/insn_trans/trans_rvd.c.inc     |  2 ++
 target/riscv/insn_trans/trans_rvf.c.inc     |  2 ++
 target/riscv/insn_trans/trans_rvh.c.inc     |  3 +++
 target/riscv/insn_trans/trans_rvi.c.inc     |  2 ++
 target/riscv/insn_trans/trans_rvzfh.c.inc   |  2 ++
 target/riscv/insn_trans/trans_svinval.c.inc |  3 +++
 7 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rva.c.inc b/target/riscv/insn_trans/trans_rva.c.inc
index 45db82c9be..5f194a447b 100644
--- a/target/riscv/insn_trans/trans_rva.c.inc
+++ b/target/riscv/insn_trans/trans_rva.c.inc
@@ -20,8 +20,10 @@
 
 static bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
-    TCGv src1 = get_address(ctx, a->rs1, 0);
+    TCGv src1;
 
+    decode_save_opc(ctx);
+    src1 = get_address(ctx, a->rs1, 0);
     if (a->rl) {
         tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
     }
@@ -43,6 +45,7 @@ static bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
     TCGLabel *l1 = gen_new_label();
     TCGLabel *l2 = gen_new_label();
 
+    decode_save_opc(ctx);
     src1 = get_address(ctx, a->rs1, 0);
     tcg_gen_brcond_tl(TCG_COND_NE, load_res, src1, l1);
 
@@ -81,9 +84,10 @@ static bool gen_amo(DisasContext *ctx, arg_atomic *a,
                     MemOp mop)
 {
     TCGv dest = dest_gpr(ctx, a->rd);
-    TCGv src1 = get_address(ctx, a->rs1, 0);
-    TCGv src2 = get_gpr(ctx, a->rs2, EXT_NONE);
+    TCGv src1, src2 = get_gpr(ctx, a->rs2, EXT_NONE);
 
+    decode_save_opc(ctx);
+    src1 = get_address(ctx, a->rs1, 0);
     func(dest, src1, src2, ctx->mem_idx, mop);
 
     gen_set_gpr(ctx, a->rd, dest);
diff --git a/target/riscv/insn_trans/trans_rvd.c.inc b/target/riscv/insn_trans/trans_rvd.c.inc
index 1397c1ce1c..6e3159b797 100644
--- a/target/riscv/insn_trans/trans_rvd.c.inc
+++ b/target/riscv/insn_trans/trans_rvd.c.inc
@@ -38,6 +38,7 @@ static bool trans_fld(DisasContext *ctx, arg_fld *a)
     REQUIRE_FPU;
     REQUIRE_EXT(ctx, RVD);
 
+    decode_save_opc(ctx);
     addr = get_address(ctx, a->rs1, a->imm);
     tcg_gen_qemu_ld_i64(cpu_fpr[a->rd], addr, ctx->mem_idx, MO_TEUQ);
 
@@ -52,6 +53,7 @@ static bool trans_fsd(DisasContext *ctx, arg_fsd *a)
     REQUIRE_FPU;
     REQUIRE_EXT(ctx, RVD);
 
+    decode_save_opc(ctx);
     addr = get_address(ctx, a->rs1, a->imm);
     tcg_gen_qemu_st_i64(cpu_fpr[a->rs2], addr, ctx->mem_idx, MO_TEUQ);
     return true;
diff --git a/target/riscv/insn_trans/trans_rvf.c.inc b/target/riscv/insn_trans/trans_rvf.c.inc
index a1d3eb52ad..965e1f8d11 100644
--- a/target/riscv/insn_trans/trans_rvf.c.inc
+++ b/target/riscv/insn_trans/trans_rvf.c.inc
@@ -38,6 +38,7 @@ static bool trans_flw(DisasContext *ctx, arg_flw *a)
     REQUIRE_FPU;
     REQUIRE_EXT(ctx, RVF);
 
+    decode_save_opc(ctx);
     addr = get_address(ctx, a->rs1, a->imm);
     dest = cpu_fpr[a->rd];
     tcg_gen_qemu_ld_i64(dest, addr, ctx->mem_idx, MO_TEUL);
@@ -54,6 +55,7 @@ static bool trans_fsw(DisasContext *ctx, arg_fsw *a)
     REQUIRE_FPU;
     REQUIRE_EXT(ctx, RVF);
 
+    decode_save_opc(ctx);
     addr = get_address(ctx, a->rs1, a->imm);
     tcg_gen_qemu_st_i64(cpu_fpr[a->rs2], addr, ctx->mem_idx, MO_TEUL);
     return true;
diff --git a/target/riscv/insn_trans/trans_rvh.c.inc b/target/riscv/insn_trans/trans_rvh.c.inc
index 4f8aecddc7..9248b48c36 100644
--- a/target/riscv/insn_trans/trans_rvh.c.inc
+++ b/target/riscv/insn_trans/trans_rvh.c.inc
@@ -36,6 +36,7 @@ static bool do_hlv(DisasContext *ctx, arg_r2 *a, MemOp mop)
 #ifdef CONFIG_USER_ONLY
     return false;
 #else
+    decode_save_opc(ctx);
     if (check_access(ctx)) {
         TCGv dest = dest_gpr(ctx, a->rd);
         TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
@@ -82,6 +83,7 @@ static bool do_hsv(DisasContext *ctx, arg_r2_s *a, MemOp mop)
 #ifdef CONFIG_USER_ONLY
     return false;
 #else
+    decode_save_opc(ctx);
     if (check_access(ctx)) {
         TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
         TCGv data = get_gpr(ctx, a->rs2, EXT_NONE);
@@ -135,6 +137,7 @@ static bool trans_hsv_d(DisasContext *ctx, arg_hsv_d *a)
 static bool do_hlvx(DisasContext *ctx, arg_r2 *a,
                     void (*func)(TCGv, TCGv_env, TCGv))
 {
+    decode_save_opc(ctx);
     if (check_access(ctx)) {
         TCGv dest = dest_gpr(ctx, a->rd);
         TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
index c49dbec0eb..1665efb639 100644
--- a/target/riscv/insn_trans/trans_rvi.c.inc
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
@@ -261,6 +261,7 @@ static bool gen_load_i128(DisasContext *ctx, arg_lb *a, MemOp memop)
 
 static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
 {
+    decode_save_opc(ctx);
     if (get_xl(ctx) == MXL_RV128) {
         return gen_load_i128(ctx, a, memop);
     } else {
@@ -350,6 +351,7 @@ static bool gen_store_i128(DisasContext *ctx, arg_sb *a, MemOp memop)
 
 static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
 {
+    decode_save_opc(ctx);
     if (get_xl(ctx) == MXL_RV128) {
         return gen_store_i128(ctx, a, memop);
     } else {
diff --git a/target/riscv/insn_trans/trans_rvzfh.c.inc b/target/riscv/insn_trans/trans_rvzfh.c.inc
index 5d07150cd0..2ad5716312 100644
--- a/target/riscv/insn_trans/trans_rvzfh.c.inc
+++ b/target/riscv/insn_trans/trans_rvzfh.c.inc
@@ -49,6 +49,7 @@ static bool trans_flh(DisasContext *ctx, arg_flh *a)
     REQUIRE_FPU;
     REQUIRE_ZFH_OR_ZFHMIN(ctx);
 
+    decode_save_opc(ctx);
     t0 = get_gpr(ctx, a->rs1, EXT_NONE);
     if (a->imm) {
         TCGv temp = temp_new(ctx);
@@ -71,6 +72,7 @@ static bool trans_fsh(DisasContext *ctx, arg_fsh *a)
     REQUIRE_FPU;
     REQUIRE_ZFH_OR_ZFHMIN(ctx);
 
+    decode_save_opc(ctx);
     t0 = get_gpr(ctx, a->rs1, EXT_NONE);
     if (a->imm) {
         TCGv temp = tcg_temp_new();
diff --git a/target/riscv/insn_trans/trans_svinval.c.inc b/target/riscv/insn_trans/trans_svinval.c.inc
index 2682bd969f..f3cd7d5c0b 100644
--- a/target/riscv/insn_trans/trans_svinval.c.inc
+++ b/target/riscv/insn_trans/trans_svinval.c.inc
@@ -28,6 +28,7 @@ static bool trans_sinval_vma(DisasContext *ctx, arg_sinval_vma *a)
     /* Do the same as sfence.vma currently */
     REQUIRE_EXT(ctx, RVS);
 #ifndef CONFIG_USER_ONLY
+    decode_save_opc(ctx);
     gen_helper_tlb_flush(cpu_env);
     return true;
 #endif
@@ -56,6 +57,7 @@ static bool trans_hinval_vvma(DisasContext *ctx, arg_hinval_vvma *a)
     /* Do the same as hfence.vvma currently */
     REQUIRE_EXT(ctx, RVH);
 #ifndef CONFIG_USER_ONLY
+    decode_save_opc(ctx);
     gen_helper_hyp_tlb_flush(cpu_env);
     return true;
 #endif
@@ -68,6 +70,7 @@ static bool trans_hinval_gvma(DisasContext *ctx, arg_hinval_gvma *a)
     /* Do the same as hfence.gvma currently */
     REQUIRE_EXT(ctx, RVH);
 #ifndef CONFIG_USER_ONLY
+    decode_save_opc(ctx);
     gen_helper_hyp_gvma_tlb_flush(cpu_env);
     return true;
 #endif
-- 
2.34.1




* Re: [PATCH v2 0/5] Nested virtualization fixes for QEMU
  2022-11-08 12:56 [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
                   ` (4 preceding siblings ...)
  2022-11-08 12:57 ` [PATCH v2 5/5] target/riscv: Ensure opcode is saved for all relevant instructions Anup Patel
@ 2022-11-21  3:01 ` Anup Patel
  2022-11-22  1:03 ` Alistair Francis
  6 siblings, 0 replies; 19+ messages in thread
From: Anup Patel @ 2022-11-21  3:01 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel, Palmer Dabbelt, Peter Maydell, Sagar Karandikar

Hi Alistair,

On Tue, Nov 8, 2022 at 6:27 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> This series mainly includes fixes discovered while developing nested
> virtualization running on QEMU.
>
> These patches can also be found in the riscv_nested_fixes_v2 branch at:
> https://github.com/avpatel/qemu.git
>
> Changes since v1:
>  - Added Alistair's Reviewed-by tags to appropriate patches
>  - Added detailed comment block in PATCH4
>
> Anup Patel (5):
>   target/riscv: Typo fix in sstc() predicate
>   target/riscv: Update VS timer whenever htimedelta changes
>   target/riscv: Don't clear mask in riscv_cpu_update_mip() for VSTIP
>   target/riscv: No need to re-start QEMU timer when timecmp ==
>     UINT64_MAX
>   target/riscv: Ensure opcode is saved for all relevant instructions

Friendly ping ?

Regards,
Anup

>
>  target/riscv/cpu_helper.c                   |  2 --
>  target/riscv/csr.c                          | 18 ++++++++++-
>  target/riscv/insn_trans/trans_rva.c.inc     | 10 ++++--
>  target/riscv/insn_trans/trans_rvd.c.inc     |  2 ++
>  target/riscv/insn_trans/trans_rvf.c.inc     |  2 ++
>  target/riscv/insn_trans/trans_rvh.c.inc     |  3 ++
>  target/riscv/insn_trans/trans_rvi.c.inc     |  2 ++
>  target/riscv/insn_trans/trans_rvzfh.c.inc   |  2 ++
>  target/riscv/insn_trans/trans_svinval.c.inc |  3 ++
>  target/riscv/time_helper.c                  | 36 ++++++++++++++++++---
>  10 files changed, 70 insertions(+), 10 deletions(-)
>
> --
> 2.34.1
>



* Re: [PATCH v2 5/5] target/riscv: Ensure opcode is saved for all relevant instructions
  2022-11-08 12:57 ` [PATCH v2 5/5] target/riscv: Ensure opcode is saved for all relevant instructions Anup Patel
@ 2022-11-21  7:23   ` Alistair Francis
  0 siblings, 0 replies; 19+ messages in thread
From: Alistair Francis @ 2022-11-21  7:23 UTC (permalink / raw)
  To: Anup Patel
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

On Tue, Nov 8, 2022 at 11:09 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> We should call decode_save_opc() for all relevant instructions which
> can potentially generate a virtual instruction fault or a guest page
> fault, because generating the transformed instruction upon a guest
> page fault expects the opcode to be available. Without this, the
> hypervisor will see the transformed instruction as zero in the htinst
> CSR for guest MMIO emulation, which makes MMIO emulation in the
> hypervisor slow and also breaks nested virtualization.
>
> Fixes: a9814e3e08d2 ("target/riscv: Minimize the calls to decode_save_opc")
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  target/riscv/insn_trans/trans_rva.c.inc     | 10 +++++++---
>  target/riscv/insn_trans/trans_rvd.c.inc     |  2 ++
>  target/riscv/insn_trans/trans_rvf.c.inc     |  2 ++
>  target/riscv/insn_trans/trans_rvh.c.inc     |  3 +++
>  target/riscv/insn_trans/trans_rvi.c.inc     |  2 ++
>  target/riscv/insn_trans/trans_rvzfh.c.inc   |  2 ++
>  target/riscv/insn_trans/trans_svinval.c.inc |  3 +++
>  7 files changed, 21 insertions(+), 3 deletions(-)
>
> diff --git a/target/riscv/insn_trans/trans_rva.c.inc b/target/riscv/insn_trans/trans_rva.c.inc
> index 45db82c9be..5f194a447b 100644
> --- a/target/riscv/insn_trans/trans_rva.c.inc
> +++ b/target/riscv/insn_trans/trans_rva.c.inc
> @@ -20,8 +20,10 @@
>
>  static bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
>  {
> -    TCGv src1 = get_address(ctx, a->rs1, 0);
> +    TCGv src1;
>
> +    decode_save_opc(ctx);
> +    src1 = get_address(ctx, a->rs1, 0);
>      if (a->rl) {
>          tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
>      }
> @@ -43,6 +45,7 @@ static bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
>      TCGLabel *l1 = gen_new_label();
>      TCGLabel *l2 = gen_new_label();
>
> +    decode_save_opc(ctx);
>      src1 = get_address(ctx, a->rs1, 0);
>      tcg_gen_brcond_tl(TCG_COND_NE, load_res, src1, l1);
>
> @@ -81,9 +84,10 @@ static bool gen_amo(DisasContext *ctx, arg_atomic *a,
>                      MemOp mop)
>  {
>      TCGv dest = dest_gpr(ctx, a->rd);
> -    TCGv src1 = get_address(ctx, a->rs1, 0);
> -    TCGv src2 = get_gpr(ctx, a->rs2, EXT_NONE);
> +    TCGv src1, src2 = get_gpr(ctx, a->rs2, EXT_NONE);
>
> +    decode_save_opc(ctx);
> +    src1 = get_address(ctx, a->rs1, 0);
>      func(dest, src1, src2, ctx->mem_idx, mop);
>
>      gen_set_gpr(ctx, a->rd, dest);
> diff --git a/target/riscv/insn_trans/trans_rvd.c.inc b/target/riscv/insn_trans/trans_rvd.c.inc
> index 1397c1ce1c..6e3159b797 100644
> --- a/target/riscv/insn_trans/trans_rvd.c.inc
> +++ b/target/riscv/insn_trans/trans_rvd.c.inc
> @@ -38,6 +38,7 @@ static bool trans_fld(DisasContext *ctx, arg_fld *a)
>      REQUIRE_FPU;
>      REQUIRE_EXT(ctx, RVD);
>
> +    decode_save_opc(ctx);
>      addr = get_address(ctx, a->rs1, a->imm);
>      tcg_gen_qemu_ld_i64(cpu_fpr[a->rd], addr, ctx->mem_idx, MO_TEUQ);
>
> @@ -52,6 +53,7 @@ static bool trans_fsd(DisasContext *ctx, arg_fsd *a)
>      REQUIRE_FPU;
>      REQUIRE_EXT(ctx, RVD);
>
> +    decode_save_opc(ctx);
>      addr = get_address(ctx, a->rs1, a->imm);
>      tcg_gen_qemu_st_i64(cpu_fpr[a->rs2], addr, ctx->mem_idx, MO_TEUQ);
>      return true;
> diff --git a/target/riscv/insn_trans/trans_rvf.c.inc b/target/riscv/insn_trans/trans_rvf.c.inc
> index a1d3eb52ad..965e1f8d11 100644
> --- a/target/riscv/insn_trans/trans_rvf.c.inc
> +++ b/target/riscv/insn_trans/trans_rvf.c.inc
> @@ -38,6 +38,7 @@ static bool trans_flw(DisasContext *ctx, arg_flw *a)
>      REQUIRE_FPU;
>      REQUIRE_EXT(ctx, RVF);
>
> +    decode_save_opc(ctx);
>      addr = get_address(ctx, a->rs1, a->imm);
>      dest = cpu_fpr[a->rd];
>      tcg_gen_qemu_ld_i64(dest, addr, ctx->mem_idx, MO_TEUL);
> @@ -54,6 +55,7 @@ static bool trans_fsw(DisasContext *ctx, arg_fsw *a)
>      REQUIRE_FPU;
>      REQUIRE_EXT(ctx, RVF);
>
> +    decode_save_opc(ctx);
>      addr = get_address(ctx, a->rs1, a->imm);
>      tcg_gen_qemu_st_i64(cpu_fpr[a->rs2], addr, ctx->mem_idx, MO_TEUL);
>      return true;
> diff --git a/target/riscv/insn_trans/trans_rvh.c.inc b/target/riscv/insn_trans/trans_rvh.c.inc
> index 4f8aecddc7..9248b48c36 100644
> --- a/target/riscv/insn_trans/trans_rvh.c.inc
> +++ b/target/riscv/insn_trans/trans_rvh.c.inc
> @@ -36,6 +36,7 @@ static bool do_hlv(DisasContext *ctx, arg_r2 *a, MemOp mop)
>  #ifdef CONFIG_USER_ONLY
>      return false;
>  #else
> +    decode_save_opc(ctx);
>      if (check_access(ctx)) {
>          TCGv dest = dest_gpr(ctx, a->rd);
>          TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
> @@ -82,6 +83,7 @@ static bool do_hsv(DisasContext *ctx, arg_r2_s *a, MemOp mop)
>  #ifdef CONFIG_USER_ONLY
>      return false;
>  #else
> +    decode_save_opc(ctx);
>      if (check_access(ctx)) {
>          TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
>          TCGv data = get_gpr(ctx, a->rs2, EXT_NONE);
> @@ -135,6 +137,7 @@ static bool trans_hsv_d(DisasContext *ctx, arg_hsv_d *a)
>  static bool do_hlvx(DisasContext *ctx, arg_r2 *a,
>                      void (*func)(TCGv, TCGv_env, TCGv))
>  {
> +    decode_save_opc(ctx);
>      if (check_access(ctx)) {
>          TCGv dest = dest_gpr(ctx, a->rd);
>          TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
> diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
> index c49dbec0eb..1665efb639 100644
> --- a/target/riscv/insn_trans/trans_rvi.c.inc
> +++ b/target/riscv/insn_trans/trans_rvi.c.inc
> @@ -261,6 +261,7 @@ static bool gen_load_i128(DisasContext *ctx, arg_lb *a, MemOp memop)
>
>  static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
>  {
> +    decode_save_opc(ctx);
>      if (get_xl(ctx) == MXL_RV128) {
>          return gen_load_i128(ctx, a, memop);
>      } else {
> @@ -350,6 +351,7 @@ static bool gen_store_i128(DisasContext *ctx, arg_sb *a, MemOp memop)
>
>  static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
>  {
> +    decode_save_opc(ctx);
>      if (get_xl(ctx) == MXL_RV128) {
>          return gen_store_i128(ctx, a, memop);
>      } else {
> diff --git a/target/riscv/insn_trans/trans_rvzfh.c.inc b/target/riscv/insn_trans/trans_rvzfh.c.inc
> index 5d07150cd0..2ad5716312 100644
> --- a/target/riscv/insn_trans/trans_rvzfh.c.inc
> +++ b/target/riscv/insn_trans/trans_rvzfh.c.inc
> @@ -49,6 +49,7 @@ static bool trans_flh(DisasContext *ctx, arg_flh *a)
>      REQUIRE_FPU;
>      REQUIRE_ZFH_OR_ZFHMIN(ctx);
>
> +    decode_save_opc(ctx);
>      t0 = get_gpr(ctx, a->rs1, EXT_NONE);
>      if (a->imm) {
>          TCGv temp = temp_new(ctx);
> @@ -71,6 +72,7 @@ static bool trans_fsh(DisasContext *ctx, arg_fsh *a)
>      REQUIRE_FPU;
>      REQUIRE_ZFH_OR_ZFHMIN(ctx);
>
> +    decode_save_opc(ctx);
>      t0 = get_gpr(ctx, a->rs1, EXT_NONE);
>      if (a->imm) {
>          TCGv temp = tcg_temp_new();
> diff --git a/target/riscv/insn_trans/trans_svinval.c.inc b/target/riscv/insn_trans/trans_svinval.c.inc
> index 2682bd969f..f3cd7d5c0b 100644
> --- a/target/riscv/insn_trans/trans_svinval.c.inc
> +++ b/target/riscv/insn_trans/trans_svinval.c.inc
> @@ -28,6 +28,7 @@ static bool trans_sinval_vma(DisasContext *ctx, arg_sinval_vma *a)
>      /* Do the same as sfence.vma currently */
>      REQUIRE_EXT(ctx, RVS);
>  #ifndef CONFIG_USER_ONLY
> +    decode_save_opc(ctx);
>      gen_helper_tlb_flush(cpu_env);
>      return true;
>  #endif
> @@ -56,6 +57,7 @@ static bool trans_hinval_vvma(DisasContext *ctx, arg_hinval_vvma *a)
>      /* Do the same as hfence.vvma currently */
>      REQUIRE_EXT(ctx, RVH);
>  #ifndef CONFIG_USER_ONLY
> +    decode_save_opc(ctx);
>      gen_helper_hyp_tlb_flush(cpu_env);
>      return true;
>  #endif
> @@ -68,6 +70,7 @@ static bool trans_hinval_gvma(DisasContext *ctx, arg_hinval_gvma *a)
>      /* Do the same as hfence.gvma currently */
>      REQUIRE_EXT(ctx, RVH);
>  #ifndef CONFIG_USER_ONLY
> +    decode_save_opc(ctx);
>      gen_helper_hyp_gvma_tlb_flush(cpu_env);
>      return true;
>  #endif
> --
> 2.34.1
>
>



* Re: [PATCH v2 0/5] Nested virtualization fixes for QEMU
  2022-11-08 12:56 [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
                   ` (5 preceding siblings ...)
  2022-11-21  3:01 ` [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
@ 2022-11-22  1:03 ` Alistair Francis
  6 siblings, 0 replies; 19+ messages in thread
From: Alistair Francis @ 2022-11-22  1:03 UTC (permalink / raw)
  To: Anup Patel
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

On Tue, Nov 8, 2022 at 10:59 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> This series mainly includes fixes discovered while developing nested
> virtualization running on QEMU.
>
> These patches can also be found in the riscv_nested_fixes_v2 branch at:
> https://github.com/avpatel/qemu.git
>
> Changes since v1:
>  - Added Alistair's Reviewed-by tags to appropriate patches
>  - Added detailed comment block in PATCH4
>
> Anup Patel (5):
>   target/riscv: Typo fix in sstc() predicate
>   target/riscv: Update VS timer whenever htimedelta changes
>   target/riscv: Don't clear mask in riscv_cpu_update_mip() for VSTIP
>   target/riscv: No need to re-start QEMU timer when timecmp ==
>     UINT64_MAX
>   target/riscv: Ensure opcode is saved for all relevant instructions

Thanks!

Applied to riscv-to-apply.next

Alistair

>
>  target/riscv/cpu_helper.c                   |  2 --
>  target/riscv/csr.c                          | 18 ++++++++++-
>  target/riscv/insn_trans/trans_rva.c.inc     | 10 ++++--
>  target/riscv/insn_trans/trans_rvd.c.inc     |  2 ++
>  target/riscv/insn_trans/trans_rvf.c.inc     |  2 ++
>  target/riscv/insn_trans/trans_rvh.c.inc     |  3 ++
>  target/riscv/insn_trans/trans_rvi.c.inc     |  2 ++
>  target/riscv/insn_trans/trans_rvzfh.c.inc   |  2 ++
>  target/riscv/insn_trans/trans_svinval.c.inc |  3 ++
>  target/riscv/time_helper.c                  | 36 ++++++++++++++++++---
>  10 files changed, 70 insertions(+), 10 deletions(-)
>
> --
> 2.34.1
>
>



* Re: [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2022-11-08 12:57 ` [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes Anup Patel
@ 2022-12-08  3:29   ` Alistair Francis
  2022-12-08  8:41     ` Anup Patel
  0 siblings, 1 reply; 19+ messages in thread
From: Alistair Francis @ 2022-12-08  3:29 UTC (permalink / raw)
  To: Anup Patel
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

On Tue, Nov 8, 2022 at 11:07 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> The htimedelta[h] CSR has impact on the VS timer comparison so we
> should call riscv_timer_write_timecmp() whenever htimedelta changes.
>
> Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

This patch breaks my Xvisor test. When running OpenSBI and Xvisor like this:

qemu-system-riscv64 -machine virt \
    -m 1G -serial mon:stdio -serial null -nographic \
    -append 'vmm.console=uart@10000000 vmm.bootcmd="vfs mount initrd
/;vfs run /boot.xscript;vfs cat /system/banner.txt; guest kick guest0;
vserial bind guest0/uart0"' \
    -smp 4 -d guest_errors \
    -bios none \
    -device loader,file=./images/qemuriscv64/vmm.bin,addr=0x80200000 \
    -kernel ./images/qemuriscv64/fw_jump.elf \
    -initrd ./images/qemuriscv64/vmm-disk-linux.img -cpu rv64,h=true

Running:

Xvisor v0.3.0-129-gbc33f339 (Jan  1 1970 00:00:00)

I see this failure:

INIT: bootcmd:  guest kick guest0

guest0: Kicked

INIT: bootcmd:  vserial bind guest0/uart0

[guest0/uart0] cpu_vcpu_stage2_map: guest_phys=0x000000003B9AC000
size=0x4096 map failed

do_error: CPU3: VCPU=guest0/vcpu0 page fault failed (error -1)

       zero=0x0000000000000000          ra=0x0000000080001B4E

         sp=0x000000008001CF80          gp=0x0000000000000000

         tp=0x0000000000000000          s0=0x000000008001CFB0

         s1=0x0000000000000000          a0=0x0000000010001048

         a1=0x0000000000000000          a2=0x0000000000989680

         a3=0x000000003B9ACA00          a4=0x0000000000000048

         a5=0x0000000000000000          a6=0x0000000000019000

         a7=0x0000000000000000          s2=0x0000000000000000

         s3=0x0000000000000000          s4=0x0000000000000000

         s5=0x0000000000000000          s6=0x0000000000000000

         s7=0x0000000000000000          s8=0x0000000000000000

         s9=0x0000000000000000         s10=0x0000000000000000

        s11=0x0000000000000000          t0=0x0000000000004000

         t1=0x0000000000000100          t2=0x0000000000000000

         t3=0x0000000000000000          t4=0x0000000000000000

         t5=0x0000000000000000          t6=0x0000000000000000

       sepc=0x0000000080001918     sstatus=0x0000000200004120

    hstatus=0x00000002002001C0     sp_exec=0x0000000010A64000

     scause=0x0000000000000017       stval=0x000000003B9ACAF8

      htval=0x000000000EE6B2BE      htinst=0x0000000000D03021

I have tried updating to a newer Xvisor release, but with that I don't
get any serial output.

Can you help get the Xvisor tests back up and running?

Alistair



* Re: [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2022-12-08  3:29   ` Alistair Francis
@ 2022-12-08  8:41     ` Anup Patel
  2022-12-12  5:53       ` Alistair Francis
  0 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2022-12-08  8:41 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

On Thu, Dec 8, 2022 at 9:00 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Tue, Nov 8, 2022 at 11:07 PM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > The htimedelta[h] CSR has impact on the VS timer comparison so we
> > should call riscv_timer_write_timecmp() whenever htimedelta changes.
> >
> > Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
>
> This patch breaks my Xvisor test. When running OpenSBI and Xvisor like this:
>
> qemu-system-riscv64 -machine virt \
>     -m 1G -serial mon:stdio -serial null -nographic \
>     -append 'vmm.console=uart@10000000 vmm.bootcmd="vfs mount initrd
> /;vfs run /boot.xscript;vfs cat /system/banner.txt; guest kick guest0;
> vserial bind guest0/uart0"' \
>     -smp 4 -d guest_errors \
>     -bios none \
>     -device loader,file=./images/qemuriscv64/vmm.bin,addr=0x80200000 \
>     -kernel ./images/qemuriscv64/fw_jump.elf \
>     -initrd ./images/qemuriscv64/vmm-disk-linux.img -cpu rv64,h=true
>
> Running:
>
> Xvisor v0.3.0-129-gbc33f339 (Jan  1 1970 00:00:00)
>
> I see this failure:
>
> INIT: bootcmd:  guest kick guest0
>
> guest0: Kicked
>
> INIT: bootcmd:  vserial bind guest0/uart0
>
> [guest0/uart0] cpu_vcpu_stage2_map: guest_phys=0x000000003B9AC000
> size=0x4096 map failed
>
> do_error: CPU3: VCPU=guest0/vcpu0 page fault failed (error -1)
>
>        zero=0x0000000000000000          ra=0x0000000080001B4E
>
>          sp=0x000000008001CF80          gp=0x0000000000000000
>
>          tp=0x0000000000000000          s0=0x000000008001CFB0
>
>          s1=0x0000000000000000          a0=0x0000000010001048
>
>          a1=0x0000000000000000          a2=0x0000000000989680
>
>          a3=0x000000003B9ACA00          a4=0x0000000000000048
>
>          a5=0x0000000000000000          a6=0x0000000000019000
>
>          a7=0x0000000000000000          s2=0x0000000000000000
>
>          s3=0x0000000000000000          s4=0x0000000000000000
>
>          s5=0x0000000000000000          s6=0x0000000000000000
>
>          s7=0x0000000000000000          s8=0x0000000000000000
>
>          s9=0x0000000000000000         s10=0x0000000000000000
>
>         s11=0x0000000000000000          t0=0x0000000000004000
>
>          t1=0x0000000000000100          t2=0x0000000000000000
>
>          t3=0x0000000000000000          t4=0x0000000000000000
>
>          t5=0x0000000000000000          t6=0x0000000000000000
>
>        sepc=0x0000000080001918     sstatus=0x0000000200004120
>
>     hstatus=0x00000002002001C0     sp_exec=0x0000000010A64000
>
>      scause=0x0000000000000017       stval=0x000000003B9ACAF8
>
>       htval=0x000000000EE6B2BE      htinst=0x0000000000D03021
>
> I have tried updating to a newer Xvisor release, but with that I don't
> get any serial output.
>
> Can you help get the Xvisor tests back up and running?

I tried the latest Xvisor-next (https://github.com/avpatel/xvisor-next)
with your QEMU riscv-to-apply.next branch and it works fine (both
with and without Sstc).

Here's the QEMU command which I use:

qemu-system-riscv64 -M virt -m 512M -nographic \
-bios opensbi/build/platform/generic/firmware/fw_jump.bin \
-kernel ../xvisor-next/build/vmm.bin \
-initrd rbd_v64.img \
-append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
cat /system/banner.txt\"" \
-smp 4

Also, I will be releasing Xvisor-0.3.2 by the end of Dec 2022 so I
suggest using this upcoming release in your test.

Regards,
Anup



* Re: [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2022-12-08  8:41     ` Anup Patel
@ 2022-12-12  5:53       ` Alistair Francis
  2022-12-12 11:12         ` Anup Patel
  0 siblings, 1 reply; 19+ messages in thread
From: Alistair Francis @ 2022-12-12  5:53 UTC (permalink / raw)
  To: Anup Patel
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

On Thu, Dec 8, 2022 at 6:41 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> On Thu, Dec 8, 2022 at 9:00 AM Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Tue, Nov 8, 2022 at 11:07 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > >
> > > The htimedelta[h] CSR has impact on the VS timer comparison so we
> > > should call riscv_timer_write_timecmp() whenever htimedelta changes.
> > >
> > > Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
> > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> >
> > This patch breaks my Xvisor test. When running OpenSBI and Xvisor like this:
> >
> > qemu-system-riscv64 -machine virt \
> >     -m 1G -serial mon:stdio -serial null -nographic \
> >     -append 'vmm.console=uart@10000000 vmm.bootcmd="vfs mount initrd
> > /;vfs run /boot.xscript;vfs cat /system/banner.txt; guest kick guest0;
> > vserial bind guest0/uart0"' \
> >     -smp 4 -d guest_errors \
> >     -bios none \
> >     -device loader,file=./images/qemuriscv64/vmm.bin,addr=0x80200000 \
> >     -kernel ./images/qemuriscv64/fw_jump.elf \
> >     -initrd ./images/qemuriscv64/vmm-disk-linux.img -cpu rv64,h=true
> >
> > Running:
> >
> > Xvisor v0.3.0-129-gbc33f339 (Jan  1 1970 00:00:00)
> >
> > I see this failure:
> >
> > INIT: bootcmd:  guest kick guest0
> >
> > guest0: Kicked
> >
> > INIT: bootcmd:  vserial bind guest0/uart0
> >
> > [guest0/uart0] cpu_vcpu_stage2_map: guest_phys=0x000000003B9AC000
> > size=0x4096 map failed
> >
> > do_error: CPU3: VCPU=guest0/vcpu0 page fault failed (error -1)
> >
> >        zero=0x0000000000000000          ra=0x0000000080001B4E
> >
> >          sp=0x000000008001CF80          gp=0x0000000000000000
> >
> >          tp=0x0000000000000000          s0=0x000000008001CFB0
> >
> >          s1=0x0000000000000000          a0=0x0000000010001048
> >
> >          a1=0x0000000000000000          a2=0x0000000000989680
> >
> >          a3=0x000000003B9ACA00          a4=0x0000000000000048
> >
> >          a5=0x0000000000000000          a6=0x0000000000019000
> >
> >          a7=0x0000000000000000          s2=0x0000000000000000
> >
> >          s3=0x0000000000000000          s4=0x0000000000000000
> >
> >          s5=0x0000000000000000          s6=0x0000000000000000
> >
> >          s7=0x0000000000000000          s8=0x0000000000000000
> >
> >          s9=0x0000000000000000         s10=0x0000000000000000
> >
> >         s11=0x0000000000000000          t0=0x0000000000004000
> >
> >          t1=0x0000000000000100          t2=0x0000000000000000
> >
> >          t3=0x0000000000000000          t4=0x0000000000000000
> >
> >          t5=0x0000000000000000          t6=0x0000000000000000
> >
> >        sepc=0x0000000080001918     sstatus=0x0000000200004120
> >
> >     hstatus=0x00000002002001C0     sp_exec=0x0000000010A64000
> >
> >      scause=0x0000000000000017       stval=0x000000003B9ACAF8
> >
> >       htval=0x000000000EE6B2BE      htinst=0x0000000000D03021
> >
> > I have tried updating to a newer Xvisor release, but with that I don't
> > get any serial output.
> >
> > Can you help get the Xvisor tests back up and running?
>
> I tried the latest Xvisor-next (https://github.com/avpatel/xvisor-next)
> with your QEMU riscv-to-apply.next branch and it works fine (both
> with and without Sstc).

Does it work with the latest release?

Alistair

>
> Here's the QEMU command which I use:
>
> qemu-system-riscv64 -M virt -m 512M -nographic \
> -bios opensbi/build/platform/generic/firmware/fw_jump.bin \
> -kernel ../xvisor-next/build/vmm.bin \
> -initrd rbd_v64.img \
> -append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
> cat /system/banner.txt\"" \
> -smp 4
>
> Also, I will be releasing Xvisor-0.3.2 by the end of Dec 2022 so I
> suggest using this upcoming release in your test.
>
> Regards,
> Anup



* Re: [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2022-12-12  5:53       ` Alistair Francis
@ 2022-12-12 11:12         ` Anup Patel
  2022-12-15  3:25           ` Alistair Francis
  0 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2022-12-12 11:12 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

On Mon, Dec 12, 2022 at 11:23 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Thu, Dec 8, 2022 at 6:41 PM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > On Thu, Dec 8, 2022 at 9:00 AM Alistair Francis <alistair23@gmail.com> wrote:
> > >
> > > On Tue, Nov 8, 2022 at 11:07 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > >
> > > > The htimedelta[h] CSR has impact on the VS timer comparison so we
> > > > should call riscv_timer_write_timecmp() whenever htimedelta changes.
> > > >
> > > > Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
> > > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > >
> > > This patch breaks my Xvisor test. When running OpenSBI and Xvisor like this:
> > >
> > > qemu-system-riscv64 -machine virt \
> > >     -m 1G -serial mon:stdio -serial null -nographic \
> > >     -append 'vmm.console=uart@10000000 vmm.bootcmd="vfs mount initrd
> > > /;vfs run /boot.xscript;vfs cat /system/banner.txt; guest kick guest0;
> > > vserial bind guest0/uart0"' \
> > >     -smp 4 -d guest_errors \
> > >     -bios none \
> > >     -device loader,file=./images/qemuriscv64/vmm.bin,addr=0x80200000 \
> > >     -kernel ./images/qemuriscv64/fw_jump.elf \
> > >     -initrd ./images/qemuriscv64/vmm-disk-linux.img -cpu rv64,h=true
> > >
> > > Running:
> > >
> > > Xvisor v0.3.0-129-gbc33f339 (Jan  1 1970 00:00:00)
> > >
> > > I see this failure:
> > >
> > > INIT: bootcmd:  guest kick guest0
> > >
> > > guest0: Kicked
> > >
> > > INIT: bootcmd:  vserial bind guest0/uart0
> > >
> > > [guest0/uart0] cpu_vcpu_stage2_map: guest_phys=0x000000003B9AC000
> > > size=0x4096 map failed
> > >
> > > do_error: CPU3: VCPU=guest0/vcpu0 page fault failed (error -1)
> > >
> > >        zero=0x0000000000000000          ra=0x0000000080001B4E
> > >
> > >          sp=0x000000008001CF80          gp=0x0000000000000000
> > >
> > >          tp=0x0000000000000000          s0=0x000000008001CFB0
> > >
> > >          s1=0x0000000000000000          a0=0x0000000010001048
> > >
> > >          a1=0x0000000000000000          a2=0x0000000000989680
> > >
> > >          a3=0x000000003B9ACA00          a4=0x0000000000000048
> > >
> > >          a5=0x0000000000000000          a6=0x0000000000019000
> > >
> > >          a7=0x0000000000000000          s2=0x0000000000000000
> > >
> > >          s3=0x0000000000000000          s4=0x0000000000000000
> > >
> > >          s5=0x0000000000000000          s6=0x0000000000000000
> > >
> > >          s7=0x0000000000000000          s8=0x0000000000000000
> > >
> > >          s9=0x0000000000000000         s10=0x0000000000000000
> > >
> > >         s11=0x0000000000000000          t0=0x0000000000004000
> > >
> > >          t1=0x0000000000000100          t2=0x0000000000000000
> > >
> > >          t3=0x0000000000000000          t4=0x0000000000000000
> > >
> > >          t5=0x0000000000000000          t6=0x0000000000000000
> > >
> > >        sepc=0x0000000080001918     sstatus=0x0000000200004120
> > >
> > >     hstatus=0x00000002002001C0     sp_exec=0x0000000010A64000
> > >
> > >      scause=0x0000000000000017       stval=0x000000003B9ACAF8
> > >
> > >       htval=0x000000000EE6B2BE      htinst=0x0000000000D03021
> > >
> > > I have tried updating to a newer Xvisor release, but with that I don't
> > > get any serial output.
> > >
> > > Can you help get the Xvisor tests back up and running?
> >
> > I tried the latest Xvisor-next (https://github.com/avpatel/xvisor-next)
> > with your QEMU riscv-to-apply.next branch and it works fine (both
> > with and without Sstc).
>
> Does it work with the latest release?

Yes, the latest Xvisor-next repo works for QEMU v7.2.0-rc4 and
your riscv-to-apply.next branch (commit 51bb9de2d188)

Regards,
Anup

>
> Alistair
>
> >
> > Here's the QEMU command which I use:
> >
> > qemu-system-riscv64 -M virt -m 512M -nographic \
> > -bios opensbi/build/platform/generic/firmware/fw_jump.bin \
> > -kernel ../xvisor-next/build/vmm.bin \
> > -initrd rbd_v64.img \
> > -append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
> > cat /system/banner.txt\"" \
> > -smp 4
> >
> > Also, I will be releasing Xvisor-0.3.2 by the end of Dec 2022 so I
> > suggest using this upcoming release in your test.
> >
> > Regards,
> > Anup



* Re: [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2022-12-12 11:12         ` Anup Patel
@ 2022-12-15  3:25           ` Alistair Francis
  2022-12-23 13:14             ` Anup Patel
  0 siblings, 1 reply; 19+ messages in thread
From: Alistair Francis @ 2022-12-15  3:25 UTC (permalink / raw)
  To: Anup Patel
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

On Mon, Dec 12, 2022 at 9:12 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> On Mon, Dec 12, 2022 at 11:23 AM Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Thu, Dec 8, 2022 at 6:41 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > >
> > > On Thu, Dec 8, 2022 at 9:00 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > >
> > > > On Tue, Nov 8, 2022 at 11:07 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > >
> > > > > The htimedelta[h] CSR has impact on the VS timer comparison so we
> > > > > should call riscv_timer_write_timecmp() whenever htimedelta changes.
> > > > >
> > > > > Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
> > > > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > > > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > > >
> > > > This patch breaks my Xvisor test. When running OpenSBI and Xvisor like this:
> > > >
> > > > qemu-system-riscv64 -machine virt \
> > > >     -m 1G -serial mon:stdio -serial null -nographic \
> > > >     -append 'vmm.console=uart@10000000 vmm.bootcmd="vfs mount initrd
> > > > /;vfs run /boot.xscript;vfs cat /system/banner.txt; guest kick guest0;
> > > > vserial bind guest0/uart0"' \
> > > >     -smp 4 -d guest_errors \
> > > >     -bios none \
> > > >     -device loader,file=./images/qemuriscv64/vmm.bin,addr=0x80200000 \
> > > >     -kernel ./images/qemuriscv64/fw_jump.elf \
> > > >     -initrd ./images/qemuriscv64/vmm-disk-linux.img -cpu rv64,h=true
> > > >
> > > > Running:
> > > >
> > > > Xvisor v0.3.0-129-gbc33f339 (Jan  1 1970 00:00:00)
> > > >
> > > > I see this failure:
> > > >
> > > > INIT: bootcmd:  guest kick guest0
> > > >
> > > > guest0: Kicked
> > > >
> > > > INIT: bootcmd:  vserial bind guest0/uart0
> > > >
> > > > [guest0/uart0] cpu_vcpu_stage2_map: guest_phys=0x000000003B9AC000
> > > > size=0x4096 map failed
> > > >
> > > > do_error: CPU3: VCPU=guest0/vcpu0 page fault failed (error -1)
> > > >
> > > >        zero=0x0000000000000000          ra=0x0000000080001B4E
> > > >
> > > >          sp=0x000000008001CF80          gp=0x0000000000000000
> > > >
> > > >          tp=0x0000000000000000          s0=0x000000008001CFB0
> > > >
> > > >          s1=0x0000000000000000          a0=0x0000000010001048
> > > >
> > > >          a1=0x0000000000000000          a2=0x0000000000989680
> > > >
> > > >          a3=0x000000003B9ACA00          a4=0x0000000000000048
> > > >
> > > >          a5=0x0000000000000000          a6=0x0000000000019000
> > > >
> > > >          a7=0x0000000000000000          s2=0x0000000000000000
> > > >
> > > >          s3=0x0000000000000000          s4=0x0000000000000000
> > > >
> > > >          s5=0x0000000000000000          s6=0x0000000000000000
> > > >
> > > >          s7=0x0000000000000000          s8=0x0000000000000000
> > > >
> > > >          s9=0x0000000000000000         s10=0x0000000000000000
> > > >
> > > >         s11=0x0000000000000000          t0=0x0000000000004000
> > > >
> > > >          t1=0x0000000000000100          t2=0x0000000000000000
> > > >
> > > >          t3=0x0000000000000000          t4=0x0000000000000000
> > > >
> > > >          t5=0x0000000000000000          t6=0x0000000000000000
> > > >
> > > >        sepc=0x0000000080001918     sstatus=0x0000000200004120
> > > >
> > > >     hstatus=0x00000002002001C0     sp_exec=0x0000000010A64000
> > > >
> > > >      scause=0x0000000000000017       stval=0x000000003B9ACAF8
> > > >
> > > >       htval=0x000000000EE6B2BE      htinst=0x0000000000D03021
> > > >
> > > > I have tried updating to a newer Xvisor release, but with that I don't
> > > > get any serial output.
> > > >
> > > > Can you help get the Xvisor tests back up and running?
> > >
> > > I tried the latest Xvisor-next (https://github.com/avpatel/xvisor-next)
> > > with your QEMU riscv-to-apply.next branch and it works fine (both
> > > with and without Sstc).
> >
> > Does it work with the latest release?
>
> Yes, the latest Xvisor-next repo works for QEMU v7.2.0-rc4 and
> your riscv-to-apply.next branch (commit 51bb9de2d188)

I can't get anything to work with this patch. I have dropped this and
the patches after this.

I'm building the latest Xvisor release with:

export CROSS_COMPILE=riscv64-linux-gnu-
ARCH=riscv make generic-64b-defconfig
make

and running it as above, yet nothing. What am I missing here?

Alistair

>
> Regards,
> Anup
>
> >
> > Alistair
> >
> > >
> > > Here's the QEMU command which I use:
> > >
> > > qemu-system-riscv64 -M virt -m 512M -nographic \
> > > -bios opensbi/build/platform/generic/firmware/fw_jump.bin \
> > > -kernel ../xvisor-next/build/vmm.bin \
> > > -initrd rbd_v64.img \
> > > -append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
> > > cat /system/banner.txt\"" \
> > > -smp 4
> > >
> > > Also, I will be releasing Xvisor-0.3.2 by the end of Dec 2022 so I
> > > suggest using this upcoming release in your test.
> > >
> > > Regards,
> > > Anup



* Re: [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2022-12-15  3:25           ` Alistair Francis
@ 2022-12-23 13:14             ` Anup Patel
  2022-12-28  5:38               ` Alistair Francis
  0 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2022-12-23 13:14 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

On Thu, Dec 15, 2022 at 8:55 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Mon, Dec 12, 2022 at 9:12 PM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > On Mon, Dec 12, 2022 at 11:23 AM Alistair Francis <alistair23@gmail.com> wrote:
> > >
> > > On Thu, Dec 8, 2022 at 6:41 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > >
> > > > On Thu, Dec 8, 2022 at 9:00 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > > >
> > > > > On Tue, Nov 8, 2022 at 11:07 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > > >
> > > > > > The htimedelta[h] CSR has impact on the VS timer comparison so we
> > > > > > should call riscv_timer_write_timecmp() whenever htimedelta changes.
> > > > > >
> > > > > > Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
> > > > > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > > > > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
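
(For context, a minimal illustrative sketch of the change under discussion: when htimedelta is
written, the CSR handler also re-arms the VS timer via riscv_timer_write_timecmp(), since
vstimecmp is compared against (time + htimedelta). Names such as env->htimedelta, env->vstimer,
env->vstimecmp and the exact riscv_timer_write_timecmp() signature are assumed here; the actual
hunk in target/riscv/csr.c may differ in detail.)

static RISCVException write_htimedelta(CPURISCVState *env, int csrno,
                                       target_ulong val)
{
    RISCVCPU *cpu = env_archcpu(env);

    if (!env->rdtime_fn) {
        /* htimedelta is only meaningful when a platform timer exists */
        return RISCV_EXCP_ILLEGAL_INST;
    }

    env->htimedelta = val;

    /*
     * The VS timer fires when (time + htimedelta) >= vstimecmp, so any
     * change to htimedelta must re-evaluate the pending VS timer.
     */
    if (cpu->cfg.ext_sstc) {
        riscv_timer_write_timecmp(cpu, env->vstimer, env->vstimecmp,
                                  env->htimedelta, MIP_VSTIP);
    }

    return RISCV_EXCP_NONE;
}
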
> > > > >
> > > > > This patch breaks my Xvisor test. When running OpenSBI and Xvisor like this:
> > > > >
> > > > > qemu-system-riscv64 -machine virt \
> > > > >     -m 1G -serial mon:stdio -serial null -nographic \
> > > > >     -append 'vmm.console=uart@10000000 vmm.bootcmd="vfs mount initrd
> > > > > /;vfs run /boot.xscript;vfs cat /system/banner.txt; guest kick guest0;
> > > > > vserial bind guest0/uart0"' \
> > > > >     -smp 4 -d guest_errors \
> > > > >     -bios none \
> > > > >     -device loader,file=./images/qemuriscv64/vmm.bin,addr=0x80200000 \
> > > > >     -kernel ./images/qemuriscv64/fw_jump.elf \
> > > > >     -initrd ./images/qemuriscv64/vmm-disk-linux.img -cpu rv64,h=true
> > > > >
> > > > > Running:
> > > > >
> > > > > Xvisor v0.3.0-129-gbc33f339 (Jan  1 1970 00:00:00)
> > > > >
> > > > > I see this failure:
> > > > >
> > > > > INIT: bootcmd:  guest kick guest0
> > > > >
> > > > > guest0: Kicked
> > > > >
> > > > > INIT: bootcmd:  vserial bind guest0/uart0
> > > > >
> > > > > [guest0/uart0] cpu_vcpu_stage2_map: guest_phys=0x000000003B9AC000
> > > > > size=0x4096 map failed
> > > > >
> > > > > do_error: CPU3: VCPU=guest0/vcpu0 page fault failed (error -1)
> > > > >
> > > > >        zero=0x0000000000000000          ra=0x0000000080001B4E
> > > > >
> > > > >          sp=0x000000008001CF80          gp=0x0000000000000000
> > > > >
> > > > >          tp=0x0000000000000000          s0=0x000000008001CFB0
> > > > >
> > > > >          s1=0x0000000000000000          a0=0x0000000010001048
> > > > >
> > > > >          a1=0x0000000000000000          a2=0x0000000000989680
> > > > >
> > > > >          a3=0x000000003B9ACA00          a4=0x0000000000000048
> > > > >
> > > > >          a5=0x0000000000000000          a6=0x0000000000019000
> > > > >
> > > > >          a7=0x0000000000000000          s2=0x0000000000000000
> > > > >
> > > > >          s3=0x0000000000000000          s4=0x0000000000000000
> > > > >
> > > > >          s5=0x0000000000000000          s6=0x0000000000000000
> > > > >
> > > > >          s7=0x0000000000000000          s8=0x0000000000000000
> > > > >
> > > > >          s9=0x0000000000000000         s10=0x0000000000000000
> > > > >
> > > > >         s11=0x0000000000000000          t0=0x0000000000004000
> > > > >
> > > > >          t1=0x0000000000000100          t2=0x0000000000000000
> > > > >
> > > > >          t3=0x0000000000000000          t4=0x0000000000000000
> > > > >
> > > > >          t5=0x0000000000000000          t6=0x0000000000000000
> > > > >
> > > > >        sepc=0x0000000080001918     sstatus=0x0000000200004120
> > > > >
> > > > >     hstatus=0x00000002002001C0     sp_exec=0x0000000010A64000
> > > > >
> > > > >      scause=0x0000000000000017       stval=0x000000003B9ACAF8
> > > > >
> > > > >       htval=0x000000000EE6B2BE      htinst=0x0000000000D03021
> > > > >
> > > > > I have tried updating to a newer Xvisor release, but with that I don't
> > > > > get any serial output.
> > > > >
> > > > > Can you help get the Xvisor tests back up and running?
> > > >
> > > > I tried the latest Xvisor-next (https://github.com/avpatel/xvisor-next)
> > > > with your QEMU riscv-to-apply.next branch and it works fine (both
> > > > with and without Sstc).
> > >
> > > Does it work with the latest release?
> >
> > Yes, the latest Xvisor-next repo works for QEMU v7.2.0-rc4 and
> > your riscv-to-apply.next branch (commit 51bb9de2d188)
>
> I can't get anything to work with this patch. I have dropped this and
> the patches after this.
>
> I'm building the latest Xvisor release with:
>
> export CROSS_COMPILE=riscv64-linux-gnu-
> ARCH=riscv make generic-64b-defconfig
> make
>
> and running it as above, yet nothing. What am I missing here?

I tried multiple times with the latest Xvisor on different machines but
still can't reproduce the issue you are seeing.

We generally provide pre-built binaries with every Xvisor release
so I will share with you pre-built binaries of the upcoming Xvisor-0.3.2
release. Maybe that would help you?

Regards,
Anup

>
> Alistair
>
> >
> > Regards,
> > Anup
> >
> > >
> > > Alistair
> > >
> > > >
> > > > Here's the QEMU command which I use:
> > > >
> > > > qemu-system-riscv64 -M virt -m 512M -nographic \
> > > > -bios opensbi/build/platform/generic/firmware/fw_jump.bin \
> > > > -kernel ../xvisor-next/build/vmm.bin \
> > > > -initrd rbd_v64.img \
> > > > -append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
> > > > cat /system/banner.txt\"" \
> > > > -smp 4
> > > >
> > > > Also, I will be releasing Xvisor-0.3.2 by the end of Dec 2022 so I
> > > > suggest using this upcoming release in your test.
> > > >
> > > > Regards,
> > > > Anup


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2022-12-23 13:14             ` Anup Patel
@ 2022-12-28  5:38               ` Alistair Francis
  2023-01-03 16:13                 ` Anup Patel
  0 siblings, 1 reply; 19+ messages in thread
From: Alistair Francis @ 2022-12-28  5:38 UTC (permalink / raw)
  To: Anup Patel
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

On Fri, Dec 23, 2022 at 11:14 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> On Thu, Dec 15, 2022 at 8:55 AM Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Mon, Dec 12, 2022 at 9:12 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > >
> > > On Mon, Dec 12, 2022 at 11:23 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > >
> > > > On Thu, Dec 8, 2022 at 6:41 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > >
> > > > > On Thu, Dec 8, 2022 at 9:00 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > > > >
> > > > > > On Tue, Nov 8, 2022 at 11:07 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > > > >
> > > > > > > The htimedelta[h] CSR has impact on the VS timer comparison so we
> > > > > > > should call riscv_timer_write_timecmp() whenever htimedelta changes.
> > > > > > >
> > > > > > > Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
> > > > > > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > > > > > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > > > > >
> > > > > > This patch breaks my Xvisor test. When running OpenSBI and Xvisor like this:
> > > > > >
> > > > > > qemu-system-riscv64 -machine virt \
> > > > > >     -m 1G -serial mon:stdio -serial null -nographic \
> > > > > >     -append 'vmm.console=uart@10000000 vmm.bootcmd="vfs mount initrd
> > > > > > /;vfs run /boot.xscript;vfs cat /system/banner.txt; guest kick guest0;
> > > > > > vserial bind guest0/uart0"' \
> > > > > >     -smp 4 -d guest_errors \
> > > > > >     -bios none \
> > > > > >     -device loader,file=./images/qemuriscv64/vmm.bin,addr=0x80200000 \
> > > > > >     -kernel ./images/qemuriscv64/fw_jump.elf \
> > > > > >     -initrd ./images/qemuriscv64/vmm-disk-linux.img -cpu rv64,h=true
> > > > > >
> > > > > > Running:
> > > > > >
> > > > > > Xvisor v0.3.0-129-gbc33f339 (Jan  1 1970 00:00:00)
> > > > > >
> > > > > > I see this failure:
> > > > > >
> > > > > > INIT: bootcmd:  guest kick guest0
> > > > > >
> > > > > > guest0: Kicked
> > > > > >
> > > > > > INIT: bootcmd:  vserial bind guest0/uart0
> > > > > >
> > > > > > [guest0/uart0] cpu_vcpu_stage2_map: guest_phys=0x000000003B9AC000
> > > > > > size=0x4096 map failed
> > > > > >
> > > > > > do_error: CPU3: VCPU=guest0/vcpu0 page fault failed (error -1)
> > > > > >
> > > > > >        zero=0x0000000000000000          ra=0x0000000080001B4E
> > > > > >
> > > > > >          sp=0x000000008001CF80          gp=0x0000000000000000
> > > > > >
> > > > > >          tp=0x0000000000000000          s0=0x000000008001CFB0
> > > > > >
> > > > > >          s1=0x0000000000000000          a0=0x0000000010001048
> > > > > >
> > > > > >          a1=0x0000000000000000          a2=0x0000000000989680
> > > > > >
> > > > > >          a3=0x000000003B9ACA00          a4=0x0000000000000048
> > > > > >
> > > > > >          a5=0x0000000000000000          a6=0x0000000000019000
> > > > > >
> > > > > >          a7=0x0000000000000000          s2=0x0000000000000000
> > > > > >
> > > > > >          s3=0x0000000000000000          s4=0x0000000000000000
> > > > > >
> > > > > >          s5=0x0000000000000000          s6=0x0000000000000000
> > > > > >
> > > > > >          s7=0x0000000000000000          s8=0x0000000000000000
> > > > > >
> > > > > >          s9=0x0000000000000000         s10=0x0000000000000000
> > > > > >
> > > > > >         s11=0x0000000000000000          t0=0x0000000000004000
> > > > > >
> > > > > >          t1=0x0000000000000100          t2=0x0000000000000000
> > > > > >
> > > > > >          t3=0x0000000000000000          t4=0x0000000000000000
> > > > > >
> > > > > >          t5=0x0000000000000000          t6=0x0000000000000000
> > > > > >
> > > > > >        sepc=0x0000000080001918     sstatus=0x0000000200004120
> > > > > >
> > > > > >     hstatus=0x00000002002001C0     sp_exec=0x0000000010A64000
> > > > > >
> > > > > >      scause=0x0000000000000017       stval=0x000000003B9ACAF8
> > > > > >
> > > > > >       htval=0x000000000EE6B2BE      htinst=0x0000000000D03021
> > > > > >
> > > > > > I have tried updating to a newer Xvisor release, but with that I don't
> > > > > > get any serial output.
> > > > > >
> > > > > > Can you help get the Xvisor tests back up and running?
> > > > >
> > > > > I tried the latest Xvisor-next (https://github.com/avpatel/xvisor-next)
> > > > > with your QEMU riscv-to-apply.next branch and it works fine (both
> > > > > with and without Sstc).
> > > >
> > > > Does it work with the latest release?
> > >
> > > Yes, the latest Xvisor-next repo works for QEMU v7.2.0-rc4 and
> > > your riscv-to-apply.next branch (commit 51bb9de2d188)
> >
> > I can't get anything to work with this patch. I have dropped this and
> > the patches after this.
> >
> > I'm building the latest Xvisor release with:
> >
> > export CROSS_COMPILE=riscv64-linux-gnu-
> > ARCH=riscv make generic-64b-defconfig
> > make
> >
> > and running it as above, yet nothing. What am I missing here?
>
> I tried multiple times with the latest Xvisor on different machines but
> still can't reproduce the issue you are seeing.

Odd

>
> We generally provide pre-built binaries with every Xvisor release
> so I will share with you pre-built binaries of the upcoming Xvisor-0.3.2
> release. Maybe that would help you?

That would work. Let me know when the release happens and I can update
my images.

Alistair


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2022-12-28  5:38               ` Alistair Francis
@ 2023-01-03 16:13                 ` Anup Patel
  2023-01-16  5:20                   ` Anup Patel
  0 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2023-01-03 16:13 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

Hi Alistair,

On Wed, Dec 28, 2022 at 11:08 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Fri, Dec 23, 2022 at 11:14 PM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > On Thu, Dec 15, 2022 at 8:55 AM Alistair Francis <alistair23@gmail.com> wrote:
> > >
> > > On Mon, Dec 12, 2022 at 9:12 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > >
> > > > On Mon, Dec 12, 2022 at 11:23 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > > >
> > > > > On Thu, Dec 8, 2022 at 6:41 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > > >
> > > > > > On Thu, Dec 8, 2022 at 9:00 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > > > > >
> > > > > > > On Tue, Nov 8, 2022 at 11:07 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > > > > >
> > > > > > > > The htimedelta[h] CSR has impact on the VS timer comparison so we
> > > > > > > > should call riscv_timer_write_timecmp() whenever htimedelta changes.
> > > > > > > >
> > > > > > > > Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
> > > > > > > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > > > > > > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > > > > > >
> > > > > > > This patch breaks my Xvisor test. When running OpenSBI and Xvisor like this:
> > > > > > >
> > > > > > > qemu-system-riscv64 -machine virt \
> > > > > > >     -m 1G -serial mon:stdio -serial null -nographic \
> > > > > > >     -append 'vmm.console=uart@10000000 vmm.bootcmd="vfs mount initrd
> > > > > > > /;vfs run /boot.xscript;vfs cat /system/banner.txt; guest kick guest0;
> > > > > > > vserial bind guest0/uart0"' \
> > > > > > >     -smp 4 -d guest_errors \
> > > > > > >     -bios none \
> > > > > > >     -device loader,file=./images/qemuriscv64/vmm.bin,addr=0x80200000 \
> > > > > > >     -kernel ./images/qemuriscv64/fw_jump.elf \
> > > > > > >     -initrd ./images/qemuriscv64/vmm-disk-linux.img -cpu rv64,h=true
> > > > > > >
> > > > > > > Running:
> > > > > > >
> > > > > > > Xvisor v0.3.0-129-gbc33f339 (Jan  1 1970 00:00:00)
> > > > > > >
> > > > > > > I see this failure:
> > > > > > >
> > > > > > > INIT: bootcmd:  guest kick guest0
> > > > > > >
> > > > > > > guest0: Kicked
> > > > > > >
> > > > > > > INIT: bootcmd:  vserial bind guest0/uart0
> > > > > > >
> > > > > > > [guest0/uart0] cpu_vcpu_stage2_map: guest_phys=0x000000003B9AC000
> > > > > > > size=0x4096 map failed
> > > > > > >
> > > > > > > do_error: CPU3: VCPU=guest0/vcpu0 page fault failed (error -1)
> > > > > > >
> > > > > > >        zero=0x0000000000000000          ra=0x0000000080001B4E
> > > > > > >
> > > > > > >          sp=0x000000008001CF80          gp=0x0000000000000000
> > > > > > >
> > > > > > >          tp=0x0000000000000000          s0=0x000000008001CFB0
> > > > > > >
> > > > > > >          s1=0x0000000000000000          a0=0x0000000010001048
> > > > > > >
> > > > > > >          a1=0x0000000000000000          a2=0x0000000000989680
> > > > > > >
> > > > > > >          a3=0x000000003B9ACA00          a4=0x0000000000000048
> > > > > > >
> > > > > > >          a5=0x0000000000000000          a6=0x0000000000019000
> > > > > > >
> > > > > > >          a7=0x0000000000000000          s2=0x0000000000000000
> > > > > > >
> > > > > > >          s3=0x0000000000000000          s4=0x0000000000000000
> > > > > > >
> > > > > > >          s5=0x0000000000000000          s6=0x0000000000000000
> > > > > > >
> > > > > > >          s7=0x0000000000000000          s8=0x0000000000000000
> > > > > > >
> > > > > > >          s9=0x0000000000000000         s10=0x0000000000000000
> > > > > > >
> > > > > > >         s11=0x0000000000000000          t0=0x0000000000004000
> > > > > > >
> > > > > > >          t1=0x0000000000000100          t2=0x0000000000000000
> > > > > > >
> > > > > > >          t3=0x0000000000000000          t4=0x0000000000000000
> > > > > > >
> > > > > > >          t5=0x0000000000000000          t6=0x0000000000000000
> > > > > > >
> > > > > > >        sepc=0x0000000080001918     sstatus=0x0000000200004120
> > > > > > >
> > > > > > >     hstatus=0x00000002002001C0     sp_exec=0x0000000010A64000
> > > > > > >
> > > > > > >      scause=0x0000000000000017       stval=0x000000003B9ACAF8
> > > > > > >
> > > > > > >       htval=0x000000000EE6B2BE      htinst=0x0000000000D03021
> > > > > > >
> > > > > > > I have tried updating to a newer Xvisor release, but with that I don't
> > > > > > > get any serial output.
> > > > > > >
> > > > > > > Can you help get the Xvisor tests back up and running?
> > > > > >
> > > > > > I tried the latest Xvisor-next (https://github.com/avpatel/xvisor-next)
> > > > > > with your QEMU riscv-to-apply.next branch and it works fine (both
> > > > > > with and without Sstc).
> > > > >
> > > > > Does it work with the latest release?
> > > >
> > > > Yes, the latest Xvisor-next repo works for QEMU v7.2.0-rc4 and
> > > > your riscv-to-apply.next branch (commit 51bb9de2d188)
> > >
> > > I can't get anything to work with this patch. I have dropped this and
> > > the patches after this.
> > >
> > > I'm building the latest Xvisor release with:
> > >
> > > export CROSS_COMPILE=riscv64-linux-gnu-
> > > ARCH=riscv make generic-64b-defconfig
> > > make
> > >
> > > and running it as above, yet nothing. What am I missing here?
> >
> > I tried multiple times with the latest Xvisor on different machines but
> > still can't reproduce the issue you are seeing.
>
> Odd
>
> >
> > We generally provide pre-built binaries with every Xvisor release
> > so I will share with you pre-built binaries of the upcoming Xvisor-0.3.2
> > release. Maybe that would help you?
>
> That would work. Let me know when the release happens and I can update
> my images.

Please download the Xvisor v0.3.2 pre-built binary tarball from:
https://xhypervisor.org/tarball/xvisor-0.3.2-bins.tar.xz

After untarring the above tarball, you can try one of the following commands:
$ qemu-system-riscv64 -M virt -m 512M -nographic -bios
opensbi/build/platform/generic/firmware/fw_jump.bin -kernel
xvisor-0.3.2-bins/riscv/rv64/xvisor/vmm.bin -initrd
xvisor-0.3.2-bins/riscv/rv64/guest/virt64/disk-linux-6.1.1-one_guest_virt64.ext2
-append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
cat /system/banner.txt\""
OR
$ qemu-system-riscv32 -M virt -m 512M -nographic -bios
opensbi/build/platform/generic/firmware/fw_jump.bin -kernel
xvisor-0.3.2-bins/riscv/rv32/xvisor/vmm.bin -initrd
xvisor-0.3.2-bins/riscv/rv32/guest/virt32/disk-linux-6.1.1-one_guest_virt32.ext2
-append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
cat /system/banner.txt\""

Regards,
Anup


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2023-01-03 16:13                 ` Anup Patel
@ 2023-01-16  5:20                   ` Anup Patel
  2023-01-19  2:05                     ` Alistair Francis
  0 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2023-01-16  5:20 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

Hi Alistair,

On Tue, Jan 3, 2023 at 9:43 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> Hi Alistair,
>
> On Wed, Dec 28, 2022 at 11:08 AM Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Fri, Dec 23, 2022 at 11:14 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > >
> > > On Thu, Dec 15, 2022 at 8:55 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > >
> > > > On Mon, Dec 12, 2022 at 9:12 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > >
> > > > > On Mon, Dec 12, 2022 at 11:23 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > > > >
> > > > > > On Thu, Dec 8, 2022 at 6:41 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > > > >
> > > > > > > On Thu, Dec 8, 2022 at 9:00 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > > > > > >
> > > > > > > > On Tue, Nov 8, 2022 at 11:07 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > > > > > >
> > > > > > > > > The htimedelta[h] CSR has impact on the VS timer comparison so we
> > > > > > > > > should call riscv_timer_write_timecmp() whenever htimedelta changes.
> > > > > > > > >
> > > > > > > > > Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
> > > > > > > > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > > > > > > > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > > > > > > >
> > > > > > > > This patch breaks my Xvisor test. When running OpenSBI and Xvisor like this:
> > > > > > > >
> > > > > > > > qemu-system-riscv64 -machine virt \
> > > > > > > >     -m 1G -serial mon:stdio -serial null -nographic \
> > > > > > > >     -append 'vmm.console=uart@10000000 vmm.bootcmd="vfs mount initrd
> > > > > > > > /;vfs run /boot.xscript;vfs cat /system/banner.txt; guest kick guest0;
> > > > > > > > vserial bind guest0/uart0"' \
> > > > > > > >     -smp 4 -d guest_errors \
> > > > > > > >     -bios none \
> > > > > > > >     -device loader,file=./images/qemuriscv64/vmm.bin,addr=0x80200000 \
> > > > > > > >     -kernel ./images/qemuriscv64/fw_jump.elf \
> > > > > > > >     -initrd ./images/qemuriscv64/vmm-disk-linux.img -cpu rv64,h=true
> > > > > > > >
> > > > > > > > Running:
> > > > > > > >
> > > > > > > > Xvisor v0.3.0-129-gbc33f339 (Jan  1 1970 00:00:00)
> > > > > > > >
> > > > > > > > I see this failure:
> > > > > > > >
> > > > > > > > INIT: bootcmd:  guest kick guest0
> > > > > > > >
> > > > > > > > guest0: Kicked
> > > > > > > >
> > > > > > > > INIT: bootcmd:  vserial bind guest0/uart0
> > > > > > > >
> > > > > > > > [guest0/uart0] cpu_vcpu_stage2_map: guest_phys=0x000000003B9AC000
> > > > > > > > size=0x4096 map failed
> > > > > > > >
> > > > > > > > do_error: CPU3: VCPU=guest0/vcpu0 page fault failed (error -1)
> > > > > > > >
> > > > > > > >        zero=0x0000000000000000          ra=0x0000000080001B4E
> > > > > > > >
> > > > > > > >          sp=0x000000008001CF80          gp=0x0000000000000000
> > > > > > > >
> > > > > > > >          tp=0x0000000000000000          s0=0x000000008001CFB0
> > > > > > > >
> > > > > > > >          s1=0x0000000000000000          a0=0x0000000010001048
> > > > > > > >
> > > > > > > >          a1=0x0000000000000000          a2=0x0000000000989680
> > > > > > > >
> > > > > > > >          a3=0x000000003B9ACA00          a4=0x0000000000000048
> > > > > > > >
> > > > > > > >          a5=0x0000000000000000          a6=0x0000000000019000
> > > > > > > >
> > > > > > > >          a7=0x0000000000000000          s2=0x0000000000000000
> > > > > > > >
> > > > > > > >          s3=0x0000000000000000          s4=0x0000000000000000
> > > > > > > >
> > > > > > > >          s5=0x0000000000000000          s6=0x0000000000000000
> > > > > > > >
> > > > > > > >          s7=0x0000000000000000          s8=0x0000000000000000
> > > > > > > >
> > > > > > > >          s9=0x0000000000000000         s10=0x0000000000000000
> > > > > > > >
> > > > > > > >         s11=0x0000000000000000          t0=0x0000000000004000
> > > > > > > >
> > > > > > > >          t1=0x0000000000000100          t2=0x0000000000000000
> > > > > > > >
> > > > > > > >          t3=0x0000000000000000          t4=0x0000000000000000
> > > > > > > >
> > > > > > > >          t5=0x0000000000000000          t6=0x0000000000000000
> > > > > > > >
> > > > > > > >        sepc=0x0000000080001918     sstatus=0x0000000200004120
> > > > > > > >
> > > > > > > >     hstatus=0x00000002002001C0     sp_exec=0x0000000010A64000
> > > > > > > >
> > > > > > > >      scause=0x0000000000000017       stval=0x000000003B9ACAF8
> > > > > > > >
> > > > > > > >       htval=0x000000000EE6B2BE      htinst=0x0000000000D03021
> > > > > > > >
> > > > > > > > I have tried updating to a newer Xvisor release, but with that I don't
> > > > > > > > get any serial output.
> > > > > > > >
> > > > > > > > Can you help get the Xvisor tests back up and running?
> > > > > > >
> > > > > > > I tried the latest Xvisor-next (https://github.com/avpatel/xvisor-next)
> > > > > > > with your QEMU riscv-to-apply.next branch and it works fine (both
> > > > > > > with and without Sstc).
> > > > > >
> > > > > > Does it work with the latest release?
> > > > >
> > > > > Yes, the latest Xvisor-next repo works for QEMU v7.2.0-rc4 and
> > > > > your riscv-to-apply.next branch (commit 51bb9de2d188)
> > > >
> > > > I can't get anything to work with this patch. I have dropped this and
> > > > the patches after this.
> > > >
> > > > I'm building the latest Xvisor release with:
> > > >
> > > > export CROSS_COMPILE=riscv64-linux-gnu-
> > > > ARCH=riscv make generic-64b-defconfig
> > > > make
> > > >
> > > > and running it as above, yet nothing. What am I missing here?
> > >
> > > I tried multiple times with the latest Xvisor on different machines but
> > > still can't reproduce the issue you are seeing.
> >
> > Odd
> >
> > >
> > > We generally provide pre-built binaries with every Xvisor release
> > > so I will share with you pre-built binaries of the upcoming Xvisor-0.3.2
> > > release. Maybe that would help you?
> >
> > That would work. Let me know when the release happens and I can update
> > my images.
>
> Please download the Xvisor v0.3.2 pre-built binary tarball from:
> https://xhypervisor.org/tarball/xvisor-0.3.2-bins.tar.xz
>
> After untarring the above tarball, you can try one of the following commands:
> $ qemu-system-riscv64 -M virt -m 512M -nographic -bios
> opensbi/build/platform/generic/firmware/fw_jump.bin -kernel
> xvisor-0.3.2-bins/riscv/rv64/xvisor/vmm.bin -initrd
> xvisor-0.3.2-bins/riscv/rv64/guest/virt64/disk-linux-6.1.1-one_guest_virt64.ext2
> -append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
> cat /system/banner.txt\""
> OR
> $ qemu-system-riscv32 -M virt -m 512M -nographic -bios
> opensbi/build/platform/generic/firmware/fw_jump.bin -kernel
> xvisor-0.3.2-bins/riscv/rv32/xvisor/vmm.bin -initrd
> xvisor-0.3.2-bins/riscv/rv32/guest/virt32/disk-linux-6.1.1-one_guest_virt32.ext2
> -append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
> cat /system/banner.txt\""

Do you want me to rebase and resend the patches which
are not merged?

Regards,
Anup


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes
  2023-01-16  5:20                   ` Anup Patel
@ 2023-01-19  2:05                     ` Alistair Francis
  0 siblings, 0 replies; 19+ messages in thread
From: Alistair Francis @ 2023-01-19  2:05 UTC (permalink / raw)
  To: Anup Patel
  Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar,
	Atish Patra, Richard Henderson, Anup Patel, qemu-riscv,
	qemu-devel

On Mon, Jan 16, 2023 at 3:20 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> Hi Alistair,
>
> On Tue, Jan 3, 2023 at 9:43 PM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > Hi Alistair,
> >
> > On Wed, Dec 28, 2022 at 11:08 AM Alistair Francis <alistair23@gmail.com> wrote:
> > >
> > > On Fri, Dec 23, 2022 at 11:14 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > >
> > > > On Thu, Dec 15, 2022 at 8:55 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > > >
> > > > > On Mon, Dec 12, 2022 at 9:12 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > > >
> > > > > > On Mon, Dec 12, 2022 at 11:23 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > > > > >
> > > > > > > On Thu, Dec 8, 2022 at 6:41 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > > > > >
> > > > > > > > On Thu, Dec 8, 2022 at 9:00 AM Alistair Francis <alistair23@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > On Tue, Nov 8, 2022 at 11:07 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > > > > > > > >
> > > > > > > > > > The htimedelta[h] CSR has impact on the VS timer comparison so we
> > > > > > > > > > should call riscv_timer_write_timecmp() whenever htimedelta changes.
> > > > > > > > > >
> > > > > > > > > > Fixes: 3ec0fe18a31f ("target/riscv: Add vstimecmp suppor")
> > > > > > > > > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > > > > > > > > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > > > > > > > >
> > > > > > > > > This patch breaks my Xvisor test. When running OpenSBI and Xvisor like this:
> > > > > > > > >
> > > > > > > > > qemu-system-riscv64 -machine virt \
> > > > > > > > >     -m 1G -serial mon:stdio -serial null -nographic \
> > > > > > > > >     -append 'vmm.console=uart@10000000 vmm.bootcmd="vfs mount initrd
> > > > > > > > > /;vfs run /boot.xscript;vfs cat /system/banner.txt; guest kick guest0;
> > > > > > > > > vserial bind guest0/uart0"' \
> > > > > > > > >     -smp 4 -d guest_errors \
> > > > > > > > >     -bios none \
> > > > > > > > >     -device loader,file=./images/qemuriscv64/vmm.bin,addr=0x80200000 \
> > > > > > > > >     -kernel ./images/qemuriscv64/fw_jump.elf \
> > > > > > > > >     -initrd ./images/qemuriscv64/vmm-disk-linux.img -cpu rv64,h=true
> > > > > > > > >
> > > > > > > > > Running:
> > > > > > > > >
> > > > > > > > > Xvisor v0.3.0-129-gbc33f339 (Jan  1 1970 00:00:00)
> > > > > > > > >
> > > > > > > > > I see this failure:
> > > > > > > > >
> > > > > > > > > INIT: bootcmd:  guest kick guest0
> > > > > > > > >
> > > > > > > > > guest0: Kicked
> > > > > > > > >
> > > > > > > > > INIT: bootcmd:  vserial bind guest0/uart0
> > > > > > > > >
> > > > > > > > > [guest0/uart0] cpu_vcpu_stage2_map: guest_phys=0x000000003B9AC000
> > > > > > > > > size=0x4096 map failed
> > > > > > > > >
> > > > > > > > > do_error: CPU3: VCPU=guest0/vcpu0 page fault failed (error -1)
> > > > > > > > >
> > > > > > > > >        zero=0x0000000000000000          ra=0x0000000080001B4E
> > > > > > > > >
> > > > > > > > >          sp=0x000000008001CF80          gp=0x0000000000000000
> > > > > > > > >
> > > > > > > > >          tp=0x0000000000000000          s0=0x000000008001CFB0
> > > > > > > > >
> > > > > > > > >          s1=0x0000000000000000          a0=0x0000000010001048
> > > > > > > > >
> > > > > > > > >          a1=0x0000000000000000          a2=0x0000000000989680
> > > > > > > > >
> > > > > > > > >          a3=0x000000003B9ACA00          a4=0x0000000000000048
> > > > > > > > >
> > > > > > > > >          a5=0x0000000000000000          a6=0x0000000000019000
> > > > > > > > >
> > > > > > > > >          a7=0x0000000000000000          s2=0x0000000000000000
> > > > > > > > >
> > > > > > > > >          s3=0x0000000000000000          s4=0x0000000000000000
> > > > > > > > >
> > > > > > > > >          s5=0x0000000000000000          s6=0x0000000000000000
> > > > > > > > >
> > > > > > > > >          s7=0x0000000000000000          s8=0x0000000000000000
> > > > > > > > >
> > > > > > > > >          s9=0x0000000000000000         s10=0x0000000000000000
> > > > > > > > >
> > > > > > > > >         s11=0x0000000000000000          t0=0x0000000000004000
> > > > > > > > >
> > > > > > > > >          t1=0x0000000000000100          t2=0x0000000000000000
> > > > > > > > >
> > > > > > > > >          t3=0x0000000000000000          t4=0x0000000000000000
> > > > > > > > >
> > > > > > > > >          t5=0x0000000000000000          t6=0x0000000000000000
> > > > > > > > >
> > > > > > > > >        sepc=0x0000000080001918     sstatus=0x0000000200004120
> > > > > > > > >
> > > > > > > > >     hstatus=0x00000002002001C0     sp_exec=0x0000000010A64000
> > > > > > > > >
> > > > > > > > >      scause=0x0000000000000017       stval=0x000000003B9ACAF8
> > > > > > > > >
> > > > > > > > >       htval=0x000000000EE6B2BE      htinst=0x0000000000D03021
> > > > > > > > >
> > > > > > > > > I have tried updating to a newer Xvisor release, but with that I don't
> > > > > > > > > get any serial output.
> > > > > > > > >
> > > > > > > > > Can you help get the Xvisor tests back up and running?
> > > > > > > >
> > > > > > > > I tried the latest Xvisor-next (https://github.com/avpatel/xvisor-next)
> > > > > > > > with your QEMU riscv-to-apply.next branch and it works fine (both
> > > > > > > > with and without Sstc).
> > > > > > >
> > > > > > > Does it work with the latest release?
> > > > > >
> > > > > > Yes, the latest Xvisor-next repo works for QEMU v7.2.0-rc4 and
> > > > > > your riscv-to-apply.next branch (commit 51bb9de2d188)
> > > > >
> > > > > I can't get anything to work with this patch. I have dropped this and
> > > > > the patches after this.
> > > > >
> > > > > I'm building the latest Xvisor release with:
> > > > >
> > > > > export CROSS_COMPILE=riscv64-linux-gnu-
> > > > > ARCH=riscv make generic-64b-defconfig
> > > > > make
> > > > >
> > > > > and running it as above, yet nothing. What am I missing here?
> > > >
> > > > I tried multiple times with the latest Xvisor on different machines but
> > > > still can't reproduce the issue you are seeing.
> > >
> > > Odd
> > >
> > > >
> > > > We generally provide pre-built binaries with every Xvisor release
> > > > so I will share with you pre-built binaries of the upcoming Xvisor-0.3.2
> > > > release. Maybe that would help you?
> > >
> > > That would work. Let me know when the release happens and I can update
> > > my images.
> >
> > Please download the Xvisor v0.3.2 pre-built binary tarball from:
> > https://xhypervisor.org/tarball/xvisor-0.3.2-bins.tar.xz
> >
> > After untarring the above tarball, you can try one of the following commands:
> > $ qemu-system-riscv64 -M virt -m 512M -nographic -bios
> > opensbi/build/platform/generic/firmware/fw_jump.bin -kernel
> > xvisor-0.3.2-bins/riscv/rv64/xvisor/vmm.bin -initrd
> > xvisor-0.3.2-bins/riscv/rv64/guest/virt64/disk-linux-6.1.1-one_guest_virt64.ext2
> > -append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
> > cat /system/banner.txt\""
> > OR
> > $ qemu-system-riscv32 -M virt -m 512M -nographic -bios
> > opensbi/build/platform/generic/firmware/fw_jump.bin -kernel
> > xvisor-0.3.2-bins/riscv/rv32/xvisor/vmm.bin -initrd
> > xvisor-0.3.2-bins/riscv/rv32/guest/virt32/disk-linux-6.1.1-one_guest_virt32.ext2
> > -append "vmm.bootcmd=\"vfs mount initrd /;vfs run /boot.xscript;vfs
> > cat /system/banner.txt\""

Great! I have updated my tests and they are passing on the current QEMU master.

>
> Do you want me to rebase and resend the patches which
> are not merged?

Yes please :)

Alistair

>
> Regards,
> Anup


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2023-01-19  2:07 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-08 12:56 [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
2022-11-08 12:56 ` [PATCH v2 1/5] target/riscv: Typo fix in sstc() predicate Anup Patel
2022-11-08 12:57 ` [PATCH v2 2/5] target/riscv: Update VS timer whenever htimedelta changes Anup Patel
2022-12-08  3:29   ` Alistair Francis
2022-12-08  8:41     ` Anup Patel
2022-12-12  5:53       ` Alistair Francis
2022-12-12 11:12         ` Anup Patel
2022-12-15  3:25           ` Alistair Francis
2022-12-23 13:14             ` Anup Patel
2022-12-28  5:38               ` Alistair Francis
2023-01-03 16:13                 ` Anup Patel
2023-01-16  5:20                   ` Anup Patel
2023-01-19  2:05                     ` Alistair Francis
2022-11-08 12:57 ` [PATCH v2 3/5] target/riscv: Don't clear mask in riscv_cpu_update_mip() for VSTIP Anup Patel
2022-11-08 12:57 ` [PATCH v2 4/5] target/riscv: No need to re-start QEMU timer when timecmp == UINT64_MAX Anup Patel
2022-11-08 12:57 ` [PATCH v2 5/5] target/riscv: Ensure opcode is saved for all relevant instructions Anup Patel
2022-11-21  7:23   ` Alistair Francis
2022-11-21  3:01 ` [PATCH v2 0/5] Nested virtualization fixes for QEMU Anup Patel
2022-11-22  1:03 ` Alistair Francis

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).