public inbox for opensbi@lists.infradead.org
From: Bo Gan <ganboing@gmail.com>
To: opensbi@lists.infradead.org, dramforever@live.com,
	anup.patel@oss.qualcomm.com
Cc: anup@brainfault.org, cleger@rivosinc.com, samuel.holland@sifive.com
Subject: Re: [PATCH 6/7] lib: sbi: Rework load/store emulator instruction decoding
Date: Wed, 1 Apr 2026 17:02:01 -0700	[thread overview]
Message-ID: <d1557584-2f42-46e4-b2ae-07a902f3dca8@gmail.com> (raw)
In-Reply-To: <20260210094044.72591-7-ganboing@gmail.com>

@Anup, any comments on this patch? I can address it together with your
other comments. Thanks.

Bo

On 2/10/26 01:40, Bo Gan wrote:
> Overhaul instruction decoding to fix the following issues:
> 
> - We assumed the XLEN of the previous mode is the same as MXLEN.
>    However, RVC instructions decode differently in RV32 and RV64, so
>    that assumption is wrong.
> - We assumed a misaligned fault with a load/store offset of 0, i.e.,
>    base address == fault address. However, access faults can have a
>    non-zero offset (on HW supporting misaligned accesses), so the
>    platform-specific load/store fault handler got the wrong base
>    address.
> - Bits [63:32] of tinst were not checked in RV64, even though
>    Privileged ISA 19.6.3 explicitly requires rejecting tinst values
>    with non-zero high 32 bits.
> 
> Thus, fix all of the above. For a misaligned load/store fault, the
> address offset is always 0, so we drop the use of the base address and
> use the trap address instead (same as before), which lets the compiler
> optimize out the imm parsing and other calculations.
> 
> I also analyzed the behavior of the misaligned fault handler before
> the fix. When all of the following conditions are met, it can trigger
> data corruption:
> 
> - HW doesn't transform the instruction into tinst.
> - HW doesn't support misaligned load/store, and the OS doesn't enable
>    misaligned delegation, so the OpenSBI handler is in effect.
> - HW supports mixed XLEN, M mode is running RV64, and the trapping
>    mode (U/VS/VU) is running RV32.
> - The trapping instruction is c.f{l|s}w(sp).
> 
> Due to the incorrect insn decoding, the trapping instruction would
> mistakenly be decoded as c.{l|s}d(sp). With this fix, c.f{l|s}w(sp)
> in RV32 is now emulated correctly.
> 
> Validation:
> The patch was validated to fix the issue with test cases running on a
> modified version of QEMU that exposes misaligned faults [1], and a
> further modified version that also removes the tinst transformation
> [2]. The S-mode OS is a local build of the Debian Trixie 6.12 kernel
> with COMPAT (RV32) enabled, and the U-mode test application exercises
> all integer and floating-point load/store (RVIFD64/32+RVC64/32)
> instructions with all possible imm values. The patch was also tested
> on real HW (SiFive P550/ESWIN EIC7700), which only supports RV64. On
> P550, the same test was validated in both U mode and VU mode, where
> the host runs a 6.12 ESWIN vendor kernel with some ESWIN SoC device
> driver patches [3] applied, and the guest runs the exact same Debian
> Trixie 6.12 kernel mentioned above.
> 
> [1] https://github.com/ganboing/qemu/tree/ganboing-misalign
> [2] https://github.com/ganboing/qemu/tree/ganboing-misalign-no-tinst
> [3] https://github.com/sifiveinc/riscv-linux/tree/rel/kernel-6.12/hifive-premier-p550
> 
> Fixes: 7219477f7b40 ("lib: Use MTINST CSR in misaligned load/store emulation")
> Fixes: b5ae8e8a650d ("lib: Add misaligned load/store trap handling")
> Fixes: 4c112650bbb0 ("lib: sbi: abstract out insn decoding to unify mem fault handlers")
> Signed-off-by: Bo Gan <ganboing@gmail.com>
> ---
>   lib/sbi/sbi_trap_ldst.c | 427 +++++++++++++++++++++++++++-------------
>   1 file changed, 295 insertions(+), 132 deletions(-)
> 
> diff --git a/lib/sbi/sbi_trap_ldst.c b/lib/sbi/sbi_trap_ldst.c
> index 22c4d5a7..2371abca 100644
> --- a/lib/sbi/sbi_trap_ldst.c
> +++ b/lib/sbi/sbi_trap_ldst.c
> @@ -44,30 +44,34 @@ ulong sbi_misaligned_tinst_fixup(ulong orig_tinst, ulong new_tinst,
>   		return orig_tinst | (addr_offset << SH_RS1);
>   }
>   
> +static inline bool sbi_trap_tinst_valid(ulong tinst)
> +{
> +	/*
> +	 * Bit[0] == 1 implies trapped instruction value is
> +	 * transformed instruction or custom instruction.
> +	 * Also do proper checking per Privileged ISA 19.6.3,
> +	 * and make sure high 32 bits of tinst is 0
> +	 */
> +	return tinst == (uint32_t)tinst && (tinst & 0x1);
> +}
> +
>   static int sbi_trap_emulate_load(struct sbi_trap_context *tcntx,
>   				 sbi_trap_ld_emulator emu)
>   {
>   	const struct sbi_trap_info *orig_trap = &tcntx->trap;
>   	struct sbi_trap_regs *regs = &tcntx->regs;
> -	ulong insn, insn_len;
> +	ulong insn, insn_len, imm = 0, shift = 0, off = 0;
>   	union sbi_ldst_data val = { 0 };
>   	struct sbi_trap_info uptrap;
> -	int rc, fp = 0, shift = 0, len = 0;
> -	bool xform = false;
> -
> -	if (orig_trap->tinst & 0x1) {
> -		/*
> -		 * Bit[0] == 1 implies trapped instruction value is
> -		 * transformed instruction or custom instruction.
> -		 */
> +	bool xform = false, fp = false, c_load = false, c_ldsp = false;
> +	int rc, len = 0, prev_xlen = 0;
> +
> +	if (sbi_trap_tinst_valid(orig_trap->tinst)) {
>   		xform	 = true;
>   		insn	 = orig_trap->tinst | INSN_16BIT_MASK;
>   		insn_len = (orig_trap->tinst & 0x2) ? INSN_LEN(insn) : 2;
>   	} else {
> -		/*
> -		 * Bit[0] == 0 implies trapped instruction value is
> -		 * zero or special value.
> -		 */
> +		/* trapped instruction value is zero or special value */
>   		insn = sbi_get_insn(regs->mepc, &uptrap);
>   		if (uptrap.cause) {
>   			return sbi_trap_redirect(regs, &uptrap);
> @@ -75,92 +79,170 @@ static int sbi_trap_emulate_load(struct sbi_trap_context *tcntx,
>   		insn_len = INSN_LEN(insn);
>   	}
>   
> +	/**
> +	 * Common for RV32/RV64:
> +	 *    lb, lbu, lh, lhu, lw, flw, fld
> +	 *    c.lbu, c.lh, c.lhu, c.lw, c.lwsp, c.fld, c.fldsp
> +	 */
>   	if ((insn & INSN_MASK_LB) == INSN_MATCH_LB) {
> -		len   = 1;
> -		shift = 8 * (sizeof(ulong) - len);
> +		len = -1;
>   	} else if ((insn & INSN_MASK_LBU) == INSN_MATCH_LBU) {
>   		len = 1;
> -	} else if ((insn & INSN_MASK_LW) == INSN_MATCH_LW) {
> -		len   = 4;
> -		shift = 8 * (sizeof(ulong) - len);
> -#if __riscv_xlen == 64
> -	} else if ((insn & INSN_MASK_LD) == INSN_MATCH_LD) {
> -		len   = 8;
> -		shift = 8 * (sizeof(ulong) - len);
> -	} else if ((insn & INSN_MASK_LWU) == INSN_MATCH_LWU) {
> -		len = 4;
> -#endif
> -#ifdef __riscv_flen
> -	} else if ((insn & INSN_MASK_FLD) == INSN_MATCH_FLD) {
> -		fp  = 1;
> -		len = 8;
> -	} else if ((insn & INSN_MASK_FLW) == INSN_MATCH_FLW) {
> -		fp  = 1;
> -		len = 4;
> -#endif
> +	} else if ((insn & INSN_MASK_C_LBU) == INSN_MATCH_C_LBU) {
> +		/* Zcb */
> +		len = 1;
> +		imm = RVC_LB_IMM(insn);
> +		c_load = true;
>   	} else if ((insn & INSN_MASK_LH) == INSN_MATCH_LH) {
> -		len   = 2;
> -		shift = 8 * (sizeof(ulong) - len);
> +		len = -2;
> +	} else if ((insn & INSN_MASK_C_LH) == INSN_MATCH_C_LH) {
> +		/* Zcb */
> +		len = -2;
> +		imm = RVC_LH_IMM(insn);
> +		c_load = true;
>   	} else if ((insn & INSN_MASK_LHU) == INSN_MATCH_LHU) {
>   		len = 2;
> -#if __riscv_xlen >= 64
> -	} else if ((insn & INSN_MASK_C_LD) == INSN_MATCH_C_LD) {
> -		len   = 8;
> -		shift = 8 * (sizeof(ulong) - len);
> -		insn  = RVC_RS2S(insn) << SH_RD;
> -	} else if ((insn & INSN_MASK_C_LDSP) == INSN_MATCH_C_LDSP &&
> -		   ((insn >> SH_RD) & 0x1f)) {
> -		len   = 8;
> -		shift = 8 * (sizeof(ulong) - len);
> -#endif
> +	} else if ((insn & INSN_MASK_C_LHU) == INSN_MATCH_C_LHU) {
> +		/* Zcb */
> +		len = 2;
> +		imm = RVC_LH_IMM(insn);
> +		c_load = true;
> +	} else if ((insn & INSN_MASK_LW) == INSN_MATCH_LW) {
> +		len = -4;
>   	} else if ((insn & INSN_MASK_C_LW) == INSN_MATCH_C_LW) {
> -		len   = 4;
> -		shift = 8 * (sizeof(ulong) - len);
> -		insn  = RVC_RS2S(insn) << SH_RD;
> +		/* Zca */
> +		len = -4;
> +		imm = RVC_LW_IMM(insn);
> +		c_load = true;
>   	} else if ((insn & INSN_MASK_C_LWSP) == INSN_MATCH_C_LWSP &&
> -		   ((insn >> SH_RD) & 0x1f)) {
> -		len   = 4;
> -		shift = 8 * (sizeof(ulong) - len);
> +		GET_RD_NUM(insn)) {
> +		/* Zca */
> +		len = -4;
> +		imm = RVC_LWSP_IMM(insn);
> +		c_ldsp = true;
>   #ifdef __riscv_flen
> +	} else if ((insn & INSN_MASK_FLW) == INSN_MATCH_FLW) {
> +		len = 4;
> +		fp = true;
> +	} else if ((insn & INSN_MASK_FLD) == INSN_MATCH_FLD) {
> +		len = 8;
> +		fp = true;
>   	} else if ((insn & INSN_MASK_C_FLD) == INSN_MATCH_C_FLD) {
> -		fp   = 1;
> -		len  = 8;
> -		insn = RVC_RS2S(insn) << SH_RD;
> +		/* Zcd */
> +		len = 8;
> +		imm = RVC_LD_IMM(insn);
> +		c_load = true;
> +		fp = true;
>   	} else if ((insn & INSN_MASK_C_FLDSP) == INSN_MATCH_C_FLDSP) {
> -		fp  = 1;
> +		/* Zcd */
>   		len = 8;
> -#if __riscv_xlen == 32
> -	} else if ((insn & INSN_MASK_C_FLW) == INSN_MATCH_C_FLW) {
> -		fp   = 1;
> -		len  = 4;
> -		insn = RVC_RS2S(insn) << SH_RD;
> -	} else if ((insn & INSN_MASK_C_FLWSP) == INSN_MATCH_C_FLWSP) {
> -		fp  = 1;
> -		len = 4;
> +		imm = RVC_LDSP_IMM(insn);
> +		c_ldsp = true;
> +		fp = true;
>   #endif
> +	} else {
> +		prev_xlen = sbi_regs_prev_xlen(regs);
> +	}
> +
> +	/**
> +	 * Must distinguish between rv64 and rv32, RVC instructions have
> +	 * overlapping encoding:
> +	 *     c.ld in rv64 == c.flw in rv32
> +	 *     c.ldsp in rv64 == c.flwsp in rv32
> +	 */
> +	if (prev_xlen == 64) {
> +		/* RV64 Only: lwu, ld, c.ld, c.ldsp  */
> +		if ((insn & INSN_MASK_LWU) == INSN_MATCH_LWU) {
> +			len = 4;
> +		} else if ((insn & INSN_MASK_LD) == INSN_MATCH_LD) {
> +			len = 8;
> +		} else if ((insn & INSN_MASK_C_LD) == INSN_MATCH_C_LD) {
> +			/* Zca */
> +			len = 8;
> +			imm = RVC_LD_IMM(insn);
> +			c_load = true;
> +		} else if ((insn & INSN_MASK_C_LDSP) == INSN_MATCH_C_LDSP &&
> +			GET_RD_NUM(insn)) {
> +			/* Zca */
> +			len = 8;
> +			imm = RVC_LDSP_IMM(insn);
> +			c_ldsp = true;
> +		}
> +#ifdef __riscv_flen
> +	} else if (prev_xlen == 32) {
> +		/* RV32 Only: c.flw, c.flwsp */
> +		if ((insn & INSN_MASK_C_FLW) == INSN_MATCH_C_FLW) {
> +			/* Zcf */
> +			len = 4;
> +			imm = RVC_LW_IMM(insn);
> +			c_load = true;
> +			fp = true;
> +		} else if ((insn & INSN_MASK_C_FLWSP) == INSN_MATCH_C_FLWSP) {
> +			/* Zcf */
> +			len = 4;
> +			imm = RVC_LWSP_IMM(insn);
> +			c_ldsp = true;
> +			fp = true;
> +		}
>   #endif
> -	} else if ((insn & INSN_MASK_C_LHU) == INSN_MATCH_C_LHU) {
> -		len = 2;
> -		insn = RVC_RS2S(insn) << SH_RD;
> -	} else if ((insn & INSN_MASK_C_LH) == INSN_MATCH_C_LH) {
> -		len = 2;
> +	}
> +
> +	if (len < 0) {
> +		len = -len;
>   		shift = 8 * (sizeof(ulong) - len);
> -		insn = RVC_RS2S(insn) << SH_RD;
>   	}
>   
> -	rc = emu(xform ? 0 : insn, len, orig_trap->tval, &val, tcntx);
> +	if (!len || orig_trap->cause == CAUSE_MISALIGNED_LOAD)
> +		/* Unknown instruction or no need to calculate offset */
> +		goto do_emu;
> +
> +	if (xform)
> +		/* Transformed insn */
> +		off = GET_RS1_NUM(insn);
> +	else if (c_load)
> +		/* non SP-based compressed load */
> +		off = orig_trap->tval - GET_RS1S(insn, regs) - imm;
> +	else if (c_ldsp)
> +		/* SP-based compressed load */
> +		off = orig_trap->tval - REG_VAL(2, regs) - imm;
> +	else
> +		/* I-type non-compressed load */
> +		off = orig_trap->tval - GET_RS1(insn, regs) - (ulong)IMM_I(insn);
> +	/**
> +	 * Normalize offset, in case the XLEN of unpriv mode is smaller,
> +	 * and/or pointer masking is in effect
> +	 */
> +	off &= (len - 1);
> +
> +do_emu:
> +	rc = emu(xform ? 0 : insn, len, orig_trap->tval - off, &val, tcntx);
>   	if (rc <= 0)
>   		return rc;
> +	if (!len)
> +		goto epc_fixup;
> +
> +	if (!fp) {
> +		ulong v = ((long)(val.data_ulong << shift)) >> shift;
>   
> -	if (!fp)
> -		SET_RD(insn, regs, ((long)(val.data_ulong << shift)) >> shift);
> +		if (c_load)
> +			SET_RDS(insn, regs, v);
> +		else
> +			SET_RD(insn, regs, v);
>   #ifdef __riscv_flen
> -	else if (len == 8)
> -		SET_F64_RD(insn, regs, val.data_u64);
> -	else
> -		SET_F32_RD(insn, regs, val.data_ulong);
> +	} else if (len == 8) {
> +		if (c_load)
> +			SET_F64_RDS(insn, regs, val.data_u64);
> +		else
> +			SET_F64_RD(insn, regs, val.data_u64);
> +	} else {
> +		if (c_load)
> +			SET_F32_RDS(insn, regs, val.data_ulong);
> +		else
> +			SET_F32_RD(insn, regs, val.data_ulong);
>   #endif
> +	}
>   
> +epc_fixup:
>   	regs->mepc += insn_len;
>   
>   	return 0;
> @@ -171,25 +253,18 @@ static int sbi_trap_emulate_store(struct sbi_trap_context *tcntx,
>   {
>   	const struct sbi_trap_info *orig_trap = &tcntx->trap;
>   	struct sbi_trap_regs *regs = &tcntx->regs;
> -	ulong insn, insn_len;
> +	ulong insn, insn_len, imm = 0, off = 0;
>   	union sbi_ldst_data val;
>   	struct sbi_trap_info uptrap;
> -	int rc, len = 0;
> -	bool xform = false;
> -
> -	if (orig_trap->tinst & 0x1) {
> -		/*
> -		 * Bit[0] == 1 implies trapped instruction value is
> -		 * transformed instruction or custom instruction.
> -		 */
> +	bool xform = false, fp = false, c_store = false, c_stsp = false;
> +	int rc, len = 0, prev_xlen = 0;
> +
> +	if (sbi_trap_tinst_valid(orig_trap->tinst)) {
>   		xform	 = true;
>   		insn	 = orig_trap->tinst | INSN_16BIT_MASK;
>   		insn_len = (orig_trap->tinst & 0x2) ? INSN_LEN(insn) : 2;
>   	} else {
> -		/*
> -		 * Bit[0] == 0 implies trapped instruction value is
> -		 * zero or special value.
> -		 */
> +		/* trapped instruction value is zero or special value */
>   		insn = sbi_get_insn(regs->mepc, &uptrap);
>   		if (uptrap.cause) {
>   			return sbi_trap_redirect(regs, &uptrap);
> @@ -197,62 +272,150 @@ static int sbi_trap_emulate_store(struct sbi_trap_context *tcntx,
>   		insn_len = INSN_LEN(insn);
>   	}
>   
> -	val.data_ulong = GET_RS2(insn, regs);
> -
> +	/**
> +	 * Common for RV32/RV64:
> +	 *    sb, sh, sw, fsw, fsd
> +	 *    c.sb, c.sh, c.sw, c.swsp, c.fsd, c.fsdsp
> +	 */
>   	if ((insn & INSN_MASK_SB) == INSN_MATCH_SB) {
>   		len = 1;
> -	} else if ((insn & INSN_MASK_SW) == INSN_MATCH_SW) {
> -		len = 4;
> -#if __riscv_xlen == 64
> -	} else if ((insn & INSN_MASK_SD) == INSN_MATCH_SD) {
> -		len = 8;
> -#endif
> -#ifdef __riscv_flen
> -	} else if ((insn & INSN_MASK_FSD) == INSN_MATCH_FSD) {
> -		len	     = 8;
> -		val.data_u64 = GET_F64_RS2(insn, regs);
> -	} else if ((insn & INSN_MASK_FSW) == INSN_MATCH_FSW) {
> -		len	       = 4;
> -		val.data_ulong = GET_F32_RS2(insn, regs);
> -#endif
> +	} else if ((insn & INSN_MASK_C_SB) == INSN_MATCH_C_SB) {
> +		/* Zcb */
> +		len = 1;
> +		imm = RVC_SB_IMM(insn);
> +		c_store = true;
>   	} else if ((insn & INSN_MASK_SH) == INSN_MATCH_SH) {
>   		len = 2;
> -#if __riscv_xlen >= 64
> -	} else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) {
> -		len	       = 8;
> -		val.data_ulong = GET_RS2S(insn, regs);
> -	} else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP) {
> -		len	       = 8;
> -		val.data_ulong = GET_RS2C(insn, regs);
> -#endif
> +	} else if ((insn & INSN_MASK_C_SH) == INSN_MATCH_C_SH) {
> +		/* Zcb */
> +		len = 2;
> +		imm = RVC_SH_IMM(insn);
> +		c_store = true;
> +	} else if ((insn & INSN_MASK_SW) == INSN_MATCH_SW) {
> +		len = 4;
>   	} else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) {
> -		len	       = 4;
> -		val.data_ulong = GET_RS2S(insn, regs);
> +		/* Zca */
> +		len = 4;
> +		imm = RVC_SW_IMM(insn);
> +		c_store = true;
>   	} else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP) {
> -		len	       = 4;
> -		val.data_ulong = GET_RS2C(insn, regs);
> +		/* Zca */
> +		len = 4;
> +		imm = RVC_SWSP_IMM(insn);
> +		c_stsp = true;
>   #ifdef __riscv_flen
> +	} else if ((insn & INSN_MASK_FSW) == INSN_MATCH_FSW) {
> +		len = 4;
> +		fp = true;
> +	} else if ((insn & INSN_MASK_FSD) == INSN_MATCH_FSD) {
> +		len = 8;
> +		fp = true;
>   	} else if ((insn & INSN_MASK_C_FSD) == INSN_MATCH_C_FSD) {
> -		len	     = 8;
> -		val.data_u64 = GET_F64_RS2S(insn, regs);
> +		/* Zcd */
> +		len = 8;
> +		imm = RVC_SD_IMM(insn);
> +		c_store = true;
> +		fp = true;
>   	} else if ((insn & INSN_MASK_C_FSDSP) == INSN_MATCH_C_FSDSP) {
> -		len	     = 8;
> -		val.data_u64 = GET_F64_RS2C(insn, regs);
> -#if __riscv_xlen == 32
> -	} else if ((insn & INSN_MASK_C_FSW) == INSN_MATCH_C_FSW) {
> -		len	       = 4;
> -		val.data_ulong = GET_F32_RS2S(insn, regs);
> -	} else if ((insn & INSN_MASK_C_FSWSP) == INSN_MATCH_C_FSWSP) {
> -		len	       = 4;
> -		val.data_ulong = GET_F32_RS2C(insn, regs);
> +		/* Zcd */
> +		len = 8;
> +		imm = RVC_SDSP_IMM(insn);
> +		c_stsp = true;
> +		fp = true;
>   #endif
> +	} else {
> +		prev_xlen = sbi_regs_prev_xlen(regs);
> +	}
> +
> +	/**
> +	 * Must distinguish between rv64 and rv32, RVC instructions have
> +	 * overlapping encoding:
> +	 *     c.sd in rv64 == c.fsw in rv32
> +	 *     c.sdsp in rv64 == c.fswsp in rv32
> +	 */
> +	if (prev_xlen == 64) {
> +		/* RV64 Only: sd, c.sd, c.sdsp */
> +		if ((insn & INSN_MASK_SD) == INSN_MATCH_SD) {
> +			len = 8;
> +		} else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) {
> +			/* Zca */
> +			len = 8;
> +			imm = RVC_SD_IMM(insn);
> +			c_store = true;
> +		} else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP) {
> +			/* Zca */
> +			len = 8;
> +			imm = RVC_SDSP_IMM(insn);
> +			c_stsp = true;
> +		}
> +#ifdef __riscv_flen
> +	} else if (prev_xlen == 32) {
> +		/* RV32 Only: c.fsw, c.fswsp */
> +		if ((insn & INSN_MASK_C_FSW) == INSN_MATCH_C_FSW) {
> +			/* Zcf */
> +			len = 4;
> +			imm = RVC_SW_IMM(insn);
> +			c_store = true;
> +			fp = true;
> +		} else if ((insn & INSN_MASK_C_FSWSP) == INSN_MATCH_C_FSWSP) {
> +			/* Zcf */
> +			len = 4;
> +			imm = RVC_SWSP_IMM(insn);
> +			c_stsp = true;
> +			fp = true;
> +		}
>   #endif
> -	} else if ((insn & INSN_MASK_C_SH) == INSN_MATCH_C_SH) {
> -		len		= 2;
> -		val.data_ulong = GET_RS2S(insn, regs);
>   	}
>   
> -	rc = emu(xform ? 0 : insn, len, orig_trap->tval, val, tcntx);
> +	if (!fp) {
> +		if (c_store)
> +			val.data_ulong = GET_RS2S(insn, regs);
> +		else if (c_stsp)
> +			val.data_ulong = GET_RS2C(insn, regs);
> +		else
> +			val.data_ulong = GET_RS2(insn, regs);
> +#ifdef __riscv_flen
> +	} else if (len == 8) {
> +		if (c_store)
> +			val.data_u64 = GET_F64_RS2S(insn, regs);
> +		else if (c_stsp)
> +			val.data_u64 = GET_F64_RS2C(insn, regs);
> +		else
> +			val.data_u64 = GET_F64_RS2(insn, regs);
> +	} else {
> +		if (c_store)
> +			val.data_ulong = GET_F32_RS2S(insn, regs);
> +		else if (c_stsp)
> +			val.data_ulong = GET_F32_RS2C(insn, regs);
> +		else
> +			val.data_ulong = GET_F32_RS2(insn, regs);
> +#endif
> +	}
> +
> +	if (!len || orig_trap->cause == CAUSE_MISALIGNED_STORE)
> +		/* Unknown instruction or no need to calculate offset */
> +		goto do_emu;
> +
> +	if (xform)
> +		/* Transformed insn */
> +		off = GET_RS1_NUM(insn);
> +	else if (c_store)
> +		/* non SP-based compressed store */
> +		off = orig_trap->tval - GET_RS1S(insn, regs) - imm;
> +	else if (c_stsp)
> +		/* SP-based compressed store */
> +		off = orig_trap->tval - REG_VAL(2, regs) - imm;
> +	else
> +		/* S-type non-compressed store */
> +		off = orig_trap->tval - GET_RS1(insn, regs) - (ulong)IMM_S(insn);
> +	/**
> +	 * Normalize offset, in case the XLEN of unpriv mode is smaller,
> +	 * and/or pointer masking is in effect
> +	 */
> +	off &= (len - 1);
> +
> +do_emu:
> +	rc = emu(xform ? 0 : insn, len, orig_trap->tval - off, val, tcntx);
>   	if (rc <= 0)
>   		return rc;
>   




Thread overview: 18+ messages
2026-02-10  9:40 [PATCH 0/7] Fixes for load/store misaligned and access faults Bo Gan
2026-02-10  9:40 ` [PATCH 1/7] include: sbi: Add more mstatus and instruction encoding Bo Gan
2026-03-20  4:40   ` Anup Patel
2026-02-10  9:40 ` [PATCH 2/7] include: sbi: Add sbi_regs_prev_xlen Bo Gan
2026-03-20  4:42   ` Anup Patel
2026-02-10  9:40 ` [PATCH 3/7] include: sbi: Add GET_RDS_NUM/SET(_FP32/_FP64)_RDS macros Bo Gan
2026-03-20  4:44   ` Anup Patel
2026-02-10  9:40 ` [PATCH 4/7] include: sbi: set FS dirty in vsstatus when V=1 Bo Gan
2026-03-20  4:45   ` Anup Patel
2026-02-10  9:40 ` [PATCH 5/7] lib: sbi: Do not override emulator callback for vector load/store Bo Gan
2026-03-20  5:13   ` Anup Patel
2026-03-21  4:50     ` Bo Gan
2026-02-10  9:40 ` [PATCH 6/7] lib: sbi: Rework load/store emulator instruction decoding Bo Gan
2026-02-10 16:08   ` Andrew Jones
2026-02-11 10:36     ` Bo Gan
2026-02-11 15:01       ` Andrew Jones
2026-04-02  0:02   ` Bo Gan [this message]
2026-02-10  9:40 ` [PATCH 7/7] [NOT-FOR-UPSTREAM] Test program for misaligned load/store Bo Gan
