From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson, Ricky Zhou
Subject: [PULL 02/21] target/i386: Raise #GP on unaligned m128 accesses when required.
Date: Mon, 19 Sep 2022 19:34:30 +0200
Message-Id: <20220919173449.5864-3-pbonzini@redhat.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20220919173449.5864-1-pbonzini@redhat.com>
References: <20220919173449.5864-1-pbonzini@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

Many instructions which load/store 128-bit values are supposed to
raise #GP when the memory operand isn't 16-byte aligned. This includes:

- Instructions explicitly requiring memory alignment (Exceptions Type 1
  in the "AVX and SSE Instruction Exception Specification" section of
  the SDM)

- Legacy SSE instructions that load/store 128-bit values (Exceptions
  Types 2 and 4).

This change sets MO_ALIGN_16 on 128-bit memory accesses that require
16-byte alignment. It adds cpu_record_sigbus and cpu_do_unaligned_access
hooks that simulate a #GP exception in qemu-user and qemu-system,
respectively.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/217
Reviewed-by: Richard Henderson
Signed-off-by: Ricky Zhou
Message-Id: <20220830034816.57091-2-ricky@rzhou.org>
[Do not bother checking PREFIX_VEX, since AVX is not supported. - Paolo]
Signed-off-by: Paolo Bonzini
---
 target/i386/tcg/excp_helper.c        | 13 +++++++++
 target/i386/tcg/helper-tcg.h         | 28 +++++++++++-------
 target/i386/tcg/sysemu/excp_helper.c |  8 ++++++
 target/i386/tcg/tcg-cpu.c            |  2 ++
 target/i386/tcg/translate.c          | 43 ++++++++++++++++------------
 target/i386/tcg/user/excp_helper.c   |  7 +++++
 6 files changed, 72 insertions(+), 29 deletions(-)
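For reference, a minimal guest-side reproducer in the spirit of issue
217 (an editor's sketch, not part of the patch, so git-am will ignore
it here). It assumes an x86-64 Linux guest, where the kernel delivers
#GP(0) to user space as SIGSEGV; the buffer and handler names are
illustrative:

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void on_sigsegv(int sig)
    {
        puts("unaligned movaps raised #GP -> SIGSEGV, as expected");
        exit(0);
    }

    int main(void)
    {
        /* 32 aligned bytes, so buf + 1 is valid memory but misaligned. */
        static char buf[32] __attribute__((aligned(16)));

        signal(SIGSEGV, on_sigsegv);

        /* movups has no alignment requirement and must succeed... */
        asm volatile("movups (%0), %%xmm0" : : "r"(buf + 1) : "xmm0");

        /* ...while movaps requires a 16-byte-aligned operand and must fault. */
        asm volatile("movaps (%0), %%xmm0" : : "r"(buf + 1) : "xmm0");

        puts("BUG: unaligned movaps did not fault");
        return 1;
    }

Built with something like "gcc -O2 repro.c -o repro" and run under
"qemu-x86_64 ./repro", this should reach the BUG line before this
change and take the SIGSEGV path after it, matching hardware.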
diff --git a/target/i386/tcg/excp_helper.c b/target/i386/tcg/excp_helper.c
index c1ffa1c0ef..7c3c8dc7fe 100644
--- a/target/i386/tcg/excp_helper.c
+++ b/target/i386/tcg/excp_helper.c
@@ -140,3 +140,16 @@ G_NORETURN void raise_exception_ra(CPUX86State *env, int exception_index,
 {
     raise_interrupt2(env, exception_index, 0, 0, 0, retaddr);
 }
+
+G_NORETURN void handle_unaligned_access(CPUX86State *env, vaddr vaddr,
+                                        MMUAccessType access_type,
+                                        uintptr_t retaddr)
+{
+    /*
+     * Unaligned accesses are currently only triggered by SSE/AVX
+     * instructions that impose alignment requirements on memory
+     * operands. These instructions raise #GP(0) upon accessing an
+     * unaligned address.
+     */
+    raise_exception_ra(env, EXCP0D_GPF, retaddr);
+}
diff --git a/target/i386/tcg/helper-tcg.h b/target/i386/tcg/helper-tcg.h
index 34167e2e29..cd1723389a 100644
--- a/target/i386/tcg/helper-tcg.h
+++ b/target/i386/tcg/helper-tcg.h
@@ -42,17 +42,6 @@ void x86_cpu_do_interrupt(CPUState *cpu);
 bool x86_cpu_exec_interrupt(CPUState *cpu, int int_req);
 #endif
 
-/* helper.c */
-#ifdef CONFIG_USER_ONLY
-void x86_cpu_record_sigsegv(CPUState *cs, vaddr addr,
-                            MMUAccessType access_type,
-                            bool maperr, uintptr_t ra);
-#else
-bool x86_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
-                      MMUAccessType access_type, int mmu_idx,
-                      bool probe, uintptr_t retaddr);
-#endif
-
 void breakpoint_handler(CPUState *cs);
 
 /* n must be a constant to be efficient */
@@ -78,6 +67,23 @@ G_NORETURN void raise_exception_err_ra(CPUX86State *env, int exception_index,
                                        int error_code, uintptr_t retaddr);
 G_NORETURN void raise_interrupt(CPUX86State *nenv, int intno, int is_int,
                                 int error_code, int next_eip_addend);
+G_NORETURN void handle_unaligned_access(CPUX86State *env, vaddr vaddr,
+                                        MMUAccessType access_type,
+                                        uintptr_t retaddr);
+#ifdef CONFIG_USER_ONLY
+void x86_cpu_record_sigsegv(CPUState *cs, vaddr addr,
+                            MMUAccessType access_type,
+                            bool maperr, uintptr_t ra);
+void x86_cpu_record_sigbus(CPUState *cs, vaddr addr,
+                           MMUAccessType access_type, uintptr_t ra);
+#else
+bool x86_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
+                      MMUAccessType access_type, int mmu_idx,
+                      bool probe, uintptr_t retaddr);
+G_NORETURN void x86_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
+                                            MMUAccessType access_type,
+                                            int mmu_idx, uintptr_t retaddr);
+#endif
 
 /* cc_helper.c */
 extern const uint8_t parity_table[256];
diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c
index 48feba7e75..796dc2a1f3 100644
--- a/target/i386/tcg/sysemu/excp_helper.c
+++ b/target/i386/tcg/sysemu/excp_helper.c
@@ -439,3 +439,11 @@ bool x86_cpu_tlb_fill(CPUState *cs, vaddr addr, int size,
     }
     return true;
 }
+
+G_NORETURN void x86_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
+                                            MMUAccessType access_type,
+                                            int mmu_idx, uintptr_t retaddr)
+{
+    X86CPU *cpu = X86_CPU(cs);
+    handle_unaligned_access(&cpu->env, vaddr, access_type, retaddr);
+}
diff --git a/target/i386/tcg/tcg-cpu.c b/target/i386/tcg/tcg-cpu.c
index 6fdfdf9598..d3c2b8fb49 100644
--- a/target/i386/tcg/tcg-cpu.c
+++ b/target/i386/tcg/tcg-cpu.c
@@ -75,10 +75,12 @@ static const struct TCGCPUOps x86_tcg_ops = {
 #ifdef CONFIG_USER_ONLY
     .fake_user_interrupt = x86_cpu_do_interrupt,
     .record_sigsegv = x86_cpu_record_sigsegv,
+    .record_sigbus = x86_cpu_record_sigbus,
 #else
     .tlb_fill = x86_cpu_tlb_fill,
     .do_interrupt = x86_cpu_do_interrupt,
     .cpu_exec_interrupt = x86_cpu_exec_interrupt,
+    .do_unaligned_access = x86_cpu_do_unaligned_access,
     .debug_excp_handler = breakpoint_handler,
     .debug_check_breakpoint = x86_debug_check_breakpoint,
 #endif /* !CONFIG_USER_ONLY */
diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
index d6420df31d..8ec91d17af 100644
--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -2738,21 +2738,23 @@ static inline void gen_stq_env_A0(DisasContext *s, int offset)
     tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, s->mem_index, MO_LEUQ);
 }
 
-static inline void gen_ldo_env_A0(DisasContext *s, int offset)
+static inline void gen_ldo_env_A0(DisasContext *s, int offset, bool align)
 {
     int mem_index = s->mem_index;
-    tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, mem_index, MO_LEUQ);
+    tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, mem_index,
+                        MO_LEUQ | (align ? MO_ALIGN_16 : 0));
     tcg_gen_st_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(0)));
     tcg_gen_addi_tl(s->tmp0, s->A0, 8);
     tcg_gen_qemu_ld_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
     tcg_gen_st_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(1)));
 }
 
-static inline void gen_sto_env_A0(DisasContext *s, int offset)
+static inline void gen_sto_env_A0(DisasContext *s, int offset, bool align)
 {
     int mem_index = s->mem_index;
     tcg_gen_ld_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(0)));
-    tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, mem_index, MO_LEUQ);
+    tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, mem_index,
+                        MO_LEUQ | (align ? MO_ALIGN_16 : 0));
     tcg_gen_addi_tl(s->tmp0, s->A0, 8);
     tcg_gen_ld_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(1)));
     tcg_gen_qemu_st_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
@@ -3131,7 +3133,7 @@ static const struct SSEOpHelper_table6 sse_op_table6[256] = {
     [0x25] = UNARY_OP(pmovsxdq, SSE41, SSE_OPF_MMX),
     [0x28] = BINARY_OP(pmuldq, SSE41, SSE_OPF_MMX),
     [0x29] = BINARY_OP(pcmpeqq, SSE41, SSE_OPF_MMX),
-    [0x2a] = SPECIAL_OP(SSE41), /* movntqda */
+    [0x2a] = SPECIAL_OP(SSE41), /* movntdqa */
     [0x2b] = BINARY_OP(packusdw, SSE41, SSE_OPF_MMX),
     [0x30] = UNARY_OP(pmovzxbw, SSE41, SSE_OPF_MMX),
     [0x31] = UNARY_OP(pmovzxbd, SSE41, SSE_OPF_MMX),
@@ -3294,17 +3296,17 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
         break;
     case 0x1e7: /* movntdq */
     case 0x02b: /* movntps */
-    case 0x12b: /* movntps */
+    case 0x12b: /* movntpd */
         if (mod == 3)
             goto illegal_op;
         gen_lea_modrm(env, s, modrm);
-        gen_sto_env_A0(s, ZMM_OFFSET(reg));
+        gen_sto_env_A0(s, ZMM_OFFSET(reg), true);
         break;
     case 0x3f0: /* lddqu */
         if (mod == 3)
            goto illegal_op;
         gen_lea_modrm(env, s, modrm);
-        gen_ldo_env_A0(s, ZMM_OFFSET(reg));
+        gen_ldo_env_A0(s, ZMM_OFFSET(reg), false);
         break;
     case 0x22b: /* movntss */
     case 0x32b: /* movntsd */
@@ -3373,7 +3375,9 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
     case 0x26f: /* movdqu xmm, ea */
         if (mod != 3) {
             gen_lea_modrm(env, s, modrm);
-            gen_ldo_env_A0(s, ZMM_OFFSET(reg));
+            gen_ldo_env_A0(s, ZMM_OFFSET(reg),
+                           /* movaps, movapd, movdqa */
+                           b == 0x028 || b == 0x128 || b == 0x16f);
         } else {
             rm = (modrm & 7) | REX_B(s);
             gen_op_movo(s, ZMM_OFFSET(reg), ZMM_OFFSET(rm));
@@ -3432,7 +3436,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
     case 0x212: /* movsldup */
         if (mod != 3) {
             gen_lea_modrm(env, s, modrm);
-            gen_ldo_env_A0(s, ZMM_OFFSET(reg));
+            gen_ldo_env_A0(s, ZMM_OFFSET(reg), true);
         } else {
             rm = (modrm & 7) | REX_B(s);
             gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0)),
@@ -3474,7 +3478,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
     case 0x216: /* movshdup */
         if (mod != 3) {
             gen_lea_modrm(env, s, modrm);
-            gen_ldo_env_A0(s, ZMM_OFFSET(reg));
+            gen_ldo_env_A0(s, ZMM_OFFSET(reg), true);
         } else {
             rm = (modrm & 7) | REX_B(s);
             gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(1)),
@@ -3568,7 +3572,9 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
     case 0x27f: /* movdqu ea, xmm */
         if (mod != 3) {
             gen_lea_modrm(env, s, modrm);
-            gen_sto_env_A0(s, ZMM_OFFSET(reg));
+            gen_sto_env_A0(s, ZMM_OFFSET(reg),
+                           /* movaps, movapd, movdqa */
+                           b == 0x029 || b == 0x129 || b == 0x17f);
         } else {
             rm = (modrm & 7) | REX_B(s);
             gen_op_movo(s, ZMM_OFFSET(rm), ZMM_OFFSET(reg));
@@ -3724,7 +3730,8 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             if (mod != 3) {
                 gen_lea_modrm(env, s, modrm);
                 op2_offset = offsetof(CPUX86State,xmm_t0);
-                gen_ldo_env_A0(s, op2_offset);
+                /* FIXME: should be 64-bit access if b1 == 0. */
+                gen_ldo_env_A0(s, op2_offset, !!b1);
             } else {
                 rm = (modrm & 7) | REX_B(s);
                 op2_offset = ZMM_OFFSET(rm);
@@ -3913,11 +3920,11 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     tcg_gen_st16_tl(s->tmp0, cpu_env, op2_offset +
                                     offsetof(ZMMReg, ZMM_W(0)));
                     break;
-                case 0x2a: /* movntqda */
-                    gen_ldo_env_A0(s, op1_offset);
+                case 0x2a: /* movntdqa */
+                    gen_ldo_env_A0(s, op1_offset, true);
                     return;
                 default:
-                    gen_ldo_env_A0(s, op2_offset);
+                    gen_ldo_env_A0(s, op2_offset, true);
                 }
             }
             if (!op6->fn[b1].op1) {
@@ -4499,7 +4506,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             } else {
                 op2_offset = offsetof(CPUX86State, xmm_t0);
                 gen_lea_modrm(env, s, modrm);
-                gen_ldo_env_A0(s, op2_offset);
+                gen_ldo_env_A0(s, op2_offset, true);
             }
 
             val = x86_ldub_code(env, s);
@@ -4606,7 +4613,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 break;
             default:
                 /* 128 bit access */
-                gen_ldo_env_A0(s, op2_offset);
+                gen_ldo_env_A0(s, op2_offset, true);
                 break;
             }
         } else {
diff --git a/target/i386/tcg/user/excp_helper.c b/target/i386/tcg/user/excp_helper.c
index cd507e2a1b..b3bdb7831a 100644
--- a/target/i386/tcg/user/excp_helper.c
+++ b/target/i386/tcg/user/excp_helper.c
@@ -48,3 +48,10 @@ void x86_cpu_record_sigsegv(CPUState *cs, vaddr addr,
 
     cpu_loop_exit_restore(cs, ra);
 }
+
+void x86_cpu_record_sigbus(CPUState *cs, vaddr addr,
+                           MMUAccessType access_type, uintptr_t ra)
+{
+    X86CPU *cpu = X86_CPU(cs);
+    handle_unaligned_access(&cpu->env, addr, access_type, ra);
+}
-- 
2.37.2
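Appendix (editor's sketch, not part of the patch): for readers
unfamiliar with MemOp alignment flags, the check that MO_ALIGN_16
requests from the TCG memory core, in self-contained form. The
constant and helper below are illustrative, not QEMU's actual MemOp
encoding; the point is only the test that runs before the access and
the hand-off to the hooks wired up above.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative: MO_ALIGN_16 asks for log2(16) = 4 alignment bits. */
    enum { ALIGN_BITS_16 = 4 };

    /* Conceptually, the core performs this test before the access. */
    static bool is_aligned(uint64_t addr, unsigned align_bits)
    {
        return (addr & ((UINT64_C(1) << align_bits) - 1)) == 0;
    }

    /*
     * On failure, system mode calls the target's do_unaligned_access
     * hook and user mode calls record_sigbus; this patch routes both
     * to handle_unaligned_access(), which raises #GP(0) instead of
     * the generic bus-error behavior.
     */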