From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 10 Sep 2006 18:46:21 +0200
From: Fabrice Bellard
Subject: Re: [Qemu-devel] ARM load/store multiple bug
Message-id: <4504415D.50905@bellard.org>
References: <200609100043.09455.paul@codesourcery.com>
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org

Note that QEMU supports specific unaligned access handling when using the
softmmu code. It is possible to implement the ARM-specific unaligned
accesses without slowing down the aligned case. See the MIPS case with
do_unaligned_access().

Regards,

Fabrice.
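Fabrice's point can be sketched in plain C. This is a hypothetical illustration, not QEMU's actual softmmu code: the function names and the byte-assembly slow path are invented here, and only the dispatch pattern (check the low address bits, pay for the slow path only when they are set) mirrors what do_unaligned_access() does on MIPS.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of the softmmu-style dispatch: the common
 * aligned case takes the fast path untouched, and only an address
 * with the low bits set falls through to a slow helper (the MIPS
 * target hooks this via do_unaligned_access(); names here are
 * made up for the demonstration). */

static int unaligned_faults; /* counts slow-path entries */

static uint32_t slow_unaligned_load(const uint8_t *mem, uint32_t addr)
{
    unaligned_faults++;
    /* An ARM core with alignment checking enabled would raise a
     * fault here instead; this sketch just assembles the word byte
     * by byte (little-endian, matching the fast path on an LE host). */
    return (uint32_t)mem[addr] | ((uint32_t)mem[addr + 1] << 8) |
           ((uint32_t)mem[addr + 2] << 16) | ((uint32_t)mem[addr + 3] << 24);
}

static uint32_t ldl_demo(const uint8_t *mem, uint32_t addr)
{
    if (addr & 3) /* only the rare unaligned case pays the cost */
        return slow_unaligned_load(mem, addr);
    uint32_t v;
    memcpy(&v, mem + addr, 4); /* fast aligned path, host order */
    return v;
}
```

The key property is that the aligned path gains nothing but a cheap test that the translator can often fold away, which is why this approach does not slow down the common case.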
Justin Fletcher wrote:
> On Sun, 10 Sep 2006, Paul Brook wrote:
>
>>> ---8<---
>>>         if (n != 1)
>>>             gen_op_addl_T1_im(-((n - 1) * 4));
>>>         }
>>>     }
>>>     j = 0;
>>>     /* Insert something like gen_op_bicl_T1_im(3); here */
>>>     for(i=0;i<16;i++) {
>>>         if (insn & (1 << i)) {
>>>             if (insn & (1 << 20)) {
>>> ---8<---
>>
>> This is not sufficient. It breaks base register writeback.
>
> Doh! Of course, yes.
>
>> I'll also note that the behavior is dependent on alignment traps being
>> disabled (and unaligned access on some cores), i.e. for Linux user mode
>> emulation the current behavior is acceptable.
>
> Fair enough; it fails badly on some of the code in the OS I'm running
> on it. With a bit of looking around I decided the easiest way to do the
> operation and maintain compatibility with the writeback was to
> introduce a pair of new operations for load and store, forced to
> aligned addresses. This is NOT, as you note, ideal, because it merely
> moves the problem of the alignment checks and should probably be done
> better - I don't know the code well enough, I'm afraid, to know the
> right or best way to do it.
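Paul's writeback objection is about the architectural address arithmetic: the value written back to the base register after an LDM/STM is base +/- 4 bytes per transferred register, computed from the original (possibly unaligned) base, so masking the working address in place corrupts it. A minimal sketch of that rule (the helper name is invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Writeback value for an ARM load/store multiple: increment forms
 * (LDMIA/LDMIB and the store equivalents) add 4 bytes per register
 * in the list, decrement forms subtract them. Note the low bits of
 * an unaligned base survive into the writeback value - masking the
 * working address with bic must not clobber the base itself. */
static uint32_t ldm_writeback(uint32_t base, int nregs, int increment)
{
    return increment ? base + 4u * (uint32_t)nregs
                     : base - 4u * (uint32_t)nregs;
}
```

This is why a bic on T1 inside the transfer loop is wrong on its own: the masked value would later be written back, silently clearing the base register's low bits.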
>
> My solution is to add new operations to target-arm/op_mem.h:
>
> ----8<----
> /* Load from aligned address T1 into T0 (used for LDM) */
> #define MEM_LDA_OP(name)                                \
> void OPPROTO glue(op_lda##name,MEMSUFFIX)(void)         \
> {                                                       \
>     /* JRF: Note: should raise an alignment fault if    \
>        alignment checking is in use and b0 or b1 is     \
>        set */                                           \
>     T0 = glue(ld##name,MEMSUFFIX)(T1 & ~3);             \
>     FORCE_RET();                                        \
> }
> MEM_LDA_OP(l)
>
> #undef MEM_LDA_OP
>
> /* Store T0 to aligned address T1 (used for STM) */
> #define MEM_STA_OP(name)                                \
> void OPPROTO glue(op_sta##name,MEMSUFFIX)(void)         \
> {                                                       \
>     /* JRF: Note: should raise an alignment fault if    \
>        alignment checking is in use and b0 or b1 is     \
>        set */                                           \
>     glue(st##name,MEMSUFFIX)(T1 & ~3, T0);              \
>     FORCE_RET();                                        \
> }
> MEM_STA_OP(l)
>
> #undef MEM_STA_OP
> ----8<----
>
> And to replace the load and store operations in the LDM/STM
> implementation in target-arm/translate.c to use these, e.g.:
>
>     if (insn & (1 << 20)) {
>         /* load */
>         gen_ldst(ldal, s); /* JRF: was ldl */
>         if (i == 15)
>
> and:
>
>         gen_movl_T0_reg(s, i);
>     }
>     gen_ldst(stal, s); /* JRF: was stl */
> }
> j++;
>
> I realise that these are not good for the generic use, but they solve
> my problem, and they may solve other people's if they happen to be
> using a similarly constructed OS.
>
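The address handling in those op_lda/op_sta macros can be isolated in a few lines of plain C. This is an illustrative sketch, not QEMU code: it shows the `& ~3` masking that forces each LDM/STM transfer to the word-aligned location (matching cores that ignore bits [1:0] when alignment checking is off), and the check an alignment-checking implementation would use to raise the fault the JRF comments mention.

```c
#include <assert.h>
#include <stdint.h>

/* Force a word-aligned transfer address, as op_ldal/op_stal do with
 * T1 & ~3: bits [1:0] of the base are simply ignored. */
static uint32_t force_word_align(uint32_t addr)
{
    return addr & ~3u;
}

/* With alignment checking enabled, a word access whose low two bits
 * are nonzero should instead raise an alignment fault. */
static int would_alignment_fault(uint32_t addr)
{
    return (addr & 3u) != 0;
}
```

The design trade-off Justin concedes is visible here: masking unconditionally gives the alignment-off behaviour to every core, so the fault path is never taken even when it architecturally should be; Fabrice's suggestion is to recover it via the softmmu unaligned-access hook instead.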