From: Blue Swirl
Date: Tue, 9 Nov 2010 17:57:28 +0000
Subject: Re: [Qemu-devel] Single 64bit memory transaction instead of two 32bit memory transaction.
To: adnan@khaleel.us
Cc: qemu-devel@nongnu.org

On Mon, Nov 8, 2010 at 11:27 PM, Adnan Khaleel wrote:
> In the file exec.c:
>
> The memory write/read functions are declared as arrays of four entries, where
> the index values 0, 1 and 2 correspond to the 8-, 16- and 32-bit write and read
> functions respectively:
>
> CPUWriteMemoryFunc *io_mem_write[IO_MEM_NB_ENTRIES][4];
> CPUReadMemoryFunc *io_mem_read[IO_MEM_NB_ENTRIES][4];
>
> Is there any reason why we can't extend this to include 64-bit writes and
> reads by increasing the array size?
> This is because 64-bit reads are currently
> handled as two separate 32-bit reads, e.g. in softmmu_template.h:
>
> static inline DATA_TYPE glue(io_read, SUFFIX)(target_phys_addr_t physaddr,
>                                               target_ulong addr,
>                                               void *retaddr)
> {
>     :
>     res = io_mem_read[index][2](io_mem_opaque[index], physaddr);
>     res |= (uint64_t)io_mem_read[index][2](io_mem_opaque[index], physaddr + 4) << 32;
>     :
>     return res;
> }
>
> I'm sure this is happening in other places as well. Is there a reason for
> this, or could we arbitrarily increase this (within limits, of course)?

Legacy. Patches have been submitted to add 64-bit I/O handlers. But there
have been other discussions about changing the whole I/O interface.