From: Blue Swirl
Date: Fri, 12 Feb 2010 21:47:54 +0200
Subject: Re: [Qemu-devel] [PATCH 3/6] Fix last page errors in page_set_flags and page_check_range.
To: Richard Henderson
Cc: qemu-devel@nongnu.org

On Fri, Feb 12, 2010 at 12:57 AM, Richard Henderson wrote:
> The addr < end comparison prevents the last page from being
> iterated; an iteration based on length avoids this problem.

Please make separate patches for unrelated changes. Now the essence of
the patch is very hard to see. Also pure formatting changes are not
very useful.
> ---
>  exec.c |   54 +++++++++++++++++++++++++++----------------------------
>  1 files changed, 27 insertions(+), 27 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index 766568b..ebbe6d0 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2222,35 +2222,31 @@ void page_dump(FILE *f)
>
>  int page_get_flags(target_ulong address)
>  {
> -    PageDesc *p;
> -
> -    p = page_find(address >> TARGET_PAGE_BITS);
> -    if (!p)
> -        return 0;
> -    return p->flags;
> +    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
> +    return p ? p->flags : 0;
>  }

For example this change does not make any difference but just adds
confusion.

> -/* modify the flags of a page and invalidate the code if
> -   necessary. The flag PAGE_WRITE_ORG is positioned automatically
> -   depending on PAGE_WRITE */
> +/* Modify the flags of a page and invalidate the code if necessary.
> +   The flag PAGE_WRITE_ORG is positioned automatically depending
> +   on PAGE_WRITE.  The mmap_lock should already be held.  */
>  void page_set_flags(target_ulong start, target_ulong end, int flags)
>  {
> -    PageDesc *p;
> -    target_ulong addr;
> +    target_ulong addr, len;
>
> -    /* mmap_lock should already be held.  */
>      start = start & TARGET_PAGE_MASK;
>      end = TARGET_PAGE_ALIGN(end);
> -    if (flags & PAGE_WRITE)
> +
> +    if (flags & PAGE_WRITE) {
>          flags |= PAGE_WRITE_ORG;
> -    for(addr = start; addr < end; addr += TARGET_PAGE_SIZE) {
> -        p = page_find_alloc(addr >> TARGET_PAGE_BITS);
> -        /* We may be called for host regions that are outside guest
> -           address space.  */

Why remove the comment, is it no longer true?

> -        if (!p)
> -            return;
> -        /* if the write protection is set, then we invalidate the code
> -           inside */
> +    }
> +
> +    for (addr = start, len = end - start;
> +         len != 0;
> +         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
> +        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
> +
> +        /* If the write protection bit is set, then we invalidate
> +           the code inside.  */
>          if (!(p->flags & PAGE_WRITE) &&
>              (flags & PAGE_WRITE) &&
>              p->first_tb) {
> @@ -2266,18 +2262,22 @@ int page_check_range(target_ulong start, target_ulong len, int flags)
>      target_ulong end;
>      target_ulong addr;
>
> -    if (start + len < start)
> -        /* we've wrapped around */
> +    if (start + len - 1 < start) {
> +        /* We've wrapped around.  */
>          return -1;
> +    }
>
> -    end = TARGET_PAGE_ALIGN(start+len); /* must do before we loose bits in the next step */
> +    /* END must be computed before we drop bits from START.  */
> +    end = TARGET_PAGE_ALIGN(start + len);
>      start = start & TARGET_PAGE_MASK;
>
> -    for(addr = start; addr < end; addr += TARGET_PAGE_SIZE) {
> +    for (addr = start, len = end - start;
> +         len != 0;
> +         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
>          p = page_find(addr >> TARGET_PAGE_BITS);
> -        if( !p )
> +        if (!p)
>              return -1;
> -        if( !(p->flags & PAGE_VALID) )
> +        if (!(p->flags & PAGE_VALID))
>              return -1;
>
>          if ((flags & PAGE_READ) && !(p->flags & PAGE_READ))
> --
> 1.6.6