From: Blue Swirl
Date: Wed, 27 Jan 2010 18:01:03 +0000
Subject: [Qemu-devel] Re: sparc solaris guest, hsfs_putpage: dirty HSFS page
To: Artyom Tarasenko
Cc: qemu-devel

On Tue, Jan 26, 2010 at 10:42 PM, Artyom Tarasenko wrote:
> 2010/1/26 Blue Swirl:
>> On Tue, Jan 26, 2010 at 7:03 PM, Artyom Tarasenko wrote:
>>> 2010/1/24 Blue Swirl:
>>>> On Sun, Jan 24, 2010 at 2:02 AM, Artyom Tarasenko wrote:
>>>>> All solaris versions which currently boot (from cd) regularly produce buckets of
>>>>> "hsfs_putpage: dirty HSFS page" messages.
>>>>>
>>>>> High Sierra is a pretty old and stable stuff, so it is possible that
>>>>> the code is similar to OpenSolaris.
>>>>> I looked in debugger, and the function calls hierarchy looks pretty similar.
>>>>>
>>>>> Now in the OpenSolaris source code there is a nice comment:
>>>>> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/hsfs/hsfs_vnops.c#1758
>>>>> /*
>>>>>  * Normally pvn_getdirty() should return 0, which
>>>>>  * impies that it has done the job for us.
>>>>>  * The shouldn't-happen scenario is when it returns 1.
>>>>>  * This means that the page has been modified and
>>>>>  * needs to be put back.
>>>>>  * Since we can't write on a CD, we fake a failed
>>>>>  * I/O and force pvn_write_done() to destroy the page.
>>>>>  */
>>>>> if (pvn_getdirty(pp, flags) == 1) {
>>>>>         cmn_err(CE_NOTE,
>>>>>                         "hsfs_putpage: dirty HSFS page");
>>>>>
>>>>> Now the question: does the problem have to do with qemu caches (non-)emulation?
>>>>> Can it be that we mark non-dirty pages dirty? Or does qemu always mark
>>>>> pages dirty exactly to avoid cache emulation?
>>>>>
>>>>> Otherwise it means something else goes astray and Solaris guest really
>>>>> modifies the pages it shouldn't.
>>>>>
>>>>> Just wonder what to dig first, MMU or IRQ emulation (the two most
>>>>> obvious suspects).
>>>>
>>>> Maybe the stores via MMU bypass ASIs
>>>
>>> why bypass stores? What about the non-bypass ones?
>>
>> Because their use should update the PTE dirty bits.
>
> update != always set. Where is it implemented? I guess the code is
> shared between multiple architectures.
> Is there a way to trace at what point a certain page is getting dirty?
>
> Since it's not the bypass ASIs it must be something else.
target-sparc/helper.c:193 for the page table dirtiness (this is probably what Solaris can detect). There is another kind of dirtiness in exec.c; grep for the phys_ram_dirty uses. But that one should not be visible to the guest.
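
To make the page table case concrete: the dirtiness in question is the SPARC Reference MMU "modified" (M) bit, which the software table walk sets on store accesses and writes back into the PTE in guest memory. Here is a minimal, self-contained sketch of that update step. The bit positions follow the SRMMU PTE layout, but the function and variable names (update_pte_bits, pte, is_write) are illustrative, not the actual identifiers in target-sparc/helper.c:

#include <stdint.h>
#include <stdio.h>

/* SPARC Reference MMU PTE status bits. */
#define PG_ACCESSED_MASK (1u << 5)  /* "referenced" (R) bit */
#define PG_MODIFIED_MASK (1u << 6)  /* "modified" (M) bit   */

/* On a software table walk, the walker sets R on any access and M only
 * on stores, then writes the PTE back to guest memory.  That write-back
 * is what a guest like Solaris can later observe as a dirty page. */
static void update_pte_bits(uint32_t *pte, int is_write)
{
    uint32_t new_pte = *pte | PG_ACCESSED_MASK;

    if (is_write)
        new_pte |= PG_MODIFIED_MASK;
    if (new_pte != *pte)
        *pte = new_pte;   /* PTE write-back visible to the guest OS */
}

int main(void)
{
    uint32_t pte = 0x100;           /* some mapping, R/M still clear */

    update_pte_bits(&pte, 0);       /* load: sets R only  */
    update_pte_bits(&pte, 1);       /* store: also sets M */
    printf("R=%d M=%d\n",
           !!(pte & PG_ACCESSED_MASK), !!(pte & PG_MODIFIED_MASK));
    return 0;
}

A bypass ASI store goes straight to physical memory and skips this walk, so it would not set the M bit; only translated stores do.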
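
The exec.c dirtiness is a separate mechanism: a host-side byte array with one flag per target page, used internally (TB invalidation, display, migration) and never written into any guest-visible structure. A rough sketch of the idea, with illustrative sizes and names rather than the exact QEMU definitions:

#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12          /* 4 KB target pages (illustrative) */
#define RAM_PAGES        1024        /* toy RAM size: 4 MB               */

/* One flag byte per guest RAM page, kept entirely on the host side. */
static uint8_t phys_ram_dirty[RAM_PAGES];

static void mark_page_dirty(uint64_t ram_addr)
{
    phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS] = 0xff;
}

static int page_is_dirty(uint64_t ram_addr)
{
    return phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS] == 0xff;
}

int main(void)
{
    mark_page_dirty(0x2000);
    printf("page at 0x2000 dirty: %d\n", page_is_dirty(0x2000));
    printf("page at 0x3000 dirty: %d\n", page_is_dirty(0x3000));
    return 0;
}

Since none of this state ever reaches guest memory, it cannot be what hsfs_putpage is reacting to; the first place to trace is the PTE write-back above.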