Date: Thu, 30 Aug 2018 09:12:13 +1000
From: Nicholas Piggin
To: Linus Torvalds
Cc: linux-mm, linux-arch, Linux Kernel Mailing List, ppc-dev, Andrew Morton
Subject: Re: [PATCH 2/3] mm/cow: optimise pte dirty/accessed bits handling in fork
Message-ID: <20180830091213.78b64354@roar.ozlabs.ibm.com>
References: <20180828112034.30875-1-npiggin@gmail.com> <20180828112034.30875-3-npiggin@gmail.com>
List-Id: Linux on PowerPC Developers Mail List

On Wed, 29 Aug 2018 08:42:09 -0700 Linus Torvalds wrote:

> On Tue, Aug 28, 2018 at 4:20 AM Nicholas Piggin wrote:
> >
> > fork clears dirty/accessed bits from new ptes in the child. This logic
> > has existed since mapped page reclaim was done by scanning ptes, when
> > it may have been quite important. Today, with physically based pte
> > scanning, there is less reason to clear these bits.
>
> Can you humor me, and make the dirty/accessed bit patches separate?

Yeah sure.

> There is actually a difference wrt the dirty bit: if we unmap an area
> with dirty pages, we have to do the special synchronous flush.
>
> So a clean page in the virtual mapping is _literally_ cheaper to have.

Oh yeah true, that blasted thing. Good point.

The dirty micro fault seems to be the big one on my Skylake: it takes
about 300 nanoseconds per access, while the accessed fault takes about
100. (I think; I have to go over my benchmark a bit more carefully and
re-test.)

Dirty faults will happen less often, though, particularly as most places
we do write to (stack, heap, etc.) will be write protected for COW
anyway, I think. The worst case might be a big shared shm segment like a
database buffer cache, but those kinds of forks should happen very
infrequently, I would hope.

Yes, maybe we can do that. I'll split them up and try to get some
numbers for them individually.

Thanks,
Nick
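
For context, the behaviour being discussed lives in the fork-time pte
copy path. Below is a minimal sketch of that pre-patch logic, loosely
modelled on copy_one_pte() in mm/memory.c of that era; it is simplified,
untested, and not the patch itself (the helper name copy_child_pte is
made up for illustration, and it relies on kernel-internal helpers such
as pte_mkclean()/pte_mkold()):

	#include <linux/mm.h>	/* pte helpers, VM_SHARED, struct mm_struct */

	/*
	 * Sketch of how fork prepares the child's pte: COW mappings are
	 * write-protected in both parent and child, shared mappings are
	 * marked clean in the child, and the accessed bit is cleared.
	 */
	static inline pte_t copy_child_pte(struct mm_struct *src_mm,
					   pte_t *src_pte, pte_t pte,
					   unsigned long vm_flags,
					   unsigned long addr)
	{
		/* Private (COW) mappings: write-protect parent and child. */
		if (is_cow_mapping(vm_flags) && pte_write(pte)) {
			ptep_set_wrprotect(src_mm, addr, src_pte);
			pte = pte_wrprotect(pte);
		}

		/* Shared mappings are marked clean in the child ... */
		if (vm_flags & VM_SHARED)
			pte = pte_mkclean(pte);
		/* ... and every child pte starts out "old" (accessed clear). */
		pte = pte_mkold(pte);

		return pte;
	}

The patch under discussion argues that clearing these bits is less
useful now, which is why the cost of the child taking dirty/accessed
micro faults after fork is the number being measured above.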