From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 7 Apr 2026 10:36:21 +0100
From: "Lorenzo Stoakes (Oracle)"
To: xu.xin16@zte.com.cn
Cc: hughd@google.com, akpm@linux-foundation.org, david@kernel.org,
	chengming.zhou@linux.dev, wang.yaxin@zte.com.cn,
	yang.yang29@zte.com.cn, michel@lespinasse.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 2/2] ksm: Optimize rmap_walk_ksm by passing a suitable address range
Message-ID:
References: <9950c6c1-f960-58c0-4312-e4f5ac122043@google.com>
 <20260407142141059pWDasxUAknP5rqvAMl28K@zte.com.cn>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20260407142141059pWDasxUAknP5rqvAMl28K@zte.com.cn>

On Tue, Apr 07, 2026 at 02:21:41PM +0800, xu.xin16@zte.com.cn wrote:
> > > From the current implementation of mremap, before it succeeds, it
> > > always calls prep_move_vma() -> madvise(MADV_UNMERGEABLE) ->
> > > break_ksm(), which splits KSM pages into regular anonymous pages.
> > > This appears to be based on a patch you introduced over a decade
> > > ago, 1ff829957316 ("ksm: prevent mremap move poisoning"). Given
> > > this, KSM pages should already be broken prior to the move, so
> > > they wouldn't remain as mergeable pages after mremap. Could there
> > > be a scenario where this breaking mechanism is bypassed, or am I
> > > missing a subtlety in the sequence of operations?
> >
> > I'd completely forgotten that patch by now! But it's dealing with a
> > different issue; and note how it's intentionally leaving
> > MADV_MERGEABLE on the vma itself, just using MADV_UNMERGEABLE (with
> > &dummy) as an interface to CoW the KSM pages at that time, letting
> > them be remerged after.
Hmm yeah, we mark them unmergeable but don't update the VMA flags
(since we use &dummy), so they can just be merged again later, right?
And then:

	void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
	{
		...
		const pgoff_t pgoff = rmap_item->address >> PAGE_SHIFT;
		...
		anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
					       pgoff, pgoff) {
			...
		}
		...
	}

would _assume_ that folio->pgoff == addr >> PAGE_SHIFT, which will no
longer be the case here?

And yeah, this all sucks (come to my LSF talk etc.). This does make me
realise I have to also radically change KSM (gulp) in that work too. So
maybe it's time for me to actually learn more about it...

> >
> > The sequence in my testcase was:
> >
> > 	boot with mem=1G
> > 	echo 1 >/sys/kernel/mm/ksm/run
> > 	base = mmap(NULL, 3*PAGE_SIZE, PROT_READ|PROT_WRITE,
> > 		    MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
> > 	madvise(base, 3*PAGE_SIZE, MADV_MERGEABLE);
> > 	madvise(base, 3*PAGE_SIZE, MADV_DONTFORK); /* in case system() used */
> > 	memset(base, 0x77, 2*PAGE_SIZE);
> > 	sleep(1); /* I think not required */
> > 	mremap(base + PAGE_SIZE, PAGE_SIZE, PAGE_SIZE,
> > 	       MREMAP_MAYMOVE|MREMAP_FIXED, base + 2*PAGE_SIZE);
> > 	base2 = mmap(NULL, 512K, PROT_READ|PROT_WRITE,
> > 		     MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
> > 	madvise(base2, 512K, MADV_DONTFORK); /* in case system() used */
> > 	memset(base2, 0x77, 512K);
> > 	print pages_shared pages_sharing /* 1 1 expected, 1 1 seen */
> > 	run something to mmap 1G anon, touch all, touch again, exit
> > 	print pages_shared pages_sharing /* 0 0 expected, 1 1 seen */
> > 	exit
> >
> > Those base2 lines were a late addition, to get the test without
> > mremap showing 0 0 instead of 1 1 at the end; just as I had to apply
> > that pte_mkold-without-folio_mark_accessed patch to the kernel's
> > mm/ksm.c.
> >
> > Originally I was checking the testcase's /proc/pid/smaps manually
> > before exit; then found printing pages_shared pages_sharing easier.
> >
> > Hugh
>
> Following the idea from your test case, I wrote a similar test
> program, using migration instead of swap to trigger reverse mapping.
> The results show that pages after mremap can still be successfully
> migrated.
>
> See my testcase:
> https://lore.kernel.org/all/20260407140805858ViqJKFhfmYSfq0FynsaEY@zte.com.cn/
>
> Therefore, I suspect that the reason your test program did not swap
> out the pages might lie elsewhere, rather than being caused by this
> optimization.
>
> Thanks.

Maybe the test programs are just not happening to hit the 'merge again'
case after the initial force-unmerging?

I may be missing things here; my bandwidth is now unfortunately
seriously hampered and likely to remain so for some time :'(

Cheers, Lorenzo