Date: Wed, 31 Jan 2024 15:02:56 +0000
X-Mailing-List: linux-s390@vger.kernel.org
Subject: Re: [PATCH v3 00/15] mm/memory: optimize fork() with PTE-mapped THP
To: David
Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Matthew Wilcox, Russell King,
 Catalin Marinas, Will Deacon, Dinh Nguyen, Michael Ellerman,
 Nicholas Piggin, Christophe Leroy, "Aneesh Kumar K.V", "Naveen N. Rao",
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Gordeev,
 Gerald Schaefer, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
 Sven Schnelle, "David S. Miller", linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, sparclinux@vger.kernel.org
References: <20240129124649.189745-1-david@redhat.com>
 <57eb82c7-4816-42a2-b5ab-cc221e289b21@arm.com>
 <714d0930-2202-48b6-9728-d248f820325e@arm.com>
 <30718fc8-15cf-41e4-922c-5cdbf00a0840@redhat.com>
 <74333154-a99b-4bad-81f4-bee02ba05e91@redhat.com>
From: Ryan Roberts
In-Reply-To: <74333154-a99b-4bad-81f4-bee02ba05e91@redhat.com>

On 31/01/2024 14:29, David Hildenbrand wrote:
>>> Note that regarding NUMA effects, I mean when some memory access within
>>> the same socket is faster/slower even with only a single node. On AMD
>>> EPYC that's possible, depending on which core you are running and on
>>> which memory controller the memory you want to access is located. If
>>> both are in different quadrants IIUC, the access latency will be
>>> different.
>>
>> I've configured the NUMA to only bring the RAM and CPUs for a single
>> socket online, so I shouldn't be seeing any of these effects. Anyway,
>> I've been using the Altra as a secondary because it's so much slower
>> than the M2. Let me move over to it and see if everything looks more
>> straightforward there.
>
> Better use a system where people will actually run Linux production
> workloads on, even if it is slower :)
>
> [...]
>
>>>>
>>>> I'll continue to mess around with it until the end of the day.
>>>> But if I'm not making any headway by then, I'll change tack; I'll
>>>> just measure the performance of my contpte changes using your
>>>> fork/zap stuff as the baseline and post based on that.
>>>
>>> You should likely not focus on M2 results. Just pick a representative
>>> bare metal machine where you get consistent, explainable results.
>>>
>>> Nothing in the code is fine-tuned for a particular architecture so
>>> far, only order-0 handling is kept separate.
>>>
>>> BTW: I see the exact same speedups for dontneed that I see for munmap.
>>> For example, for order-9, it goes from 0.023412s -> 0.009785s, so
>>> -58%. So I'm curious why you see a speedup for munmap but not for
>>> dontneed.
>>
>> Ugh... ok, coming up.
>
> Hopefully you were just staring at the wrong numbers (e.g., only with
> fork patches). Because both (munmap/pte-dontneed) are using the exact
> same code path.

Ahh... I'm doing pte-dontneed, which is the only option in your original
benchmark - it does MADV_DONTNEED one page at a time. It looks like your
new benchmark has an additional "dontneed" option that does it in one
shot. Which option are you running? Assuming the latter, I think that
explains it.