From: Ryan Roberts <ryan.roberts@arm.com>
Date: Thu, 3 Aug 2023 10:32:12 +0100
Subject: Re: [PATCH v4 2/5] mm: LARGE_ANON_FOLIO for improved performance
To: Yin Fengwei, Yu Zhao
Cc: Andrew Morton, Matthew Wilcox, David Hildenbrand, Catalin Marinas,
 Will Deacon, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan,
 Luis Chamberlain, Itaru Kitayama, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
In-Reply-To: <2d947a72-c295-e4c5-4176-4c59cc250e39@intel.com>
References: <20230726095146.2826796-1-ryan.roberts@arm.com>
 <20230726095146.2826796-3-ryan.roberts@arm.com>
 <8c0710e0-a75a-b315-dae1-dd93092e4bd6@arm.com>
 <4ae53b2a-e069-f579-428d-ac6f744cd19a@intel.com>
 <49142e18-fd4e-6487-113a-3112b1c17dbe@arm.com>
 <2d947a72-c295-e4c5-4176-4c59cc250e39@intel.com>
Content-Type: text/plain; charset=UTF-8
On 03/08/2023 09:37, Yin Fengwei wrote:
>
>
> On 8/3/23 16:21, Ryan Roberts wrote:
>> On 03/08/2023 09:05, Yin Fengwei wrote:
>>
>> ...
>>
>>>> I've captured run time and peak memory usage, and taken the mean.
>>>> The stdev for the peak memory usage is big-ish, but I'm confident this
>>>> still captures the central tendency well:
>>>>
>>>> | MAX_ORDER_UNHINTED | real-time | kern-time | user-time | peak memory |
>>>> |:-------------------|----------:|----------:|----------:|:------------|
>>>> | 4k                 |      0.0% |      0.0% |      0.0% | 0.0%        |
>>>> | 16k                |     -3.6% |    -26.5% |     -0.5% | -0.1%       |
>>>> | 32k                |     -4.8% |    -37.4% |     -0.6% | -0.1%       |
>>>> | 64k                |     -5.7% |    -42.0% |     -0.6% | -1.1%       |
>>>> | 128k               |     -5.6% |    -42.1% |     -0.7% | 1.4%        |
>>>> | 256k               |     -4.9% |    -41.9% |     -0.4% | 1.9%        |
>>>
>>> Here is my test result:
>>>
>>>             real    user     sys
>>> hink-4k:      0%      0%      0%
>>> hink-16K:    -3%    0.1%  -18.3%
>>> hink-32K:    -4%    0.2%  -27.2%
>>> hink-64K:    -4%    0.5%  -31.0%
>>> hink-128K:   -4%    0.9%  -33.7%
>>> hink-256K:   -5%      1%  -34.6%
>>>
>>> I used the command:
>>>
>>>   /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" make -skj96 allmodconfig all
>>>
>>> to build the kernel and collect the real time/user time/kernel time.
>>> /sys/kernel/mm/transparent_hugepage/enabled is set to "madvise".
>>> Let me know if you have any questions about the test.
>>
>> Thanks for doing this! I have a couple of questions:
>>
>> - how many times did you run each test?
> Three times for each ANON_FOLIO_MAX_ORDER_UNHINTED. The stddev is quite
> small, like less than 1%.

And out of interest, were you running on bare metal or in a VM? And did you
reboot between each run?

>>
>> - how did you configure the large page size? (I sent an email out yesterday
>> saying that I was doing it wrong in my tests, so the 128k and 256k results
>> for my test set are not valid.)
> I changed the ANON_FOLIO_MAX_ORDER_UNHINTED definition manually every time.

In that case, I think your results are broken in a similar way to mine.
This code means that order will never be higher than 3 (32K) on x86:

+	order = max(arch_wants_pte_order(), PAGE_ALLOC_COSTLY_ORDER);
+
+	if (!hugepage_vma_check(vma, vma->vm_flags, false, true, true))
+		order = min(order, ANON_FOLIO_MAX_ORDER_UNHINTED);

On x86, arch_wants_pte_order() is not implemented and the default returns -1,
so you end up with:

	order = min(PAGE_ALLOC_COSTLY_ORDER, ANON_FOLIO_MAX_ORDER_UNHINTED)

So your 4k, 16k and 32k results should be valid, but the 64k, 128k and 256k
results are actually using 32k, I think? Which is odd, because then you are
seeing more stddev than the < 1% you quoted above? So perhaps this is down to
rebooting (KASLR, or something...?)

(On arm64, arch_wants_pte_order() returns 4, so my 64k result is also valid.)

As a quick hack to work around this, would you be able to change the code to
this:

+	if (!hugepage_vma_check(vma, vma->vm_flags, false, true, true))
+		order = ANON_FOLIO_MAX_ORDER_UNHINTED;

>
>>
>> - what does "hink" mean??
> Sorry for the typo. It should be ANON_FOLIO_MAX_ORDER_UNHINTED.
>
>>
>>>
>>> I also find one strange behavior with this version. It's related to why
>>> I need to set /sys/kernel/mm/transparent_hugepage/enabled to "madvise".
>>> If it's "never", the large folio is disabled too.
>>> If it's "always", THP will be active before large folio, so the system
>>> is in a mixed mode; that's not suitable for this test.
>>
>> We had a discussion around this in the THP meeting yesterday. I'm going
>> to write this up properly so we can have a proper systematic discussion.
>> The tentative conclusion is that MADV_NOHUGEPAGE must continue to mean
>> "do not fault in more than is absolutely necessary". I would assume we
>> need to extend that thinking to the process-wide and system-wide knobs
>> (as is done in the patch), but we didn't explicitly say so in the
>> meeting.
> There are cases where THP is not appreciated because of the latency or
> memory consumption.
> For these cases, large folio may fill the gap, with less latency and
> memory consumption.
>
>
> So if disabling THP means large folio can't be used, we lose the chance
> to benefit those cases with large folio.

Yes, I appreciate that. But there are also real use cases that expect
MADV_NOHUGEPAGE to mean "do not fault more than is absolutely necessary",
and those use cases break if that's not obeyed (e.g. live migration with
qemu). So I think we need to be conservative to start. The apps that
explicitly forbid THP today should be updated in the long run to opt in
to large anon folios using some as-yet-undefined control.

>
>
> Regards
> Yin, Fengwei
>
>>
>> My intention is that if you have requested THP and your vma is big
>> enough for PMD-size then you get that, else you fall back to large anon
>> folios. And if you have neither opted in nor out, then you get large
>> anon folios.
>>
>> We talked about the idea of adding a new knob that lets you set the max
>> order, but that needs a lot more thought.
>>
>> Anyway, as I said, I'll write it up so we can all systematically
>> discuss.
>>
>>>
>>> So if it's "never", large folio is disabled. But why does "madvise"
>>> enable large folio unconditionally? Suppose it's only enabled for the
>>> VMA range which the user madvised large folio (or THP)?
>>>
>>> Specific to the hink setting, my understanding is that we can't choose
>>> it only by this testing. Other workloads may have different behavior
>>> with different hink settings.
>>>
>>>
>>> Regards
>>> Yin, Fengwei
>>>
>>