Date: Tue, 20 Apr 2021 14:39:04 +0000
From: Dennis Zhou <dennis@kernel.org>
To: Pratik Sampat
Cc: Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 0/4] percpu: partial chunk depopulation
References: <20210419225047.3415425-1-dennis@kernel.org>
 <8a1fa15c-3373-6357-2a2e-4a2b8b3bfb06@linux.ibm.com>
In-Reply-To: <8a1fa15c-3373-6357-2a2e-4a2b8b3bfb06@linux.ibm.com>

On Tue, Apr 20, 2021 at 04:37:02PM +0530, Pratik Sampat wrote:
> 
> On 20/04/21 4:27 am, Dennis Zhou wrote:
> > On Mon, Apr 19, 2021 at 10:50:43PM +0000, Dennis Zhou wrote:
> > > Hello,
> > > 
> > > This series is a continuation of Roman's series in [1]. It aims to
> > > solve the problem of chunks holding onto free pages by adding a
> > > reclaim process to the percpu balance work item.
> > > 
> > > The main difference is that nr_empty_pop_pages is now managed at
> > > the time of isolation instead of being intermixed. This helps with
> > > deciding which chunks to free instead of having to interleave
> > > returning chunks to active duty.
> > > 
> > > The allocation priority is as follows:
> > >   1) appropriate chunk slot, increasing until a fit is found
> > >   2) sidelined chunks
> > >   3) full free chunks
> > > 
> > > The last slot, for to_depopulate chunks, is never used for
> > > allocations.
> > > 
> > > A big thanks to Roman for initiating the work and being available
> > > for iterating on these ideas.
> > > 
> > > This patchset contains the following 4 patches:
> > >   0001-percpu-factor-out-pcpu_check_block_hint.patch
> > >   0002-percpu-use-pcpu_free_slot-instead-of-pcpu_nr_slots-1.patch
> > >   0003-percpu-implement-partial-chunk-depopulation.patch
> > >   0004-percpu-use-reclaim-threshold-instead-of-running-for-.patch
> > > 
> > > 0001 and 0002 are cleanups. 0003 implements partial chunk
> > > depopulation, initially from Roman. 0004 adds a reclaim threshold
> > > so we do not need to schedule for every page freed.
> > > 
> > > This series is on top of percpu$for-5.14 67c2669d69fb.
> > > 
> > > diffstats below:
> > > 
> > > Dennis Zhou (2):
> > >   percpu: use pcpu_free_slot instead of pcpu_nr_slots - 1
> > >   percpu: use reclaim threshold instead of running for every page
> > > 
> > > Roman Gushchin (2):
> > >   percpu: factor out pcpu_check_block_hint()
> > >   percpu: implement partial chunk depopulation
> > > 
> > >  mm/percpu-internal.h |   5 +
> > >  mm/percpu-km.c       |   5 +
> > >  mm/percpu-stats.c    |  20 ++--
> > >  mm/percpu-vm.c       |  30 ++++++
> > >  mm/percpu.c          | 252 ++++++++++++++++++++++++++++++++++++++-----
> > >  5 files changed, 278 insertions(+), 34 deletions(-)
> > > 
> > > Thanks,
> > > Dennis
> > 
> > Hello Pratik,
> > 
> > Do you mind testing this series again on POWER9?
> > The base is available here:
> > https://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu.git/log/?h=for-5.14
> > 
> > Thanks,
> > Dennis
> 
> Hello Dennis, I have tested this patchset on POWER9.
> 
> I have tried variations of the percpu_test with top-level and nested
> cgroup creation, as the test with 1000:10 didn't show any benefits.

This is most likely because the 1 in every 11 still pins every page,
while 1 in 50 does not. Can you try the patch below on top? I think it
may show slightly better perf as well. If it doesn't, I'll just drop
it.

> 
> The following example shows more consistent benefits with the
> de-allocation strategy:
> Outer: 1000
> Inner: 50
> # ./percpu_test.sh
> Percpu:             6912 kB
> Percpu:           532736 kB
> Percpu:           278784 kB
> 
> I believe it could be a result of bulk freeing within
> "free_unref_page_commit", where pages are only freed if
> pcp->count >= pcp->high. As POWER has a larger page size, it would
> end up creating a smaller number of pages, but with the effects of
> fragmentation.

This is unrelated to per-cpu pages in slab/slub. Percpu is a separate
memory allocator.

> 
> Having said that, the patchset and its behavior does look good to me.

Thanks, can I throw the following on the appropriate patches? In the
future it's good to be explicit about this, because some prefer to
credit different emails.

Tested-by: Pratik Sampat

Thanks,
Dennis

The following may do a little better on power9:

---
From a1464c4d5900cca68fd95b935178d72bb74837d5 Mon Sep 17 00:00:00 2001
From: Dennis Zhou <dennis@kernel.org>
Date: Tue, 20 Apr 2021 14:25:20 +0000
Subject: [PATCH] percpu: convert free page float to bytes

The percpu memory allocator keeps around a minimum number of free pages
to ensure we can satisfy atomic allocations. However, we've always kept
this number in terms of pages. On certain architectures like arm and
powerpc, the default page size could be 64k instead of 4k. So, start
with a target number of free bytes and then convert to pages.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 mm/percpu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index ba13e683d022..287fe3091244 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -80,6 +80,7 @@
 #include
 #include
 #include
+#include <linux/sizes.h>
 #include
 #include
 #include
@@ -107,11 +108,12 @@
 /* chunks in slots below this are subject to being sidelined on failed alloc */
 #define PCPU_SLOT_FAIL_THRESHOLD	3
 
-#define PCPU_EMPTY_POP_PAGES_LOW	2
-#define PCPU_EMPTY_POP_PAGES_HIGH	4
+#define PCPU_EMPTY_POP_PAGES_LOW	(max_t(int, (SZ_8K) / PAGE_SIZE, 1))
+#define PCPU_EMPTY_POP_PAGES_HIGH	(max_t(int, (SZ_16K) / PAGE_SIZE, \
+					       PCPU_EMPTY_POP_PAGES_LOW + 1))
 
 /* only schedule reclaim if there are at least N empty pop pages sidelined */
-#define PCPU_EMPTY_POP_RECLAIM_THRESHOLD	4
+#define PCPU_EMPTY_POP_RECLAIM_THRESHOLD	(PCPU_EMPTY_POP_PAGES_HIGH)
 
 #ifdef CONFIG_SMP
 /* default addr <-> pcpu_ptr mapping, override in asm/percpu.h if necessary */
-- 
2.31.1.368.gbe11c130af-goog
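
P.S. To make the page-size arithmetic above concrete, here is a minimal
user-space sketch, not part of the patch, showing how the new macros
evaluate for 4k and 64k pages. SZ_8K, SZ_16K, and max_t are re-declared
locally so it builds outside the kernel; in-kernel they come from
linux/sizes.h and linux/minmax.h.

/*
 * User-space sketch of the byte-based empty-page watermarks. The
 * macro bodies mirror the patch; the surrounding scaffolding is
 * only here so the arithmetic can run as a standalone program.
 */
#include <stdio.h>

#define SZ_8K	0x00002000
#define SZ_16K	0x00004000
#define max_t(type, a, b) ((type)(a) > (type)(b) ? (type)(a) : (type)(b))

static void show(long page_size)
{
	/* same expressions as PCPU_EMPTY_POP_PAGES_LOW/HIGH */
	int low  = max_t(int, SZ_8K / page_size, 1);
	int high = max_t(int, SZ_16K / page_size, low + 1);

	printf("PAGE_SIZE=%5ldk: LOW=%d HIGH=%d\n",
	       page_size / 1024, low, high);
}

int main(void)
{
	show(4096);	/* x86 4k pages: LOW=2 HIGH=4, same as before */
	show(65536);	/* POWER9 64k pages: LOW=1 HIGH=2, was 2 and 4 */
	return 0;
}

On 4k pages this reproduces the old constants, so x86 behavior is
unchanged; on 64k pages the watermarks drop to 1 and 2 pages, and
PCPU_EMPTY_POP_RECLAIM_THRESHOLD scales down with
PCPU_EMPTY_POP_PAGES_HIGH instead of staying pinned at 4 pages (256k).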