From: Barry Song
Date: Thu, 11 Jul 2024 20:20:11 +1200
Subject: Re: [PATCH v1 1/2] mm: Cleanup count_mthp_stat() definition
To: Ryan Roberts
Cc: Andrew Morton, Hugh Dickins, Jonathan Corbet, "Matthew Wilcox (Oracle)", David Hildenbrand, Lance Yang, Baolin Wang, linux-kernel@vger.kernel.org, linux-mm@kvack.org
In-Reply-To: <20240711072929.3590000-2-ryan.roberts@arm.com>
References: <20240711072929.3590000-1-ryan.roberts@arm.com> <20240711072929.3590000-2-ryan.roberts@arm.com>
On Thu, Jul 11, 2024 at 7:29 PM Ryan Roberts wrote:
>
> Let's move count_mthp_stat() so that it's always defined, even when THP
> is disabled. Previously, uses of the function in files such as shmem.c,
> which are compiled even when THP is disabled, required ugly THP
> ifdeferry. With this cleanup, we can remove those ifdefs and the
> function resolves to a nop when THP is disabled.
>
> I shortly plan to call count_mthp_stat() from more THP-invariant source
> files.
>
> Signed-off-by: Ryan Roberts

Acked-by: Barry Song

> ---
>  include/linux/huge_mm.h | 70 ++++++++++++++++++++---------------------
>  mm/memory.c             |  2 --
>  mm/shmem.c              |  6 ----
>  3 files changed, 35 insertions(+), 43 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index cff002be83eb..cb93b9009ce4 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -108,6 +108,41 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
>  #define HPAGE_PUD_MASK	(~(HPAGE_PUD_SIZE - 1))
>  #define HPAGE_PUD_SIZE	((1UL) << HPAGE_PUD_SHIFT)
>
> +enum mthp_stat_item {
> +	MTHP_STAT_ANON_FAULT_ALLOC,
> +	MTHP_STAT_ANON_FAULT_FALLBACK,
> +	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
> +	MTHP_STAT_SWPOUT,
> +	MTHP_STAT_SWPOUT_FALLBACK,
> +	MTHP_STAT_SHMEM_ALLOC,
> +	MTHP_STAT_SHMEM_FALLBACK,
> +	MTHP_STAT_SHMEM_FALLBACK_CHARGE,
> +	MTHP_STAT_SPLIT,
> +	MTHP_STAT_SPLIT_FAILED,
> +	MTHP_STAT_SPLIT_DEFERRED,
> +	__MTHP_STAT_COUNT
> +};
> +
> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SYSFS)
> +struct mthp_stat {
> +	unsigned long stats[ilog2(MAX_PTRS_PER_PTE) + 1][__MTHP_STAT_COUNT];
> +};
> +
> +DECLARE_PER_CPU(struct mthp_stat, mthp_stats);
> +
> +static inline void count_mthp_stat(int order, enum mthp_stat_item item)
> +{
> +	if (order <= 0 || order > PMD_ORDER)
> +		return;
> +
> +	this_cpu_inc(mthp_stats.stats[order][item]);
> +}
> +#else
> +static inline void count_mthp_stat(int order, enum mthp_stat_item item)
> +{
> +}
> +#endif
> +
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>
>  extern unsigned long transparent_hugepage_flags;
> @@ -263,41 +298,6 @@ struct thpsize {
>
>  #define to_thpsize(kobj) container_of(kobj, struct thpsize, kobj)
>
> -enum mthp_stat_item {
> -	MTHP_STAT_ANON_FAULT_ALLOC,
> -	MTHP_STAT_ANON_FAULT_FALLBACK,
> -	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
> -	MTHP_STAT_SWPOUT,
> -	MTHP_STAT_SWPOUT_FALLBACK,
> -	MTHP_STAT_SHMEM_ALLOC,
> -	MTHP_STAT_SHMEM_FALLBACK,
> -	MTHP_STAT_SHMEM_FALLBACK_CHARGE,
> -	MTHP_STAT_SPLIT,
> -	MTHP_STAT_SPLIT_FAILED,
> -	MTHP_STAT_SPLIT_DEFERRED,
> -	__MTHP_STAT_COUNT
> -};
> -
> -struct mthp_stat {
> -	unsigned long stats[ilog2(MAX_PTRS_PER_PTE) + 1][__MTHP_STAT_COUNT];
> -};
> -
> -#ifdef CONFIG_SYSFS
> -DECLARE_PER_CPU(struct mthp_stat, mthp_stats);
> -
> -static inline void count_mthp_stat(int order, enum mthp_stat_item item)
> -{
> -	if (order <= 0 || order > PMD_ORDER)
> -		return;
> -
> -	this_cpu_inc(mthp_stats.stats[order][item]);
> -}
> -#else
> -static inline void count_mthp_stat(int order, enum mthp_stat_item item)
> -{
> -}
> -#endif
> -
>  #define transparent_hugepage_use_zero_page()				\
>  	(transparent_hugepage_flags &					\
>  	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
> diff --git a/mm/memory.c b/mm/memory.c
> index 802d0d8a40f9..a50fdefb8f0b 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4597,9 +4597,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>
>  	folio_ref_add(folio, nr_pages - 1);
>  	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
> -#endif
>  	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
>  	folio_add_lru_vma(folio, vma);
>  setpte:
> diff --git a/mm/shmem.c b/mm/shmem.c
> index f24dfbd387ba..fce1343f44e6 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1776,9 +1776,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>
>  			if (pages == HPAGE_PMD_NR)
>  				count_vm_event(THP_FILE_FALLBACK);
> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  			count_mthp_stat(order, MTHP_STAT_SHMEM_FALLBACK);
> -#endif
>  			order = next_order(&suitable_orders, order);
>  		}
>  	} else {
> @@ -1803,10 +1801,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>  				count_vm_event(THP_FILE_FALLBACK);
>  				count_vm_event(THP_FILE_FALLBACK_CHARGE);
>  			}
> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  			count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK);
>  			count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK_CHARGE);
> -#endif
>  		}
>  		goto unlock;
>  	}
> @@ -2180,9 +2176,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
>  		if (!IS_ERR(folio)) {
>  			if (folio_test_pmd_mappable(folio))
>  				count_vm_event(THP_FILE_ALLOC);
> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  			count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_ALLOC);
> -#endif
>  			goto alloced;
>  		}
>  		if (PTR_ERR(folio) == -EEXIST)
> --
> 2.43.0
>