From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bang Li <libang.linux@gmail.com>
Date: Thu, 25 Apr 2024 01:58:53 +0800
Subject: Re: [PATCH 1/2] mm: add per-order mTHP split counters
To: Lance Yang
Cc: akpm@linux-foundation.org, 21cnbao@gmail.com, ryan.roberts@arm.com,
 david@redhat.com, baolin.wang@linux.alibaba.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Bang Li
In-Reply-To: <1192295a-5b94-4c1a-b11c-7cd8ef0e62b7@gmail.com>
References: <20240424135148.30422-1-ioworker0@gmail.com>
 <20240424135148.30422-2-ioworker0@gmail.com>
 <1192295a-5b94-4c1a-b11c-7cd8ef0e62b7@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Hey, sorry for the noise; something went wrong with the formatting of my
last email.

On 2024/4/25 1:12, Bang Li wrote:
> Hey Lance,
>
> On 2024/4/24 21:51, Lance Yang wrote:
>
>> At present, the split counters in THP statistics no longer include
>> PTE-mapped mTHP. Therefore, this commit introduces per-order mTHP split
>> counters to monitor the frequency of mTHP splits. This will assist
>> developers in better analyzing and optimizing system performance.
>>
>> /sys/kernel/mm/transparent_hugepage/hugepages-<size>/stats
>>          split_page
>>          split_page_failed
>>          deferred_split_page
>>
>> Signed-off-by: Lance Yang
>> ---
>>   include/linux/huge_mm.h |  3 +++
>>   mm/huge_memory.c        | 14 ++++++++++++--
>>   2 files changed, 15 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index 56c7ea73090b..7b9c6590e1f7 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -272,6 +272,9 @@ enum mthp_stat_item {
>>       MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
>>       MTHP_STAT_ANON_SWPOUT,
>>       MTHP_STAT_ANON_SWPOUT_FALLBACK,
>> +    MTHP_STAT_SPLIT_PAGE,
>> +    MTHP_STAT_SPLIT_PAGE_FAILED,
>> +    MTHP_STAT_DEFERRED_SPLIT_PAGE,
>>       __MTHP_STAT_COUNT
>>   };
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 055df5aac7c3..52db888e47a6 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -557,6 +557,9 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
>>   DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>>   DEFINE_MTHP_STAT_ATTR(anon_swpout, MTHP_STAT_ANON_SWPOUT);
>>   DEFINE_MTHP_STAT_ATTR(anon_swpout_fallback, MTHP_STAT_ANON_SWPOUT_FALLBACK);
>> +DEFINE_MTHP_STAT_ATTR(split_page, MTHP_STAT_SPLIT_PAGE);
>> +DEFINE_MTHP_STAT_ATTR(split_page_failed, MTHP_STAT_SPLIT_PAGE_FAILED);
>> +DEFINE_MTHP_STAT_ATTR(deferred_split_page, MTHP_STAT_DEFERRED_SPLIT_PAGE);
>>
>>   static struct attribute *stats_attrs[] = {
>>       &anon_fault_alloc_attr.attr,
>> @@ -564,6 +567,9 @@ static struct attribute *stats_attrs[] = {
>>       &anon_fault_fallback_charge_attr.attr,
>>       &anon_swpout_attr.attr,
>>       &anon_swpout_fallback_attr.attr,
>> +    &split_page_attr.attr,
>> +    &split_page_failed_attr.attr,
>> +    &deferred_split_page_attr.attr,
>>       NULL,
>>   };
>>
>> @@ -3083,7 +3089,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>       XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order);
>>       struct anon_vma *anon_vma = NULL;
>>       struct address_space *mapping = NULL;
>> -    bool is_thp = folio_test_pmd_mappable(folio);
>> +    int order = folio_order(folio);
>>       int extra_pins, ret;
>>       pgoff_t end;
>>       bool is_hzp;
>> @@ -3262,8 +3268,10 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>           i_mmap_unlock_read(mapping);
>>   out:
>>       xas_destroy(&xas);
>> -    if (is_thp)
>> +    if (order >= HPAGE_PMD_ORDER)
>>           count_vm_event(!ret ? THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED);
>> +    count_mthp_stat(order, !ret ? MTHP_STAT_SPLIT_PAGE :
>> +                      MTHP_STAT_SPLIT_PAGE_FAILED);
>>       return ret;
>>   }
>>
>> @@ -3327,6 +3335,8 @@ void deferred_split_folio(struct folio *folio)
>>       if (list_empty(&folio->_deferred_list)) {
>>           if (folio_test_pmd_mappable(folio))
>>               count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>> +        count_mthp_stat(folio_order(folio),
>> +                MTHP_STAT_DEFERRED_SPLIT_PAGE);
>>           list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
>>           ds_queue->split_queue_len++;
>>   #ifdef CONFIG_MEMCG
>
> My opinion can be ignored :). Would it be better to modify the
> deferred_split_folio
> function as follows? I'm not sure.
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c index
> 055df5aac7c3..e8562e8630b1 100644 --- a/mm/huge_memory.c +++
> b/mm/huge_memory.c @@ -3299,12 +3299,13 @@ void
> deferred_split_folio(struct folio *folio) struct mem_cgroup *memcg =
> folio_memcg(folio); #endif unsigned long flags; + int order =
> folio_order(folio); /* * Order 1 folios have no space for a deferred
> list, but we also * won't waste much memory by not adding them to the
> deferred list. */ - if (folio_order(folio) <= 1) + if (order <= 1)
> return; /* @@ -3325,8 +3326,9 @@ void deferred_split_folio(struct
> folio *folio) spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> if (list_empty(&folio->_deferred_list)) { - if
> (folio_test_pmd_mappable(folio)) + if (order >= HPAGE_PMD_ORDER)
> count_vm_event(THP_DEFERRED_SPLIT_PAGE); + count_mthp_stat(order,
> MTHP_STAT_DEFERRED_SPLIT_PAGE); list_add_tail(&folio->_deferred_list,
> &ds_queue->split_queue); ds_queue->split_queue_len++; #ifdef
> CONFIG_MEMCG thanks,
> bang
>

My opinion can be ignored :). Would it be better to modify the
deferred_split_folio function as follows? I'm not sure.

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 055df5aac7c3..e8562e8630b1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3299,12 +3299,13 @@ void deferred_split_folio(struct folio *folio)
        struct mem_cgroup *memcg = folio_memcg(folio);
 #endif
        unsigned long flags;
+       int order = folio_order(folio);

        /*
         * Order 1 folios have no space for a deferred list, but we also
         * won't waste much memory by not adding them to the deferred list.
         */
-       if (folio_order(folio) <= 1)
+       if (order <= 1)
                return;

        /*
@@ -3325,8 +3326,9 @@ void deferred_split_folio(struct folio *folio)
        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
        if (list_empty(&folio->_deferred_list)) {
-               if (folio_test_pmd_mappable(folio))
+               if (order >= HPAGE_PMD_ORDER)
                        count_vm_event(THP_DEFERRED_SPLIT_PAGE);
+               count_mthp_stat(order, MTHP_STAT_DEFERRED_SPLIT_PAGE);
                list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
                ds_queue->split_queue_len++;
 #ifdef CONFIG_MEMCG

thanks,
bang
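
For anyone who wants to eyeball the new counters from userspace, below is a
minimal sketch that walks /sys/kernel/mm/transparent_hugepage/hugepages-*/stats
and sums the proposed split_page counter across all mTHP sizes. It assumes the
sysfs layout described in the commit message above and is illustrative only,
not part of the patch; the same loop works for split_page_failed and
deferred_split_page.

/*
 * Illustrative only, not part of the patch: sum the proposed per-order
 * "split_page" counters, assuming the sysfs layout described in the
 * commit message above.
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *base = "/sys/kernel/mm/transparent_hugepage";
	unsigned long long total = 0, val;
	char path[512];
	struct dirent *de;
	DIR *dir = opendir(base);

	if (!dir)
		return 1;

	while ((de = readdir(dir)) != NULL) {
		/* Only the per-size directories, e.g. "hugepages-64kB". */
		if (strncmp(de->d_name, "hugepages-", 10) != 0)
			continue;

		snprintf(path, sizeof(path), "%s/%s/stats/split_page",
			 base, de->d_name);

		FILE *f = fopen(path, "r");

		if (!f)
			continue;	/* kernel without this patch */
		if (fscanf(f, "%llu", &val) == 1) {
			printf("%-20s %llu\n", de->d_name, val);
			total += val;
		}
		fclose(f);
	}
	closedir(dir);

	printf("%-20s %llu\n", "total", total);
	return 0;
}

Built with e.g. "cc -o mthp_split mthp_split.c" (file name is hypothetical),
it prints one line per hugepages-<size> directory plus a total.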