From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 6 Apr 2022 21:15:16 -0600
In-Reply-To: <20220407031525.2368067-1-yuzhao@google.com>
Message-Id: <20220407031525.2368067-5-yuzhao@google.com>
Mime-Version: 1.0
References: <20220407031525.2368067-1-yuzhao@google.com>
X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog
Subject: [PATCH v10 04/14] Revert "include/linux/mm_inline.h: fold __update_lru_size() into its sole caller"
From: Yu Zhao <yuzhao@google.com>
To: Stephen Rothwell, linux-mm@kvack.org
Cc: Andi Kleen, Andrew Morton, Aneesh Kumar, Barry Song <21cnbao@gmail.com>,
 Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes,
 Johannes Weiner, Jonathan Corbet, Linus Torvalds, Matthew Wilcox,
 Mel Gorman, Michael Larabel, Michal Hocko, Mike Rapoport, Rik van Riel,
 Vlastimil Babka, Will Deacon, Ying Huang,
 linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, page-reclaim@google.com, x86@kernel.org,
 Yu Zhao, Brian Geffon, Jan Alexander Steffens, Oleksandr Natalenko,
 Steven Barrett, Suleiman Souhlal, Daniel Byrne, Donald Carr,
 Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai, Sofia Trinh,
 Vaibhav Jain
Content-Type: text/plain; charset="UTF-8"

This patch undoes the following refactor:
commit 289ccba18af4 ("include/linux/mm_inline.h: fold __update_lru_size() into its sole caller")

The upcoming changes to include/linux/mm_inline.h will reuse
__update_lru_size().

Signed-off-by: Yu Zhao <yuzhao@google.com>
Acked-by: Brian Geffon
Acked-by: Jan Alexander Steffens (heftig)
Acked-by: Oleksandr Natalenko
Acked-by: Steven Barrett
Acked-by: Suleiman Souhlal
Tested-by: Daniel Byrne
Tested-by: Donald Carr
Tested-by: Holger Hoffstätte
Tested-by: Konstantin Kharlamov
Tested-by: Shuang Zhai
Tested-by: Sofia Trinh
Tested-by: Vaibhav Jain
---
 include/linux/mm_inline.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ac32125745ab..7c9c2157e9a8 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -32,7 +32,7 @@ static inline int page_is_file_lru(struct page *page)
 	return folio_is_file_lru(page_folio(page));
 }
 
-static __always_inline void update_lru_size(struct lruvec *lruvec,
+static __always_inline void __update_lru_size(struct lruvec *lruvec,
 				enum lru_list lru, enum zone_type zid,
 				long nr_pages)
 {
@@ -41,6 +41,13 @@ static __always_inline void update_lru_size(struct lruvec *lruvec,
 	__mod_lruvec_state(lruvec, NR_LRU_BASE + lru, nr_pages);
 	__mod_zone_page_state(&pgdat->node_zones[zid],
 				NR_ZONE_LRU_BASE + lru, nr_pages);
+}
+
+static __always_inline void update_lru_size(struct lruvec *lruvec,
+				enum lru_list lru, enum zone_type zid,
+				int nr_pages)
+{
+	__update_lru_size(lruvec, lru, zid, nr_pages);
 #ifdef CONFIG_MEMCG
 	mem_cgroup_update_lru_size(lruvec, lru, zid, nr_pages);
 #endif
-- 
2.35.1.1094.g7c7d902a7c-goog
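
To make the restored split concrete, below is a minimal userspace sketch,
not kernel code: struct mock_lruvec, its counter arrays, and the main()
driver are stand-ins invented for illustration. It mirrors the shape of the
two helpers after this revert: __update_lru_size() performs only the core
node/zone accounting, while update_lru_size() wraps it and additionally does
the memcg bookkeeping, so a caller that manages memcg state itself can use
the bare helper directly.

/*
 * Illustrative sketch only; all types and counters are simplified
 * stand-ins for the real lruvec, pglist_data and memcg machinery.
 */
#include <stdio.h>

#define MAX_NR_ZONES 4

enum lru_list { LRU_INACTIVE_ANON, LRU_ACTIVE_ANON, NR_LRU_LISTS };

struct mock_lruvec {
	long node_count[NR_LRU_LISTS];			/* node-wide counters */
	long zone_count[MAX_NR_ZONES][NR_LRU_LISTS];	/* per-zone counters */
	long memcg_count[MAX_NR_ZONES][NR_LRU_LISTS];	/* memcg's view */
};

/* Core accounting only: node and zone counters, no memcg update. */
static void __update_lru_size(struct mock_lruvec *lruvec,
			      enum lru_list lru, int zid, long nr_pages)
{
	lruvec->node_count[lru] += nr_pages;
	lruvec->zone_count[zid][lru] += nr_pages;
}

/* Full accounting: core counters plus memcg bookkeeping. */
static void update_lru_size(struct mock_lruvec *lruvec,
			    enum lru_list lru, int zid, int nr_pages)
{
	__update_lru_size(lruvec, lru, zid, nr_pages);
	lruvec->memcg_count[zid][lru] += nr_pages;
}

int main(void)
{
	struct mock_lruvec lruvec = { 0 };

	/* Ordinary callers go through the full helper... */
	update_lru_size(&lruvec, LRU_ACTIVE_ANON, 0, 8);
	/* ...while a caller handling memcg state itself uses the core one. */
	__update_lru_size(&lruvec, LRU_ACTIVE_ANON, 0, -3);

	printf("zone 0 active anon: %ld (memcg view: %ld)\n",
	       lruvec.zone_count[0][LRU_ACTIVE_ANON],
	       lruvec.memcg_count[0][LRU_ACTIVE_ANON]);
	return 0;
}

Compiled and run as-is, the sketch prints a zone count of 5 against a memcg
view of 8: the call through the bare helper moved pages without touching
memcg state, which is the kind of responsibility a direct caller of
__update_lru_size() takes on when the upcoming mm_inline.h changes reuse it.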