Date: Tue, 8 Feb 2022 01:40:49 -0700
From: Yu Zhao
To: Andrew Morton, Johannes Weiner, Mel Gorman, Michal Hocko
Cc: Andi Kleen, Aneesh Kumar, Barry Song <21cnbao@gmail.com>, Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes, Jonathan Corbet, Linus Torvalds, Matthew Wilcox, Michael Larabel, Mike Rapoport, Rik van Riel, Vlastimil Babka, Will Deacon, Ying Huang,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, page-reclaim@google.com, x86@kernel.org, Brian Geffon, Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett, Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai, Sofia Trinh
Subject: Re: [PATCH v7 06/12] mm: multigenerational LRU: exploit locality in rmap
References: <20220208081902.3550911-1-yuzhao@google.com> <20220208081902.3550911-7-yuzhao@google.com>
In-Reply-To: <20220208081902.3550911-7-yuzhao@google.com>

On Tue, Feb 08, 2022 at 01:18:56AM -0700, Yu Zhao wrote:
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index b72d75141e12..51c9bc8e965d 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -436,6 +436,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
>   * - LRU isolation
>   * - lock_page_memcg()
>   * - exclusive reference
> + * - mem_cgroup_trylock_pages()
>   *
>   * For a kmem folio a caller should hold an rcu read lock to protect memcg
>   * associated with a kmem folio from being released.
> @@ -497,6 +498,7 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
>   * - LRU isolation
>   * - lock_page_memcg()
>   * - exclusive reference
> + * - mem_cgroup_trylock_pages()
>   *
>   * For a kmem page a caller should hold an rcu read lock to protect memcg
>   * associated with a kmem page from being released.
> @@ -934,6 +936,23 @@ void unlock_page_memcg(struct page *page);
>
>  void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val);
>
> +/* try to stablize folio_memcg() for all the pages in a memcg */
> +static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
> +{
> +	rcu_read_lock();
> +
> +	if (mem_cgroup_disabled() || !atomic_read(&memcg->moving_account))
> +		return true;
> +
> +	rcu_read_unlock();
> +	return false;
> +}
> +
> +static inline void mem_cgroup_unlock_pages(void)
> +{
> +	rcu_read_unlock();
> +}

Replaced the open-coded folio_memcg() lock with a new function
mem_cgroup_trylock_pages() as requested here:
https://lore.kernel.org/linux-mm/YeATr%2F%2FU6XD87fWF@dhcp22.suse.cz/