Date: Tue, 17 Dec 2019 17:37:35 -0800
From: Andrew Morton
To: Johannes Weiner
Cc: Yafang Shao, Michal Hocko, Vladimir Davydov, Dave Chinner, Al Viro,
 Linux MM, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 0/4] memcg, inode: protect page cache from freeing inode
Message-Id: <20191217173735.e8ae519f4930b31f99f71619@linux-foundation.org>
In-Reply-To: <20191217165422.GA213613@cmpxchg.org>
References: <1576582159-5198-1-git-send-email-laoar.shao@gmail.com>
 <20191217115603.GA10016@dhcp22.suse.cz>
 <20191217165422.GA213613@cmpxchg.org>
On Tue, 17 Dec 2019 11:54:22 -0500 Johannes Weiner wrote:

> I've carried the below patch in my private tree for testing cache
> aging decisions that the shrinker interfered with. (It would be nicer
> if page cache pages could pin the inode of course, but reclaim cannot
> easily participate in the inode refcounting scheme.)
>
> ...
>
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -753,7 +753,13 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
>  		return LRU_ROTATE;
>  	}
>
> -	if (inode_has_buffers(inode) || inode->i_data.nrpages) {
> +	/* Leave the pages to page reclaim */
> +	if (inode->i_data.nrpages) {
> +		spin_unlock(&inode->i_lock);
> +		return LRU_ROTATE;
> +	}

I guess that code should have been commented...

This code was originally added because on large highmem machines we were
seeing lowmem full of inodes which had one or more highmem pages attached
to them. Highmem was not under memory pressure, so those pagecache pages
remained unreclaimed "for ever", thus pinning their lowmem inode. The net
result was exhaustion of lowmem.

I guess a #ifdef CONFIG_HIGHMEM would help, to preserve the old behavior
in that case. Although given the paucity of testing on large highmem
machines, the risk of divergent behavior over time is a concern.
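
For illustration only, a minimal sketch of how such a CONFIG_HIGHMEM
carve-out could look inside inode_lru_isolate() -- this is an assumption
about placement, not a tested patch, and the pre-existing invalidation
path is elided with only its position indicated:

#ifdef CONFIG_HIGHMEM
	/*
	 * Old behavior: let the inode shrinker drop buffers and page cache
	 * here, so unreclaimed highmem pagecache cannot pin a lowmem inode
	 * indefinitely.
	 */
	if (inode_has_buffers(inode) || inode->i_data.nrpages) {
		/* ... existing remove_inode_buffers()/invalidate path ... */
	}
#else
	/* Leave the pages to page reclaim (the change quoted above) */
	if (inode->i_data.nrpages) {
		spin_unlock(&inode->i_lock);
		return LRU_ROTATE;
	}
#endif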