From: Alex Shi <alex.shi@linux.alibaba.com>
To: Joonsoo Kim, Johannes Weiner
Cc: LKML, Andrew Morton, Joonsoo Kim, Linux Memory Management List,
 Michal Hocko, Minchan Kim, mm-commits@vger.kernel.org, Rik van Riel,
 Linus Torvalds
Subject: Re: [patch 113/131] mm: balance LRU lists based on relative thrashing
Date: Thu, 11 Jun 2020 11:28:47 +0800
References: <20200603230303.kSkT62Lb5%akpm@linux-foundation.org>
 <20200609144551.GA452252@cmpxchg.org>

On 2020/6/10 1:23 PM, Joonsoo Kim wrote:
> On Tue, Jun 9, 2020 at 11:46 PM, Johannes Weiner wrote:
>>
>> On Tue, Jun 09, 2020 at 05:15:33PM +0800, Alex Shi wrote:
>>>
>>> On 2020/6/4 7:03 AM, Andrew Morton wrote:
>>>>
>>>> +	/* XXX: Move to lru_cache_add() when it supports new vs putback */
>>>
>>> Hi Hannes,
>>>
>>> Sorry, I am a bit lost; would you explain your idea here in more detail?
>>>
>>>> +	spin_lock_irq(&page_pgdat(page)->lru_lock);
>>>> +	lru_note_cost(page);
>>>> +	spin_unlock_irq(&page_pgdat(page)->lru_lock);
>>>> +
>>>
>>> What could we see here without the lru_lock?

The reason I am asking about the lru_lock protection here is that we
currently have five LRU lists guarded by a single lock, which causes heavy
contention when different applications are active on one server.

I guess we originally chose a single lru_lock because five locks packed
together would cause cacheline bouncing, while padding each lock into its
own cacheline would waste some memory. But now that we have qspinlock, each
CPU just spins on its own cacheline without interfering with the others,
which would greatly relieve the performance drop from cacheline bouncing.
We could also use spare bits in page->mapping to record which LRU list a
page is on.

As a quick thought, besides the five per-list locks, I guess we would still
need one more lock for the common lruvec data and for everything else that
relies on lru_lock today, such as mlock and hpage_nr_pages().

That is why I want to know everything that runs under lru_lock. :)

Any comments on this idea? :)

Thanks
Alex

>>
>> It'll just be part of the existing LRU locking in
>> pagevec_lru_move_fn(), when the new pages are added to the LRU in
>> batch. See this older patch for example:
>>
>> https://lore.kernel.org/linux-mm/20160606194836.3624-6-hannes@cmpxchg.org/
>>
>> I didn't include it in this series to reduce conflict with Joonsoo's
>> WIP series that also operates in this area and does something similar:
>
> Thanks!
>
>> https://lkml.org/lkml/2020/4/3/63
>
> I haven't completed the rebase of my series, but I guess the referenced
> patch "https://lkml.org/lkml/2020/4/3/63" will be removed in the next
> version.

Thanks a lot for the info, Johannes & Joonsoo! A long history for an
interesting idea. :)

>
> Before the I/O cost model, a new anonymous page contributed to the LRU
> reclaim balance. But now a new anonymous page doesn't contribute to the
> I/O cost, so this adjusting patch would not be needed anymore.
>
> If anyone wants to change this part,
> "/* XXX: Move to lru_cache_add() when it supports new vs putback */",
> feel free to do it.