From: Alex Shi
Subject: Re: [PATCH v7 02/10] mm/memcg: fold lru_lock in lock_page_lru
Date: Mon, 13 Jan 2020 17:45:51 +0800
Message-ID: <952d02c2-8aa5-40bb-88bb-c43dee65c8bc@linux.alibaba.com>
References: <1577264666-246071-1-git-send-email-alex.shi@linux.alibaba.com> <1577264666-246071-3-git-send-email-alex.shi@linux.alibaba.com> <36d7e390-a3d1-908c-d181-4a9e9c8d3d98@yandex-team.ru>
In-Reply-To: <36d7e390-a3d1-908c-d181-4a9e9c8d3d98@yandex-team.ru>
To: Konstantin Khlebnikov, cgroups, linux-kernel, linux-mm, akpm, mgorman, tj, hughd, daniel.m.jordan, yang.shi, willy, shakeelb, hannes
Cc: Michal Hocko, Vladimir Davydov

On 2020/1/10 4:49 PM, Konstantin Khlebnikov wrote:
> On 25/12/2019 12.04, Alex Shi wrote:
>> From the commit_charge() explanation and the mem_cgroup_commit_charge()
>> comments, as well as the call path when lrucare is true, the lru_lock
>> here only guards against task migration (which would lead to
>> move_account()). So it isn't needed when !PageLRU, and is better folded
>> into the PageLRU branch to reduce lock contention.
>>
>> Signed-off-by: Alex Shi
>> Cc: Johannes Weiner
>> Cc: Michal Hocko
>> Cc: Matthew Wilcox
>> Cc: Vladimir Davydov
>> Cc: Andrew Morton
>> Cc: cgroups
>> Cc: linux-mm
>> Cc: linux-kernel
>> ---
>>  mm/memcontrol.c | 9 ++++-----
>>  1 file changed, 4 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index c5b5f74cfd4d..0ad10caabc3d 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -2572,12 +2572,11 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
>>
>>  static void lock_page_lru(struct page *page, int *isolated)
>>  {
>> -    pg_data_t *pgdat = page_pgdat(page);
>> -
>> -    spin_lock_irq(&pgdat->lru_lock);
>>      if (PageLRU(page)) {
>> +        pg_data_t *pgdat = page_pgdat(page);
>>          struct lruvec *lruvec;
>>
>> +        spin_lock_irq(&pgdat->lru_lock);
>
> That's wrong. Here PageLRU must be checked again under lru_lock.

Hi Konstantin,

To keep the logic intact we could take the lock and then release it for
!PageLRU, but I still can't figure out the problem scenario. Would you
give more hints?

>
> Also I don't like these functions:
> - called lock/unlock but actually also isolates
> - used just once
> - pgdat evaluated twice

That's right. I will fold these functions into commit_charge().

Thanks
Alex