From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 6 Dec 2023 15:39:54 +0800
MIME-Version: 1.0
Subject: Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
Content-Language: en-US
To: Yosry Ahmed
Cc: Nhat Pham, akpm@linux-foundation.org, hannes@cmpxchg.org,
 cerasuolodomenico@gmail.com, sjenning@redhat.com, ddstreet@ieee.org,
 vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev,
 shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org,
 linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kselftest@vger.kernel.org, shuah@kernel.org
References:
 <20231130194023.4102148-1-nphamcs@gmail.com>
 <20231130194023.4102148-7-nphamcs@gmail.com>
From: Chengming Zhou
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 2023/12/6 15:36, Yosry Ahmed wrote:
> On Tue, Dec 5, 2023 at 10:43 PM Chengming Zhou wrote:
>>
>> On 2023/12/6 13:59, Yosry Ahmed wrote:
>>> [..]
>>>>> @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
>>>>>  	return entry;
>>>>>  }
>>>>>
>>>>> +/*********************************
>>>>> +* shrinker functions
>>>>> +**********************************/
>>>>> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
>>>>> +				       spinlock_t *lock, void *arg);
>>>>> +
>>>>> +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>>>>> +					 struct shrink_control *sc)
>>>>> +{
>>>>> +	struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
>>>>> +	unsigned long shrink_ret, nr_protected, lru_size;
>>>>> +	struct zswap_pool *pool = shrinker->private_data;
>>>>> +	bool encountered_page_in_swapcache = false;
>>>>> +
>>>>> +	nr_protected =
>>>>> +		atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
>>>>> +	lru_size = list_lru_shrink_count(&pool->list_lru, sc);
>>>>> +
>>>>> +	/*
>>>>> +	 * Abort if the shrinker is disabled or if we are shrinking into the
>>>>> +	 * protected region.
>>>>> +	 *
>>>>> +	 * This short-circuiting is necessary because if we have too many multiple
>>>>> +	 * concurrent reclaimers getting the freeable zswap object counts at the
>>>>> +	 * same time (before any of them made reasonable progress), the total
>>>>> +	 * number of reclaimed objects might be more than the number of unprotected
>>>>> +	 * objects (i.e the reclaimers will reclaim into the protected area of the
>>>>> +	 * zswap LRU).
>>>>> +	 */
>>>>> +	if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
>>>>> +		sc->nr_scanned = 0;
>>>>> +		return SHRINK_STOP;
>>>>> +	}
>>>>> +
>>>>> +	shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
>>>>> +					  &encountered_page_in_swapcache);
>>>>> +
>>>>> +	if (encountered_page_in_swapcache)
>>>>> +		return SHRINK_STOP;
>>>>> +
>>>>> +	return shrink_ret ? shrink_ret : SHRINK_STOP;
>>>>> +}
>>>>> +
>>>>> +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>>>>> +					  struct shrink_control *sc)
>>>>> +{
>>>>> +	struct zswap_pool *pool = shrinker->private_data;
>>>>> +	struct mem_cgroup *memcg = sc->memcg;
>>>>> +	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
>>>>> +	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
>>>>> +
>>>>> +#ifdef CONFIG_MEMCG_KMEM
>>>>> +	cgroup_rstat_flush(memcg->css.cgroup);
>>>>> +	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
>>>>> +	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
>>>>> +#else
>>>>> +	/* use pool stats instead of memcg stats */
>>>>> +	nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
>>>>> +	nr_stored = atomic_read(&pool->nr_stored);
>>>>> +#endif
>>>>> +
>>>>> +	if (!zswap_shrinker_enabled || !nr_stored)
>>>> When I tested with this series, with !zswap_shrinker_enabled in the default case,
>>>> I found the performance is much worse than without this patch.
>>>>
>>>> Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs directory.
>>>>
>>>> The reason seems to be the above cgroup_rstat_flush(), which caused a lot of
>>>> rstat lock contention on the zswap_store() path. And if I put the
>>>> "zswap_shrinker_enabled" check above the cgroup_rstat_flush(), the
>>>> performance becomes much better.
>>>>
>>>> Maybe we can put the "zswap_shrinker_enabled" check above cgroup_rstat_flush()?
>>>
>>> Yes, we should do nothing if !zswap_shrinker_enabled. We should also
>>> use mem_cgroup_flush_stats() here like other places unless accuracy is
>>> crucial, which I doubt given that reclaim uses
>>> mem_cgroup_flush_stats().
>>>
>>
>> Yes. After changing to use mem_cgroup_flush_stats() here, the performance
>> becomes much better.
>>
>>> mem_cgroup_flush_stats() has some thresholding to make sure we don't
>>> do flushes unnecessarily, and I have a pending series in mm-unstable
>>> that makes that thresholding per-memcg. Keep in mind that adding a
>>> call to mem_cgroup_flush_stats() will cause a conflict in mm-unstable,
>>
>> My test branch is linux-next 20231205, and it's all good after changing
>> to use mem_cgroup_flush_stats(memcg).
>
> Thanks for reporting back. We should still move the
> zswap_shrinker_enabled check ahead, no need to even call
> mem_cgroup_flush_stats() if we will do nothing anyway.
>

Yes, agree!
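
For anyone following along, the ordering we're agreeing on above is: do the cheap zswap_shrinker_enabled / nr_stored checks before any stats flush, so the disabled (default) case pays nothing. Below is a stand-alone user-space C sketch of that control flow, not the actual kernel patch; the names flush_calls, mem_cgroup_flush_stats_stub() and shrinker_count_sketch() are made up purely for illustration:

```c
#include <stdbool.h>

/* Stand-ins for kernel state; all names here are illustrative only. */
static bool zswap_shrinker_enabled;
static unsigned long nr_stored;
static int flush_calls; /* how many times the expensive flush ran */

/* Stub for mem_cgroup_flush_stats(): the rstat flush we want to skip. */
static void mem_cgroup_flush_stats_stub(void)
{
	flush_calls++;
}

/*
 * Sketch of zswap_shrinker_count() with the cheap checks hoisted:
 * bail out before touching rstat, so a disabled shrinker adds no
 * lock contention to the zswap_store() path.
 */
static unsigned long shrinker_count_sketch(void)
{
	/* cheap checks first: do nothing if disabled or nothing stored */
	if (!zswap_shrinker_enabled || !nr_stored)
		return 0;

	/* expensive work only when the stats will actually be used */
	mem_cgroup_flush_stats_stub();
	return nr_stored; /* stand-in for the freeable-object count */
}
```

The point is simply that the disabled path returns before the flush runs at all, which is why hoisting the check removed the rstat lock contention seen in the kernel-build test above.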