From mboxrd@z Thu Jan 1 00:00:00 1970
From: Chengming Zhou <chengming.zhou@linux.dev>
Date: Mon, 2 Sep 2024 19:37:37 +0800
Subject: Re: [PATCH v6 2/3] mm: zswap: zswap_store() extended to handle mTHP folios.
To: Kanchana P Sridhar, linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com, usamaarif642@gmail.com, ryan.roberts@arm.com, ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org
Cc: nanhai.zou@intel.com, wajdi.k.feghali@intel.com, vinodh.gopal@intel.com
Message-ID: <07e2365b-48d2-48e8-9ed3-81a2baf377fc@linux.dev>
In-Reply-To: <20240829212705.6714-3-kanchana.p.sridhar@intel.com>
References: <20240829212705.6714-1-kanchana.p.sridhar@intel.com> <20240829212705.6714-3-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
On 2024/8/30 05:27, Kanchana P Sridhar wrote:
> zswap_store() will now process and store mTHP and PMD-size THP folios.
>
> A new config variable CONFIG_ZSWAP_STORE_THP_DEFAULT_ON (off by default)
> will enable/disable zswap storing of (m)THP.
>
> This change reuses and adapts the functionality in Ryan Roberts' RFC
> patch [1]:
>
> "[RFC,v1] mm: zswap: Store large folios without splitting"
>
> [1] https://lore.kernel.org/linux-mm/20231019110543.3284654-1-ryan.roberts@arm.com/T/#u
>
> This patch provides a sequential implementation of storing an mTHP in
> zswap_store() by iterating through each page in the folio to compress
> and store it in the zswap zpool.
>
> Towards this goal, zswap_compress() is modified to take a page instead
> of a folio as input.
>
> Each page's swap offset is stored as a separate zswap entry.
>
> If an error is encountered during the store of any page in the mTHP,
> all previous pages/entries stored will be invalidated. Thus, an mTHP
> is either entirely stored in ZSWAP, or entirely not stored in ZSWAP.
>
> This forms the basis for building batching of pages during zswap store
> of large folios, by compressing batches of up to say, 8 pages in an
> mTHP in parallel in hardware, with the Intel In-Memory Analytics
> Accelerator (Intel IAA).
>
> Also, addressed some of the RFC comments from the discussion in [1].
>
> Made a minor edit in the comments for "struct zswap_entry" to delete
> the comments related to "value" since same-filled page handling has
> been removed from zswap.
>
> Co-developed-by: Ryan Roberts
> Signed-off-by:
> Signed-off-by: Kanchana P Sridhar

The code looks OK, but I also find this patch a little hard to review;
it may be better to split it into smaller patches, as Yosry suggested.

Thanks!

[...]

> +
> +/*
> + * Modified to store mTHP folios. Each page in the mTHP will be compressed
> + * and stored sequentially.
> + */
> +bool zswap_store(struct folio *folio)
> +{
> +	long nr_pages = folio_nr_pages(folio);
> +	swp_entry_t swp = folio->swap;
> +	pgoff_t offset = swp_offset(swp);
> +	struct xarray *tree = swap_zswap_tree(swp);
> +	struct obj_cgroup *objcg = NULL;
> +	struct mem_cgroup *memcg = NULL;
> +	struct zswap_pool *pool;
> +	bool ret = false;
> +	long index;
> +
> +	VM_WARN_ON_ONCE(!folio_test_locked(folio));
> +	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> +
> +	/* Storing large folios isn't enabled */
> +	if (!zswap_mthp_enabled && folio_test_large(folio))
> +		return false;
> +
> +	if (!zswap_enabled)
> +		goto reject;
> +
>  	/*
> -	 * If the zswap store fails or zswap is disabled, we must invalidate the
> -	 * possibly stale entry which was previously stored at this offset.
> -	 * Otherwise, writeback could overwrite the new data in the swapfile.
> +	 * Check cgroup limits:
> +	 *
> +	 * The cgroup zswap limit check is done once at the beginning of an
> +	 * mTHP store, and not within zswap_store_page() for each page
> +	 * in the mTHP. We do however check the zswap pool limits at the
> +	 * start of zswap_store_page(). What this means is, the cgroup
> +	 * could go over the limits by at most (HPAGE_PMD_NR - 1) pages.
> +	 * However, the per-store-page zswap pool limits check should
> +	 * hopefully trigger the cgroup aware and zswap LRU aware global
> +	 * reclaim implemented in the shrinker. If this assumption holds,
> +	 * the cgroup exceeding the zswap limits could potentially be
> +	 * resolved before the next zswap_store, and if it is not, the next
> +	 * zswap_store would fail the cgroup zswap limit check at the start.
>  	 */
> -	entry = xa_erase(tree, offset);
> -	if (entry)
> -		zswap_entry_free(entry);
> -	return false;
> +	objcg = get_obj_cgroup_from_folio(folio);
> +	if (objcg && !obj_cgroup_may_zswap(objcg)) {
> +		memcg = get_mem_cgroup_from_objcg(objcg);
> +		if (shrink_memcg(memcg)) {
> +			mem_cgroup_put(memcg);
> +			goto put_objcg;
> +		}
> +		mem_cgroup_put(memcg);
> +	}
> +
> +	if (zswap_check_limits())
> +		goto put_objcg;
> +
> +	pool = zswap_pool_current_get();
> +	if (!pool)
> +		goto put_objcg;
> +
> +	if (objcg) {
> +		memcg = get_mem_cgroup_from_objcg(objcg);
> +		if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) {
> +			mem_cgroup_put(memcg);
> +			goto put_pool;
> +		}
> +		mem_cgroup_put(memcg);
> +	}
> +
> +	/*
> +	 * Store each page of the folio as a separate entry. If we fail to store
> +	 * a page, unwind by removing all the previous pages we stored.
> +	 */
> +	for (index = 0; index < nr_pages; ++index) {
> +		if (!zswap_store_page(folio, index, objcg, pool))
> +			goto put_pool;
> +	}
> +
> +	ret = true;
> +
> +put_pool:
> +	zswap_pool_put(pool);
> +put_objcg:
> +	obj_cgroup_put(objcg);
> +	if (zswap_pool_reached_full)
> +		queue_work(shrink_wq, &zswap_shrink_work);
> +reject:
> +	/*
> +	 * If the zswap store fails or zswap is disabled, we must invalidate
> +	 * the possibly stale entries which were previously stored at the
> +	 * offsets corresponding to each page of the folio. Otherwise,
> +	 * writeback could overwrite the new data in the swapfile.
> +	 */
> +	if (!ret)
> +		zswap_delete_stored_offsets(tree, offset, nr_pages);
> +
> +	return ret;
>  }
>  
>  bool zswap_load(struct folio *folio)
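For anyone skimming the thread, the all-or-nothing contract of the patch
(either every page of the folio ends up in zswap, or none does and all
offsets are invalidated) can be sketched as a tiny standalone program.
This is plain userspace C with toy stand-ins, not the kernel code;
`store_page`, `store_folio`, the flat `tree` array and the `fail_at`
knob are all illustrative only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for the zswap xarray: offset -> "entry stored" flag. */
#define NR_SLOTS 16
static bool tree[NR_SLOTS];

/* Hypothetical per-page store; fails at fail_at to model e.g. a
 * compression error in the middle of the folio. */
static bool store_page(size_t offset, size_t fail_at)
{
	if (offset == fail_at)
		return false;
	tree[offset] = true;
	return true;
}

/* Mirrors the shape of the patched zswap_store(): store each page of
 * the folio; on any failure, invalidate every offset of the folio so
 * no partial/stale entries remain. */
static bool store_folio(size_t base, size_t nr_pages, size_t fail_at)
{
	bool ret = true;
	size_t i;

	for (i = 0; i < nr_pages; i++) {
		if (!store_page(base + i, fail_at)) {
			ret = false;
			break;
		}
	}
	if (!ret) {
		/* analogous to zswap_delete_stored_offsets() */
		for (i = 0; i < nr_pages; i++)
			tree[base + i] = false;
	}
	return ret;
}
```

The unconditional unwind on the failure path matches the rationale in
the patch's final comment: a stale entry left behind could later be
written back over the new data in the swapfile.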