From: Chris Li <chrisl@kernel.org>
Date: Fri, 26 Jul 2024 00:10:31 -0700
Subject: Re: [PATCH v4 2/3] mm: swap: mTHP allocate swap entries from nonfull list
To: "Huang, Ying"
Cc: Ryan Roberts, Andrew Morton, Kairui Song, Hugh Dickins, Kalesh Singh, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Barry Song
In-Reply-To: <87o76k3dkt.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Thu, Jul 25, 2024 at 10:55 PM Huang, Ying wrote:
>
> Chris Li writes:
>
> > On Thu, Jul 25, 2024 at 7:07 PM Huang, Ying wrote:
> >> > If the freeing of swap entries is randomly distributed, you need 16
> >> > contiguous swap entries free at the same time, at 16-entry-aligned
> >> > base locations. The total amount of order-4 free swap space, added
> >> > together, is much lower than the order-0 allocatable swap space.
> >> > If having one entry free has 50% probability (swapfile half full),
> >> > then the probability that 16 contiguous swap entries are all free is
> >> > (0.5)^16, about 1.5e-5. If the swapfile is 80% full, that number
> >> > drops to about 6.5e-12.
> >>
> >> This depends on the workload. Quite a few workloads will show some
> >> degree of spatial locality. For a workload with no spatial locality at
> >> all, as above, mTHP may not be a good choice in the first place.
> >
> > The fragmentation comes from the order-0 entries, not from the mTHP.
> > mTHP has its own valid use cases, and they should be kept separate from
> > how the order-0 entries are used. That is why I consider this kind of
> > strategy to only work in the lucky case. I would much prefer a strategy
> > that is guaranteed to work and does not depend on luck.
>
> It seems that you have some perfect solution. Will learn it when you
> post it.

No, I don't have a perfect solution. I see putting a limit on order-0 swap
usage, and writing out discontiguous swap entries from a folio, as more
deterministic approaches that do not depend on luck. Both have their price
to pay as well.

> >> >> - Order-4 pages need to be swapped out, but not enough order-4
> >> >>   non-full clusters are available.
> >> >
> >> > Exactly.
> >> >
> >> >> So, we need a way to migrate non-full clusters among orders to
> >> >> adjust to the various situations automatically.
> >> >
> >> > There is no easy way to migrate swap entries to different locations.
> >> > That is why I would like to have discontiguous swap entry allocation
> >> > for mTHP.
> >>
> >> We suggest migrating non-full swap clusters among different lists, not
> >> swap entries.
> >
> > Then you have the downside of reducing the total number of high-order
> > clusters. By chance, it is much easier to fragment a cluster than to
> > anti-fragment one. The orders of clusters have a natural tendency to
> > move down rather than up, given a long enough period of random access.
> > It will likely run out of high-order clusters in the long run if we
> > don't have any separation of orders.
>
> As in my example above, you may have almost zero high-order clusters
> forever. So, your solution only works for very specific use cases. It's
> not a general solution.

One simple solution is an optional limit on order-0 swap usage. I
understand you don't like that option, but so far there is no other easy
solution that achieves the same effectiveness. If there is one, I would
like to hear it.

> >> >> But yes, data is needed for any performance-related change.
> >>
> >> BTW: I think "non-full cluster" isn't a good name. "Partial cluster"
> >> is much better and follows the same convention as "partial slab".
> >
> > I am not opposed to it. The only reason I am holding off on the rename
> > is that there are patches from Kairui that I am testing which depend on
> > it. Let's finish up the v5 patch with the swap cache reclaim code path,
> > then do the renaming in one batch. We actually have more than one list
> > that holds partially full clusters. That helps reduce repeated scans of
> > clusters that are not full but also cannot serve allocations for a
> > given order. So naming just one of them "partial" is not precise
> > either, because the clusters on the other lists are also partially
> > full. We'd better give them precise names systematically.
>
> I don't think that it's hard to do a search/replace before the next
> version.

The overhead is in the other internal experimental patches. Again, I am
not opposed to renaming it; I just want to do it in one batch, not many
times, including the other list names.

Chris
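
P.S. For reference, the fragmentation numbers quoted earlier in the thread
follow directly from the simplifying model where each swap entry is free
independently. A quick sketch (illustrative only, not kernel code)
reproduces them:

```python
# Sanity check of the probability argument above. Model assumption:
# each swap entry is free independently with probability p, so an
# aligned group of 2**order entries is entirely free with probability
# p ** (2**order). Real workloads with spatial locality will deviate
# from this, which is exactly the point under discussion.

def aligned_free_prob(p, order=4):
    """Probability that all 2**order entries of an aligned group are free."""
    return p ** (2 ** order)

print(f"50% full: {aligned_free_prob(0.5):.2e}")  # ~1.53e-05
print(f"80% full: {aligned_free_prob(0.2):.2e}")  # ~6.55e-12
```

Even at half occupancy, fewer than 2 in 100,000 aligned order-4 groups are
expected to be free under random placement, which is why the argument
treats order-0 churn as the dominant source of fragmentation.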