From: Chris Li <chrisl@kernel.org>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Kairui Song <kasong@tencent.com>,
Hugh Dickins <hughd@google.com>,
Kalesh Singh <kaleshsingh@google.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Barry Song <baohua@kernel.org>
Subject: Re: [PATCH v4 2/3] mm: swap: mTHP allocate swap entries from nonfull list
Date: Fri, 26 Jul 2024 00:10:31 -0700 [thread overview]
Message-ID: <CACePvbWe9wraG2FjBcX9OmHN5ynB4et9WEHqh6NPSVK5mzJi2A@mail.gmail.com> (raw)
In-Reply-To: <87o76k3dkt.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Thu, Jul 25, 2024 at 10:55 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> Chris Li <chrisl@kernel.org> writes:
>
> > On Thu, Jul 25, 2024 at 7:07 PM Huang, Ying <ying.huang@intel.com> wrote:
> >> > If the freeing of swap entries follows a random distribution, you
> >> > need 16 contiguous swap entries free at the same time at an aligned
> >> > 16-entry base location. The total order-4 free swap space, added up,
> >> > is much smaller than the order-0 allocatable swap space.
> >> > If a single entry being free has 50% probability (swapfile half
> >> > full), then the probability that 16 contiguous swap entries are all
> >> > free is 0.5^16 ~= 1.5e-5.
> >> > If the swapfile is 80% full, that number drops to 0.2^16 ~= 6.5e-12.
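As an aside, the quoted arithmetic is easy to check; a quick sketch in plain Python (illustration only, obviously not kernel code):

```python
# Probability that an aligned run of 2^order swap entries is entirely free,
# assuming each entry is independently free with probability p_free.
def p_aligned_free(p_free, order):
    nr = 1 << order  # an order-4 allocation spans 16 entries
    return p_free ** nr

half_full = p_aligned_free(0.5, 4)    # swapfile 50% full
mostly_full = p_aligned_free(0.2, 4)  # swapfile 80% full -> 20% free

print(f"{half_full:.2e}")    # ~1.53e-05
print(f"{mostly_full:.2e}")  # ~6.55e-12
```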
> >>
> >> This depends on workloads. Quite some workloads will show some degree
> >> of spatial locality. For a workload with no spatial locality at all as
> >> above, mTHP may be not a good choice at the first place.
> >
> > The fragmentation comes from the order-0 entries, not from mTHP. mTHP
> > has its own valid use cases and should be kept separate from how the
> > order-0 entries are used. That is why I consider this kind of strategy
> > to work only in the lucky case. I would much prefer a strategy that is
> > guaranteed to work and does not depend on luck.
>
> It seems that you have some perfect solution. Will learn it when you
> post it.
No, I don't have a perfect solution. I see putting a limit on order-0
swap usage, and writing out discontiguous swap entries from a folio, as
more deterministic approaches that do not depend on luck. Both have
their price to pay as well.
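To make the "luck" point concrete, here is a quick Monte Carlo sketch (plain Python, illustration only) of how many aligned order-4 slots survive random occupancy of a swap region:

```python
import random

# Monte Carlo: count aligned 16-entry slots that are entirely free when
# each entry of a swap region is independently occupied with probability
# `fill`. A fixed seed keeps the run deterministic.
def free_order4_slots(nr_entries=4096, fill=0.5, seed=0):
    rng = random.Random(seed)
    occupied = [rng.random() < fill for _ in range(nr_entries)]
    slots = nr_entries // 16
    free = sum(1 for s in range(slots)
               if not any(occupied[s * 16:(s + 1) * 16]))
    return free, slots

free, slots = free_order4_slots(fill=0.5)
# At 50% fill, each of the 256 slots is free with probability ~1.5e-5,
# so the count is overwhelmingly likely to be 0.
print(free, "of", slots)
```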
>
> >> >> - Order-4 pages need to be swapped out, but not enough order-4 non-full
> >> >> clusters are available.
> >> >
> >> > Exactly.
> >> >
> >> >>
> >> >> So, we need a way to migrate non-full clusters among orders to adjust to
> >> >> the various situations automatically.
> >> >
> >> > There is no easy way to migrate swap entries to different locations.
> >> > That is why I would like to have discontiguous swap entry allocation
> >> > for mTHP.
> >>
> >> We suggest migrating non-full swap clusters among different lists, not
> >> swap entries.
> >
> > Then you have the downside of reducing the total number of high-order
> > clusters. Statistically it is much easier to fragment a cluster than
> > to defragment one. The order of a cluster has a natural tendency to
> > move down rather than up, given a long enough period of random access.
> > We will likely run out of high-order clusters in the long run if we
> > don't have any separation of orders.
>
> As my example above, you may have almost 0 high-order clusters forever.
> So, your solution only works for very specific use cases. It's not a
> general solution.
One simple solution is having an optional limit on order-0 swap usage.
I understand you don't like that option, but so far there is no other
easy solution that achieves the same effectiveness. If there is, I
would like to hear it.
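As an illustration only (a toy Python model, not kernel code; all names and the threshold below are hypothetical), the kind of optional limit described above could look like:

```python
# Toy model of an optional cap on order-0 swap usage: once order 0 has
# claimed its share of clusters, the remaining clusters stay reserved
# for high-order (mTHP) allocations instead of being fragmented.
class SwapDevice:
    def __init__(self, nr_clusters, order0_limit=0.8):
        self.nr_clusters = nr_clusters
        self.order0_limit = order0_limit  # max fraction usable by order 0
        self.order0_clusters = 0          # clusters holding order-0 entries
        self.high_order_clusters = 0      # clusters holding mTHP entries

    def can_alloc_order0_cluster(self):
        # Refuse to dedicate a new cluster to order 0 once the cap is
        # hit; order 0 must then reclaim or reuse existing clusters
        # rather than fragmenting more of them.
        used = self.order0_clusters + self.high_order_clusters
        if used >= self.nr_clusters:
            return False
        return self.order0_clusters < self.order0_limit * self.nr_clusters

dev = SwapDevice(nr_clusters=100)
dev.order0_clusters = 80
print(dev.can_alloc_order0_cluster())  # False: 20 clusters stay reserved
```

The deterministic property is the point: regardless of the free-entry distribution, a bounded fraction of clusters can never be consumed by order-0 allocations.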
>
> >> >> But yes, data is needed for any performance related change.
> >>
> >> BTW: I think non-full cluster isn't a good name. Partial cluster is
> >> much better and follows the same convention as partial slab.
> >
> > I am not opposed to it. The only reason I am holding off on the rename
> > is that there are patches from Kairui that I am testing which depend
> > on it. Let's finish up the v5 patch with the swap cache reclaim code
> > path, then do the renaming as one batch job. We actually have more
> > than one list that holds partially full clusters. One of them helps
> > reduce repeated scanning of clusters that are not full but still
> > cannot satisfy an allocation of this order. Naming just one of them
> > "partial" is not precise either, because the other lists are also
> > partially full. We'd better give them precise meanings systematically.
>
> I don't think that it's hard to do a search/replace before the next
> version.
The overhead is in the other internal experimental patches. Again, I
am not opposed to renaming it; I just want to do it in one batch rather
than many times, including the other list names.
Chris