linux-mm.kvack.org archive mirror
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Matthew Wilcox <willy@infradead.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>,
	Yosry Ahmed <yosryahmed@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	alexs@kernel.org, Vitaly Wool <vitaly.wool@konsulko.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	minchan@kernel.org, david@redhat.com, 42.hyeyoo@gmail.com,
	nphamcs@gmail.com
Subject: Re: [PATCH v5 00/21] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
Date: Wed, 14 Aug 2024 15:03:54 +0900	[thread overview]
Message-ID: <20240814060354.GC8686@google.com> (raw)
In-Reply-To: <ZrQ9lrZKWdPR7Zfu@casper.infradead.org>

On (24/08/08 04:37), Matthew Wilcox wrote:
[..]
>
> I don't know if it's _your_ problem.  It's _our_ problem.  The arguments
> for (at least attempting) to shrink struct page seem quite compelling.
> We have a plan for most of the users of struct page, in greater or
> lesser detail.  I don't think we have a plan for zsmalloc.  Or at least
> if there is a plan, I don't know what it is.

Got it, thanks.  And sorry for the very delayed reply.

> > > Do you allocate a per-page struct zpdesc, and have each one pointing
> > > to a zspage?
> > 
> > I'm not very knowledgeable when it comes to memdesc, excuse my
> > ignorance, and please feel free to educate me.
> 
> I've written about it here:
> https://kernelnewbies.org/MatthewWilcox/Memdescs
> https://kernelnewbies.org/MatthewWilcox/FolioAlloc
> https://kernelnewbies.org/MatthewWilcox/Memdescs/Path

Thanks a lot!

> > So I guess if we have something
> > 
> > struct zspage {
> > 	..
> > 	struct zpdesc *first_desc;
> > 	..
> > }
> > 
> > and we "chain" zpdesc-s to form a zspage, and make each of them point to
> > a corresponding struct page (memdesc -> *page), then it'll resemble current
> > zsmalloc and should work for everyone? I also assume for zpdesc-s zsmalloc
> > will need to maintain a dedicated kmem_cache?
> 
> Right, we could do that.  Each memdesc has to be a multiple of 16 bytes,
> so we'd be doing something like allocating 32 bytes for each page.
> Is there really 32 bytes of information that we want to store for
> each page?  Or could we store all of the information in (a somewhat
> larger) zspage?  Assuming we allocate 3 pages per zspage, if we allocate
> an extra 64 bytes in the zspage, we've saved 32 bytes per zspage.

I certainly like (and appreciate) the approach that saves us
some bytes here and there.  A zspage can consist of anywhere
from 1 up to CONFIG_ZSMALLOC_CHAIN_SIZE (max 16) physical pages.
I'm trying to understand (in pseudo-C code) what a "somewhat
larger zspage" means.  A fixed-size array (given that we know
the max number of physical pages) per zspage?



Thread overview: 33+ messages
2024-08-06  2:21 [PATCH v5 00/21] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
2024-08-06  2:22 ` alexs
2024-08-06  2:22   ` [PATCH v5 01/21] " alexs
2024-08-06  2:22   ` [PATCH v5 02/21] mm/zsmalloc: use zpdesc in trylock_zspage()/lock_zspage() alexs
2024-08-06  2:22   ` [PATCH v5 03/21] mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc alexs
2024-08-06  2:22   ` [PATCH v5 04/21] mm/zsmalloc: add and use pfn/zpdesc seeking funcs alexs
2024-08-06  2:22   ` [PATCH v5 05/21] mm/zsmalloc: convert obj_malloc() to use zpdesc alexs
2024-08-06  2:22   ` [PATCH v5 06/21] mm/zsmalloc: convert create_page_chain() and its users " alexs
2024-08-06  2:22   ` [PATCH v5 07/21] mm/zsmalloc: convert obj_allocated() and related helpers " alexs
2024-08-06  2:22   ` [PATCH v5 08/21] mm/zsmalloc: convert init_zspage() " alexs
2024-08-06  2:22   ` [PATCH v5 09/21] mm/zsmalloc: convert obj_to_page() and zs_free() " alexs
2024-08-06  2:22   ` [PATCH v5 10/21] mm/zsmalloc: add zpdesc_is_isolated()/zpdesc_zone() helper for zs_page_migrate() alexs
2024-08-06  2:22   ` [PATCH v5 11/21] mm/zsmalloc: rename reset_page to reset_zpdesc and use zpdesc in it alexs
2024-08-06  2:22   ` [PATCH v5 12/21] mm/zsmalloc: convert __free_zspage() to use zpdesc alexs
2024-08-06  2:23   ` [PATCH v5 13/21] mm/zsmalloc: convert location_to_obj() to take zpdesc alexs
2024-08-06  2:23   ` [PATCH v5 14/21] mm/zsmalloc: convert migrate_zspage() to use zpdesc alexs
2024-08-06  2:23   ` [PATCH v5 15/21] mm/zsmalloc: convert get_zspage() to take zpdesc alexs
2024-08-06  2:23   ` [PATCH v5 16/21] mm/zsmalloc: convert SetZsPageMovable and remove unused funcs alexs
2024-08-06  2:23   ` [PATCH v5 17/21] mm/zsmalloc: convert get/set_first_obj_offset() to take zpdesc alexs
2024-08-06  2:23   ` [PATCH v5 18/21] mm/zsmalloc: introduce __zpdesc_clear_movable alexs
2024-08-06  2:23   ` [PATCH v5 19/21] mm/zsmalloc: introduce __zpdesc_clear/set_zsmalloc() alexs
2024-08-06  2:23   ` [PATCH v5 20/21] mm/zsmalloc: introduce zpdesc_clear_first() helper alexs
2024-08-06  2:23   ` [PATCH v5 21/21] mm/zsmalloc: update comments for page->zpdesc changes alexs
     [not found]   ` <20240806123213.2a747a8321bdf452b3307fa9@linux-foundation.org>
     [not found]     ` <CAJD7tkakcaLVWi0viUqaW0K81VoCuGmkCHN4KQXp5+SSJLMB9g@mail.gmail.com>
     [not found]       ` <20240807051754.GA428000@google.com>
     [not found]         ` <ZrQ9lrZKWdPR7Zfu@casper.infradead.org>
2024-08-09  2:32           ` [PATCH v5 00/21] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool Alex Shi
2024-08-15  3:13             ` Sergey Senozhatsky
2024-08-15  3:50               ` Alex Shi
2024-08-14  6:03           ` Sergey Senozhatsky [this message]
2024-08-27 23:19             ` Vishal Moola
2024-08-29  9:42               ` Alex Shi
2024-09-04 19:51                 ` Vishal Moola
2024-09-04 20:21                   ` Yosry Ahmed
2024-09-03  3:20               ` Sergey Senozhatsky
2024-09-03 17:35                 ` Vishal Moola
