* [RFC PATCH 01/25] mm/zsmalloc: create new struct zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
@ 2023-02-20 13:21 ` Hyeonggon Yoo
2023-02-20 13:21 ` [RFC PATCH 02/25] mm/zsmalloc: add utility functions for zsdesc Hyeonggon Yoo
` (24 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:21 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Currently zsmalloc reuses fields of struct page. As part of simplifying
struct page, create a new type for zsmalloc, called zsdesc.
Remove the comments about how zsmalloc reuses fields of struct page,
because zsdesc uses more intuitive names.
Note that zsmalloc does not use PG_owner_priv_1 after commit a41ec880aa7b
("zsmalloc: move huge compressed obj from page to zspage"). Thus only
document how zsmalloc uses the PG_private flag.
It is very tempting to rearrange zsdesc, but the three words after the
flags field are not available for zsmalloc. Add comments about that.
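As an aside for readers new to the overlay trick: below is a minimal,
standalone userspace sketch (not kernel code) of the idea, a
zsmalloc-specific descriptor laid over a simplified stand-in for struct
page, with compile-time checks that every field lines up. The stand-in
types, field names and sizes are illustrative assumptions; the patch
itself is the authoritative definition.
#include <stddef.h>
#include <stdio.h>

struct fake_page {                      /* simplified stand-in for struct page */
    unsigned long flags;
    unsigned long lru[2];               /* list_head: two pointers */
    void *mapping;
    unsigned long index;
    unsigned long private;
    unsigned int page_type;
    unsigned int refcount;
};

struct fake_zsdesc {                    /* stand-in for struct zsdesc */
    unsigned long flags;
    unsigned long lru[2];               /* unused by zsmalloc, kept for migration code */
    const void *mops;                   /* movable_operations pointer slot */
    union {
        struct fake_zsdesc *next;       /* chain of subpages */
        unsigned long handle;           /* huge zspage: stored handle */
    };
    void *zspage;
    unsigned int first_obj_offset;
    unsigned int refcount;
};

/* The overlay only works if both views agree on where each field lives. */
#define MATCH(a, b) \
    _Static_assert(offsetof(struct fake_page, a) == \
                   offsetof(struct fake_zsdesc, b), #a " vs " #b)
MATCH(flags, flags);
MATCH(lru, lru);
MATCH(mapping, mops);
MATCH(index, next);
MATCH(private, zspage);
MATCH(page_type, first_obj_offset);
MATCH(refcount, refcount);
#undef MATCH
_Static_assert(sizeof(struct fake_zsdesc) <= sizeof(struct fake_page),
               "descriptor must not outgrow the page struct");

int main(void)
{
    printf("zsdesc view fits the page struct\n");
    return 0;
}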
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 63 +++++++++++++++++++++++++++++++++++++--------------
1 file changed, 46 insertions(+), 17 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3aed46ab7e6c..e2e34992c439 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -11,23 +11,6 @@
* Released under the terms of GNU General Public License Version 2.0
*/
-/*
- * Following is how we use various fields and flags of underlying
- * struct page(s) to form a zspage.
- *
- * Usage of struct page fields:
- * page->private: points to zspage
- * page->index: links together all component pages of a zspage
- * For the huge page, this is always 0, so we use this field
- * to store handle.
- * page->page_type: first object offset in a subpage of zspage
- *
- * Usage of struct page flags:
- * PG_private: identifies the first component page
- * PG_owner_priv_1: identifies the huge component page
- *
- */
-
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
/*
@@ -303,6 +286,52 @@ struct mapping_area {
enum zs_mapmode vm_mm; /* mapping mode */
};
+/*
+ * struct zsdesc - memory descriptor for zsmalloc memory
+ *
+ * This struct overlays struct page for now. Do not modify without a
+ * good understanding of the issues.
+ *
+ * Usage of struct page flags on zsdesc:
+ * PG_private: identifies the first component zsdesc
+ */
+struct zsdesc {
+ unsigned long flags;
+
+ /*
+ * Although not used by zsmalloc, this field is used by non-LRU page migration
+ * code. Leave it unused.
+ */
+ struct list_head lru;
+
+ /* Always points to zsmalloc_mops with PAGE_MAPPING_MOVABLE set */
+ struct movable_operations *mops;
+
+ union {
+ /* linked list of all zsdescs in a zspage */
+ struct zsdesc *next;
+ /* for huge zspages */
+ unsigned long handle;
+ };
+ struct zspage *zspage;
+ unsigned int first_obj_offset;
+ unsigned int _refcount;
+};
+
+#define ZSDESC_MATCH(pg, zs) \
+ static_assert(offsetof(struct page, pg) == offsetof(struct zsdesc, zs))
+
+ZSDESC_MATCH(flags, flags);
+ZSDESC_MATCH(lru, lru);
+ZSDESC_MATCH(mapping, mops);
+ZSDESC_MATCH(index, next);
+ZSDESC_MATCH(index, handle);
+ZSDESC_MATCH(private, zspage);
+ZSDESC_MATCH(page_type, first_obj_offset);
+ZSDESC_MATCH(_refcount, _refcount);
+#undef ZSDESC_MATCH
+static_assert(sizeof(struct zsdesc) <= sizeof(struct page));
+
/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
static void SetZsHugePage(struct zspage *zspage)
{
--
2.25.1
* [RFC PATCH 02/25] mm/zsmalloc: add utility functions for zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
2023-02-20 13:21 ` [RFC PATCH 01/25] mm/zsmalloc: create new struct zsdesc Hyeonggon Yoo
@ 2023-02-20 13:21 ` Hyeonggon Yoo
2023-02-27 15:30 ` Mike Rapoport
2023-02-20 13:21 ` [RFC PATCH 03/25] mm/zsmalloc: replace first_page to first_zsdesc in struct zspage Hyeonggon Yoo
` (23 subsequent siblings)
25 siblings, 1 reply; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:21 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Introduce utility functions for zsdesc to avoid directly accessing fields
of struct page.
zsdesc_page() is defined this way to preserve constness. page_zsdesc() does
not call compound_head() because zsdesc is always a base page.
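To illustrate the constness point in isolation, here is a small
standalone C sketch (userspace, dummy struct definitions) showing how
_Generic selects a const or a non-const cast from the static type of its
argument. Only the shape of the zsdesc_page() macro is taken from the
patch; everything else is an illustrative assumption.
#include <stdio.h>

struct page { int dummy; };
struct zsdesc { int dummy; };

/*
 * _Generic picks the branch matching the argument's static type, so a
 * const struct zsdesc * yields a const struct page *, while a plain
 * struct zsdesc * yields a writable struct page *.
 */
#define zsdesc_page(zdesc) (_Generic((zdesc),                       \
    const struct zsdesc *: (const struct page *)(zdesc),            \
    struct zsdesc *: (struct page *)(zdesc)))

int main(void)
{
    struct zsdesc z = { 0 };
    const struct zsdesc *cz = &z;
    struct zsdesc *mz = &z;

    const struct page *cp = zsdesc_page(cz);    /* const stays const */
    struct page *mp = zsdesc_page(mz);          /* non-const stays writable */

    /*
     * struct page *bad = zsdesc_page(cz); would make the compiler
     * complain, because const would be silently dropped otherwise.
     */
    printf("%p %p\n", (void *)cp, (void *)mp);
    return 0;
}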
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 120 +++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 118 insertions(+), 2 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e2e34992c439..4af9f87cafb7 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -332,6 +332,124 @@ ZSDESC_MATCH(_refcount, _refcount);
#undef ZSDESC_MATCH
static_assert(sizeof(struct zsdesc) <= sizeof(struct page));
+#define zsdesc_page(zdesc) (_Generic((zdesc), \
+ const struct zsdesc *: (const struct page *)zdesc, \
+ struct zsdesc *: (struct page *)zdesc))
+
+static inline struct zsdesc *page_zsdesc(struct page *page)
+{
+ return (struct zsdesc *)page;
+}
+
+static inline unsigned long zsdesc_pfn(const struct zsdesc *zsdesc)
+{
+ return page_to_pfn(zsdesc_page(zsdesc));
+}
+
+static inline struct zsdesc *pfn_zsdesc(unsigned long pfn)
+{
+ return page_zsdesc(pfn_to_page(pfn));
+}
+
+static inline struct zspage *zsdesc_zspage(const struct zsdesc *zsdesc)
+{
+ return (struct zspage *)page_private(zsdesc_page(zsdesc));
+}
+
+static inline int trylock_zsdesc(struct zsdesc *zsdesc)
+{
+ return trylock_page(zsdesc_page(zsdesc));
+}
+
+static inline void unlock_zsdesc(struct zsdesc *zsdesc)
+{
+ return unlock_page(zsdesc_page(zsdesc));
+}
+
+static inline struct zone *zsdesc_zone(struct zsdesc *zsdesc)
+{
+ return page_zone(zsdesc_page(zsdesc));
+}
+
+static inline void wait_on_zsdesc_locked(struct zsdesc *zsdesc)
+{
+ wait_on_page_locked(zsdesc_page(zsdesc));
+}
+
+static inline void zsdesc_get(struct zsdesc *zsdesc)
+{
+ struct folio *folio = (struct folio *)zsdesc;
+
+ folio_get(folio);
+}
+
+static inline void zsdesc_put(struct zsdesc *zsdesc)
+{
+ struct folio *folio = (struct folio *)zsdesc;
+
+ folio_put(folio);
+}
+
+static inline void *zsdesc_kmap_atomic(struct zsdesc *zsdesc)
+{
+ return kmap_atomic(zsdesc_page(zsdesc));
+}
+
+static inline void zsdesc_set_zspage(struct zsdesc *zsdesc, struct zspage *zspage)
+{
+ set_page_private(zsdesc_page(zsdesc), (unsigned long)zspage);
+}
+
+static inline void zsdesc_set_first(struct zsdesc *zsdesc)
+{
+ SetPagePrivate(zsdesc_page(zsdesc));
+}
+
+static inline bool zsdesc_is_locked(struct zsdesc *zsdesc)
+{
+ return PageLocked(zsdesc_page(zsdesc));
+}
+
+static inline bool zsdesc_is_isolated(struct zsdesc *zsdesc)
+{
+ return PageIsolated(zsdesc_page(zsdesc));
+}
+
+static inline void zsdesc_inc_zone_page_state(struct zsdesc *zsdesc)
+{
+ inc_zone_page_state(zsdesc_page(zsdesc), NR_ZSPAGES);
+}
+
+static inline void zsdesc_dec_zone_page_state(struct zsdesc *zsdesc)
+{
+ dec_zone_page_state(zsdesc_page(zsdesc), NR_ZSPAGES);
+}
+
+static inline struct zsdesc *alloc_zsdesc(gfp_t gfp)
+{
+ struct page *page = alloc_page(gfp);
+
+ zsdesc_inc_zone_page_state(page_zsdesc(page));
+ return page_zsdesc(page);
+}
+
+static inline void free_zsdesc(struct zsdesc *zsdesc)
+{
+ struct page *page = zsdesc_page(zsdesc);
+
+ zsdesc_dec_zone_page_state(page_zsdesc(page));
+ __free_page(page);
+}
+
+static const struct movable_operations zsmalloc_mops;
+
+static inline void zsdesc_set_movable(struct zsdesc *zsdesc)
+{
+ struct page *page = zsdesc_page(zsdesc);
+
+ __SetPageMovable(page, &zsmalloc_mops);
+}
+
/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
static void SetZsHugePage(struct zspage *zspage)
{
@@ -2012,8 +2130,6 @@ static void dec_zspage_isolation(struct zspage *zspage)
zspage->isolated--;
}
-static const struct movable_operations zsmalloc_mops;
-
static void replace_sub_page(struct size_class *class, struct zspage *zspage,
struct page *newpage, struct page *oldpage)
{
--
2.25.1
* Re: [RFC PATCH 02/25] mm/zsmalloc: add utility functions for zsdesc
2023-02-20 13:21 ` [RFC PATCH 02/25] mm/zsmalloc: add utility functions for zsdesc Hyeonggon Yoo
@ 2023-02-27 15:30 ` Mike Rapoport
2023-03-01 6:19 ` Hyeonggon Yoo
0 siblings, 1 reply; 31+ messages in thread
From: Mike Rapoport @ 2023-02-27 15:30 UTC (permalink / raw)
To: Hyeonggon Yoo
Cc: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox, Andrew Morton,
linux-mm
On Mon, Feb 20, 2023 at 01:21:55PM +0000, Hyeonggon Yoo wrote:
> Introduce utility functions for zsdesc to avoid directly accessing fields
> of struct page.
>
> zsdesc_page() is defined this way to preserve constness. page_zsdesc() does
I'd suggest "zsdesc_page() is defined with _Generic to preserve constness"
and add a line break before page_zsdesc().
> not call compound_head() because zsdesc is always a base page.
>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
> mm/zsmalloc.c | 120 +++++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 118 insertions(+), 2 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index e2e34992c439..4af9f87cafb7 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -332,6 +332,124 @@ ZSDESC_MATCH(_refcount, _refcount);
> #undef ZSDESC_MATCH
> static_assert(sizeof(struct zsdesc) <= sizeof(struct page));
I didn't do a thorough check of whether it's feasible, but I think it would
be better to add the helpers along with their usage.
For instance, move the definitions of zsdesc_page() and page_zsdesc() to
"mm/zsmalloc: replace first_page to first_zsdesc in struct zspage" and so
on.
> +#define zsdesc_page(zdesc) (_Generic((zdesc), \
> + const struct zsdesc *: (const struct page *)zdesc, \
> + struct zsdesc *: (struct page *)zdesc))
> +
> +static inline struct zsdesc *page_zsdesc(struct page *page)
> +{
> + return (struct zsdesc *)page;
> +}
> +
> +static inline unsigned long zsdesc_pfn(const struct zsdesc *zsdesc)
> +{
> + return page_to_pfn(zsdesc_page(zsdesc));
> +}
> +
> +static inline struct zsdesc *pfn_zsdesc(unsigned long pfn)
> +{
> + return page_zsdesc(pfn_to_page(pfn));
> +}
> +
> +static inline struct zspage *zsdesc_zspage(const struct zsdesc *zsdesc)
> +{
> + return (struct zspage *)page_private(zsdesc_page(zsdesc));
> +}
> +
> +static inline int trylock_zsdesc(struct zsdesc *zsdesc)
> +{
> + return trylock_page(zsdesc_page(zsdesc));
> +}
> +
> +static inline void unlock_zsdesc(struct zsdesc *zsdesc)
> +{
> + return unlock_page(zsdesc_page(zsdesc));
> +}
> +
> +static inline struct zone *zsdesc_zone(struct zsdesc *zsdesc)
> +{
> + return page_zone(zsdesc_page(zsdesc));
> +}
> +
> +static inline void wait_on_zsdesc_locked(struct zsdesc *zsdesc)
> +{
> + wait_on_page_locked(zsdesc_page(zsdesc));
> +}
> +
> +static inline void zsdesc_get(struct zsdesc *zsdesc)
> +{
> + struct folio *folio = (struct folio *)zsdesc;
> +
> + folio_get(folio);
> +}
> +
> +static inline void zsdesc_put(struct zsdesc *zsdesc)
> +{
> + struct folio *folio = (struct folio *)zsdesc;
> +
> + folio_put(folio);
> +}
> +
> +static inline void *zsdesc_kmap_atomic(struct zsdesc *zsdesc)
> +{
> + return kmap_atomic(zsdesc_page(zsdesc));
> +}
> +
> +static inline void zsdesc_set_zspage(struct zsdesc *zsdesc, struct zspage *zspage)
> +{
> + set_page_private(zsdesc_page(zsdesc), (unsigned long)zspage);
> +}
> +
> +static inline void zsdesc_set_first(struct zsdesc *zsdesc)
> +{
> + SetPagePrivate(zsdesc_page(zsdesc));
> +}
> +
> +static inline bool zsdesc_is_locked(struct zsdesc *zsdesc)
> +{
> + return PageLocked(zsdesc_page(zsdesc));
> +}
> +
> +static inline bool zsdesc_is_isolated(struct zsdesc *zsdesc)
> +{
> + return PageIsolated(zsdesc_page(zsdesc));
> +}
> +
> +static inline void zsdesc_inc_zone_page_state(struct zsdesc *zsdesc)
> +{
> + inc_zone_page_state(zsdesc_page(zsdesc), NR_ZSPAGES);
> +}
> +
> +static inline void zsdesc_dec_zone_page_state(struct zsdesc *zsdesc)
> +{
> + dec_zone_page_state(zsdesc_page(zsdesc), NR_ZSPAGES);
> +}
> +
> +static inline struct zsdesc *alloc_zsdesc(gfp_t gfp)
> +{
> + struct page *page = alloc_page(gfp);
> +
> + zsdesc_inc_zone_page_state(page_zsdesc(page));
> + return page_zsdesc(page);
> +}
> +
> +static inline void free_zsdesc(struct zsdesc *zsdesc)
> +{
> + struct page *page = zsdesc_page(zsdesc);
> +
> + zsdesc_dec_zone_page_state(page_zsdesc(page));
> + __free_page(page);
> +}
> +
> +static const struct movable_operations zsmalloc_mops;
> +
> +static inline void zsdesc_set_movable(struct zsdesc *zsdesc)
> +{
> + struct page *page = zsdesc_page(zsdesc);
> +
> + __SetPageMovable(page, &zsmalloc_mops);
> +}
> +
> /* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
> static void SetZsHugePage(struct zspage *zspage)
> {
> @@ -2012,8 +2130,6 @@ static void dec_zspage_isolation(struct zspage *zspage)
> zspage->isolated--;
> }
>
> -static const struct movable_operations zsmalloc_mops;
> -
> static void replace_sub_page(struct size_class *class, struct zspage *zspage,
> struct page *newpage, struct page *oldpage)
> {
> --
> 2.25.1
>
>
--
Sincerely yours,
Mike.
* Re: [RFC PATCH 02/25] mm/zsmalloc: add utility functions for zsdesc
2023-02-27 15:30 ` Mike Rapoport
@ 2023-03-01 6:19 ` Hyeonggon Yoo
0 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-03-01 6:19 UTC (permalink / raw)
To: Mike Rapoport
Cc: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox, Andrew Morton,
linux-mm
On Mon, Feb 27, 2023 at 05:30:14PM +0200, Mike Rapoport wrote:
> On Mon, Feb 20, 2023 at 01:21:55PM +0000, Hyeonggon Yoo wrote:
> > Introduce utility functions for zsdesc to avoid directly accessing fields
> > of struct page.
> >
> > zsdesc_page() is defined this way to preserve constness. page_zsdesc() does
>
> I'd suggest "zsdesc_page() is defined with _Generic to preserve constness"
> and add a line break before page_zsdesc().
Will do in next version.
>
> > not call compound_head() because zsdesc is always a base page.
> >
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > ---
> > mm/zsmalloc.c | 120 +++++++++++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 118 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index e2e34992c439..4af9f87cafb7 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -332,6 +332,124 @@ ZSDESC_MATCH(_refcount, _refcount);
> > #undef ZSDESC_MATCH
> > static_assert(sizeof(struct zsdesc) <= sizeof(struct page));
>
> I didn't do a thorough check of whether it's feasible, but I think it would
> be better to add the helpers along with their usage.
>
> For instance, move the definitions of zsdesc_page() and page_zsdesc() to
> "mm/zsmalloc: replace first_page to first_zsdesc in struct zspage" and so
> on.
Sure, that would be easier to review.
Will do in next version.
Thanks!
Hyeonggon
>
> > +#define zsdesc_page(zdesc) (_Generic((zdesc), \
> > + const struct zsdesc *: (const struct page *)zdesc, \
> > + struct zsdesc *: (struct page *)zdesc))
> > +
> > +static inline struct zsdesc *page_zsdesc(struct page *page)
> > +{
> > + return (struct zsdesc *)page;
> > +}
> > +
> > +static inline unsigned long zsdesc_pfn(const struct zsdesc *zsdesc)
> > +{
> > + return page_to_pfn(zsdesc_page(zsdesc));
> > +}
> > +
> > +static inline struct zsdesc *pfn_zsdesc(unsigned long pfn)
> > +{
> > + return page_zsdesc(pfn_to_page(pfn));
> > +}
> > +
> > +static inline struct zspage *zsdesc_zspage(const struct zsdesc *zsdesc)
> > +{
> > + return (struct zspage *)page_private(zsdesc_page(zsdesc));
> > +}
> > +
> > +static inline int trylock_zsdesc(struct zsdesc *zsdesc)
> > +{
> > + return trylock_page(zsdesc_page(zsdesc));
> > +}
> > +
> > +static inline void unlock_zsdesc(struct zsdesc *zsdesc)
> > +{
> > + return unlock_page(zsdesc_page(zsdesc));
> > +}
> > +
> > +static inline struct zone *zsdesc_zone(struct zsdesc *zsdesc)
> > +{
> > + return page_zone(zsdesc_page(zsdesc));
> > +}
> > +
> > +static inline void wait_on_zsdesc_locked(struct zsdesc *zsdesc)
> > +{
> > + wait_on_page_locked(zsdesc_page(zsdesc));
> > +}
> > +
> > +static inline void zsdesc_get(struct zsdesc *zsdesc)
> > +{
> > + struct folio *folio = (struct folio *)zsdesc;
> > +
> > + folio_get(folio);
> > +}
> > +
> > +static inline void zsdesc_put(struct zsdesc *zsdesc)
> > +{
> > + struct folio *folio = (struct folio *)zsdesc;
> > +
> > + folio_put(folio);
> > +}
> > +
> > +static inline void *zsdesc_kmap_atomic(struct zsdesc *zsdesc)
> > +{
> > + return kmap_atomic(zsdesc_page(zsdesc));
> > +}
> > +
> > +static inline void zsdesc_set_zspage(struct zsdesc *zsdesc, struct zspage *zspage)
> > +{
> > + set_page_private(zsdesc_page(zsdesc), (unsigned long)zspage);
> > +}
> > +
> > +static inline void zsdesc_set_first(struct zsdesc *zsdesc)
> > +{
> > + SetPagePrivate(zsdesc_page(zsdesc));
> > +}
> > +
> > +static inline bool zsdesc_is_locked(struct zsdesc *zsdesc)
> > +{
> > + return PageLocked(zsdesc_page(zsdesc));
> > +}
> > +
> > +static inline bool zsdesc_is_isolated(struct zsdesc *zsdesc)
> > +{
> > + return PageIsolated(zsdesc_page(zsdesc));
> > +}
> > +
> > +static inline void zsdesc_inc_zone_page_state(struct zsdesc *zsdesc)
> > +{
> > + inc_zone_page_state(zsdesc_page(zsdesc), NR_ZSPAGES);
> > +}
> > +
> > +static inline void zsdesc_dec_zone_page_state(struct zsdesc *zsdesc)
> > +{
> > + dec_zone_page_state(zsdesc_page(zsdesc), NR_ZSPAGES);
> > +}
> > +
> > +static inline struct zsdesc *alloc_zsdesc(gfp_t gfp)
> > +{
> > + struct page *page = alloc_page(gfp);
> > +
> > + zsdesc_inc_zone_page_state(page_zsdesc(page));
> > + return page_zsdesc(page);
> > +}
> > +
> > +static inline void free_zsdesc(struct zsdesc *zsdesc)
> > +{
> > + struct page *page = zsdesc_page(zsdesc);
> > +
> > + zsdesc_dec_zone_page_state(page_zsdesc(page));
> > + __free_page(page);
> > +}
> > +
> > +static const struct movable_operations zsmalloc_mops;
> > +
> > +static inline void zsdesc_set_movable(struct zsdesc *zsdesc)
> > +{
> > + struct page *page = zsdesc_page(zsdesc);
> > +
> > + __SetPageMovable(page, &zsmalloc_mops);
> > +}
> > +
> > /* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
> > static void SetZsHugePage(struct zspage *zspage)
> > {
> > @@ -2012,8 +2130,6 @@ static void dec_zspage_isolation(struct zspage *zspage)
> > zspage->isolated--;
> > }
> >
> > -static const struct movable_operations zsmalloc_mops;
> > -
> > static void replace_sub_page(struct size_class *class, struct zspage *zspage,
> > struct page *newpage, struct page *oldpage)
> > {
> > --
> > 2.25.1
> >
> >
>
> --
> Sincerely yours,
> Mike.
* [RFC PATCH 03/25] mm/zsmalloc: replace first_page to first_zsdesc in struct zspage
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
2023-02-20 13:21 ` [RFC PATCH 01/25] mm/zsmalloc: create new struct zsdesc Hyeonggon Yoo
2023-02-20 13:21 ` [RFC PATCH 02/25] mm/zsmalloc: add utility functions for zsdesc Hyeonggon Yoo
@ 2023-02-20 13:21 ` Hyeonggon Yoo
2023-02-20 13:21 ` [RFC PATCH 04/25] mm/zsmalloc: add alternatives of frequently used helper functions Hyeonggon Yoo
` (22 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:21 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Replace first_page with first_zsdesc in struct zspage for further
conversion.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 4af9f87cafb7..f7b6b67e7b01 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -266,7 +266,7 @@ struct zspage {
};
unsigned int inuse;
unsigned int freeobj;
- struct page *first_page;
+ struct zsdesc *first_zsdesc;
struct list_head list; /* fullness list */
#ifdef CONFIG_ZPOOL
@@ -667,7 +667,7 @@ static inline void mod_zspage_inuse(struct zspage *zspage, int val)
static inline struct page *get_first_page(struct zspage *zspage)
{
- struct page *first_page = zspage->first_page;
+ struct page *first_page = zsdesc_page(zspage->first_zsdesc);
VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
return first_page;
@@ -1249,7 +1249,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
set_page_private(page, (unsigned long)zspage);
page->index = 0;
if (i == 0) {
- zspage->first_page = page;
+ zspage->first_zsdesc = page_zsdesc(page);
SetPagePrivate(page);
if (unlikely(class->objs_per_zspage == 1 &&
class->pages_per_zspage == 1))
@@ -1643,7 +1643,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
link->handle = handle;
else
/* record handle to page->index */
- zspage->first_page->index = handle;
+ zspage->first_zsdesc->handle = handle;
kunmap_atomic(vaddr);
mod_zspage_inuse(zspage, 1);
--
2.25.1
* [RFC PATCH 04/25] mm/zsmalloc: add alternatives of frequently used helper functions
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (2 preceding siblings ...)
2023-02-20 13:21 ` [RFC PATCH 03/25] mm/zsmalloc: replace first_page to first_zsdesc in struct zspage Hyeonggon Yoo
@ 2023-02-20 13:21 ` Hyeonggon Yoo
2023-02-20 13:21 ` [RFC PATCH 05/25] mm/zsmalloc: convert {try,}lock_zspage() to use zsdesc Hyeonggon Yoo
` (21 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:21 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
get_first_page(), get_next_page() and is_first_page() are frequently used
throughout the zsmalloc code. As replacing them all at once would be hard
to review, add alternative helpers and gradually convert their users to
the new functions.
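To make the transition pattern concrete, here is a standalone sketch in
plain C (toy types and a toy flag check, not the kernel helpers): the old
page-based helper and its new zsdesc-based counterpart live side by side,
and unused-function warnings are suppressed for whichever temporarily has
no callers while the conversion is spread over several patches.
#include <stdio.h>

#define __maybe_unused __attribute__((unused))

struct page { unsigned long flags; };
struct zsdesc { unsigned long flags; };     /* overlays the page struct */

static struct page *zsdesc_page(struct zsdesc *zsdesc)
{
    return (struct page *)zsdesc;
}

/* old interface: still compiled, may lose all callers mid-series */
static __maybe_unused int is_first_page(struct page *page)
{
    return page->flags & 1UL;               /* toy stand-in for PagePrivate() */
}

/* new interface: added first, callers migrated to it patch by patch */
static __maybe_unused int is_first_zsdesc(struct zsdesc *zsdesc)
{
    return is_first_page(zsdesc_page(zsdesc));
}

int main(void)
{
    struct zsdesc zsdesc = { .flags = 1 };

    printf("first? %d\n", is_first_zsdesc(&zsdesc));
    return 0;
}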
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f7b6b67e7b01..1ee9cfae282b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -653,6 +653,11 @@ static __maybe_unused int is_first_page(struct page *page)
return PagePrivate(page);
}
+static __maybe_unused int is_first_zsdesc(struct zsdesc *zsdesc)
+{
+ return PagePrivate(zsdesc_page(zsdesc));
+}
+
/* Protected by pool->lock */
static inline int get_zspage_inuse(struct zspage *zspage)
{
@@ -665,7 +670,7 @@ static inline void mod_zspage_inuse(struct zspage *zspage, int val)
zspage->inuse += val;
}
-static inline struct page *get_first_page(struct zspage *zspage)
+static __maybe_unused inline struct page *get_first_page(struct zspage *zspage)
{
struct page *first_page = zsdesc_page(zspage->first_zsdesc);
@@ -673,6 +678,14 @@ static inline struct page *get_first_page(struct zspage *zspage)
return first_page;
}
+static __maybe_unused struct zsdesc *get_first_zsdesc(struct zspage *zspage)
+{
+ struct zsdesc *first_zsdesc = zspage->first_zsdesc;
+
+ VM_BUG_ON_PAGE(!is_first_zsdesc(first_zsdesc), zsdesc_page(first_zsdesc));
+ return first_zsdesc;
+}
+
static inline unsigned int get_first_obj_offset(struct page *page)
{
return page->page_type;
@@ -973,7 +986,7 @@ static struct zspage *get_zspage(struct page *page)
return zspage;
}
-static struct page *get_next_page(struct page *page)
+static __maybe_unused struct page *get_next_page(struct page *page)
{
struct zspage *zspage = get_zspage(page);
@@ -983,6 +996,16 @@ static struct page *get_next_page(struct page *page)
return (struct page *)page->index;
}
+static __maybe_unused struct zsdesc *get_next_zsdesc(struct zsdesc *zsdesc)
+{
+ struct zspage *zspage = get_zspage(zsdesc_page(zsdesc));
+
+ if (unlikely(ZsHugePage(zspage)))
+ return NULL;
+
+ return zsdesc->next;
+}
+
/**
* obj_to_location - get (<page>, <obj_idx>) from encoded object value
* @obj: the encoded object value
--
2.25.1
* [RFC PATCH 05/25] mm/zsmalloc: convert {try,}lock_zspage() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (3 preceding siblings ...)
2023-02-20 13:21 ` [RFC PATCH 04/25] mm/zsmalloc: add alternatives of frequently used helper functions Hyeonggon Yoo
@ 2023-02-20 13:21 ` Hyeonggon Yoo
2023-02-20 13:21 ` [RFC PATCH 06/25] mm/zsmalloc: convert __zs_{map,unmap}_object() " Hyeonggon Yoo
` (20 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:21 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert trylock_zspage() and lock_zspage() to use zsdesc.
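trylock_zspage() below follows an all-or-nothing pattern: trylock every
member of the chain and, on the first failure, roll back by unlocking
everything taken so far. Here is a minimal userspace sketch of that shape
with a toy lock and descriptor type (both illustrative); the real code
takes the page lock through trylock_zsdesc()/unlock_zsdesc().
#include <stdbool.h>
#include <stdio.h>

struct toy_desc {
    bool locked;
    struct toy_desc *next;
};

static bool toy_trylock(struct toy_desc *d)
{
    if (d->locked)
        return false;
    d->locked = true;
    return true;
}

static void toy_unlock(struct toy_desc *d)
{
    d->locked = false;
}

/* Return 1 if every descriptor in the chain could be locked, 0 otherwise. */
static int trylock_chain(struct toy_desc *first)
{
    struct toy_desc *cursor, *fail = NULL;

    for (cursor = first; cursor != NULL; cursor = cursor->next) {
        if (!toy_trylock(cursor)) {
            fail = cursor;
            goto unlock;
        }
    }
    return 1;

unlock:
    /* roll back: unlock everything locked before the failing member */
    for (cursor = first; cursor != fail; cursor = cursor->next)
        toy_unlock(cursor);
    return 0;
}

int main(void)
{
    struct toy_desc c = { false, NULL };
    struct toy_desc b = { true, &c };           /* already locked: forces rollback */
    struct toy_desc a = { false, &b };

    printf("locked all: %d\n", trylock_chain(&a));
    printf("a rolled back to unlocked: %d\n", !a.locked);
    return 0;
}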
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 40 ++++++++++++++++++++--------------------
1 file changed, 20 insertions(+), 20 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1ee9cfae282b..dc6a7130cdfd 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1091,11 +1091,11 @@ static void reset_page(struct page *page)
static int trylock_zspage(struct zspage *zspage)
{
- struct page *cursor, *fail;
+ struct zsdesc *cursor, *fail;
- for (cursor = get_first_page(zspage); cursor != NULL; cursor =
- get_next_page(cursor)) {
- if (!trylock_page(cursor)) {
+ for (cursor = get_first_zsdesc(zspage); cursor != NULL; cursor =
+ get_next_zsdesc(cursor)) {
+ if (!trylock_zsdesc(cursor)) {
fail = cursor;
goto unlock;
}
@@ -1103,9 +1103,9 @@ static int trylock_zspage(struct zspage *zspage)
return 1;
unlock:
- for (cursor = get_first_page(zspage); cursor != fail; cursor =
- get_next_page(cursor))
- unlock_page(cursor);
+ for (cursor = get_first_zsdesc(zspage); cursor != fail; cursor =
+ get_next_zsdesc(cursor))
+ unlock_zsdesc(cursor);
return 0;
}
@@ -2056,7 +2056,7 @@ static enum fullness_group putback_zspage(struct size_class *class,
*/
static void lock_zspage(struct zspage *zspage)
{
- struct page *curr_page, *page;
+ struct zsdesc *curr_zsdesc, *zsdesc;
/*
* Pages we haven't locked yet can be migrated off the list while we're
@@ -2068,24 +2068,24 @@ static void lock_zspage(struct zspage *zspage)
*/
while (1) {
migrate_read_lock(zspage);
- page = get_first_page(zspage);
- if (trylock_page(page))
+ zsdesc = get_first_zsdesc(zspage);
+ if (trylock_zsdesc(zsdesc))
break;
- get_page(page);
+ zsdesc_get(zsdesc);
migrate_read_unlock(zspage);
- wait_on_page_locked(page);
- put_page(page);
+ wait_on_zsdesc_locked(zsdesc);
+ zsdesc_put(zsdesc);
}
- curr_page = page;
- while ((page = get_next_page(curr_page))) {
- if (trylock_page(page)) {
- curr_page = page;
+ curr_zsdesc = zsdesc;
+ while ((zsdesc = get_next_zsdesc(curr_zsdesc))) {
+ if (trylock_zsdesc(zsdesc)) {
+ curr_zsdesc = zsdesc;
} else {
- get_page(page);
+ zsdesc_get(zsdesc);
migrate_read_unlock(zspage);
- wait_on_page_locked(page);
- put_page(page);
+ wait_on_zsdesc_locked(zsdesc);
+ zsdesc_put(zsdesc);
migrate_read_lock(zspage);
}
}
--
2.25.1
* [RFC PATCH 06/25] mm/zsmalloc: convert __zs_{map,unmap}_object() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (4 preceding siblings ...)
2023-02-20 13:21 ` [RFC PATCH 05/25] mm/zsmalloc: convert {try,}lock_zspage() to use zsdesc Hyeonggon Yoo
@ 2023-02-20 13:21 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 07/25] mm/zsmalloc: convert obj_to_location() and its users " Hyeonggon Yoo
` (19 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:21 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
These two functions take an array of pointers to struct page. Make them
take an array of pointers to zsdesc instead.
Add silly type casts when calling them for now; the casts will be removed
in the next patch.
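For context, these helpers exist for objects that straddle a page
boundary: sizes[0] bytes are copied from the tail of the first subpage
and sizes[1] bytes from the head of the second. Below is a standalone
userspace sketch of that split copy, assuming a 4 KiB page size and plain
arrays instead of kmap'ed subpages.
#include <stdio.h>
#include <string.h>

#define TOY_PAGE_SIZE 4096

/* Copy a 'size'-byte object starting at 'off' in pages[0] into 'buf'. */
static void split_copy(char *buf, char *pages[2], int off, int size)
{
    int sizes[2];

    sizes[0] = TOY_PAGE_SIZE - off;     /* tail of the first page */
    sizes[1] = size - sizes[0];         /* head of the second page */

    memcpy(buf, pages[0] + off, sizes[0]);
    memcpy(buf + sizes[0], pages[1], sizes[1]);
}

int main(void)
{
    static char page0[TOY_PAGE_SIZE], page1[TOY_PAGE_SIZE];
    char *pages[2] = { page0, page1 };
    char buf[32];
    int off = TOY_PAGE_SIZE - 10;       /* object starts 10 bytes before the boundary */

    memcpy(page0 + off, "0123456789", 10);
    memcpy(page1, "abcdefgh", 8);

    split_copy(buf, pages, off, 18);
    buf[18] = '\0';
    printf("%s\n", buf);                /* prints 0123456789abcdefgh */
    return 0;
}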
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index dc6a7130cdfd..821d72ab888c 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1361,7 +1361,7 @@ static inline void __zs_cpu_down(struct mapping_area *area)
}
static void *__zs_map_object(struct mapping_area *area,
- struct page *pages[2], int off, int size)
+ struct zsdesc *zsdescs[2], int off, int size)
{
int sizes[2];
void *addr;
@@ -1378,10 +1378,10 @@ static void *__zs_map_object(struct mapping_area *area,
sizes[1] = size - sizes[0];
/* copy object to per-cpu buffer */
- addr = kmap_atomic(pages[0]);
+ addr = zsdesc_kmap_atomic(zsdescs[0]);
memcpy(buf, addr + off, sizes[0]);
kunmap_atomic(addr);
- addr = kmap_atomic(pages[1]);
+ addr = zsdesc_kmap_atomic(zsdescs[1]);
memcpy(buf + sizes[0], addr, sizes[1]);
kunmap_atomic(addr);
out:
@@ -1389,7 +1389,7 @@ static void *__zs_map_object(struct mapping_area *area,
}
static void __zs_unmap_object(struct mapping_area *area,
- struct page *pages[2], int off, int size)
+ struct zsdesc *zsdescs[2], int off, int size)
{
int sizes[2];
void *addr;
@@ -1408,10 +1408,10 @@ static void __zs_unmap_object(struct mapping_area *area,
sizes[1] = size - sizes[0];
/* copy per-cpu buffer to object */
- addr = kmap_atomic(pages[0]);
+ addr = zsdesc_kmap_atomic(zsdescs[0]);
memcpy(addr + off, buf, sizes[0]);
kunmap_atomic(addr);
- addr = kmap_atomic(pages[1]);
+ addr = zsdesc_kmap_atomic(zsdescs[1]);
memcpy(addr, buf + sizes[0], sizes[1]);
kunmap_atomic(addr);
@@ -1572,7 +1572,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
pages[1] = get_next_page(page);
BUG_ON(!pages[1]);
- ret = __zs_map_object(area, pages, off, class->size);
+ ret = __zs_map_object(area, (struct zsdesc **)pages, off, class->size);
out:
if (likely(!ZsHugePage(zspage)))
ret += ZS_HANDLE_SIZE;
@@ -1607,7 +1607,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
pages[1] = get_next_page(page);
BUG_ON(!pages[1]);
- __zs_unmap_object(area, pages, off, class->size);
+ __zs_unmap_object(area, (struct zsdesc **)pages, off, class->size);
}
local_unlock(&zs_map_area.lock);
--
2.25.1
* [RFC PATCH 07/25] mm/zsmalloc: convert obj_to_location() and its users to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (5 preceding siblings ...)
2023-02-20 13:21 ` [RFC PATCH 06/25] mm/zsmalloc: convert __zs_{map,unmap}_object() " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 08/25] mm/zsmalloc: convert obj_malloc() " Hyeonggon Yoo
` (18 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert obj_to_location() to take zsdesc and also convert its users
to use zsdesc.
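As background, the encoded object value that obj_to_location() unpacks
carries tag bits in its lowest bits, the object index above them, and the
subpage PFN in the remaining high bits. The standalone sketch below
encodes and decodes such a value; the bit widths are illustrative
assumptions, not the kernel's configuration-dependent
OBJ_TAG_BITS/OBJ_INDEX_BITS.
#include <stdio.h>

#define OBJ_TAG_BITS    1
#define OBJ_INDEX_BITS  12
#define OBJ_INDEX_MASK  ((1UL << OBJ_INDEX_BITS) - 1)

static unsigned long location_to_obj(unsigned long pfn, unsigned int obj_idx)
{
    unsigned long obj;

    obj = pfn << OBJ_INDEX_BITS;            /* PFN in the high bits */
    obj |= obj_idx & OBJ_INDEX_MASK;        /* object index below it */
    obj <<= OBJ_TAG_BITS;                   /* leave room for tag bits */
    return obj;
}

static void obj_to_location(unsigned long obj, unsigned long *pfn,
                            unsigned int *obj_idx)
{
    obj >>= OBJ_TAG_BITS;
    *pfn = obj >> OBJ_INDEX_BITS;
    *obj_idx = obj & OBJ_INDEX_MASK;
}

int main(void)
{
    unsigned long obj = location_to_obj(0x1234, 37);
    unsigned long pfn;
    unsigned int idx;

    obj_to_location(obj, &pfn, &idx);
    printf("pfn=%#lx idx=%u\n", pfn, idx);  /* pfn=0x1234 idx=37 */
    return 0;
}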
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 80 +++++++++++++++++++++++++--------------------------
1 file changed, 40 insertions(+), 40 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 821d72ab888c..56cb93629c7f 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1007,16 +1007,16 @@ static __maybe_unused struct zsdesc *get_next_zsdesc(struct zsdesc *zsdesc)
}
/**
- * obj_to_location - get (<page>, <obj_idx>) from encoded object value
+ * obj_to_location - get (<zsdesc>, <obj_idx>) from encoded object value
* @obj: the encoded object value
- * @page: page object resides in zspage
+ * @zsdesc: zsdesc object resides in zspage
* @obj_idx: object index
*/
-static void obj_to_location(unsigned long obj, struct page **page,
+static void obj_to_location(unsigned long obj, struct zsdesc **zsdesc,
unsigned int *obj_idx)
{
obj >>= OBJ_TAG_BITS;
- *page = pfn_to_page(obj >> OBJ_INDEX_BITS);
+ *zsdesc = pfn_zsdesc(obj >> OBJ_INDEX_BITS);
*obj_idx = (obj & OBJ_INDEX_MASK);
}
@@ -1498,13 +1498,13 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
enum zs_mapmode mm)
{
struct zspage *zspage;
- struct page *page;
+ struct zsdesc *zsdesc;
unsigned long obj, off;
unsigned int obj_idx;
struct size_class *class;
struct mapping_area *area;
- struct page *pages[2];
+ struct zsdesc *zsdescs[2];
void *ret;
/*
@@ -1517,8 +1517,8 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
/* It guarantees it can get zspage from handle safely */
spin_lock(&pool->lock);
obj = handle_to_obj(handle);
- obj_to_location(obj, &page, &obj_idx);
- zspage = get_zspage(page);
+ obj_to_location(obj, &zsdesc, &obj_idx);
+ zspage = get_zspage(zsdesc_page(zsdesc));
#ifdef CONFIG_ZPOOL
/*
@@ -1561,18 +1561,18 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
area = this_cpu_ptr(&zs_map_area);
area->vm_mm = mm;
if (off + class->size <= PAGE_SIZE) {
- /* this object is contained entirely within a page */
- area->vm_addr = kmap_atomic(page);
+ /* this object is contained entirely within a zsdesc */
+ area->vm_addr = zsdesc_kmap_atomic(zsdesc);
ret = area->vm_addr + off;
goto out;
}
- /* this object spans two pages */
- pages[0] = page;
- pages[1] = get_next_page(page);
- BUG_ON(!pages[1]);
+ /* this object spans two zsdescs */
+ zsdescs[0] = zsdesc;
+ zsdescs[1] = get_next_zsdesc(zsdesc);
+ BUG_ON(!zsdescs[1]);
- ret = __zs_map_object(area, (struct zsdesc **)pages, off, class->size);
+ ret = __zs_map_object(area, zsdescs, off, class->size);
out:
if (likely(!ZsHugePage(zspage)))
ret += ZS_HANDLE_SIZE;
@@ -1584,7 +1584,7 @@ EXPORT_SYMBOL_GPL(zs_map_object);
void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
{
struct zspage *zspage;
- struct page *page;
+ struct zsdesc *zsdesc;
unsigned long obj, off;
unsigned int obj_idx;
@@ -1592,8 +1592,8 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
struct mapping_area *area;
obj = handle_to_obj(handle);
- obj_to_location(obj, &page, &obj_idx);
- zspage = get_zspage(page);
+ obj_to_location(obj, &zsdesc, &obj_idx);
+ zspage = get_zspage(zsdesc_page(zsdesc));
class = zspage_class(pool, zspage);
off = (class->size * obj_idx) & ~PAGE_MASK;
@@ -1601,13 +1601,13 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
if (off + class->size <= PAGE_SIZE)
kunmap_atomic(area->vm_addr);
else {
- struct page *pages[2];
+ struct zsdesc *zsdescs[2];
- pages[0] = page;
- pages[1] = get_next_page(page);
- BUG_ON(!pages[1]);
+ zsdescs[0] = zsdesc;
+ zsdescs[1] = get_next_zsdesc(zsdesc);
+ BUG_ON(!zsdescs[1]);
- __zs_unmap_object(area, (struct zsdesc **)pages, off, class->size);
+ __zs_unmap_object(area, zsdescs, off, class->size);
}
local_unlock(&zs_map_area.lock);
@@ -1750,16 +1750,16 @@ static void obj_free(int class_size, unsigned long obj, unsigned long *handle)
{
struct link_free *link;
struct zspage *zspage;
- struct page *f_page;
+ struct zsdesc *f_zsdesc;
unsigned long f_offset;
unsigned int f_objidx;
void *vaddr;
- obj_to_location(obj, &f_page, &f_objidx);
+ obj_to_location(obj, &f_zsdesc, &f_objidx);
f_offset = (class_size * f_objidx) & ~PAGE_MASK;
- zspage = get_zspage(f_page);
+ zspage = get_zspage(zsdesc_page(f_zsdesc));
- vaddr = kmap_atomic(f_page);
+ vaddr = zsdesc_kmap_atomic(f_zsdesc);
link = (struct link_free *)(vaddr + f_offset);
if (handle) {
@@ -1771,14 +1771,14 @@ static void obj_free(int class_size, unsigned long obj, unsigned long *handle)
if (likely(!ZsHugePage(zspage)))
link->deferred_handle = *handle;
else
- f_page->index = *handle;
+ f_zsdesc->handle = *handle;
#endif
} else {
/* Insert this object in containing zspage's freelist */
if (likely(!ZsHugePage(zspage)))
link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
else
- f_page->index = 0;
+ f_zsdesc->next = NULL;
set_freeobj(zspage, f_objidx);
}
@@ -1836,7 +1836,7 @@ EXPORT_SYMBOL_GPL(zs_free);
static void zs_object_copy(struct size_class *class, unsigned long dst,
unsigned long src)
{
- struct page *s_page, *d_page;
+ struct zsdesc *s_zsdesc, *d_zsdesc;
unsigned int s_objidx, d_objidx;
unsigned long s_off, d_off;
void *s_addr, *d_addr;
@@ -1845,8 +1845,8 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
s_size = d_size = class->size;
- obj_to_location(src, &s_page, &s_objidx);
- obj_to_location(dst, &d_page, &d_objidx);
+ obj_to_location(src, &s_zsdesc, &s_objidx);
+ obj_to_location(dst, &d_zsdesc, &d_objidx);
s_off = (class->size * s_objidx) & ~PAGE_MASK;
d_off = (class->size * d_objidx) & ~PAGE_MASK;
@@ -1857,8 +1857,8 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
if (d_off + class->size > PAGE_SIZE)
d_size = PAGE_SIZE - d_off;
- s_addr = kmap_atomic(s_page);
- d_addr = kmap_atomic(d_page);
+ s_addr = zsdesc_kmap_atomic(s_zsdesc);
+ d_addr = zsdesc_kmap_atomic(d_zsdesc);
while (1) {
size = min(s_size, d_size);
@@ -1883,17 +1883,17 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
if (s_off >= PAGE_SIZE) {
kunmap_atomic(d_addr);
kunmap_atomic(s_addr);
- s_page = get_next_page(s_page);
- s_addr = kmap_atomic(s_page);
- d_addr = kmap_atomic(d_page);
+ s_zsdesc = get_next_zsdesc(s_zsdesc);
+ s_addr = zsdesc_kmap_atomic(s_zsdesc);
+ d_addr = zsdesc_kmap_atomic(d_zsdesc);
s_size = class->size - written;
s_off = 0;
}
if (d_off >= PAGE_SIZE) {
kunmap_atomic(d_addr);
- d_page = get_next_page(d_page);
- d_addr = kmap_atomic(d_page);
+ d_zsdesc = get_next_zsdesc(d_zsdesc);
+ d_addr = zsdesc_kmap_atomic(d_zsdesc);
d_size = class->size - written;
d_off = 0;
}
@@ -2200,7 +2200,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
struct zs_pool *pool;
struct size_class *class;
struct zspage *zspage;
- struct page *dummy;
+ struct zsdesc *dummy;
void *s_addr, *d_addr, *addr;
unsigned int offset;
unsigned long handle;
--
2.25.1
* [RFC PATCH 08/25] mm/zsmalloc: convert obj_malloc() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (6 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 07/25] mm/zsmalloc: convert obj_to_location() and its users " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 09/25] mm/zsmalloc: convert create_page_chain() and its users " Hyeonggon Yoo
` (17 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert obj_malloc() to use zsdesc and replace helper functions with
new ones that take zsdesc.
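obj_malloc() finds the target subpage purely by arithmetic: the byte
offset of the object from the start of the zspage gives both the subpage
number (offset >> PAGE_SHIFT) and the offset inside that subpage
(offset & ~PAGE_MASK). A tiny userspace sketch of the calculation,
assuming 4 KiB pages and an example class size, follows.
#include <stdio.h>

#define TOY_PAGE_SHIFT  12
#define TOY_PAGE_SIZE   (1UL << TOY_PAGE_SHIFT)
#define TOY_PAGE_MASK   (~(TOY_PAGE_SIZE - 1))

int main(void)
{
    unsigned int class_size = 3264;     /* example object size of a class */
    unsigned int obj = 2;               /* third object in the zspage */
    unsigned long offset = (unsigned long)obj * class_size;

    unsigned long nr_zsdesc = offset >> TOY_PAGE_SHIFT;     /* which subpage */
    unsigned long m_offset  = offset & ~TOY_PAGE_MASK;      /* offset inside it */

    /* object 2: subpage 1, offset 2432 (2 * 3264 = 6528 = 4096 + 2432) */
    printf("object %u: subpage %lu, offset %lu\n", obj, nr_zsdesc, m_offset);
    return 0;
}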
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 56cb93629c7f..4386a24a246c 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1637,12 +1637,12 @@ EXPORT_SYMBOL_GPL(zs_huge_class_size);
static unsigned long obj_malloc(struct zs_pool *pool,
struct zspage *zspage, unsigned long handle)
{
- int i, nr_page, offset;
+ int i, nr_zsdesc, offset;
unsigned long obj;
struct link_free *link;
struct size_class *class;
- struct page *m_page;
+ struct zsdesc *m_zsdesc;
unsigned long m_offset;
void *vaddr;
@@ -1651,14 +1651,14 @@ static unsigned long obj_malloc(struct zs_pool *pool,
obj = get_freeobj(zspage);
offset = obj * class->size;
- nr_page = offset >> PAGE_SHIFT;
+ nr_zsdesc = offset >> PAGE_SHIFT;
m_offset = offset & ~PAGE_MASK;
- m_page = get_first_page(zspage);
+ m_zsdesc = get_first_zsdesc(zspage);
- for (i = 0; i < nr_page; i++)
- m_page = get_next_page(m_page);
+ for (i = 0; i < nr_zsdesc; i++)
+ m_zsdesc = get_next_zsdesc(m_zsdesc);
- vaddr = kmap_atomic(m_page);
+ vaddr = zsdesc_kmap_atomic(m_zsdesc);
link = (struct link_free *)vaddr + m_offset / sizeof(*link);
set_freeobj(zspage, link->next >> OBJ_TAG_BITS);
if (likely(!ZsHugePage(zspage)))
@@ -1671,7 +1671,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
kunmap_atomic(vaddr);
mod_zspage_inuse(zspage, 1);
- obj = location_to_obj(m_page, obj);
+ obj = location_to_obj(zsdesc_page(m_zsdesc), obj);
return obj;
}
--
2.25.1
* [RFC PATCH 09/25] mm/zsmalloc: convert create_page_chain() and its users to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (7 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 08/25] mm/zsmalloc: convert obj_malloc() " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 10/25] mm/zsmalloc: convert obj_tagged() and related helpers " Hyeonggon Yoo
` (16 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert create_page_chain() to use zsdesc, rename it to
create_zsdesc_chain(), and update comments accordingly. Also convert its
callers to use zsdesc.
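The chain built by create_zsdesc_chain() has a simple shape: every member
points back at the shared zspage, members are linked through ->next, and
only the head carries the "first" mark. A standalone sketch with a toy
descriptor type standing in for struct zsdesc is below; the names are
illustrative.
#include <stdbool.h>
#include <stdio.h>

struct toy_zspage { int dummy; };

struct toy_zsdesc {
    struct toy_zspage *zspage;      /* back-pointer shared by all members */
    struct toy_zsdesc *next;        /* NULL-terminated chain of subpages */
    bool first;                     /* stand-in for the PG_private mark */
};

static void create_chain(struct toy_zspage *zspage,
                         struct toy_zsdesc *descs[], int nr)
{
    struct toy_zsdesc *prev = NULL;
    int i;

    for (i = 0; i < nr; i++) {
        struct toy_zsdesc *d = descs[i];

        d->zspage = zspage;
        d->next = NULL;
        d->first = (i == 0);        /* only the head is marked first */
        if (prev)
            prev->next = d;         /* link the previous member to this one */
        prev = d;
    }
}

int main(void)
{
    struct toy_zspage zspage;
    struct toy_zsdesc a, b, c, *descs[] = { &a, &b, &c };
    struct toy_zsdesc *cur;

    create_chain(&zspage, descs, 3);
    for (cur = &a; cur; cur = cur->next)
        printf("desc %p first=%d\n", (void *)cur, cur->first);
    return 0;
}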
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 80 +++++++++++++++++++++++++--------------------------
1 file changed, 40 insertions(+), 40 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 4386a24a246c..c65bdce987e9 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1251,36 +1251,36 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
set_freeobj(zspage, 0);
}
-static void create_page_chain(struct size_class *class, struct zspage *zspage,
- struct page *pages[])
+static void create_zsdesc_chain(struct size_class *class, struct zspage *zspage,
+ struct zsdesc *zsdescs[])
{
int i;
- struct page *page;
- struct page *prev_page = NULL;
- int nr_pages = class->pages_per_zspage;
+ struct zsdesc *zsdesc;
+ struct zsdesc *prev_zsdesc = NULL;
+ int nr_zsdescs = class->pages_per_zspage;
/*
- * Allocate individual pages and link them together as:
- * 1. all pages are linked together using page->index
- * 2. each sub-page point to zspage using page->private
+ * Allocate individual zsdescs and link them together as:
+ * 1. all zsdescs are linked together using zsdesc->next
+ * 2. each sub-zsdesc point to zspage using zsdesc->zspage
*
- * we set PG_private to identify the first page (i.e. no other sub-page
+ * we set PG_private to identify the first zsdesc (i.e. no other sub-zsdesc
* has this flag set).
*/
- for (i = 0; i < nr_pages; i++) {
- page = pages[i];
- set_page_private(page, (unsigned long)zspage);
- page->index = 0;
+ for (i = 0; i < nr_zsdescs; i++) {
+ zsdesc = zsdescs[i];
+ zsdesc_set_zspage(zsdesc, zspage);
+ zsdesc->next = NULL;
if (i == 0) {
- zspage->first_zsdesc = page_zsdesc(page);
- SetPagePrivate(page);
+ zspage->first_zsdesc = zsdesc;
+ zsdesc_set_first(zsdesc);
if (unlikely(class->objs_per_zspage == 1 &&
class->pages_per_zspage == 1))
SetZsHugePage(zspage);
} else {
- prev_page->index = (unsigned long)page;
+ prev_zsdesc->next = zsdesc;
}
- prev_page = page;
+ prev_zsdesc = zsdesc;
}
}
@@ -1292,7 +1292,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
gfp_t gfp)
{
int i;
- struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE];
+ struct zsdesc *zsdescs[ZS_MAX_PAGES_PER_ZSPAGE];
struct zspage *zspage = cache_alloc_zspage(pool, gfp);
if (!zspage)
@@ -1302,23 +1302,21 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
migrate_lock_init(zspage);
for (i = 0; i < class->pages_per_zspage; i++) {
- struct page *page;
+ struct zsdesc *zsdesc;
- page = alloc_page(gfp);
- if (!page) {
+ zsdesc = alloc_zsdesc(gfp);
+ if (!zsdesc) {
while (--i >= 0) {
- dec_zone_page_state(pages[i], NR_ZSPAGES);
- __free_page(pages[i]);
+ free_zsdesc(zsdescs[i]);
}
cache_free_zspage(pool, zspage);
return NULL;
}
- inc_zone_page_state(page, NR_ZSPAGES);
- pages[i] = page;
+ zsdescs[i] = zsdesc;
}
- create_page_chain(class, zspage, pages);
+ create_zsdesc_chain(class, zspage, zsdescs);
init_zspage(class, zspage);
zspage->pool = pool;
@@ -2153,27 +2151,29 @@ static void dec_zspage_isolation(struct zspage *zspage)
zspage->isolated--;
}
-static void replace_sub_page(struct size_class *class, struct zspage *zspage,
- struct page *newpage, struct page *oldpage)
+static void replace_sub_zsdesc(struct size_class *class, struct zspage *zspage,
+ struct zsdesc *new_zsdesc, struct zsdesc *old_zsdesc)
{
- struct page *page;
- struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE] = {NULL, };
+ struct zsdesc *zsdesc;
+ struct zsdesc *zsdescs[ZS_MAX_PAGES_PER_ZSPAGE] = {NULL, };
+ unsigned int first_obj_offset;
int idx = 0;
- page = get_first_page(zspage);
+ zsdesc = get_first_zsdesc(zspage);
do {
- if (page == oldpage)
- pages[idx] = newpage;
+ if (zsdesc == old_zsdesc)
+ zsdescs[idx] = new_zsdesc;
else
- pages[idx] = page;
+ zsdescs[idx] = zsdesc;
idx++;
- } while ((page = get_next_page(page)) != NULL);
+ } while ((zsdesc = get_next_zsdesc(zsdesc)) != NULL);
- create_page_chain(class, zspage, pages);
- set_first_obj_offset(newpage, get_first_obj_offset(oldpage));
+ create_zsdesc_chain(class, zspage, zsdescs);
+ first_obj_offset = get_first_obj_offset(zsdesc_page(old_zsdesc));
+ set_first_obj_offset(zsdesc_page(new_zsdesc), first_obj_offset);
if (unlikely(ZsHugePage(zspage)))
- newpage->index = oldpage->index;
- __SetPageMovable(newpage, &zsmalloc_mops);
+ new_zsdesc->handle = old_zsdesc->handle;
+ zsdesc_set_movable(new_zsdesc);
}
static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
@@ -2254,7 +2254,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
}
kunmap_atomic(s_addr);
- replace_sub_page(class, zspage, newpage, page);
+ replace_sub_zsdesc(class, zspage, page_zsdesc(newpage), page_zsdesc(page));
/*
* Since we complete the data copy and set up new zspage structure,
* it's okay to release the pool's lock.
--
2.25.1
* [RFC PATCH 10/25] mm/zsmalloc: convert obj_tagged() and related helpers to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (8 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 09/25] mm/zsmalloc: convert create_page_chain() and its users " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 11/25] mm/zsmalloc: convert init_zspage() " Hyeonggon Yoo
` (15 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert obj_tagged() and related helpers to take zsdesc. Also make
their callers cast (struct page *) to (struct zsdesc *) when calling them.
The callers will be converted gradually, as there are many.
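obj_tagged() tests a tag bit in the stored handle word and, when the bit
is set, strips the flag bits to recover the handle. A minimal userspace
sketch of that test follows; the tag values and the mask are illustrative
assumptions rather than the kernel's definitions.
#include <stdbool.h>
#include <stdio.h>

#define TOY_ALLOCATED_TAG        1UL
#define TOY_DEFERRED_HANDLE_TAG  2UL
#define TOY_TAG_MASK             (TOY_ALLOCATED_TAG | TOY_DEFERRED_HANDLE_TAG)

/* Return true and the stripped handle if 'word' carries the given tag. */
static bool toy_obj_tagged(unsigned long word, unsigned long *phandle,
                           unsigned long tag)
{
    if (!(word & tag))
        return false;                   /* this slot does not carry 'tag' */

    *phandle = word & ~TOY_TAG_MASK;    /* strip the flag bits */
    return true;
}

int main(void)
{
    unsigned long handle = 0xabcd00;
    unsigned long word = handle | TOY_ALLOCATED_TAG;
    unsigned long out = 0;
    bool allocated = toy_obj_tagged(word, &out, TOY_ALLOCATED_TAG);

    printf("allocated: %d handle=%#lx\n", allocated, out);
    printf("deferred:  %d\n", toy_obj_tagged(word, &out, TOY_DEFERRED_HANDLE_TAG));
    return 0;
}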
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 46 +++++++++++++++++++++++-----------------------
1 file changed, 23 insertions(+), 23 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c65bdce987e9..e1262c0a5ad4 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1047,15 +1047,15 @@ static unsigned long handle_to_obj(unsigned long handle)
return *(unsigned long *)handle;
}
-static bool obj_tagged(struct page *page, void *obj, unsigned long *phandle,
+static bool obj_tagged(struct zsdesc *zsdesc, void *obj, unsigned long *phandle,
int tag)
{
unsigned long handle;
- struct zspage *zspage = get_zspage(page);
+ struct zspage *zspage = get_zspage(zsdesc_page(zsdesc));
if (unlikely(ZsHugePage(zspage))) {
- VM_BUG_ON_PAGE(!is_first_page(page), page);
- handle = page->index;
+ VM_BUG_ON_PAGE(!is_first_zsdesc(zsdesc), zsdesc_page(zsdesc));
+ handle = zsdesc->handle;
} else
handle = *(unsigned long *)obj;
@@ -1067,16 +1067,16 @@ static bool obj_tagged(struct page *page, void *obj, unsigned long *phandle,
return true;
}
-static inline bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
+static inline bool obj_allocated(struct zsdesc *zsdesc, void *obj, unsigned long *phandle)
{
- return obj_tagged(page, obj, phandle, OBJ_ALLOCATED_TAG);
+ return obj_tagged(zsdesc, obj, phandle, OBJ_ALLOCATED_TAG);
}
#ifdef CONFIG_ZPOOL
-static bool obj_stores_deferred_handle(struct page *page, void *obj,
+static bool obj_stores_deferred_handle(struct zsdesc *zsdesc, void *obj,
unsigned long *phandle)
{
- return obj_tagged(page, obj, phandle, OBJ_DEFERRED_HANDLE_TAG);
+ return obj_tagged(zsdesc, obj, phandle, OBJ_DEFERRED_HANDLE_TAG);
}
#endif
@@ -1112,7 +1112,7 @@ static int trylock_zspage(struct zspage *zspage)
#ifdef CONFIG_ZPOOL
static unsigned long find_deferred_handle_obj(struct size_class *class,
- struct page *page, int *obj_idx);
+ struct zsdesc *zsdesc, int *obj_idx);
/*
* Free all the deferred handles whose objects are freed in zs_free.
@@ -1125,7 +1125,7 @@ static void free_handles(struct zs_pool *pool, struct size_class *class,
unsigned long handle;
while (1) {
- handle = find_deferred_handle_obj(class, page, &obj_idx);
+ handle = find_deferred_handle_obj(class, page_zsdesc(page), &obj_idx);
if (!handle) {
page = get_next_page(page);
if (!page)
@@ -1906,18 +1906,18 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
* return handle.
*/
static unsigned long find_tagged_obj(struct size_class *class,
- struct page *page, int *obj_idx, int tag)
+ struct zsdesc *zsdesc, int *obj_idx, int tag)
{
unsigned int offset;
int index = *obj_idx;
unsigned long handle = 0;
- void *addr = kmap_atomic(page);
+ void *addr = zsdesc_kmap_atomic(zsdesc);
- offset = get_first_obj_offset(page);
+ offset = get_first_obj_offset(zsdesc_page(zsdesc));
offset += class->size * index;
while (offset < PAGE_SIZE) {
- if (obj_tagged(page, addr + offset, &handle, tag))
+ if (obj_tagged(zsdesc, addr + offset, &handle, tag))
break;
offset += class->size;
@@ -1936,9 +1936,9 @@ static unsigned long find_tagged_obj(struct size_class *class,
* return handle.
*/
static unsigned long find_alloced_obj(struct size_class *class,
- struct page *page, int *obj_idx)
+ struct zsdesc *zsdesc, int *obj_idx)
{
- return find_tagged_obj(class, page, obj_idx, OBJ_ALLOCATED_TAG);
+ return find_tagged_obj(class, zsdesc, obj_idx, OBJ_ALLOCATED_TAG);
}
#ifdef CONFIG_ZPOOL
@@ -1947,9 +1947,9 @@ static unsigned long find_alloced_obj(struct size_class *class,
* and return handle.
*/
static unsigned long find_deferred_handle_obj(struct size_class *class,
- struct page *page, int *obj_idx)
+ struct zsdesc *zsdesc, int *obj_idx)
{
- return find_tagged_obj(class, page, obj_idx, OBJ_DEFERRED_HANDLE_TAG);
+ return find_tagged_obj(class, zsdesc, obj_idx, OBJ_DEFERRED_HANDLE_TAG);
}
#endif
@@ -1975,7 +1975,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
int ret = 0;
while (1) {
- handle = find_alloced_obj(class, s_page, &obj_idx);
+ handle = find_alloced_obj(class, page_zsdesc(s_page), &obj_idx);
if (!handle) {
s_page = get_next_page(s_page);
if (!s_page)
@@ -2243,7 +2243,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
for (addr = s_addr + offset; addr < s_addr + PAGE_SIZE;
addr += class->size) {
- if (obj_allocated(page, addr, &handle)) {
+ if (obj_allocated(page_zsdesc(page), addr, &handle)) {
old_obj = handle_to_obj(handle);
obj_to_location(old_obj, &dummy, &obj_idx);
@@ -2727,14 +2727,14 @@ static void restore_freelist(struct zs_pool *pool, struct size_class *class,
void *obj_addr = vaddr + off;
/* skip allocated object */
- if (obj_allocated(page, obj_addr, &handle)) {
+ if (obj_allocated(page_zsdesc(page), obj_addr, &handle)) {
obj_idx++;
off += class->size;
continue;
}
/* free deferred handle from reclaim attempt */
- if (obj_stores_deferred_handle(page, obj_addr, &handle))
+ if (obj_stores_deferred_handle(page_zsdesc(page), obj_addr, &handle))
cache_free_handle(pool, handle);
if (prev_free)
@@ -2830,7 +2830,7 @@ static int zs_reclaim_page(struct zs_pool *pool, unsigned int retries)
obj_idx = 0;
page = get_first_page(zspage);
while (1) {
- handle = find_alloced_obj(class, page, &obj_idx);
+ handle = find_alloced_obj(class, page_zsdesc(page), &obj_idx);
if (!handle) {
page = get_next_page(page);
if (!page)
--
2.25.1
* [RFC PATCH 11/25] mm/zsmalloc: convert init_zspage() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (9 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 10/25] mm/zsmalloc: convert obj_tagged() and related helpers " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 12/25] mm/zsmalloc: convert obj_to_page() and zs_free() " Hyeonggon Yoo
` (14 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert init_zspage() to use zsdesc and update its comment accordingly.
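For reference, init_zspage() threads all objects of a fresh zspage into a
free list: each free slot stores the number of the next free object,
shifted to leave room for tag bits, and the last slot stores an end
marker. The sketch below models that layout with a single flat buffer
instead of chained subpages; the sizes and the single-buffer
simplification are assumptions made for illustration.
#include <stdio.h>

#define TOY_TAG_BITS    1
#define TOY_CLASS_SIZE  32
#define TOY_NR_OBJS     8
#define TOY_END_MARKER  (-1UL << TOY_TAG_BITS)

/* each object slot is TOY_CLASS_SIZE bytes; its first word is the link */
static unsigned long slots[TOY_NR_OBJS][TOY_CLASS_SIZE / sizeof(unsigned long)];

static void toy_init_freelist(void)
{
    unsigned int freeobj = 1;
    unsigned int i;

    for (i = 0; i < TOY_NR_OBJS - 1; i++)
        slots[i][0] = (unsigned long)freeobj++ << TOY_TAG_BITS;
    slots[TOY_NR_OBJS - 1][0] = TOY_END_MARKER;     /* last object ends the list */
}

int main(void)
{
    unsigned int obj = 0;

    toy_init_freelist();
    /* walk the list the same way allocation would consume it */
    while (slots[obj][0] != TOY_END_MARKER) {
        printf("free obj %u -> next %lu\n", obj, slots[obj][0] >> TOY_TAG_BITS);
        obj = slots[obj][0] >> TOY_TAG_BITS;
    }
    printf("free obj %u is the last one\n", obj);
    return 0;
}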
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e1262c0a5ad4..cfcd63c50c36 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1206,16 +1206,16 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
{
unsigned int freeobj = 1;
unsigned long off = 0;
- struct page *page = get_first_page(zspage);
+ struct zsdesc *zsdesc = get_first_zsdesc(zspage);
- while (page) {
- struct page *next_page;
+ while (zsdesc) {
+ struct zsdesc *next_zsdesc;
struct link_free *link;
void *vaddr;
- set_first_obj_offset(page, off);
+ set_first_obj_offset(zsdesc_page(zsdesc), off);
- vaddr = kmap_atomic(page);
+ vaddr = zsdesc_kmap_atomic(zsdesc);
link = (struct link_free *)vaddr + off / sizeof(*link);
while ((off += class->size) < PAGE_SIZE) {
@@ -1225,11 +1225,11 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
/*
* We now come to the last (full or partial) object on this
- * page, which must point to the first object on the next
- * page (if present)
+ * zsdesc, which must point to the first object on the next
+ * zsdesc (if present)
*/
- next_page = get_next_page(page);
- if (next_page) {
+ next_zsdesc = get_next_zsdesc(zsdesc);
+ if (next_zsdesc) {
link->next = freeobj++ << OBJ_TAG_BITS;
} else {
/*
@@ -1239,7 +1239,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
link->next = -1UL << OBJ_TAG_BITS;
}
kunmap_atomic(vaddr);
- page = next_page;
+ zsdesc = next_zsdesc;
off %= PAGE_SIZE;
}
--
2.25.1
* [RFC PATCH 12/25] mm/zsmalloc: convert obj_to_page() and zs_free() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (10 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 11/25] mm/zsmalloc: convert init_zspage() " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 13/25] mm/zsmalloc: convert reset_page() to reset_zsdesc() Hyeonggon Yoo
` (13 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert obj_to_page() to obj_to_zsdesc() and also convert its user
zs_free() to use zsdesc.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index cfcd63c50c36..bbb65fb8749a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1020,10 +1020,10 @@ static void obj_to_location(unsigned long obj, struct zsdesc **zsdesc,
*obj_idx = (obj & OBJ_INDEX_MASK);
}
-static void obj_to_page(unsigned long obj, struct page **page)
+static void obj_to_zsdesc(unsigned long obj, struct zsdesc **zsdesc)
{
obj >>= OBJ_TAG_BITS;
- *page = pfn_to_page(obj >> OBJ_INDEX_BITS);
+ *zsdesc = pfn_zsdesc(obj >> OBJ_INDEX_BITS);
}
/**
@@ -1787,7 +1787,7 @@ static void obj_free(int class_size, unsigned long obj, unsigned long *handle)
void zs_free(struct zs_pool *pool, unsigned long handle)
{
struct zspage *zspage;
- struct page *f_page;
+ struct zsdesc *f_zsdesc;
unsigned long obj;
struct size_class *class;
enum fullness_group fullness;
@@ -1801,8 +1801,8 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
*/
spin_lock(&pool->lock);
obj = handle_to_obj(handle);
- obj_to_page(obj, &f_page);
- zspage = get_zspage(f_page);
+ obj_to_zsdesc(obj, &f_zsdesc);
+ zspage = get_zspage(zsdesc_page(f_zsdesc));
class = zspage_class(pool, zspage);
class_stat_dec(class, OBJ_USED, 1);
--
2.25.1
* [RFC PATCH 13/25] mm/zsmalloc: convert reset_page() to reset_zsdesc()
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (11 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 12/25] mm/zsmalloc: convert obj_to_page() and zs_free() " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 14/25] mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc Hyeonggon Yoo
` (12 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
reset_page() is called prior to freeing the base pages of a zspage.
As it is closely associated with details of struct page, convert it to
reset_zsdesc() and move it closer to the newly added zsdesc helper functions.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index bbb65fb8749a..5a3948cbe06f 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -450,6 +450,17 @@ static inline void zsdesc_set_movable(struct zsdesc *zsdesc)
__SetPageMovable(page, &zsmalloc_mops);
}
+static void reset_zsdesc(struct zsdesc *zsdesc)
+{
+ struct page *page = zsdesc_page(zsdesc);
+
+ __ClearPageMovable(page);
+ ClearPagePrivate(page);
+ set_page_private(page, 0);
+ page_mapcount_reset(page);
+ page->index = 0;
+}
+
/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
static void SetZsHugePage(struct zspage *zspage)
{
@@ -1080,15 +1091,6 @@ static bool obj_stores_deferred_handle(struct zsdesc *zsdesc, void *obj,
}
#endif
-static void reset_page(struct page *page)
-{
- __ClearPageMovable(page);
- ClearPagePrivate(page);
- set_page_private(page, 0);
- page_mapcount_reset(page);
- page->index = 0;
-}
-
static int trylock_zspage(struct zspage *zspage)
{
struct zsdesc *cursor, *fail;
@@ -1164,7 +1166,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
do {
VM_BUG_ON_PAGE(!PageLocked(page), page);
next = get_next_page(page);
- reset_page(page);
+ reset_zsdesc(page_zsdesc(page));
unlock_page(page);
dec_zone_page_state(page, NR_ZSPAGES);
put_page(page);
@@ -2269,7 +2271,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
inc_zone_page_state(newpage, NR_ZSPAGES);
}
- reset_page(page);
+ reset_zsdesc(page_zsdesc(page));
put_page(page);
return MIGRATEPAGE_SUCCESS;
--
2.25.1
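reset_zsdesc() is reached through page_zsdesc() here, the page-to-zsdesc conversion helper added earlier in the series. While struct zsdesc still overlays struct page, the conversion pair is presumably just a pair of casts; a minimal sketch under that assumption (the real helpers may add type checks):

        static inline struct zsdesc *page_zsdesc(struct page *page)
        {
                return (struct zsdesc *)page;
        }

        static inline struct page *zsdesc_page(struct zsdesc *zsdesc)
        {
                return (struct page *)zsdesc;
        }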
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 14/25] mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (12 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 13/25] mm/zsmalloc: convert reset_page() to reset_zsdesc() Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 15/25] mm/zsmalloc: convert __free_zspage() " Hyeonggon Yoo
` (11 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert the functions for movable operations of zsmalloc to use zsdesc.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 40 ++++++++++++++++++++++------------------
1 file changed, 22 insertions(+), 18 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5a3948cbe06f..ced7f144b884 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -2181,14 +2181,15 @@ static void replace_sub_zsdesc(struct size_class *class, struct zspage *zspage,
static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
{
struct zspage *zspage;
+ struct zsdesc *zsdesc = page_zsdesc(page);
/*
* Page is locked so zspage couldn't be destroyed. For detail, look at
* lock_zspage in free_zspage.
*/
- VM_BUG_ON_PAGE(PageIsolated(page), page);
+ VM_BUG_ON_PAGE(zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
- zspage = get_zspage(page);
+ zspage = get_zspage(zsdesc_page(zsdesc));
migrate_write_lock(zspage);
inc_zspage_isolation(zspage);
migrate_write_unlock(zspage);
@@ -2203,6 +2204,8 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
struct size_class *class;
struct zspage *zspage;
struct zsdesc *dummy;
+ struct zsdesc *new_zsdesc = page_zsdesc(newpage);
+ struct zsdesc *zsdesc = page_zsdesc(page);
void *s_addr, *d_addr, *addr;
unsigned int offset;
unsigned long handle;
@@ -2217,10 +2220,10 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
if (mode == MIGRATE_SYNC_NO_COPY)
return -EINVAL;
- VM_BUG_ON_PAGE(!PageIsolated(page), page);
+ VM_BUG_ON_PAGE(!zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
/* The page is locked, so this pointer must remain valid */
- zspage = get_zspage(page);
+ zspage = get_zspage(zsdesc_page(zsdesc));
pool = zspage->pool;
/*
@@ -2233,30 +2236,30 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
/* the migrate_write_lock protects zpage access via zs_map_object */
migrate_write_lock(zspage);
- offset = get_first_obj_offset(page);
- s_addr = kmap_atomic(page);
+ offset = get_first_obj_offset(zsdesc_page(zsdesc));
+ s_addr = zsdesc_kmap_atomic(zsdesc);
/*
* Here, any user cannot access all objects in the zspage so let's move.
*/
- d_addr = kmap_atomic(newpage);
+ d_addr = zsdesc_kmap_atomic(new_zsdesc);
memcpy(d_addr, s_addr, PAGE_SIZE);
kunmap_atomic(d_addr);
for (addr = s_addr + offset; addr < s_addr + PAGE_SIZE;
addr += class->size) {
- if (obj_allocated(page_zsdesc(page), addr, &handle)) {
+ if (obj_allocated(zsdesc, addr, &handle)) {
old_obj = handle_to_obj(handle);
obj_to_location(old_obj, &dummy, &obj_idx);
- new_obj = (unsigned long)location_to_obj(newpage,
+ new_obj = (unsigned long)location_to_obj(zsdesc_page(new_zsdesc),
obj_idx);
record_obj(handle, new_obj);
}
}
kunmap_atomic(s_addr);
- replace_sub_zsdesc(class, zspage, page_zsdesc(newpage), page_zsdesc(page));
+ replace_sub_zsdesc(class, zspage, new_zsdesc, zsdesc);
/*
* Since we complete the data copy and set up new zspage structure,
* it's okay to release the pool's lock.
@@ -2265,14 +2268,14 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
dec_zspage_isolation(zspage);
migrate_write_unlock(zspage);
- get_page(newpage);
- if (page_zone(newpage) != page_zone(page)) {
- dec_zone_page_state(page, NR_ZSPAGES);
- inc_zone_page_state(newpage, NR_ZSPAGES);
+ zsdesc_get(new_zsdesc);
+ if (zsdesc_zone(new_zsdesc) != zsdesc_zone(zsdesc)) {
+ zsdesc_dec_zone_page_state(zsdesc);
+ zsdesc_inc_zone_page_state(new_zsdesc);
}
- reset_zsdesc(page_zsdesc(page));
- put_page(page);
+ reset_zsdesc(zsdesc);
+ zsdesc_put(zsdesc);
return MIGRATEPAGE_SUCCESS;
}
@@ -2280,10 +2283,11 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
static void zs_page_putback(struct page *page)
{
struct zspage *zspage;
+ struct zsdesc *zsdesc = page_zsdesc(page);
- VM_BUG_ON_PAGE(!PageIsolated(page), page);
+ VM_BUG_ON_PAGE(!zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
- zspage = get_zspage(page);
+ zspage = get_zspage(zsdesc_page(zsdesc));
migrate_write_lock(zspage);
dec_zspage_isolation(zspage);
migrate_write_unlock(zspage);
--
2.25.1
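Several wrappers used in this patch (zsdesc_get(), zsdesc_put(), zsdesc_is_isolated(), zsdesc_zone() and zsdesc_{inc,dec}_zone_page_state()) come from earlier patches in the series and are not shown here. Minimal sketches, assuming each simply forwards to the corresponding struct page interface:

        static inline void zsdesc_get(struct zsdesc *zsdesc)
        {
                get_page(zsdesc_page(zsdesc));
        }

        static inline void zsdesc_put(struct zsdesc *zsdesc)
        {
                put_page(zsdesc_page(zsdesc));
        }

        static inline bool zsdesc_is_isolated(struct zsdesc *zsdesc)
        {
                return PageIsolated(zsdesc_page(zsdesc));
        }

        static inline struct zone *zsdesc_zone(struct zsdesc *zsdesc)
        {
                return page_zone(zsdesc_page(zsdesc));
        }

        static inline void zsdesc_inc_zone_page_state(struct zsdesc *zsdesc)
        {
                inc_zone_page_state(zsdesc_page(zsdesc), NR_ZSPAGES);
        }

        static inline void zsdesc_dec_zone_page_state(struct zsdesc *zsdesc)
        {
                dec_zone_page_state(zsdesc_page(zsdesc), NR_ZSPAGES);
        }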
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 15/25] mm/zsmalloc: convert __free_zspage() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (13 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 14/25] mm/zsmalloc: convert zs_page_{isolate,migrate,putback} to use zsdesc Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 16/25] mm/zsmalloc: convert unlock_zspage() " Hyeonggon Yoo
` (10 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert __free_zspage() to use zsdesc.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index ced7f144b884..7ec616ec5cf5 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1148,7 +1148,7 @@ static inline void free_handles(struct zs_pool *pool, struct size_class *class,
static void __free_zspage(struct zs_pool *pool, struct size_class *class,
struct zspage *zspage)
{
- struct page *page, *next;
+ struct zsdesc *zsdesc, *next;
enum fullness_group fg;
unsigned int class_idx;
@@ -1162,16 +1162,16 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
/* Free all deferred handles from zs_free */
free_handles(pool, class, zspage);
- next = page = get_first_page(zspage);
+ next = zsdesc = get_first_zsdesc(zspage);
do {
- VM_BUG_ON_PAGE(!PageLocked(page), page);
- next = get_next_page(page);
- reset_zsdesc(page_zsdesc(page));
- unlock_page(page);
- dec_zone_page_state(page, NR_ZSPAGES);
- put_page(page);
- page = next;
- } while (page != NULL);
+ VM_BUG_ON_PAGE(!zsdesc_is_locked(zsdesc), zsdesc_page(zsdesc));
+ next = get_next_zsdesc(zsdesc);
+ reset_zsdesc(zsdesc);
+ unlock_zsdesc(zsdesc);
+ zsdesc_dec_zone_page_state(zsdesc);
+ zsdesc_put(zsdesc);
+ zsdesc = next;
+ } while (zsdesc != NULL);
cache_free_zspage(pool, zspage);
--
2.25.1
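zsdesc_is_locked() and unlock_zsdesc() are again helpers from earlier in the series; minimal sketches, assuming they forward to the page lock API:

        static inline bool zsdesc_is_locked(struct zsdesc *zsdesc)
        {
                return PageLocked(zsdesc_page(zsdesc));
        }

        static inline void unlock_zsdesc(struct zsdesc *zsdesc)
        {
                unlock_page(zsdesc_page(zsdesc));
        }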
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 16/25] mm/zsmalloc: convert unlock_zspage() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (14 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 15/25] mm/zsmalloc: convert __free_zspage() " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 17/25] mm/zsmalloc: convert location_to_obj() " Hyeonggon Yoo
` (9 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert unlock_zspage() to use zsdesc.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7ec616ec5cf5..affb2755d9d7 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -2102,11 +2102,11 @@ static void lock_zspage(struct zspage *zspage)
*/
static void unlock_zspage(struct zspage *zspage)
{
- struct page *page = get_first_page(zspage);
+ struct zsdesc *zsdesc = get_first_zsdesc(zspage);
do {
- unlock_page(page);
- } while ((page = get_next_page(page)) != NULL);
+ unlock_zsdesc(zsdesc);
+ } while ((zsdesc = get_next_zsdesc(zsdesc)) != NULL);
}
#endif /* CONFIG_ZPOOL */
--
2.25.1
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 17/25] mm/zsmalloc: convert location_to_obj() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (15 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 16/25] mm/zsmalloc: convert unlock_zspage() " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 18/25] mm/zsmalloc: convert free_handles() " Hyeonggon Yoo
` (8 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
As all users of location_to_obj() now pass a zsdesc, convert
location_to_obj() itself to take a zsdesc.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index affb2755d9d7..dbc404045487 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1038,15 +1038,15 @@ static void obj_to_zsdesc(unsigned long obj, struct zsdesc **zsdesc)
}
/**
- * location_to_obj - get obj value encoded from (<page>, <obj_idx>)
- * @page: page object resides in zspage
+ * location_to_obj - get obj value encoded from (<zsdesc>, <obj_idx>)
+ * @zsdesc: zsdesc object resides in zspage
* @obj_idx: object index
*/
-static unsigned long location_to_obj(struct page *page, unsigned int obj_idx)
+static unsigned long location_to_obj(struct zsdesc *zsdesc, unsigned int obj_idx)
{
unsigned long obj;
- obj = page_to_pfn(page) << OBJ_INDEX_BITS;
+ obj = zsdesc_pfn(zsdesc) << OBJ_INDEX_BITS;
obj |= obj_idx & OBJ_INDEX_MASK;
obj <<= OBJ_TAG_BITS;
@@ -1671,7 +1671,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
kunmap_atomic(vaddr);
mod_zspage_inuse(zspage, 1);
- obj = location_to_obj(zsdesc_page(m_zsdesc), obj);
+ obj = location_to_obj(m_zsdesc, obj);
return obj;
}
@@ -2252,7 +2252,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
old_obj = handle_to_obj(handle);
obj_to_location(old_obj, &dummy, &obj_idx);
- new_obj = (unsigned long)location_to_obj(zsdesc_page(new_zsdesc),
+ new_obj = (unsigned long)location_to_obj(new_zsdesc,
obj_idx);
record_obj(handle, new_obj);
}
--
2.25.1
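For reference, the object handle encoding that location_to_obj() produces, and that obj_to_location()/obj_to_zsdesc() decode, packs the PFN and the object index above the tag bits. The round trip, restated from the code above:

        /*
         * Encoding (location_to_obj):
         *      obj = ((zsdesc_pfn(zsdesc) << OBJ_INDEX_BITS) |
         *             (obj_idx & OBJ_INDEX_MASK)) << OBJ_TAG_BITS;
         *
         * Decoding (obj_to_location / obj_to_zsdesc):
         *      obj >>= OBJ_TAG_BITS;
         *      obj_idx = obj & OBJ_INDEX_MASK;
         *      zsdesc  = pfn_zsdesc(obj >> OBJ_INDEX_BITS);
         */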
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 18/25] mm/zsmalloc: convert free_handles() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (16 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 17/25] mm/zsmalloc: convert location_to_obj() " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 19/25] mm/zsmalloc: convert zs_compact_control and its users " Hyeonggon Yoo
` (7 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert free_handles() to use zsdesc.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index dbc404045487..b58821b3494b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1123,14 +1123,14 @@ static void free_handles(struct zs_pool *pool, struct size_class *class,
struct zspage *zspage)
{
int obj_idx = 0;
- struct page *page = get_first_page(zspage);
+ struct zsdesc *zsdesc = get_first_zsdesc(zspage);
unsigned long handle;
while (1) {
- handle = find_deferred_handle_obj(class, page_zsdesc(page), &obj_idx);
+ handle = find_deferred_handle_obj(class, zsdesc, &obj_idx);
if (!handle) {
- page = get_next_page(page);
- if (!page)
+ zsdesc = get_next_zsdesc(zsdesc);
+ if (!zsdesc)
break;
obj_idx = 0;
continue;
--
2.25.1
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 19/25] mm/zsmalloc: convert zs_compact_control and its users to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (17 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 18/25] mm/zsmalloc: convert free_handles() " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 20/25] mm/zsmalloc: convert get_zspage() to take zsdesc Hyeonggon Yoo
` (6 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert struct zs_compact_control to use zsdesc, update comments
accordingly, and also convert its users.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b58821b3494b..488dc570d660 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1956,12 +1956,12 @@ static unsigned long find_deferred_handle_obj(struct size_class *class,
#endif
struct zs_compact_control {
- /* Source spage for migration which could be a subpage of zspage */
- struct page *s_page;
- /* Destination page for migration which should be a first page
+ /* Source zsdesc for migration which could be a sub-zsdesc of zspage */
+ struct zsdesc *s_zsdesc;
+ /* Destination zsdesc for migration which should be a first zsdesc
* of zspage. */
- struct page *d_page;
- /* Starting object index within @s_page which used for live object
+ struct zsdesc *d_zsdesc;
+ /* Starting object index within @s_zsdesc which used for live object
* in the subpage. */
int obj_idx;
};
@@ -1971,29 +1971,29 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
{
unsigned long used_obj, free_obj;
unsigned long handle;
- struct page *s_page = cc->s_page;
- struct page *d_page = cc->d_page;
+ struct zsdesc *s_zsdesc = cc->s_zsdesc;
+ struct zsdesc *d_zsdesc = cc->d_zsdesc;
int obj_idx = cc->obj_idx;
int ret = 0;
while (1) {
- handle = find_alloced_obj(class, page_zsdesc(s_page), &obj_idx);
+ handle = find_alloced_obj(class, s_zsdesc, &obj_idx);
if (!handle) {
- s_page = get_next_page(s_page);
- if (!s_page)
+ s_zsdesc = get_next_zsdesc(s_zsdesc);
+ if (!s_zsdesc)
break;
obj_idx = 0;
continue;
}
/* Stop if there is no more space */
- if (zspage_full(class, get_zspage(d_page))) {
+ if (zspage_full(class, get_zspage(zsdesc_page(d_zsdesc)))) {
ret = -ENOMEM;
break;
}
used_obj = handle_to_obj(handle);
- free_obj = obj_malloc(pool, get_zspage(d_page), handle);
+ free_obj = obj_malloc(pool, get_zspage(zsdesc_page(d_zsdesc)), handle);
zs_object_copy(class, free_obj, used_obj);
obj_idx++;
record_obj(handle, free_obj);
@@ -2001,7 +2001,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
}
/* Remember last position in this iteration */
- cc->s_page = s_page;
+ cc->s_zsdesc = s_zsdesc;
cc->obj_idx = obj_idx;
return ret;
@@ -2410,12 +2410,12 @@ static unsigned long __zs_compact(struct zs_pool *pool,
break;
cc.obj_idx = 0;
- cc.s_page = get_first_page(src_zspage);
+ cc.s_zsdesc = get_first_zsdesc(src_zspage);
while ((dst_zspage = isolate_zspage(class, false))) {
migrate_write_lock_nested(dst_zspage);
- cc.d_page = get_first_page(dst_zspage);
+ cc.d_zsdesc = get_first_zsdesc(dst_zspage);
/*
* If there is no more space in dst_page, resched
* and see if anyone had allocated another zspage.
--
2.25.1
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 20/25] mm/zsmalloc: convert get_zspage() to take zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (18 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 19/25] mm/zsmalloc: convert zs_compact_control and its users " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 21/25] mm/zsmalloc: convert SetZsPageMovable() to use zsdesc Hyeonggon Yoo
` (5 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Now that all users except get_next_page() (which will be removed in a
later patch) pass a zsdesc, convert get_zspage() to take a zsdesc instead
of a page.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 488dc570d660..5af0fee6e3ed 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -989,9 +989,9 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
return newfg;
}
-static struct zspage *get_zspage(struct page *page)
+static struct zspage *get_zspage(struct zsdesc *zsdesc)
{
- struct zspage *zspage = (struct zspage *)page_private(page);
+ struct zspage *zspage = zsdesc->zspage;
BUG_ON(zspage->magic != ZSPAGE_MAGIC);
return zspage;
@@ -999,7 +999,7 @@ static struct zspage *get_zspage(struct page *page)
static __maybe_unused struct page *get_next_page(struct page *page)
{
- struct zspage *zspage = get_zspage(page);
+ struct zspage *zspage = get_zspage(page_zsdesc(page));
if (unlikely(ZsHugePage(zspage)))
return NULL;
@@ -1009,7 +1009,7 @@ static __maybe_unused struct page *get_next_page(struct page *page)
static __maybe_unused struct zsdesc *get_next_zsdesc(struct zsdesc *zsdesc)
{
- struct zspage *zspage = get_zspage(zsdesc_page(zsdesc));
+ struct zspage *zspage = get_zspage(zsdesc);
if (unlikely(ZsHugePage(zspage)))
return NULL;
@@ -1062,7 +1062,7 @@ static bool obj_tagged(struct zsdesc *zsdesc, void *obj, unsigned long *phandle,
int tag)
{
unsigned long handle;
- struct zspage *zspage = get_zspage(zsdesc_page(zsdesc));
+ struct zspage *zspage = get_zspage(zsdesc);
if (unlikely(ZsHugePage(zspage))) {
VM_BUG_ON_PAGE(!is_first_zsdesc(zsdesc), zsdesc_page(zsdesc));
@@ -1518,7 +1518,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
spin_lock(&pool->lock);
obj = handle_to_obj(handle);
obj_to_location(obj, &zsdesc, &obj_idx);
- zspage = get_zspage(zsdesc_page(zsdesc));
+ zspage = get_zspage(zsdesc);
#ifdef CONFIG_ZPOOL
/*
@@ -1593,7 +1593,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
obj = handle_to_obj(handle);
obj_to_location(obj, &zsdesc, &obj_idx);
- zspage = get_zspage(zsdesc_page(zsdesc));
+ zspage = get_zspage(zsdesc);
class = zspage_class(pool, zspage);
off = (class->size * obj_idx) & ~PAGE_MASK;
@@ -1757,7 +1757,7 @@ static void obj_free(int class_size, unsigned long obj, unsigned long *handle)
obj_to_location(obj, &f_zsdesc, &f_objidx);
f_offset = (class_size * f_objidx) & ~PAGE_MASK;
- zspage = get_zspage(zsdesc_page(f_zsdesc));
+ zspage = get_zspage(f_zsdesc);
vaddr = zsdesc_kmap_atomic(f_zsdesc);
link = (struct link_free *)(vaddr + f_offset);
@@ -1804,7 +1804,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
spin_lock(&pool->lock);
obj = handle_to_obj(handle);
obj_to_zsdesc(obj, &f_zsdesc);
- zspage = get_zspage(zsdesc_page(f_zsdesc));
+ zspage = get_zspage(f_zsdesc);
class = zspage_class(pool, zspage);
class_stat_dec(class, OBJ_USED, 1);
@@ -1987,13 +1987,13 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
}
/* Stop if there is no more space */
- if (zspage_full(class, get_zspage(zsdesc_page(d_zsdesc)))) {
+ if (zspage_full(class, get_zspage(d_zsdesc))) {
ret = -ENOMEM;
break;
}
used_obj = handle_to_obj(handle);
- free_obj = obj_malloc(pool, get_zspage(zsdesc_page(d_zsdesc)), handle);
+ free_obj = obj_malloc(pool, get_zspage(d_zsdesc), handle);
zs_object_copy(class, free_obj, used_obj);
obj_idx++;
record_obj(handle, free_obj);
@@ -2189,7 +2189,7 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
*/
VM_BUG_ON_PAGE(zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
- zspage = get_zspage(zsdesc_page(zsdesc));
+ zspage = get_zspage(zsdesc);
migrate_write_lock(zspage);
inc_zspage_isolation(zspage);
migrate_write_unlock(zspage);
@@ -2223,7 +2223,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
VM_BUG_ON_PAGE(!zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
/* The page is locked, so this pointer must remain valid */
- zspage = get_zspage(zsdesc_page(zsdesc));
+ zspage = get_zspage(zsdesc);
pool = zspage->pool;
/*
@@ -2287,7 +2287,7 @@ static void zs_page_putback(struct page *page)
VM_BUG_ON_PAGE(!zsdesc_is_isolated(zsdesc), zsdesc_page(zsdesc));
- zspage = get_zspage(zsdesc_page(zsdesc));
+ zspage = get_zspage(zsdesc);
migrate_write_lock(zspage);
dec_zspage_isolation(zspage);
migrate_write_unlock(zspage);
--
2.25.1
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 21/25] mm/zsmalloc: convert SetZsPageMovable() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (19 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 20/25] mm/zsmalloc: convert get_zspage() to take zsdesc Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 22/25] mm/zsmalloc: convert restore_freelist() " Hyeonggon Yoo
` (4 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert SetZsPageMovable() to use zsdesc.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5af0fee6e3ed..e9202bb14704 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -2357,13 +2357,13 @@ static void init_deferred_free(struct zs_pool *pool)
static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage)
{
- struct page *page = get_first_page(zspage);
+ struct zsdesc *zsdesc = get_first_zsdesc(zspage);
do {
- WARN_ON(!trylock_page(page));
- __SetPageMovable(page, &zsmalloc_mops);
- unlock_page(page);
- } while ((page = get_next_page(page)) != NULL);
+ WARN_ON(!trylock_zsdesc(zsdesc));
+ zsdesc_set_movable(zsdesc);
+ unlock_zsdesc(zsdesc);
+ } while ((zsdesc = get_next_zsdesc(zsdesc)) != NULL);
}
#else
static inline void zs_flush_migration(struct zs_pool *pool) { }
--
2.25.1
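trylock_zsdesc() is the remaining wrapper used here that is not shown in the excerpts above; a minimal sketch, assuming it forwards to trylock_page():

        static inline bool trylock_zsdesc(struct zsdesc *zsdesc)
        {
                return trylock_page(zsdesc_page(zsdesc));
        }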
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 22/25] mm/zsmalloc: convert restore_freelist() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (20 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 21/25] mm/zsmalloc: convert SetZsPageMovable() to use zsdesc Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 23/25] mm/zsmalloc: convert zs_reclaim_page() " Hyeonggon Yoo
` (3 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert restore_freelist() to use zsdesc.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 44 ++++++++++++++++++++++----------------------
1 file changed, 22 insertions(+), 22 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e9202bb14704..b6ca93012c9a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -2718,29 +2718,29 @@ static void restore_freelist(struct zs_pool *pool, struct size_class *class,
{
unsigned int obj_idx = 0;
unsigned long handle, off = 0; /* off is within-page offset */
- struct page *page = get_first_page(zspage);
+ struct zsdesc *zsdesc = get_first_zsdesc(zspage);
struct link_free *prev_free = NULL;
- void *prev_page_vaddr = NULL;
+ void *prev_zsdesc_vaddr = NULL;
/* in case no free object found */
set_freeobj(zspage, (unsigned int)(-1UL));
- while (page) {
- void *vaddr = kmap_atomic(page);
- struct page *next_page;
+ while (zsdesc) {
+ void *vaddr = zsdesc_kmap_atomic(zsdesc);
+ struct zsdesc *next_zsdesc;
while (off < PAGE_SIZE) {
void *obj_addr = vaddr + off;
/* skip allocated object */
- if (obj_allocated(page_zsdesc(page), obj_addr, &handle)) {
+ if (obj_allocated(zsdesc, obj_addr, &handle)) {
obj_idx++;
off += class->size;
continue;
}
/* free deferred handle from reclaim attempt */
- if (obj_stores_deferred_handle(page_zsdesc(page), obj_addr, &handle))
+ if (obj_stores_deferred_handle(zsdesc, obj_addr, &handle))
cache_free_handle(pool, handle);
if (prev_free)
@@ -2749,10 +2749,10 @@ static void restore_freelist(struct zs_pool *pool, struct size_class *class,
set_freeobj(zspage, obj_idx);
prev_free = (struct link_free *)vaddr + off / sizeof(*prev_free);
- /* if last free object in a previous page, need to unmap */
- if (prev_page_vaddr) {
- kunmap_atomic(prev_page_vaddr);
- prev_page_vaddr = NULL;
+ /* if last free object in a previous zsdesc, need to unmap */
+ if (prev_zsdesc_vaddr) {
+ kunmap_atomic(prev_zsdesc_vaddr);
+ prev_zsdesc_vaddr = NULL;
}
obj_idx++;
@@ -2760,19 +2760,19 @@ static void restore_freelist(struct zs_pool *pool, struct size_class *class,
}
/*
- * Handle the last (full or partial) object on this page.
+ * Handle the last (full or partial) object on this zsdesc.
*/
- next_page = get_next_page(page);
- if (next_page) {
- if (!prev_free || prev_page_vaddr) {
+ next_zsdesc = get_next_zsdesc(zsdesc);
+ if (next_zsdesc) {
+ if (!prev_free || prev_zsdesc_vaddr) {
/*
* There is no free object in this page, so we can safely
* unmap it.
*/
kunmap_atomic(vaddr);
} else {
- /* update prev_page_vaddr since prev_free is on this page */
- prev_page_vaddr = vaddr;
+ /* update prev_zsdesc_vaddr since prev_free is on this zsdesc */
+ prev_zsdesc_vaddr = vaddr;
}
} else { /* this is the last page */
if (prev_free) {
@@ -2783,16 +2783,16 @@ static void restore_freelist(struct zs_pool *pool, struct size_class *class,
prev_free->next = -1UL << OBJ_TAG_BITS;
}
- /* unmap previous page (if not done yet) */
- if (prev_page_vaddr) {
- kunmap_atomic(prev_page_vaddr);
- prev_page_vaddr = NULL;
+ /* unmap previous zsdesc (if not done yet) */
+ if (prev_zsdesc_vaddr) {
+ kunmap_atomic(prev_zsdesc_vaddr);
+ prev_zsdesc_vaddr = NULL;
}
kunmap_atomic(vaddr);
}
- page = next_page;
+ zsdesc = next_zsdesc;
off %= PAGE_SIZE;
}
}
--
2.25.1
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 23/25] mm/zsmalloc: convert zs_reclaim_page() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (21 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 22/25] mm/zsmalloc: convert restore_freelist() " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 24/25] mm/zsmalloc: remove now unused helper functions Hyeonggon Yoo
` (2 subsequent siblings)
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Convert zs_reclaim_page() to use zsdesc and update its comments
accordingly.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b6ca93012c9a..7153688f5bca 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -2802,7 +2802,7 @@ static int zs_reclaim_page(struct zs_pool *pool, unsigned int retries)
int i, obj_idx, ret = 0;
unsigned long handle;
struct zspage *zspage;
- struct page *page;
+ struct zsdesc *zsdesc;
enum fullness_group fullness;
/* Lock LRU and fullness list */
@@ -2830,16 +2830,16 @@ static int zs_reclaim_page(struct zs_pool *pool, unsigned int retries)
spin_unlock(&pool->lock);
cond_resched();
- /* Lock backing pages into place */
+ /* Lock backing zsdescs into place */
lock_zspage(zspage);
obj_idx = 0;
- page = get_first_page(zspage);
+ zsdesc = get_first_zsdesc(zspage);
while (1) {
- handle = find_alloced_obj(class, page_zsdesc(page), &obj_idx);
+ handle = find_alloced_obj(class, zsdesc, &obj_idx);
if (!handle) {
- page = get_next_page(page);
- if (!page)
+ zsdesc = get_next_zsdesc(zsdesc);
+ if (!zsdesc)
break;
obj_idx = 0;
continue;
@@ -2870,7 +2870,7 @@ static int zs_reclaim_page(struct zs_pool *pool, unsigned int retries)
if (!get_zspage_inuse(zspage)) {
/*
* Fullness went stale as zs_free() won't touch it
- * while the page is removed from the pool. Fix it
+ * while the zsdesc is removed from the pool. Fix it
* up for the check in __free_zspage().
*/
zspage->fullness = ZS_EMPTY;
--
2.25.1
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 24/25] mm/zsmalloc: remove now unused helper functions
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (22 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 23/25] mm/zsmalloc: convert zs_reclaim_page() " Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-20 13:22 ` [RFC PATCH 25/25] mm/zsmalloc: convert {get,set}_first_obj_offset() to use zsdesc Hyeonggon Yoo
2023-02-24 0:01 ` [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Minchan Kim
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
All users of is_first_page(), get_first_page() and get_next_page()
are now converted to the new helper functions that take zsdesc.
Remove the now-unused helper functions.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 27 ++-------------------------
1 file changed, 2 insertions(+), 25 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7153688f5bca..59fe8d469aed 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -659,11 +659,6 @@ static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = {
.lock = INIT_LOCAL_LOCK(lock),
};
-static __maybe_unused int is_first_page(struct page *page)
-{
- return PagePrivate(page);
-}
-
static __maybe_unused int is_first_zsdesc(struct zsdesc *zsdesc)
{
return PagePrivate(zsdesc_page(zsdesc));
@@ -681,15 +676,7 @@ static inline void mod_zspage_inuse(struct zspage *zspage, int val)
zspage->inuse += val;
}
-static __maybe_unused inline struct page *get_first_page(struct zspage *zspage)
-{
- struct page *first_page = zsdesc_page(zspage->first_zsdesc);
-
- VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
- return first_page;
-}
-
-static __maybe_unused struct zsdesc *get_first_zsdesc(struct zspage *zspage)
+static struct zsdesc *get_first_zsdesc(struct zspage *zspage)
{
struct zsdesc *first_zsdesc = zspage->first_zsdesc;
@@ -997,17 +984,7 @@ static struct zspage *get_zspage(struct zsdesc *zsdesc)
return zspage;
}
-static __maybe_unused struct page *get_next_page(struct page *page)
-{
- struct zspage *zspage = get_zspage(page_zsdesc(page));
-
- if (unlikely(ZsHugePage(zspage)))
- return NULL;
-
- return (struct page *)page->index;
-}
-
-static __maybe_unused struct zsdesc *get_next_zsdesc(struct zsdesc *zsdesc)
+static struct zsdesc *get_next_zsdesc(struct zsdesc *zsdesc)
{
struct zspage *zspage = get_zspage(zsdesc);
--
2.25.1
^ permalink raw reply related [flat|nested] 31+ messages in thread* [RFC PATCH 25/25] mm/zsmalloc: convert {get,set}_first_obj_offset() to use zsdesc
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (23 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 24/25] mm/zsmalloc: remove now unused helper functions Hyeonggon Yoo
@ 2023-02-20 13:22 ` Hyeonggon Yoo
2023-02-24 0:01 ` [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Minchan Kim
25 siblings, 0 replies; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-20 13:22 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox
Cc: Andrew Morton, linux-mm, Hyeonggon Yoo
Now that all callers of {get,set}_first_obj_offset() have been converted
to use zsdesc, convert these helpers to take a zsdesc as well.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/zsmalloc.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 59fe8d469aed..9ac72114e589 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -684,14 +684,14 @@ static struct zsdesc *get_first_zsdesc(struct zspage *zspage)
return first_zsdesc;
}
-static inline unsigned int get_first_obj_offset(struct page *page)
+static inline unsigned int get_first_obj_offset(struct zsdesc *zsdesc)
{
- return page->page_type;
+ return zsdesc->first_obj_offset;
}
-static inline void set_first_obj_offset(struct page *page, unsigned int offset)
+static inline void set_first_obj_offset(struct zsdesc *zsdesc, unsigned int offset)
{
- page->page_type = offset;
+ zsdesc->first_obj_offset = offset;
}
static inline unsigned int get_freeobj(struct zspage *zspage)
@@ -1192,7 +1192,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
struct link_free *link;
void *vaddr;
- set_first_obj_offset(zsdesc_page(zsdesc), off);
+ set_first_obj_offset(zsdesc, off);
vaddr = zsdesc_kmap_atomic(zsdesc);
link = (struct link_free *)vaddr + off / sizeof(*link);
@@ -1892,7 +1892,7 @@ static unsigned long find_tagged_obj(struct size_class *class,
unsigned long handle = 0;
void *addr = zsdesc_kmap_atomic(zsdesc);
- offset = get_first_obj_offset(zsdesc_page(zsdesc));
+ offset = get_first_obj_offset(zsdesc);
offset += class->size * index;
while (offset < PAGE_SIZE) {
@@ -2148,8 +2148,8 @@ static void replace_sub_zsdesc(struct size_class *class, struct zspage *zspage,
} while ((zsdesc = get_next_zsdesc(zsdesc)) != NULL);
create_zsdesc_chain(class, zspage, zsdescs);
- first_obj_offset = get_first_obj_offset(zsdesc_page(old_zsdesc));
- set_first_obj_offset(zsdesc_page(new_zsdesc), first_obj_offset);
+ first_obj_offset = get_first_obj_offset(old_zsdesc);
+ set_first_obj_offset(new_zsdesc, first_obj_offset);
if (unlikely(ZsHugePage(zspage)))
new_zsdesc->handle = old_zsdesc->handle;
zsdesc_set_movable(new_zsdesc);
@@ -2213,7 +2213,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
/* the migrate_write_lock protects zpage access via zs_map_object */
migrate_write_lock(zspage);
- offset = get_first_obj_offset(zsdesc_page(zsdesc));
+ offset = get_first_obj_offset(zsdesc);
s_addr = zsdesc_kmap_atomic(zsdesc);
/*
--
2.25.1
^ permalink raw reply related [flat|nested] 31+ messages in thread* Re: [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page
2023-02-20 13:21 [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Hyeonggon Yoo
` (24 preceding siblings ...)
2023-02-20 13:22 ` [RFC PATCH 25/25] mm/zsmalloc: convert {get,set}_first_obj_offset() to use zsdesc Hyeonggon Yoo
@ 2023-02-24 0:01 ` Minchan Kim
2023-02-28 0:32 ` Hyeonggon Yoo
25 siblings, 1 reply; 31+ messages in thread
From: Minchan Kim @ 2023-02-24 0:01 UTC (permalink / raw)
To: Hyeonggon Yoo; +Cc: Sergey Senozhatsky, Matthew Wilcox, Andrew Morton, linux-mm
Hi Hyeonggon
On Mon, Feb 20, 2023 at 01:21:53PM +0000, Hyeonggon Yoo wrote:
> [Maybe not the best time to send patch series, but just wanted to
> get some early feedback from zsmalloc maintainers]
>
> The purpose of this series is to define own memory descriptor for zsmalloc,
> instead of re-using various fields of struct page. This is a part of the
> effort to reduce the size of struct page to unsigned long and enable
> dynamic allocation of memory descriptors.
>
> While [1] outlines this ultimate objective, the current use of struct page
> is highly interdependent, making it challenging to separately allocate
> memory descriptors.
>
> Therefore, this series introduces new descriptor for zsmalloc, called
> zsdesc. It overlays struct page for now, but will eventually be allocated
> independently in the future. And apart from dynamic allocation of descriptors,
> this is a nice cleanup.
>
> I have no strong opinion about its name. I was thinking about between
> zsmem and zsdesc, and wanted to be consistent with struct ptdesc.
> (which is AFAIK work in progress)
I wanted to have a chance to take a look at the zsmalloc folio stuff but
couldn't set aside some time. :( Thanks for the good work, Hyeonggon!
I will take a look once I am available.
Just FYI, Sergey has been doing some changes in zsmalloc:
https://lore.kernel.org/linux-mm/20230223030451.543162-1-senozhatsky@chromium.org/
I guess this patch would conflict with it, so it may need a rebase
once they are merged. Anyway, regardless of that, I will review
this patch as soon as I finish the urgent stuff.
Thanks.
^ permalink raw reply [flat|nested] 31+ messages in thread* Re: [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page
2023-02-24 0:01 ` [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page Minchan Kim
@ 2023-02-28 0:32 ` Hyeonggon Yoo
2023-02-28 2:02 ` Sergey Senozhatsky
0 siblings, 1 reply; 31+ messages in thread
From: Hyeonggon Yoo @ 2023-02-28 0:32 UTC (permalink / raw)
To: Minchan Kim; +Cc: Sergey Senozhatsky, Matthew Wilcox, Andrew Morton, linux-mm
On Thu, Feb 23, 2023 at 04:01:54PM -0800, Minchan Kim wrote:
> Hi Hyeonggon
Hi Minchan.
> On Mon, Feb 20, 2023 at 01:21:53PM +0000, Hyeonggon Yoo wrote:
> > [Maybe not the best time to send patch series, but just wanted to
> > get some early feedback from zsmalloc maintainers]
> >
> > The purpose of this series is to define own memory descriptor for zsmalloc,
> > instead of re-using various fields of struct page. This is a part of the
> > effort to reduce the size of struct page to unsigned long and enable
> > dynamic allocation of memory descriptors.
> >
> > While [1] outlines this ultimate objective, the current use of struct page
> > is highly interdependent, making it challenging to separately allocate
> > memory descriptors.
> >
> > Therefore, this series introduces new descriptor for zsmalloc, called
> > zsdesc. It overlays struct page for now, but will eventually be allocated
> > independently in the future. And apart from dynamic allocation of descriptors,
> > this is a nice cleanup.
> >
> > I have no strong opinion about its name. I was thinking about between
> > zsmem and zsdesc, and wanted to be consistent with struct ptdesc.
> > (which is AFAIK work in progress)
>
> I wanted to have a chance to take a look at the zsmalloc folio stuff but
> couldn't set aside some time. :( Thanks for the good work, Hyeonggon!
My pleasure :)
> I will take a look once I am available.
> Just FYI, Sergey has been doing some changes in zsmalloc:
> https://lore.kernel.org/linux-mm/20230223030451.543162-1-senozhatsky@chromium.org/
> I guess this patch would conflict with it, so it may need a rebase
> once they are merged.
Sure. I'll rebase as they are already in mm-unstable.
> Anyway, regardless of that, I will review
> this patch as soon as I finish the urgent stuff.
No problem, thank you so much!
>
> Thanks.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [RFC PATCH 00/25] mm/zsmalloc: Split zsdesc from struct page
2023-02-28 0:32 ` Hyeonggon Yoo
@ 2023-02-28 2:02 ` Sergey Senozhatsky
0 siblings, 0 replies; 31+ messages in thread
From: Sergey Senozhatsky @ 2023-02-28 2:02 UTC (permalink / raw)
To: Hyeonggon Yoo
Cc: Minchan Kim, Sergey Senozhatsky, Matthew Wilcox, Andrew Morton,
linux-mm
On (23/02/28 00:32), Hyeonggon Yoo wrote:
> > I wanted to have a chance to take a look at the zsmalloc folio stuff but
> > couldn't set aside some time. :( Thanks for the good work, Hyeonggon!
>
> My pleasure :)
>
> > I will take a look once I am available.
> > Just FYI, Sergey has been doing some changes in zsmalloc:
> > https://lore.kernel.org/linux-mm/20230223030451.543162-1-senozhatsky@chromium.org/
> > I guess this patch would conflict with it, so it may need a rebase
> > once they are merged.
>
> Sure. I'll rebase as they are already in mm-unstable.
The series that is currently in mm-unstable will be rebased; we will
probably land a new version by the end of this week.
> > Anyway, regardless of that, I will review
> > this patch as soon as I finish the urgent stuff.
>
> No problem, thank you so much!
Thank you.
^ permalink raw reply [flat|nested] 31+ messages in thread