Linux IOMMU Development
* [PATCH v2 00/33] Separate struct slab from struct page
@ 2021-12-01 18:14 Vlastimil Babka
  2021-12-01 18:15 ` [PATCH v2 29/33] iommu: Use put_pages_list Vlastimil Babka
                   ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Vlastimil Babka @ 2021-12-01 18:14 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, H. Peter Anvin, Will Deacon,
	Sergey Senozhatsky, x86, iommu, kasan-dev, Ingo Molnar,
	Vlastimil Babka, Nitin Gupta, Vladimir Davydov, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Dmitry Vyukov, Andrey Konovalov, patches, Julia Lawall,
	Minchan Kim, Luis Chamberlain, Johannes Weiner, Andrew Morton,
	David Woodhouse

Folks from non-slab subsystems are Cc'd only to patches affecting them, and
this cover letter.

Series also available in git, based on 5.16-rc3:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2

The plan: as with my SLUB PREEMPT_RT series in 5.15, I would prefer to
eventually merge this via a git pull request, as it's also not a small series.
I will thus reply to this mail asking to include my branch in linux-next.

As stated in the v1/RFC cover letter, I wouldn't mind then continuing to
maintain a git tree for all slab patches in general. It was apparently already
done that way before, by Pekka:
https://lore.kernel.org/linux-mm/alpine.DEB.2.00.1107221108190.2996@tiger/

Changes from v1/RFC:
https://lore.kernel.org/all/20211116001628.24216-1-vbabka@suse.cz/
- Added virt_to_folio() and folio_address() in the new Patch 1.
- Addressed feedback from Andrey Konovalov and Matthew Wilcox (Thanks!)
- Added Tested-by: Marco Elver for the KFENCE parts (Thanks!)

Previous version from Matthew Wilcox:
https://lore.kernel.org/all/20211004134650.4031813-1-willy@infradead.org/

LWN coverage of the above:
https://lwn.net/Articles/871982/

This is originally an offshoot of the folio work by Matthew. One of the more
complex parts of the struct page definition is the part used by the slab
allocators. It would be good for the MM in general if struct slab were its own
data type, and it also helps to prevent tail pages from slipping in anywhere.
As Matthew requested in his proof-of-concept series, I have taken over the
development of this series, so it's a mix of patches from him (often modified
by me) and my own.

One big difference is the use of coccinelle to perform the relatively trivial
parts of the conversions automatically and all at once, instead of as a larger
number of smaller, incrementally reviewable steps. Thanks to Julia Lawall and
Luis Chamberlain for all their help!

Another notable difference (based also on review feedback) is that I don't
represent large kmalloc allocations with a struct slab, as they are not really
slabs but use the page allocator directly. When going from an object address to
a struct slab, the code first tests the folio slab flag, and only if it's set
does it convert to a struct slab. This makes the struct slab type stronger.

Finally, although Matthew's version didn't use any of the folio work, the
initial folio support has been merged in the meantime, so my version builds on
top of it where appropriate. This eliminates some of the redundant
compound_head() calls performed e.g. when testing the slab flag.

To sum up, after this series the struct page fields used by slab allocators
are moved from struct page to a new struct slab that uses the same physical
storage. Which fields are available is further distinguished by the selected
slab allocator implementation. The advantages include:

- Similar to folios, if the slab is of order > 0, the struct slab is always
  guaranteed to correspond to the head page. Additionally it's guaranteed to be
  an actual slab page, not a large kmalloc allocation. This removes uncertainty
  and potential for bugs.
- It's not possible to accidentally use fields of the slab implementation that's
  not configured.
- Other subsystems cannot use slab's fields in struct page anymore (some
  existing non-slab usages had to be adjusted in this series), so slab
  implementations have more freedom in rearranging them in the struct slab.

Matthew Wilcox (Oracle) (16):
  mm: Split slab into its own type
  mm: Add account_slab() and unaccount_slab()
  mm: Convert virt_to_cache() to use struct slab
  mm: Convert __ksize() to struct slab
  mm: Use struct slab in kmem_obj_info()
  mm: Convert check_heap_object() to use struct slab
  mm/slub: Convert detached_freelist to use a struct slab
  mm/slub: Convert kfree() to use a struct slab
  mm/slub: Convert print_page_info() to print_slab_info()
  mm/slub: Convert pfmemalloc_match() to take a struct slab
  mm/slob: Convert SLOB to use struct slab
  mm/kasan: Convert to struct folio and struct slab
  zsmalloc: Stop using slab fields in struct page
  bootmem: Use page->index instead of page->freelist
  iommu: Use put_pages_list
  mm: Remove slab from struct page

Vlastimil Babka (17):
  mm: add virt_to_folio() and folio_address()
  mm/slab: Dissolve slab_map_pages() in its caller
  mm/slub: Make object_err() static
  mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
  mm/slub: Convert alloc_slab_page() to return a struct slab
  mm/slub: Convert __free_slab() to use struct slab
  mm/slub: Convert most struct page to struct slab by spatch
  mm/slub: Finish struct page to struct slab conversion
  mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
  mm/slab: Convert most struct page to struct slab by spatch
  mm/slab: Finish struct page to struct slab conversion
  mm: Convert struct page to struct slab in functions used by other
    subsystems
  mm/memcg: Convert slab objcgs from struct page to struct slab
  mm/kfence: Convert kfence_guarded_alloc() to struct slab
  mm/sl*b: Differentiate struct slab fields by sl*b implementations
  mm/slub: Simplify struct slab slabs field definition
  mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only
    when enabled

 arch/x86/mm/init_64.c          |    2 +-
 drivers/iommu/amd/io_pgtable.c |   59 +-
 drivers/iommu/dma-iommu.c      |   11 +-
 drivers/iommu/intel/iommu.c    |   89 +--
 include/linux/bootmem_info.h   |    2 +-
 include/linux/iommu.h          |    3 +-
 include/linux/kasan.h          |    9 +-
 include/linux/memcontrol.h     |   48 --
 include/linux/mm.h             |   12 +
 include/linux/mm_types.h       |   38 +-
 include/linux/page-flags.h     |   37 -
 include/linux/slab.h           |    8 -
 include/linux/slab_def.h       |   16 +-
 include/linux/slub_def.h       |   29 +-
 mm/bootmem_info.c              |    7 +-
 mm/kasan/common.c              |   27 +-
 mm/kasan/generic.c             |    8 +-
 mm/kasan/kasan.h               |    1 +
 mm/kasan/quarantine.c          |    2 +-
 mm/kasan/report.c              |   13 +-
 mm/kasan/report_tags.c         |   10 +-
 mm/kfence/core.c               |   17 +-
 mm/kfence/kfence_test.c        |    6 +-
 mm/memcontrol.c                |   43 +-
 mm/slab.c                      |  455 ++++++-------
 mm/slab.h                      |  322 ++++++++-
 mm/slab_common.c               |    8 +-
 mm/slob.c                      |   46 +-
 mm/slub.c                      | 1164 ++++++++++++++++----------------
 mm/sparse.c                    |    2 +-
 mm/usercopy.c                  |   13 +-
 mm/zsmalloc.c                  |   18 +-
 32 files changed, 1317 insertions(+), 1208 deletions(-)

-- 
2.33.1

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v2 29/33] iommu: Use put_pages_list
  2021-12-01 18:14 [PATCH v2 00/33] Separate struct slab from struct page Vlastimil Babka
@ 2021-12-01 18:15 ` Vlastimil Babka
  2021-12-01 19:07   ` Matthew Wilcox
  2021-12-02 12:25 ` [PATCH v2 00/33] Separate struct slab from struct page Vlastimil Babka
  2021-12-14 12:57 ` Vlastimil Babka
  2 siblings, 1 reply; 25+ messages in thread
From: Vlastimil Babka @ 2021-12-01 18:15 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: David Woodhouse, patches, linux-mm, iommu, Andrew Morton,
	Will Deacon, Vlastimil Babka

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

page->freelist is for the use of slab.  We already have the ability
to free a list of pages in the core mm, but it requires the use of a
list_head and for the pages to be chained together through page->lru.
Switch the iommu code over to using put_pages_list().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Will Deacon <will@kernel.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: <iommu@lists.linux-foundation.org>
---
 drivers/iommu/amd/io_pgtable.c | 59 +++++++++-------------
 drivers/iommu/dma-iommu.c      | 11 +----
 drivers/iommu/intel/iommu.c    | 89 ++++++++++++----------------------
 include/linux/iommu.h          |  3 +-
 4 files changed, 57 insertions(+), 105 deletions(-)

diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index 182c93a43efd..08ea6a02cda9 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -74,27 +74,15 @@ static u64 *first_pte_l7(u64 *pte, unsigned long *page_size,
  *
  ****************************************************************************/
 
-static void free_page_list(struct page *freelist)
-{
-	while (freelist != NULL) {
-		unsigned long p = (unsigned long)page_address(freelist);
-
-		freelist = freelist->freelist;
-		free_page(p);
-	}
-}
-
-static struct page *free_pt_page(unsigned long pt, struct page *freelist)
+static void free_pt_page(unsigned long pt, struct list_head *list)
 {
 	struct page *p = virt_to_page((void *)pt);
 
-	p->freelist = freelist;
-
-	return p;
+	list_add(&p->lru, list);
 }
 
 #define DEFINE_FREE_PT_FN(LVL, FN)						\
-static struct page *free_pt_##LVL (unsigned long __pt, struct page *freelist)	\
+static void free_pt_##LVL (unsigned long __pt, struct list_head *list)		\
 {										\
 	unsigned long p;							\
 	u64 *pt;								\
@@ -113,10 +101,10 @@ static struct page *free_pt_##LVL (unsigned long __pt, struct page *freelist)	\
 			continue;						\
 										\
 		p = (unsigned long)IOMMU_PTE_PAGE(pt[i]);			\
-		freelist = FN(p, freelist);					\
+		FN(p, list);							\
 	}									\
 										\
-	return free_pt_page((unsigned long)pt, freelist);			\
+	free_pt_page((unsigned long)pt, list);					\
 }
 
 DEFINE_FREE_PT_FN(l2, free_pt_page)
@@ -125,36 +113,33 @@ DEFINE_FREE_PT_FN(l4, free_pt_l3)
 DEFINE_FREE_PT_FN(l5, free_pt_l4)
 DEFINE_FREE_PT_FN(l6, free_pt_l5)
 
-static struct page *free_sub_pt(unsigned long root, int mode,
-				struct page *freelist)
+static void free_sub_pt(unsigned long root, int mode, struct list_head *list)
 {
 	switch (mode) {
 	case PAGE_MODE_NONE:
 	case PAGE_MODE_7_LEVEL:
 		break;
 	case PAGE_MODE_1_LEVEL:
-		freelist = free_pt_page(root, freelist);
+		free_pt_page(root, list);
 		break;
 	case PAGE_MODE_2_LEVEL:
-		freelist = free_pt_l2(root, freelist);
+		free_pt_l2(root, list);
 		break;
 	case PAGE_MODE_3_LEVEL:
-		freelist = free_pt_l3(root, freelist);
+		free_pt_l3(root, list);
 		break;
 	case PAGE_MODE_4_LEVEL:
-		freelist = free_pt_l4(root, freelist);
+		free_pt_l4(root, list);
 		break;
 	case PAGE_MODE_5_LEVEL:
-		freelist = free_pt_l5(root, freelist);
+		free_pt_l5(root, list);
 		break;
 	case PAGE_MODE_6_LEVEL:
-		freelist = free_pt_l6(root, freelist);
+		free_pt_l6(root, list);
 		break;
 	default:
 		BUG();
 	}
-
-	return freelist;
 }
 
 void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
@@ -362,7 +347,7 @@ static u64 *fetch_pte(struct amd_io_pgtable *pgtable,
 	return pte;
 }
 
-static struct page *free_clear_pte(u64 *pte, u64 pteval, struct page *freelist)
+static void free_clear_pte(u64 *pte, u64 pteval, struct list_head *list)
 {
 	unsigned long pt;
 	int mode;
@@ -373,12 +358,12 @@ static struct page *free_clear_pte(u64 *pte, u64 pteval, struct page *freelist)
 	}
 
 	if (!IOMMU_PTE_PRESENT(pteval))
-		return freelist;
+		return;
 
 	pt   = (unsigned long)IOMMU_PTE_PAGE(pteval);
 	mode = IOMMU_PTE_MODE(pteval);
 
-	return free_sub_pt(pt, mode, freelist);
+	free_sub_pt(pt, mode, list);
 }
 
 /*
@@ -392,7 +377,7 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova,
 			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct protection_domain *dom = io_pgtable_ops_to_domain(ops);
-	struct page *freelist = NULL;
+	LIST_HEAD(freelist);
 	bool updated = false;
 	u64 __pte, *pte;
 	int ret, i, count;
@@ -412,9 +397,9 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova,
 		goto out;
 
 	for (i = 0; i < count; ++i)
-		freelist = free_clear_pte(&pte[i], pte[i], freelist);
+		free_clear_pte(&pte[i], pte[i], &freelist);
 
-	if (freelist != NULL)
+	if (!list_empty(&freelist))
 		updated = true;
 
 	if (count > 1) {
@@ -449,7 +434,7 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova,
 	}
 
 	/* Everything flushed out, free pages now */
-	free_page_list(freelist);
+	put_pages_list(&freelist);
 
 	return ret;
 }
@@ -511,7 +496,7 @@ static void v1_free_pgtable(struct io_pgtable *iop)
 {
 	struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
 	struct protection_domain *dom;
-	struct page *freelist = NULL;
+	LIST_HEAD(freelist);
 	unsigned long root;
 
 	if (pgtable->mode == PAGE_MODE_NONE)
@@ -530,9 +515,9 @@ static void v1_free_pgtable(struct io_pgtable *iop)
 	       pgtable->mode > PAGE_MODE_6_LEVEL);
 
 	root = (unsigned long)pgtable->root;
-	freelist = free_sub_pt(root, pgtable->mode, freelist);
+	free_sub_pt(root, pgtable->mode, &freelist);
 
-	free_page_list(freelist);
+	put_pages_list(&freelist);
 }
 
 static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b42e38a0dbe2..e61881c2c258 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -66,14 +66,7 @@ early_param("iommu.forcedac", iommu_dma_forcedac_setup);
 
 static void iommu_dma_entry_dtor(unsigned long data)
 {
-	struct page *freelist = (struct page *)data;
-
-	while (freelist) {
-		unsigned long p = (unsigned long)page_address(freelist);
-
-		freelist = freelist->freelist;
-		free_page(p);
-	}
+	put_pages_list((struct list_head *)data);
 }
 
 static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
@@ -479,7 +472,7 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 	else if (gather && gather->queued)
 		queue_iova(iovad, iova_pfn(iovad, iova),
 				size >> iova_shift(iovad),
-				(unsigned long)gather->freelist);
+				(unsigned long)&gather->freelist);
 	else
 		free_iova_fast(iovad, iova_pfn(iovad, iova),
 				size >> iova_shift(iovad));
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index b6a8f3282411..cd2ec6779cac 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1303,35 +1303,30 @@ static void dma_pte_free_pagetable(struct dmar_domain *domain,
    know the hardware page-walk will no longer touch them.
    The 'pte' argument is the *parent* PTE, pointing to the page that is to
    be freed. */
-static struct page *dma_pte_list_pagetables(struct dmar_domain *domain,
-					    int level, struct dma_pte *pte,
-					    struct page *freelist)
+static void dma_pte_list_pagetables(struct dmar_domain *domain,
+				    int level, struct dma_pte *pte,
+				    struct list_head *list)
 {
 	struct page *pg;
 
 	pg = pfn_to_page(dma_pte_addr(pte) >> PAGE_SHIFT);
-	pg->freelist = freelist;
-	freelist = pg;
+	list_add(&pg->lru, list);
 
 	if (level == 1)
-		return freelist;
+		return;
 
 	pte = page_address(pg);
 	do {
 		if (dma_pte_present(pte) && !dma_pte_superpage(pte))
-			freelist = dma_pte_list_pagetables(domain, level - 1,
-							   pte, freelist);
+			dma_pte_list_pagetables(domain, level - 1, pte, list);
 		pte++;
 	} while (!first_pte_in_page(pte));
-
-	return freelist;
 }
 
-static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level,
-					struct dma_pte *pte, unsigned long pfn,
-					unsigned long start_pfn,
-					unsigned long last_pfn,
-					struct page *freelist)
+static void dma_pte_clear_level(struct dmar_domain *domain, int level,
+				struct dma_pte *pte, unsigned long pfn,
+				unsigned long start_pfn, unsigned long last_pfn,
+				struct list_head *list)
 {
 	struct dma_pte *first_pte = NULL, *last_pte = NULL;
 
@@ -1350,7 +1345,7 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level,
 			/* These suborbinate page tables are going away entirely. Don't
 			   bother to clear them; we're just going to *free* them. */
 			if (level > 1 && !dma_pte_superpage(pte))
-				freelist = dma_pte_list_pagetables(domain, level - 1, pte, freelist);
+				dma_pte_list_pagetables(domain, level - 1, pte, list);
 
 			dma_clear_pte(pte);
 			if (!first_pte)
@@ -1358,10 +1353,10 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level,
 			last_pte = pte;
 		} else if (level > 1) {
 			/* Recurse down into a level that isn't *entirely* obsolete */
-			freelist = dma_pte_clear_level(domain, level - 1,
-						       phys_to_virt(dma_pte_addr(pte)),
-						       level_pfn, start_pfn, last_pfn,
-						       freelist);
+			dma_pte_clear_level(domain, level - 1,
+					    phys_to_virt(dma_pte_addr(pte)),
+					    level_pfn, start_pfn, last_pfn,
+					    list);
 		}
 next:
 		pfn = level_pfn + level_size(level);
@@ -1370,47 +1365,28 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level,
 	if (first_pte)
 		domain_flush_cache(domain, first_pte,
 				   (void *)++last_pte - (void *)first_pte);
-
-	return freelist;
 }
 
 /* We can't just free the pages because the IOMMU may still be walking
    the page tables, and may have cached the intermediate levels. The
    pages can only be freed after the IOTLB flush has been done. */
-static struct page *domain_unmap(struct dmar_domain *domain,
-				 unsigned long start_pfn,
-				 unsigned long last_pfn,
-				 struct page *freelist)
+static void domain_unmap(struct dmar_domain *domain, unsigned long start_pfn,
+			 unsigned long last_pfn, struct list_head *list)
 {
 	BUG_ON(!domain_pfn_supported(domain, start_pfn));
 	BUG_ON(!domain_pfn_supported(domain, last_pfn));
 	BUG_ON(start_pfn > last_pfn);
 
 	/* we don't need lock here; nobody else touches the iova range */
-	freelist = dma_pte_clear_level(domain, agaw_to_level(domain->agaw),
-				       domain->pgd, 0, start_pfn, last_pfn,
-				       freelist);
+	dma_pte_clear_level(domain, agaw_to_level(domain->agaw),
+			    domain->pgd, 0, start_pfn, last_pfn, list);
 
 	/* free pgd */
 	if (start_pfn == 0 && last_pfn == DOMAIN_MAX_PFN(domain->gaw)) {
 		struct page *pgd_page = virt_to_page(domain->pgd);
-		pgd_page->freelist = freelist;
-		freelist = pgd_page;
-
+		list_add(&pgd_page->lru, list);
 		domain->pgd = NULL;
 	}
-
-	return freelist;
-}
-
-static void dma_free_pagelist(struct page *freelist)
-{
-	struct page *pg;
-
-	while ((pg = freelist)) {
-		freelist = pg->freelist;
-		free_pgtable_page(page_address(pg));
-	}
 }
 
 /* iommu handling */
@@ -2095,11 +2071,10 @@ static void domain_exit(struct dmar_domain *domain)
 	domain_remove_dev_info(domain);
 
 	if (domain->pgd) {
-		struct page *freelist;
+		LIST_HEAD(pages);
 
-		freelist = domain_unmap(domain, 0,
-					DOMAIN_MAX_PFN(domain->gaw), NULL);
-		dma_free_pagelist(freelist);
+		domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw), &pages);
+		put_pages_list(&pages);
 	}
 
 	free_domain_mem(domain);
@@ -4192,19 +4167,17 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
 		{
 			struct dmar_drhd_unit *drhd;
 			struct intel_iommu *iommu;
-			struct page *freelist;
+			LIST_HEAD(pages);
 
-			freelist = domain_unmap(si_domain,
-						start_vpfn, last_vpfn,
-						NULL);
+			domain_unmap(si_domain, start_vpfn, last_vpfn, &pages);
 
 			rcu_read_lock();
 			for_each_active_iommu(iommu, drhd)
 				iommu_flush_iotlb_psi(iommu, si_domain,
 					start_vpfn, mhp->nr_pages,
-					!freelist, 0);
+					list_empty(&pages), 0);
 			rcu_read_unlock();
-			dma_free_pagelist(freelist);
+			put_pages_list(&pages);
 		}
 		break;
 	}
@@ -5211,8 +5184,7 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
 	start_pfn = iova >> VTD_PAGE_SHIFT;
 	last_pfn = (iova + size - 1) >> VTD_PAGE_SHIFT;
 
-	gather->freelist = domain_unmap(dmar_domain, start_pfn,
-					last_pfn, gather->freelist);
+	domain_unmap(dmar_domain, start_pfn, last_pfn, &gather->freelist);
 
 	if (dmar_domain->max_addr == iova + size)
 		dmar_domain->max_addr = iova;
@@ -5248,9 +5220,10 @@ static void intel_iommu_tlb_sync(struct iommu_domain *domain,
 
 	for_each_domain_iommu(iommu_id, dmar_domain)
 		iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain,
-				      start_pfn, nrpages, !gather->freelist, 0);
+				      start_pfn, nrpages,
+				      list_empty(&gather->freelist), 0);
 
-	dma_free_pagelist(gather->freelist);
+	put_pages_list(&gather->freelist);
 }
 
 static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index d2f3435e7d17..de0c57a567c8 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -186,7 +186,7 @@ struct iommu_iotlb_gather {
 	unsigned long		start;
 	unsigned long		end;
 	size_t			pgsize;
-	struct page		*freelist;
+	struct list_head	freelist;
 	bool			queued;
 };
 
@@ -399,6 +399,7 @@ static inline void iommu_iotlb_gather_init(struct iommu_iotlb_gather *gather)
 {
 	*gather = (struct iommu_iotlb_gather) {
 		.start	= ULONG_MAX,
+		.freelist = LIST_HEAD_INIT(gather->freelist),
 	};
 }
 
-- 
2.33.1

* Re: [PATCH v2 29/33] iommu: Use put_pages_list
  2021-12-01 18:15 ` [PATCH v2 29/33] iommu: Use put_pages_list Vlastimil Babka
@ 2021-12-01 19:07   ` Matthew Wilcox
  2021-12-01 19:45     ` Robin Murphy
  0 siblings, 1 reply; 25+ messages in thread
From: Matthew Wilcox @ 2021-12-01 19:07 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, David Woodhouse, patches, Pekka Enberg, linux-mm,
	iommu, David Rientjes, Robin Murphy, Christoph Lameter,
	Will Deacon, Joonsoo Kim

On Wed, Dec 01, 2021 at 07:15:06PM +0100, Vlastimil Babka wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> page->freelist is for the use of slab.  We already have the ability
> to free a list of pages in the core mm, but it requires the use of a
> list_head and for the pages to be chained together through page->lru.
> Switch the iommu code over to using put_pages_list().

FYI, this is going to have conflicts with
https://lore.kernel.org/lkml/cover.1637671820.git.robin.murphy@arm.com/

I'm not sure what the appropriate resolution is going to be here;
maybe hold back part of this patch series to the following merge
window to give the iommu people time to merge their own patches?

* Re: [PATCH v2 29/33] iommu: Use put_pages_list
  2021-12-01 19:07   ` Matthew Wilcox
@ 2021-12-01 19:45     ` Robin Murphy
  0 siblings, 0 replies; 25+ messages in thread
From: Robin Murphy @ 2021-12-01 19:45 UTC (permalink / raw)
  To: Matthew Wilcox, Vlastimil Babka
  Cc: Andrew Morton, David Woodhouse, patches, Pekka Enberg, linux-mm,
	iommu, David Rientjes, Christoph Lameter, Will Deacon,
	Joonsoo Kim

On 2021-12-01 19:07, Matthew Wilcox wrote:
> On Wed, Dec 01, 2021 at 07:15:06PM +0100, Vlastimil Babka wrote:
>> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>>
>> page->freelist is for the use of slab.  We already have the ability
>> to free a list of pages in the core mm, but it requires the use of a
>> list_head and for the pages to be chained together through page->lru.
>> Switch the iommu code over to using put_pages_list().
> 
> FYI, this is going to have conflicts with
> https://lore.kernel.org/lkml/cover.1637671820.git.robin.murphy@arm.com/
> 
> I'm not sure what the appropriate resolution is going to be here;
> maybe hold back part of this patch series to the following merge
> window to give the iommu people time to merge their own patches?

More than that, this version is subtly but catastrophically broken - we 
can't simply pass &gather->freelist through the current IOVA entry_dtor 
machinery, since gather is a stack variable from a few call frames up so 
the actual list head will be long gone by the time 
iommu_dma_entry_dtor() tries to dereference it. It took until I was 
elbow-deep in refactoring the RFC to notice that one :)

Thanks,
Robin.

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-01 18:14 [PATCH v2 00/33] Separate struct slab from struct page Vlastimil Babka
  2021-12-01 18:15 ` [PATCH v2 29/33] iommu: Use put_pages_list Vlastimil Babka
@ 2021-12-02 12:25 ` Vlastimil Babka
  2021-12-14 12:57 ` Vlastimil Babka
  2 siblings, 0 replies; 25+ messages in thread
From: Vlastimil Babka @ 2021-12-02 12:25 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, H. Peter Anvin, Will Deacon,
	Sergey Senozhatsky, x86, Luis Chamberlain, kasan-dev, Ingo Molnar,
	Vladimir Davydov, Nitin Gupta, Marco Elver, Borislav Petkov,
	Andy Lutomirski, cgroups, Thomas Gleixner, Dmitry Vyukov,
	Andrey Konovalov, Robin Murphy, patches, Julia Lawall,
	Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse

On 12/1/21 19:14, Vlastimil Babka wrote:
> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> this cover letter.
> 
> Series also available in git, based on 5.16-rc3:
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2

I have pushed a v3, but am not going to resend immediately to avoid
unnecessary spamming; the difference is just that some patches are removed and
others reordered, so the current v2 posting should still be sufficient for
on-list review:

https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v3r1

patch 29/33 iommu: Use put_pages_list
- removed, as this version is broken and Robin Murphy has meanwhile partially
incorporated it into his series:
https://lore.kernel.org/lkml/cover.1637671820.git.robin.murphy@arm.com/

patch 30/33 mm: Remove slab from struct page
- removed and postponed for later, as it can only be applied after the iommu
use of page->freelist is resolved

patch 27/33 zsmalloc: Stop using slab fields in struct page
patch 28/33 bootmem: Use page->index instead of page->freelist
- moved towards the end of the series, to further separate out the part that
adjusts non-slab users of slab fields before removing those fields from
struct page.

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-01 18:14 [PATCH v2 00/33] Separate struct slab from struct page Vlastimil Babka
  2021-12-01 18:15 ` [PATCH v2 29/33] iommu: Use put_pages_list Vlastimil Babka
  2021-12-02 12:25 ` [PATCH v2 00/33] Separate struct slab from struct page Vlastimil Babka
@ 2021-12-14 12:57 ` Vlastimil Babka
  2021-12-14 14:38   ` Hyeonggon Yoo
                     ` (3 more replies)
  2 siblings, 4 replies; 25+ messages in thread
From: Vlastimil Babka @ 2021-12-14 12:57 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, H. Peter Anvin,
	Hyeonggon Yoo, Will Deacon, Sergey Senozhatsky, x86,
	Luis Chamberlain, kasan-dev, Ingo Molnar, Vladimir Davydov,
	Nitin Gupta, Marco Elver, Borislav Petkov, Andy Lutomirski,
	cgroups, Thomas Gleixner, Dmitry Vyukov, Andrey Konovalov,
	patches, Julia Lawall, Minchan Kim, iommu, Johannes Weiner,
	Andrew Morton, David Woodhouse

On 12/1/21 19:14, Vlastimil Babka wrote:
> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> this cover letter.
> 
> Series also available in git, based on 5.16-rc3:
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2

Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

 1:  10b656f9eb1e =  1:  10b656f9eb1e mm: add virt_to_folio() and folio_address()
 2:  5e6ad846acf1 =  2:  5e6ad846acf1 mm/slab: Dissolve slab_map_pages() in its caller
 3:  48d4e9407aa0 =  3:  48d4e9407aa0 mm/slub: Make object_err() static
 4:  fe1e19081321 =  4:  fe1e19081321 mm: Split slab into its own type
 5:  af7fd46fbb9b =  5:  af7fd46fbb9b mm: Add account_slab() and unaccount_slab()
 6:  7ed088d601d9 =  6:  7ed088d601d9 mm: Convert virt_to_cache() to use struct slab
 7:  1d41188b9401 =  7:  1d41188b9401 mm: Convert __ksize() to struct slab
 8:  5d9d1231461f !  8:  8fd22e0b086e mm: Use struct slab in kmem_obj_info()
    @@ Commit message
         slab type instead of the page type, we make it obvious that this can
         only be called for slabs.
     
    +    [ vbabka@suse.cz: also convert the related kmem_valid_obj() to folios ]
    +
         Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
         Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
     
    @@ mm/slab.h: struct kmem_obj_info {
      #endif /* MM_SLAB_H */
     
      ## mm/slab_common.c ##
    +@@ mm/slab_common.c: bool slab_is_available(void)
    +  */
    + bool kmem_valid_obj(void *object)
    + {
    +-	struct page *page;
    ++	struct folio *folio;
    + 
    + 	/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
    + 	if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
    + 		return false;
    +-	page = virt_to_head_page(object);
    +-	return PageSlab(page);
    ++	folio = virt_to_folio(object);
    ++	return folio_test_slab(folio);
    + }
    + EXPORT_SYMBOL_GPL(kmem_valid_obj);
    + 
     @@ mm/slab_common.c: void kmem_dump_obj(void *object)
      {
      	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
    @@ mm/slub.c: int __kmem_cache_shutdown(struct kmem_cache *s)
      	objp = base + s->size * objnr;
      	kpp->kp_objp = objp;
     -	if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
    -+	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size || (objp - base) % s->size) ||
    ++	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size
    ++			 || (objp - base) % s->size) ||
      	    !(s->flags & SLAB_STORE_USER))
      		return;
      #ifdef CONFIG_SLUB_DEBUG
 9:  3aef771be335 !  9:  c97e73c3b6c2 mm: Convert check_heap_object() to use struct slab
    @@ mm/slab.h: struct kmem_obj_info {
     +#else
     +static inline
     +void __check_heap_object(const void *ptr, unsigned long n,
    -+			 const struct slab *slab, bool to_user) { }
    ++			 const struct slab *slab, bool to_user)
    ++{
    ++}
     +#endif
     +
      #endif /* MM_SLAB_H */
10:  2253e45e6bef = 10:  da05e0f7179c mm/slub: Convert detached_freelist to use a struct slab
11:  f28202bc27ba = 11:  383887e77104 mm/slub: Convert kfree() to use a struct slab
12:  31b58b1e914f = 12:  c46be093c637 mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
13:  636406a3ad59 = 13:  49dbbf917052 mm/slub: Convert print_page_info() to print_slab_info()
14:  3b49efda3b6f = 14:  4bb0c932156a mm/slub: Convert alloc_slab_page() to return a struct slab
15:  61a195526d3b ! 15:  4b9761b5cfab mm/slub: Convert __free_slab() to use struct slab
    @@ mm/slub.c: static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int n
      
     -	__ClearPageSlabPfmemalloc(page);
     -	__ClearPageSlab(page);
    +-	/* In union with page->mapping where page allocator expects NULL */
    +-	page->slab_cache = NULL;
     +	__slab_clear_pfmemalloc(slab);
     +	__folio_clear_slab(folio);
    - 	/* In union with page->mapping where page allocator expects NULL */
    --	page->slab_cache = NULL;
    -+	slab->slab_cache = NULL;
    ++	folio->mapping = NULL;
      	if (current->reclaim_state)
      		current->reclaim_state->reclaimed_slab += pages;
     -	unaccount_slab(page_slab(page), order, s);
16:  987c7ed31580 = 16:  f384ec918065 mm/slub: Convert pfmemalloc_match() to take a struct slab
17:  cc742564237e ! 17:  06738ade4e17 mm/slub: Convert most struct page to struct slab by spatch
    @@ Commit message
     
         // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
         // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
    -    // embedded script script
    +    // embedded script
     
         // build list of functions to exclude from applying the next rule
         @initialize:ocaml@
18:  b45acac9aace = 18:  1a4f69a4cced mm/slub: Finish struct page to struct slab conversion
19:  76c3eeb39684 ! 19:  1d62d706e884 mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
    @@ mm/slab.c: slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nod
     -	__ClearPageSlabPfmemalloc(page);
     -	__ClearPageSlab(page);
     -	page_mapcount_reset(page);
    +-	/* In union with page->mapping where page allocator expects NULL */
    +-	page->slab_cache = NULL;
     +	BUG_ON(!folio_test_slab(folio));
     +	__slab_clear_pfmemalloc(slab);
     +	__folio_clear_slab(folio);
     +	page_mapcount_reset(folio_page(folio, 0));
    - 	/* In union with page->mapping where page allocator expects NULL */
    --	page->slab_cache = NULL;
    -+	slab->slab_cache = NULL;
    ++	folio->mapping = NULL;
      
      	if (current->reclaim_state)
      		current->reclaim_state->reclaimed_slab += 1 << order;
20:  ed6144dbebce ! 20:  fd4c3aabacd3 mm/slab: Convert most struct page to struct slab by spatch
    @@ Commit message
     
         // Options: --include-headers --no-includes --smpl-spacing mm/slab.c
         // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
    -    // embedded script script
    +    // embedded script
     
         // build list of functions for applying the next rule
         @initialize:ocaml@
21:  17fb81e601e6 = 21:  b59720b2edba mm/slab: Finish struct page to struct slab conversion
22:  4e8d1faebc24 ! 22:  65ced071c3e7 mm: Convert struct page to struct slab in functions used by other subsystems
    @@ Commit message
           ,...)
     
         Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    +    Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
         Cc: Julia Lawall <julia.lawall@inria.fr>
         Cc: Luis Chamberlain <mcgrof@kernel.org>
         Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
23:  eefa12e18a92 = 23:  c9c8dee01e5d mm/memcg: Convert slab objcgs from struct page to struct slab
24:  fa5ba4107ce2 ! 24:  def731137335 mm/slob: Convert SLOB to use struct slab
    @@ Metadata
     Author: Matthew Wilcox (Oracle) <willy@infradead.org>
     
      ## Commit message ##
    -    mm/slob: Convert SLOB to use struct slab
    +    mm/slob: Convert SLOB to use struct slab and struct folio
     
    -    Use struct slab throughout the slob allocator.
    +    Use struct slab throughout the slob allocator. Where non-slab page can appear
    +    use struct folio instead of struct page.
     
         [ vbabka@suse.cz: don't introduce wrappers for PageSlobFree in mm/slab.h just
           for the single callers being wrappers in mm/slob.c ]
     
    +    [ Hyeonggon Yoo <42.hyeyoo@gmail.com>: fix NULL pointer deference ]
    +
         Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
         Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## mm/slob.c ##
    +@@
    +  * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
    +  * alloc_pages() directly, allocating compound pages so the page order
    +  * does not have to be separately tracked.
    +- * These objects are detected in kfree() because PageSlab()
    ++ * These objects are detected in kfree() because folio_test_slab()
    +  * is false for them.
    +  *
    +  * SLAB is emulated on top of SLOB by simply calling constructors and
     @@ mm/slob.c: static LIST_HEAD(free_slob_large);
      /*
       * slob_page_free: true for pages on free_slob_pages list.
    @@ mm/slob.c: static void *slob_page_alloc(struct page *sp, size_t size, int align,
      							int align_offset)
      {
     -	struct page *sp;
    ++	struct folio *folio;
     +	struct slab *sp;
      	struct list_head *slob_list;
      	slob_t *b = NULL;
    @@ mm/slob.c: static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
      			return NULL;
     -		sp = virt_to_page(b);
     -		__SetPageSlab(sp);
    -+		sp = virt_to_slab(b);
    -+		__SetPageSlab(slab_page(sp));
    ++		folio = virt_to_folio(b);
    ++		__folio_set_slab(folio);
    ++		sp = folio_slab(folio);
      
      		spin_lock_irqsave(&slob_lock, flags);
      		sp->units = SLOB_UNITS(PAGE_SIZE);
    @@ mm/slob.c: static void slob_free(void *block, int size)
      		spin_unlock_irqrestore(&slob_lock, flags);
     -		__ClearPageSlab(sp);
     -		page_mapcount_reset(sp);
    -+		__ClearPageSlab(slab_page(sp));
    ++		__folio_clear_slab(slab_folio(sp));
     +		page_mapcount_reset(slab_page(sp));
      		slob_free_pages(b, 0);
      		return;
      	}
    +@@ mm/slob.c: EXPORT_SYMBOL(__kmalloc_node_track_caller);
    + 
    + void kfree(const void *block)
    + {
    +-	struct page *sp;
    ++	struct folio *sp;
    + 
    + 	trace_kfree(_RET_IP_, block);
    + 
    +@@ mm/slob.c: void kfree(const void *block)
    + 		return;
    + 	kmemleak_free(block);
    + 
    +-	sp = virt_to_page(block);
    +-	if (PageSlab(sp)) {
    ++	sp = virt_to_folio(block);
    ++	if (folio_test_slab(sp)) {
    + 		int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
    + 		unsigned int *m = (unsigned int *)(block - align);
    + 		slob_free(m, *m + align);
    + 	} else {
    +-		unsigned int order = compound_order(sp);
    +-		mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
    ++		unsigned int order = folio_order(sp);
    ++
    ++		mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
    + 				    -(PAGE_SIZE << order));
    +-		__free_pages(sp, order);
    ++		__free_pages(folio_page(sp, 0), order);
    + 
    + 	}
    + }
25:  aa4f573a4c96 ! 25:  466b9fb1f6e5 mm/kasan: Convert to struct folio and struct slab
    @@ Commit message
     
         Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
         Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    +    Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
         Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
         Cc: Alexander Potapenko <glider@google.com>
         Cc: Andrey Konovalov <andreyknvl@gmail.com>
26:  67b7966d2fb6 = 26:  b8159ae8e5cd mm/kfence: Convert kfence_guarded_alloc() to struct slab
31:  d64dfe49c1e7 ! 27:  4525180926f9 mm/sl*b: Differentiate struct slab fields by sl*b implementations
    @@ Commit message
         possible.
     
         This should also prevent accidental use of fields that don't exist in given
    -    implementation. Before this patch virt_to_cache() and and cache_from_obj() was
    -    visible for SLOB (albeit not used), although it relies on the slab_cache field
    +    implementation. Before this patch virt_to_cache() and cache_from_obj() were
    +    visible for SLOB (albeit not used), although they rely on the slab_cache field
         that isn't set by SLOB. With this patch it's now a compile error, so these
         functions are now hidden behind #ifndef CONFIG_SLOB.
     
    @@ mm/kfence/core.c: static void *kfence_guarded_alloc(struct kmem_cache *cache, si
     -		slab->s_mem = addr;
     +#if defined(CONFIG_SLUB)
     +	slab->objects = 1;
    -+#elif defined (CONFIG_SLAB)
    ++#elif defined(CONFIG_SLAB)
     +	slab->s_mem = addr;
     +#endif
      
    @@ mm/slab.h
     +
     +#if defined(CONFIG_SLAB)
     +
    -+	union {
    -+		struct list_head slab_list;
    + 	union {
    + 		struct list_head slab_list;
    +-		struct {	/* Partial pages */
     +		struct rcu_head rcu_head;
     +	};
     +	struct kmem_cache *slab_cache;
     +	void *freelist;	/* array of free object indexes */
    -+	void * s_mem;	/* first object */
    ++	void *s_mem;	/* first object */
     +	unsigned int active;
     +
     +#elif defined(CONFIG_SLUB)
     +
    - 	union {
    - 		struct list_head slab_list;
    --		struct {	/* Partial pages */
    ++	union {
    ++		struct list_head slab_list;
     +		struct rcu_head rcu_head;
     +		struct {
      			struct slab *next;
    @@ mm/slab.h: struct slab {
     +#elif defined(CONFIG_SLOB)
     +
     +	struct list_head slab_list;
    -+	void * __unused_1;
    ++	void *__unused_1;
     +	void *freelist;		/* first free block */
    -+	void * __unused_2;
    ++	void *__unused_2;
     +	int units;
     +
     +#else
    @@ mm/slab.h: struct slab {
      #ifdef CONFIG_MEMCG
      	unsigned long memcg_data;
     @@ mm/slab.h: struct slab {
    - 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
      SLAB_MATCH(flags, __page_flags);
      SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
    + SLAB_MATCH(slab_list, slab_list);
     +#ifndef CONFIG_SLOB
      SLAB_MATCH(rcu_head, rcu_head);
    + SLAB_MATCH(slab_cache, slab_cache);
    ++#endif
    ++#ifdef CONFIG_SLAB
    + SLAB_MATCH(s_mem, s_mem);
    + SLAB_MATCH(active, active);
     +#endif
      SLAB_MATCH(_refcount, __page_refcount);
      #ifdef CONFIG_MEMCG
32:  0abf87bae67e = 28:  94b78948d53f mm/slub: Simplify struct slab slabs field definition
33:  813c304f18e4 = 29:  f5261e6375f0 mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
27:  ebce4b5b5ced ! 30:  1414e8c87de6 zsmalloc: Stop using slab fields in struct page
    @@ Commit message
     
         Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
         Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    -    Cc: Minchan Kim <minchan@kernel.org>
    +    Acked-by: Minchan Kim <minchan@kernel.org>
         Cc: Nitin Gupta <ngupta@vflare.org>
         Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
     
28:  f124425ae7de = 31:  8a3cda6b38eb bootmem: Use page->index instead of page->freelist
29:  82da48c73b2e <  -:  ------------ iommu: Use put_pages_list
30:  181e16dfefbb <  -:  ------------ mm: Remove slab from struct page
 -:  ------------ > 32:  91e069ba116b mm/slob: Remove unnecessary page_mapcount_reset() function call
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-14 12:57 ` Vlastimil Babka
@ 2021-12-14 14:38   ` Hyeonggon Yoo
  2021-12-14 14:43     ` Vlastimil Babka
  2021-12-15  1:03   ` Roman Gushchin via iommu
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 25+ messages in thread
From: Hyeonggon Yoo @ 2021-12-14 14:38 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, Matthew Wilcox, Ingo Molnar,
	Vladimir Davydov, David Rientjes, Nitin Gupta, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse

On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > this cover letter.
> > 
> > Series also available in git, based on 5.16-rc3:
> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 

Hello Vlastimil, thank you for the nice work.
I'm going to review and test the new version soon in my free time.

Btw, I gave you some review and test tags and they seem to be missing in the
new series. Did I do the review/test process wrongly? It's my first time
reviewing patches, so please let me know if I did something wrong.

--
Thank you.
Hyeonggon.

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-14 14:38   ` Hyeonggon Yoo
@ 2021-12-14 14:43     ` Vlastimil Babka
  2021-12-15  3:47       ` Hyeonggon Yoo
  0 siblings, 1 reply; 25+ messages in thread
From: Vlastimil Babka @ 2021-12-14 14:43 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, Matthew Wilcox, Ingo Molnar,
	Vladimir Davydov, David Rientjes, Nitin Gupta, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse

On 12/14/21 15:38, Hyeonggon Yoo wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > this cover letter.
>> > 
>> > Series also available in git, based on 5.16-rc3:
>> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> 
>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>> 
> 
> Hello Vlastimil, Thank you for nice work.
> I'm going to review and test new version soon in free time.

Thanks!

> Btw, I gave you some review and test tags and seems to be missing in new
> series. Did I do review/test process wrongly? It's first time to review
> patches so please let me know if I did it wrongly.

You did it right, sorry! I didn't include them as those were for patches that I
was additionally changing after your review/test, and the decision of what
change is substantial enough to need a new test/review is often fuzzy. So if
you can recheck the new versions it would be great and then I will pick that
up, thanks!

> --
> Thank you.
> Hyeonggon.


* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-14 12:57 ` Vlastimil Babka
  2021-12-14 14:38   ` Hyeonggon Yoo
@ 2021-12-15  1:03   ` Roman Gushchin via iommu
  2021-12-15 23:38     ` Roman Gushchin via iommu
  2021-12-20  0:24     ` Vlastimil Babka
  2021-12-16 15:00   ` Hyeonggon Yoo
  2021-12-22 16:56   ` Vlastimil Babka
  3 siblings, 2 replies; 25+ messages in thread
From: Roman Gushchin via iommu @ 2021-12-15  1:03 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Hyeonggon Yoo, Christoph Lameter, Will Deacon, Julia Lawall,
	Sergey Senozhatsky, x86, Luis Chamberlain, Matthew Wilcox,
	Ingo Molnar, Vladimir Davydov, David Rientjes, Nitin Gupta,
	Marco Elver, Borislav Petkov, Andy Lutomirski, cgroups,
	Thomas Gleixner, Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov,
	patches, Pekka Enberg, Minchan Kim, iommu, Johannes Weiner,
	Andrew Morton, David Woodhouse

On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > this cover letter.
> > 
> > Series also available in git, based on 5.16-rc3:
> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

Hi Vlastimil!

I've started to review this patchset (btw, really nice work, I like
the resulting code way more). Because I'm looking at v3 and I don't have
the whole v2 in my mailbox, here is what I have now:

* mm: add virt_to_folio() and folio_address()
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slab: Dissolve slab_map_pages() in its caller
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Make object_err() static
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm: Split slab into its own type
1) Shouldn't SLAB_MATCH() macro use struct folio instead of struct page for the
comparison?
2) page_slab() is used only in kasan and only in one place, so maybe it's better
to not introduce it as a generic helper?
Other than that
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm: Add account_slab() and unaccount_slab()
1) maybe change the title to convert/replace instead of add?
2) maybe move later changes to memcg_alloc_page_obj_cgroups() to this patch?
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm: Convert virt_to_cache() to use struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm: Convert __ksize() to struct slab
It looks like certain parts of __ksize() can be merged between slab, slub and slob?
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm: Use struct slab in kmem_obj_info()
Reviewed-by: Roman Gushchin <guro@fb.com>


I'll try to finish reviewing the patchset by the end of the week.

Thanks!

Roman

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-14 14:43     ` Vlastimil Babka
@ 2021-12-15  3:47       ` Hyeonggon Yoo
  0 siblings, 0 replies; 25+ messages in thread
From: Hyeonggon Yoo @ 2021-12-15  3:47 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, Matthew Wilcox, Ingo Molnar,
	Vladimir Davydov, David Rientjes, Nitin Gupta, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse

On Tue, Dec 14, 2021 at 03:43:35PM +0100, Vlastimil Babka wrote:
> On 12/14/21 15:38, Hyeonggon Yoo wrote:
> > On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> >> On 12/1/21 19:14, Vlastimil Babka wrote:
> >> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> > this cover letter.
> >> > 
> >> > Series also available in git, based on 5.16-rc3:
> >> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> >> 
> >> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> >> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> >> 
> > 
> > Hello Vlastimil, Thank you for nice work.
> > I'm going to review and test new version soon in free time.
> 
> Thanks!
> 

You're welcome!

> > Btw, I gave you some review and test tags and seems to be missing in new
> > series. Did I do review/test process wrongly? It's first time to review
> > patches so please let me know if I did it wrongly.
> 
> You did right, sorry! I didn't include them as those were for patches that I
> was additionally changing after your review/test and the decision what is
> substantial change enough to need a new test/review is often fuzzy. 

Ah, okay. A review/test tag becomes invalid after further changes.
That's okay; I was just unfamiliar with the process. Thank you!

> So if you can recheck the new versions it would be great and then I will pick that
> up, thanks!

Okay. I'll review the new versions.

> 
> > --
> > Thank you.
> > Hyeonggon.
> 

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-15  1:03   ` Roman Gushchin via iommu
@ 2021-12-15 23:38     ` Roman Gushchin via iommu
  2021-12-16  9:19       ` Vlastimil Babka
  2021-12-20  0:47       ` Vlastimil Babka
  2021-12-20  0:24     ` Vlastimil Babka
  1 sibling, 2 replies; 25+ messages in thread
From: Roman Gushchin via iommu @ 2021-12-15 23:38 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Hyeonggon Yoo, Christoph Lameter, Will Deacon, Julia Lawall,
	Sergey Senozhatsky, x86, Luis Chamberlain, Matthew Wilcox,
	Ingo Molnar, Vladimir Davydov, David Rientjes, Nitin Gupta,
	Marco Elver, Borislav Petkov, Andy Lutomirski, cgroups,
	Thomas Gleixner, Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov,
	patches, Pekka Enberg, Minchan Kim, iommu, Johannes Weiner,
	Andrew Morton, David Woodhouse

On Tue, Dec 14, 2021 at 05:03:12PM -0800, Roman Gushchin wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> > On 12/1/21 19:14, Vlastimil Babka wrote:
> > > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > > this cover letter.
> > > 
> > > Series also available in git, based on 5.16-rc3:
> > > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> > 
> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi Vlastimil!
> 
> I've started to review this patchset (btw, a really nice work, I like
> the resulting code way more). Because I'm looking at v3 and I don't have
> the whole v2 in my mailbox, here is what I've now:
> 
> * mm: add virt_to_folio() and folio_address()
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slab: Dissolve slab_map_pages() in its caller
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Make object_err() static
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Split slab into its own type
> 1) Shouldn't SLAB_MATCH() macro use struct folio instead of struct page for the
> comparison?
> 2) page_slab() is used only in kasan and only in one place, so maybe it's better
> to not introduce it as a generic helper?
> Other than that
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Add account_slab() and unaccount_slab()
> 1) maybe change the title to convert/replace instead of add?
> 2) maybe move later changes to memcg_alloc_page_obj_cgroups() to this patch?
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Convert virt_to_cache() to use struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Convert __ksize() to struct slab
> It looks like certain parts of __ksize() can be merged between slab, slub and slob?
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Use struct slab in kmem_obj_info()
> Reviewed-by: Roman Gushchin <guro@fb.com>

Part 2:

* mm: Convert check_heap_object() to use struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert detached_freelist to use a struct slab
How about converting free_nonslab_page() to free_nonslab_folio()?
And maybe renaming it to something like free_large_kmalloc()?
If I'm not missing something, large kmallocs are the only way we can end up
there with a !slab folio/page.

* mm/slub: Convert kfree() to use a struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert print_page_info() to print_slab_info()
Do we really need to explicitly convert slab_folio()'s result to (struct folio *)?
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert alloc_slab_page() to return a struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert __free_slab() to use struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert pfmemalloc_match() to take a struct slab
Cool! Removing pfmemalloc_unsafe() is really nice.
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert most struct page to struct slab by spatch
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Finish struct page to struct slab conversion
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slab: Convert most struct page to struct slab by spatch

Another patch with the same title? Rebase error?

* mm/slab: Finish struct page to struct slab conversion

And this one too?


Thanks!

Roman

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-15 23:38     ` Roman Gushchin via iommu
@ 2021-12-16  9:19       ` Vlastimil Babka
  2021-12-20  0:47       ` Vlastimil Babka
  1 sibling, 0 replies; 25+ messages in thread
From: Vlastimil Babka @ 2021-12-16  9:19 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Hyeonggon Yoo, Christoph Lameter, Will Deacon, Julia Lawall,
	Sergey Senozhatsky, x86, Luis Chamberlain, Matthew Wilcox,
	Ingo Molnar, Vladimir Davydov, David Rientjes, Nitin Gupta,
	Marco Elver, Borislav Petkov, Andy Lutomirski, cgroups,
	Thomas Gleixner, Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov,
	patches, Pekka Enberg, Minchan Kim, iommu, Johannes Weiner,
	Andrew Morton, David Woodhouse

On 12/16/21 00:38, Roman Gushchin wrote:
> On Tue, Dec 14, 2021 at 05:03:12PM -0800, Roman Gushchin wrote:
>> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> > On 12/1/21 19:14, Vlastimil Babka wrote:
>> > > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > > this cover letter.
>> > > 
>> > > Series also available in git, based on 5.16-rc3:
>> > > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> > 
>> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>> 
>> Hi Vlastimil!
>> 
>> I've started to review this patchset (btw, a really nice work, I like
>> the resulting code way more). Because I'm looking at v3 and I don't have

Thanks a lot, Roman!

...

> 
> * mm/slab: Convert most struct page to struct slab by spatch
> 
> Another patch with the same title? Rebase error?
> 
> * mm/slab: Finish struct page to struct slab conversion
> 
> And this one too?

No, these are for mm/slab.c, the previous were for mm/slub.c :)

> 
> Thanks!
> 
> Roman


* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-14 12:57 ` Vlastimil Babka
  2021-12-14 14:38   ` Hyeonggon Yoo
  2021-12-15  1:03   ` Roman Gushchin via iommu
@ 2021-12-16 15:00   ` Hyeonggon Yoo
  2021-12-20 23:58     ` Vlastimil Babka
  2021-12-22 16:56   ` Vlastimil Babka
  3 siblings, 1 reply; 25+ messages in thread
From: Hyeonggon Yoo @ 2021-12-16 15:00 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, Matthew Wilcox, Ingo Molnar,
	Vladimir Davydov, David Rientjes, Nitin Gupta, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse

On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > this cover letter.
> > 
> > Series also available in git, based on 5.16-rc3:
> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

Reviewing the whole patch series takes longer than I thought.
I'll try to review and test the rest of the patches when I have time.

I added Tested-by if the kernel builds okay and kselftests
do not break the kernel on my machine
(with CONFIG_SLAB/SLUB/SLOB depending on the patch).
Let me know if you know a better way to test a patch.

# mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Comment:
Works with both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
Btw, do we need the slabs_cpu_partial attribute when we don't use
cpu partials (!SLUB_CPU_PARTIAL)?

# mm/slub: Simplify struct slab slabs field definition
Comment:

This is how struct page looks on the top of v3r3 branch:
struct page {
[...]
                struct {        /* slab, slob and slub */
                        union {
                                struct list_head slab_list;
                                struct {        /* Partial pages */
                                        struct page *next;
#ifdef CONFIG_64BIT
                                        int pages;      /* Nr of pages left */
#else
                                        short int pages;
#endif
                                };
                        };
[...]

It's not consistent with struct slab.
I think this is because "mm: Remove slab from struct page" was dropped.
Would you update some of the patches?

# mm/sl*b: Differentiate struct slab fields by sl*b implementations
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Works with SL[AUO]B on my machine and makes the code much better.

# mm/slob: Convert SLOB to use struct slab and struct folio
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
It still works fine on SLOB.

# mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

# mm/slub: Convert __free_slab() to use struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Thanks,
Hyeonggon.

> 
>  1:  10b656f9eb1e =  1:  10b656f9eb1e mm: add virt_to_folio() and folio_address()
>  2:  5e6ad846acf1 =  2:  5e6ad846acf1 mm/slab: Dissolve slab_map_pages() in its caller
>  3:  48d4e9407aa0 =  3:  48d4e9407aa0 mm/slub: Make object_err() static
>  4:  fe1e19081321 =  4:  fe1e19081321 mm: Split slab into its own type
>  5:  af7fd46fbb9b =  5:  af7fd46fbb9b mm: Add account_slab() and unaccount_slab()
>  6:  7ed088d601d9 =  6:  7ed088d601d9 mm: Convert virt_to_cache() to use struct slab
>  7:  1d41188b9401 =  7:  1d41188b9401 mm: Convert __ksize() to struct slab
>  8:  5d9d1231461f !  8:  8fd22e0b086e mm: Use struct slab in kmem_obj_info()
>     @@ Commit message
>          slab type instead of the page type, we make it obvious that this can
>          only be called for slabs.
>      
>     +    [ vbabka@suse.cz: also convert the related kmem_valid_obj() to folios ]
>     +
>          Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>          Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>      
>     @@ mm/slab.h: struct kmem_obj_info {
>       #endif /* MM_SLAB_H */
>      
>       ## mm/slab_common.c ##
>     +@@ mm/slab_common.c: bool slab_is_available(void)
>     +  */
>     + bool kmem_valid_obj(void *object)
>     + {
>     +-	struct page *page;
>     ++	struct folio *folio;
>     + 
>     + 	/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
>     + 	if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
>     + 		return false;
>     +-	page = virt_to_head_page(object);
>     +-	return PageSlab(page);
>     ++	folio = virt_to_folio(object);
>     ++	return folio_test_slab(folio);
>     + }
>     + EXPORT_SYMBOL_GPL(kmem_valid_obj);
>     + 
>      @@ mm/slab_common.c: void kmem_dump_obj(void *object)
>       {
>       	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
>     @@ mm/slub.c: int __kmem_cache_shutdown(struct kmem_cache *s)
>       	objp = base + s->size * objnr;
>       	kpp->kp_objp = objp;
>      -	if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
>     -+	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size || (objp - base) % s->size) ||
>     ++	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size
>     ++			 || (objp - base) % s->size) ||
>       	    !(s->flags & SLAB_STORE_USER))
>       		return;
>       #ifdef CONFIG_SLUB_DEBUG
>  9:  3aef771be335 !  9:  c97e73c3b6c2 mm: Convert check_heap_object() to use struct slab
>     @@ mm/slab.h: struct kmem_obj_info {
>      +#else
>      +static inline
>      +void __check_heap_object(const void *ptr, unsigned long n,
>     -+			 const struct slab *slab, bool to_user) { }
>     ++			 const struct slab *slab, bool to_user)
>     ++{
>     ++}
>      +#endif
>      +
>       #endif /* MM_SLAB_H */
> 10:  2253e45e6bef = 10:  da05e0f7179c mm/slub: Convert detached_freelist to use a struct slab
> 11:  f28202bc27ba = 11:  383887e77104 mm/slub: Convert kfree() to use a struct slab
> 12:  31b58b1e914f = 12:  c46be093c637 mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
> 13:  636406a3ad59 = 13:  49dbbf917052 mm/slub: Convert print_page_info() to print_slab_info()
> 14:  3b49efda3b6f = 14:  4bb0c932156a mm/slub: Convert alloc_slab_page() to return a struct slab
> 15:  61a195526d3b ! 15:  4b9761b5cfab mm/slub: Convert __free_slab() to use struct slab
>     @@ mm/slub.c: static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int n
>       
>      -	__ClearPageSlabPfmemalloc(page);
>      -	__ClearPageSlab(page);
>     +-	/* In union with page->mapping where page allocator expects NULL */
>     +-	page->slab_cache = NULL;
>      +	__slab_clear_pfmemalloc(slab);
>      +	__folio_clear_slab(folio);
>     - 	/* In union with page->mapping where page allocator expects NULL */
>     --	page->slab_cache = NULL;
>     -+	slab->slab_cache = NULL;
>     ++	folio->mapping = NULL;
>       	if (current->reclaim_state)
>       		current->reclaim_state->reclaimed_slab += pages;
>      -	unaccount_slab(page_slab(page), order, s);
> 16:  987c7ed31580 = 16:  f384ec918065 mm/slub: Convert pfmemalloc_match() to take a struct slab
> 17:  cc742564237e ! 17:  06738ade4e17 mm/slub: Convert most struct page to struct slab by spatch
>     @@ Commit message
>      
>          // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
>          // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
>     -    // embedded script script
>     +    // embedded script
>      
>          // build list of functions to exclude from applying the next rule
>          @initialize:ocaml@
> 18:  b45acac9aace = 18:  1a4f69a4cced mm/slub: Finish struct page to struct slab conversion
> 19:  76c3eeb39684 ! 19:  1d62d706e884 mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
>     @@ mm/slab.c: slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nod
>      -	__ClearPageSlabPfmemalloc(page);
>      -	__ClearPageSlab(page);
>      -	page_mapcount_reset(page);
>     +-	/* In union with page->mapping where page allocator expects NULL */
>     +-	page->slab_cache = NULL;
>      +	BUG_ON(!folio_test_slab(folio));
>      +	__slab_clear_pfmemalloc(slab);
>      +	__folio_clear_slab(folio);
>      +	page_mapcount_reset(folio_page(folio, 0));
>     - 	/* In union with page->mapping where page allocator expects NULL */
>     --	page->slab_cache = NULL;
>     -+	slab->slab_cache = NULL;
>     ++	folio->mapping = NULL;
>       
>       	if (current->reclaim_state)
>       		current->reclaim_state->reclaimed_slab += 1 << order;
> 20:  ed6144dbebce ! 20:  fd4c3aabacd3 mm/slab: Convert most struct page to struct slab by spatch
>     @@ Commit message
>      
>          // Options: --include-headers --no-includes --smpl-spacing mm/slab.c
>          // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
>     -    // embedded script script
>     +    // embedded script
>      
>          // build list of functions for applying the next rule
>          @initialize:ocaml@
> 21:  17fb81e601e6 = 21:  b59720b2edba mm/slab: Finish struct page to struct slab conversion
> 22:  4e8d1faebc24 ! 22:  65ced071c3e7 mm: Convert struct page to struct slab in functions used by other subsystems
>     @@ Commit message
>            ,...)
>      
>          Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>     +    Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
>          Cc: Julia Lawall <julia.lawall@inria.fr>
>          Cc: Luis Chamberlain <mcgrof@kernel.org>
>          Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> 23:  eefa12e18a92 = 23:  c9c8dee01e5d mm/memcg: Convert slab objcgs from struct page to struct slab
> 24:  fa5ba4107ce2 ! 24:  def731137335 mm/slob: Convert SLOB to use struct slab
>     @@ Metadata
>      Author: Matthew Wilcox (Oracle) <willy@infradead.org>
>      
>       ## Commit message ##
>     -    mm/slob: Convert SLOB to use struct slab
>     +    mm/slob: Convert SLOB to use struct slab and struct folio
>      
>     -    Use struct slab throughout the slob allocator.
>     +    Use struct slab throughout the slob allocator. Where non-slab page can appear
>     +    use struct folio instead of struct page.
>      
>          [ vbabka@suse.cz: don't introduce wrappers for PageSlobFree in mm/slab.h just
>            for the single callers being wrappers in mm/slob.c ]
>      
>     +    [ Hyeonggon Yoo <42.hyeyoo@gmail.com>: fix NULL pointer deference ]
>     +
>          Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>          Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>      
>       ## mm/slob.c ##
>     +@@
>     +  * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
>     +  * alloc_pages() directly, allocating compound pages so the page order
>     +  * does not have to be separately tracked.
>     +- * These objects are detected in kfree() because PageSlab()
>     ++ * These objects are detected in kfree() because folio_test_slab()
>     +  * is false for them.
>     +  *
>     +  * SLAB is emulated on top of SLOB by simply calling constructors and
>      @@ mm/slob.c: static LIST_HEAD(free_slob_large);
>       /*
>        * slob_page_free: true for pages on free_slob_pages list.
>     @@ mm/slob.c: static void *slob_page_alloc(struct page *sp, size_t size, int align,
>       							int align_offset)
>       {
>      -	struct page *sp;
>     ++	struct folio *folio;
>      +	struct slab *sp;
>       	struct list_head *slob_list;
>       	slob_t *b = NULL;
>     @@ mm/slob.c: static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
>       			return NULL;
>      -		sp = virt_to_page(b);
>      -		__SetPageSlab(sp);
>     -+		sp = virt_to_slab(b);
>     -+		__SetPageSlab(slab_page(sp));
>     ++		folio = virt_to_folio(b);
>     ++		__folio_set_slab(folio);
>     ++		sp = folio_slab(folio);
>       
>       		spin_lock_irqsave(&slob_lock, flags);
>       		sp->units = SLOB_UNITS(PAGE_SIZE);
>     @@ mm/slob.c: static void slob_free(void *block, int size)
>       		spin_unlock_irqrestore(&slob_lock, flags);
>      -		__ClearPageSlab(sp);
>      -		page_mapcount_reset(sp);
>     -+		__ClearPageSlab(slab_page(sp));
>     ++		__folio_clear_slab(slab_folio(sp));
>      +		page_mapcount_reset(slab_page(sp));
>       		slob_free_pages(b, 0);
>       		return;
>       	}
>     +@@ mm/slob.c: EXPORT_SYMBOL(__kmalloc_node_track_caller);
>     + 
>     + void kfree(const void *block)
>     + {
>     +-	struct page *sp;
>     ++	struct folio *sp;
>     + 
>     + 	trace_kfree(_RET_IP_, block);
>     + 
>     +@@ mm/slob.c: void kfree(const void *block)
>     + 		return;
>     + 	kmemleak_free(block);
>     + 
>     +-	sp = virt_to_page(block);
>     +-	if (PageSlab(sp)) {
>     ++	sp = virt_to_folio(block);
>     ++	if (folio_test_slab(sp)) {
>     + 		int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
>     + 		unsigned int *m = (unsigned int *)(block - align);
>     + 		slob_free(m, *m + align);
>     + 	} else {
>     +-		unsigned int order = compound_order(sp);
>     +-		mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
>     ++		unsigned int order = folio_order(sp);
>     ++
>     ++		mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
>     + 				    -(PAGE_SIZE << order));
>     +-		__free_pages(sp, order);
>     ++		__free_pages(folio_page(sp, 0), order);
>     + 
>     + 	}
>     + }
> 25:  aa4f573a4c96 ! 25:  466b9fb1f6e5 mm/kasan: Convert to struct folio and struct slab
>     @@ Commit message
>      
>          Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>          Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>     +    Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
>          Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
>          Cc: Alexander Potapenko <glider@google.com>
>          Cc: Andrey Konovalov <andreyknvl@gmail.com>
> 26:  67b7966d2fb6 = 26:  b8159ae8e5cd mm/kfence: Convert kfence_guarded_alloc() to struct slab
> 31:  d64dfe49c1e7 ! 27:  4525180926f9 mm/sl*b: Differentiate struct slab fields by sl*b implementations
>     @@ Commit message
>          possible.
>      
>          This should also prevent accidental use of fields that don't exist in given
>     -    implementation. Before this patch virt_to_cache() and and cache_from_obj() was
>     -    visible for SLOB (albeit not used), although it relies on the slab_cache field
>     +    implementation. Before this patch virt_to_cache() and cache_from_obj() were
>     +    visible for SLOB (albeit not used), although they rely on the slab_cache field
>          that isn't set by SLOB. With this patch it's now a compile error, so these
>          functions are now hidden behind #ifndef CONFIG_SLOB.
>      
>     @@ mm/kfence/core.c: static void *kfence_guarded_alloc(struct kmem_cache *cache, si
>      -		slab->s_mem = addr;
>      +#if defined(CONFIG_SLUB)
>      +	slab->objects = 1;
>     -+#elif defined (CONFIG_SLAB)
>     ++#elif defined(CONFIG_SLAB)
>      +	slab->s_mem = addr;
>      +#endif
>       
>     @@ mm/slab.h
>      +
>      +#if defined(CONFIG_SLAB)
>      +
>     -+	union {
>     -+		struct list_head slab_list;
>     + 	union {
>     + 		struct list_head slab_list;
>     +-		struct {	/* Partial pages */
>      +		struct rcu_head rcu_head;
>      +	};
>      +	struct kmem_cache *slab_cache;
>      +	void *freelist;	/* array of free object indexes */
>     -+	void * s_mem;	/* first object */
>     ++	void *s_mem;	/* first object */
>      +	unsigned int active;
>      +
>      +#elif defined(CONFIG_SLUB)
>      +
>     - 	union {
>     - 		struct list_head slab_list;
>     --		struct {	/* Partial pages */
>     ++	union {
>     ++		struct list_head slab_list;
>      +		struct rcu_head rcu_head;
>      +		struct {
>       			struct slab *next;
>     @@ mm/slab.h: struct slab {
>      +#elif defined(CONFIG_SLOB)
>      +
>      +	struct list_head slab_list;
>     -+	void * __unused_1;
>     ++	void *__unused_1;
>      +	void *freelist;		/* first free block */
>     -+	void * __unused_2;
>     ++	void *__unused_2;
>      +	int units;
>      +
>      +#else
>     @@ mm/slab.h: struct slab {
>       #ifdef CONFIG_MEMCG
>       	unsigned long memcg_data;
>      @@ mm/slab.h: struct slab {
>     - 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
>       SLAB_MATCH(flags, __page_flags);
>       SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
>     + SLAB_MATCH(slab_list, slab_list);
>      +#ifndef CONFIG_SLOB
>       SLAB_MATCH(rcu_head, rcu_head);
>     + SLAB_MATCH(slab_cache, slab_cache);
>     ++#endif
>     ++#ifdef CONFIG_SLAB
>     + SLAB_MATCH(s_mem, s_mem);
>     + SLAB_MATCH(active, active);
>      +#endif
>       SLAB_MATCH(_refcount, __page_refcount);
>       #ifdef CONFIG_MEMCG
> 32:  0abf87bae67e = 28:  94b78948d53f mm/slub: Simplify struct slab slabs field definition
> 33:  813c304f18e4 = 29:  f5261e6375f0 mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
> 27:  ebce4b5b5ced ! 30:  1414e8c87de6 zsmalloc: Stop using slab fields in struct page
>     @@ Commit message
>      
>          Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>          Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>     -    Cc: Minchan Kim <minchan@kernel.org>
>     +    Acked-by: Minchan Kim <minchan@kernel.org>
>          Cc: Nitin Gupta <ngupta@vflare.org>
>          Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
>      
> 28:  f124425ae7de = 31:  8a3cda6b38eb bootmem: Use page->index instead of page->freelist
> 29:  82da48c73b2e <  -:  ------------ iommu: Use put_pages_list
> 30:  181e16dfefbb <  -:  ------------ mm: Remove slab from struct page
>  -:  ------------ > 32:  91e069ba116b mm/slob: Remove unnecessary page_mapcount_reset() function call
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-15  1:03   ` Roman Gushchin via iommu
  2021-12-15 23:38     ` Roman Gushchin via iommu
@ 2021-12-20  0:24     ` Vlastimil Babka
  1 sibling, 0 replies; 25+ messages in thread
From: Vlastimil Babka @ 2021-12-20  0:24 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Hyeonggon Yoo, Christoph Lameter, Will Deacon, Julia Lawall,
	Sergey Senozhatsky, x86, Luis Chamberlain, Matthew Wilcox,
	Ingo Molnar, Vladimir Davydov, David Rientjes, Nitin Gupta,
	Marco Elver, Borislav Petkov, Andy Lutomirski, cgroups,
	Thomas Gleixner, Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov,
	patches, Pekka Enberg, Minchan Kim, iommu, Johannes Weiner,
	Andrew Morton, David Woodhouse

On 12/15/21 02:03, Roman Gushchin wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > this cover letter.
>> > 
>> > Series also available in git, based on 5.16-rc3:
>> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> 
>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi Vlastimil!
> 
> I've started to review this patchset (btw, really nice work, I like
> the resulting code way more). Because I'm looking at v3 and I don't have
> the whole v2 in my mailbox, here is what I have now:

Thanks a lot, Roman!

> * mm: add virt_to_folio() and folio_address()
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slab: Dissolve slab_map_pages() in its caller
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Make object_err() static
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Split slab into its own type
> 1) Shouldn't SLAB_MATCH() macro use struct folio instead of struct page for the
> comparison?

Folio doesn't define most of the fields, and matching some to page and
others to folio seems like unnecessary complication. Maybe as part of the
final struct page cleanup, when the slab fields are gone from struct page,
the rest could all be in folio - I'll check once we get there.

> 2) page_slab() is used only in kasan and only in one place, so maybe it's better
> to not introduce it as a generic helper?

Yeah that's the case after the series, but as part of the incremental steps,
page_slab() gets used in many places. I'll consider removing it on top though.

> Other than that
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Add account_slab() and unaccount_slab()
> 1) maybe change the title to convert/replace instead of add?

Done.

> 2) maybe move later changes to memcg_alloc_page_obj_cgroups() to this patch?

Maybe possible, but that would distort the series more than I'd like to at
this rc6 time.

> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Convert virt_to_cache() to use struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Convert __ksize() to struct slab
> It looks like certain parts of __ksize() can be merged between slab, slub and slob?
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Use struct slab in kmem_obj_info()
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> 
> I'll try to finish reviewing the patchset by the end of the week.
> 
> Thanks!
> 
> Roman



* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-15 23:38     ` Roman Gushchin via iommu
  2021-12-16  9:19       ` Vlastimil Babka
@ 2021-12-20  0:47       ` Vlastimil Babka
  2021-12-20  1:42         ` Matthew Wilcox
  1 sibling, 1 reply; 25+ messages in thread
From: Vlastimil Babka @ 2021-12-20  0:47 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Hyeonggon Yoo, Christoph Lameter, Will Deacon, Julia Lawall,
	Sergey Senozhatsky, x86, Luis Chamberlain, Matthew Wilcox,
	Ingo Molnar, Vladimir Davydov, David Rientjes, Nitin Gupta,
	Marco Elver, Borislav Petkov, Andy Lutomirski, cgroups,
	Thomas Gleixner, Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov,
	patches, Pekka Enberg, Minchan Kim, iommu, Johannes Weiner,
	Andrew Morton, David Woodhouse

On 12/16/21 00:38, Roman Gushchin wrote:
> Part 2:
> 
> * mm: Convert check_heap_object() to use struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert detached_freelist to use a struct slab
> How about to convert free_nonslab_page() to free_nonslab_folio()?
> And maybe rename it to something like free_large_kmalloc()?
> If I'm not missing something, large kmallocs are the only way we can end up
> there with a !slab folio/page.

Good point, thanks! But I did that as part of the following patch, where it fits
logically better.

> * mm/slub: Convert kfree() to use a struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>

Didn't add your tag because of the added free_large_kmalloc() change.

> * mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert print_page_info() to print_slab_info()
> Do we really need to explicitly convert slab_folio()'s result to (struct folio *)?

Unfortunately yes, as long as folio_flags() doesn't take a const struct folio *,
which will need some yak shaving.

> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert alloc_slab_page() to return a struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert __free_slab() to use struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert pfmemalloc_match() to take a struct slab
> Cool! Removing pfmemalloc_unsafe() is really nice.
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert most struct page to struct slab by spatch
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Finish struct page to struct slab conversion
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>

Thanks again!

> * mm/slab: Convert most struct page to struct slab by spatch
> 
> Another patch with the same title? Rebase error?
> 
> * mm/slab: Finish struct page to struct slab conversion
> 
> And this one too?
> 
> 
> Thanks!
> 
> Roman



* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-20  0:47       ` Vlastimil Babka
@ 2021-12-20  1:42         ` Matthew Wilcox
  0 siblings, 0 replies; 25+ messages in thread
From: Matthew Wilcox @ 2021-12-20  1:42 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, H. Peter Anvin,
	Hyeonggon Yoo, Christoph Lameter, Will Deacon, Julia Lawall,
	Sergey Senozhatsky, x86, Luis Chamberlain, kasan-dev, Ingo Molnar,
	Vladimir Davydov, David Rientjes, Nitin Gupta, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse, Roman Gushchin

On Mon, Dec 20, 2021 at 01:47:54AM +0100, Vlastimil Babka wrote:
> > * mm/slub: Convert print_page_info() to print_slab_info()
> > Do we really need to explicitly convert slab_folio()'s result to (struct folio *)?
> 
> Unfortunately yes, as long as folio_flags() don't take const struct folio *,
> which will need some yak shaving.

In case anyone's interested ...

folio_flags calls VM_BUG_ON_PGFLAGS() which would need its second
argument to be const.

That means dump_page() needs to take a const struct page, which
means __dump_page() needs its argument to be const.

That calls ...

is_migrate_cma_page()
page_mapping()
page_mapcount()
page_ref_count()
page_to_pgoff()
page_to_pfn()
hpage_pincount_available()
head_compound_mapcount()
head_compound_pincount()
compound_order()
PageKsm()
PageAnon()
PageCompound()

... and at that point, I ran out of motivation to track down some parts
of this tarbaby that could be fixed.  I did do:

    mm: constify page_count and page_ref_count
    mm: constify get_pfnblock_flags_mask and get_pfnblock_migratetype
    mm: make compound_head const-preserving
    mm/page_owner: constify dump_page_owner

so some of those are already done.  But a lot of them just need to be
done at the same time.  For example, page_mapping() calls
folio_mapping() which calls folio_test_slab() which calls folio_flags(),
so dump_page() and page_mapping() need to be done at the same time.

One bit that could be broken off easily (I think ...) is PageTransTail(),
PageTail(), PageCompound(), PageHuge(), page_to_pgoff() and
page_to_index().  One wrinkle is that a temporary
TESTPAGEFLAGS_FALSE_CONST is needed.  But I haven't tried it yet.



* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-16 15:00   ` Hyeonggon Yoo
@ 2021-12-20 23:58     ` Vlastimil Babka
  2021-12-21 17:25       ` Robin Murphy
  2021-12-22  7:36       ` Hyeonggon Yoo
  0 siblings, 2 replies; 25+ messages in thread
From: Vlastimil Babka @ 2021-12-20 23:58 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, Matthew Wilcox, Ingo Molnar,
	Vladimir Davydov, David Rientjes, Nitin Gupta, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse

On 12/16/21 16:00, Hyeonggon Yoo wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > this cover letter.
>> > 
>> > Series also available in git, based on 5.16-rc3:
>> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> 
>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Reviewing the whole patch series takes longer than I thought.
> I'll try to review and test the rest of the patches when I have time.
> 
> I added Tested-by if the kernel builds okay and kselftests
> do not break the kernel on my machine
> (with CONFIG_SLAB/SLUB/SLOB depending on the patch).

Thanks!

> Let me know if you know a better way to test a patch.

Testing on your machine is just fine.

> # mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
> 
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> Comment:
> Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
> btw, do we need slabs_cpu_partial attribute when we don't use
> cpu partials? (!SLUB_CPU_PARTIAL)

The sysfs attribute? Yeah, we should stay consistent for userspace, which
expects to be able to read it (even as zeroes), regardless of config.

> # mm/slub: Simplify struct slab slabs field definition
> Comment:
> 
> This is how struct page looks on the top of v3r3 branch:
> struct page {
> [...]
>                 struct {        /* slab, slob and slub */
>                         union {
>                                 struct list_head slab_list;
>                                 struct {        /* Partial pages */
>                                         struct page *next;
> #ifdef CONFIG_64BIT
>                                         int pages;      /* Nr of pages left */
> #else
>                                         short int pages;
> #endif
>                                 };
>                         };
> [...]
> 
> It's not consistent with struct slab.

Hm right. But as we don't actually use the struct page version anymore, and
it's not one of the fields checked by SLAB_MATCH(), we can ignore this.

> I think this is because "mm: Remove slab from struct page" was dropped.

That was just postponed until the iommu changes are in. Matthew mentioned those
might be merged too, so that final cleanup will happen as well and take care of
the discrepancy above, so no need for extra churn to address it specifically.

> Would you update some of patches?
> 
> # mm/sl*b: Differentiate struct slab fields by sl*b implementations
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Works with SL[AUO]B on my machine and makes the code much better.
> 
> # mm/slob: Convert SLOB to use struct slab and struct folio
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> It still works fine on SLOB.
> 
> # mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> # mm/slub: Convert __free_slab() to use struct slab
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> Thanks,
> Hyeonggon.

Thanks again,
Vlastimil


* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-20 23:58     ` Vlastimil Babka
@ 2021-12-21 17:25       ` Robin Murphy
  2021-12-22  7:36       ` Hyeonggon Yoo
  1 sibling, 0 replies; 25+ messages in thread
From: Robin Murphy @ 2021-12-21 17:25 UTC (permalink / raw)
  To: Vlastimil Babka, Hyeonggon Yoo
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, Matthew Wilcox,
	H. Peter Anvin, Christoph Lameter, Will Deacon, Pekka Enberg,
	Minchan Kim, x86, iommu, kasan-dev, Ingo Molnar, Vladimir Davydov,
	David Rientjes, Nitin Gupta, Marco Elver, Borislav Petkov,
	Andy Lutomirski, cgroups, Thomas Gleixner, Andrew Morton,
	Dmitry Vyukov, Andrey Konovalov, patches, Julia Lawall,
	Sergey Senozhatsky, Luis Chamberlain, Johannes Weiner,
	Joonsoo Kim, David Woodhouse

On 2021-12-20 23:58, Vlastimil Babka wrote:
> On 12/16/21 16:00, Hyeonggon Yoo wrote:
>> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>>> On 12/1/21 19:14, Vlastimil Babka wrote:
>>>> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>>>> this cover letter.
>>>>
>>>> Series also available in git, based on 5.16-rc3:
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>>>
>>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>>
>> Reviewing the whole patch series takes longer than I thought.
>> I'll try to review and test rest of patches when I have time.
>>
>> I added Tested-by if kernel builds okay and kselftests
>> does not break the kernel on my machine.
>> (with CONFIG_SLAB/SLUB/SLOB depending on the patch),
> 
> Thanks!
> 
> Let me know if you know a better way to test a patch.
> 
> Testing on your machine is just fine.
> 
>> # mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
>>
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>>
>> Comment:
>> Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
>> btw, do we need slabs_cpu_partial attribute when we don't use
>> cpu partials? (!SLUB_CPU_PARTIAL)
> 
> The sysfs attribute? Yeah we should be consistent to userspace expecting to
> read it (even with zeroes), regardless of config.
> 
>> # mm/slub: Simplify struct slab slabs field definition
>> Comment:
>>
>> This is how struct page looks on the top of v3r3 branch:
>> struct page {
>> [...]
>>                  struct {        /* slab, slob and slub */
>>                          union {
>>                                  struct list_head slab_list;
>>                                  struct {        /* Partial pages */
>>                                          struct page *next;
>> #ifdef CONFIG_64BIT
>>                                          int pages;      /* Nr of pages left */
>> #else
>>                                          short int pages;
>> #endif
>>                                  };
>>                          };
>> [...]
>>
>> It's not consistent with struct slab.
> 
> Hm right. But as we don't actually use the struct page version anymore, and
> it's not one of the fields checked by SLAB_MATCH(), we can ignore this.
> 
>> I think this is because "mm: Remove slab from struct page" was dropped.
> 
> That was just postponed until iommu changes are in. Matthew mentioned those
> might be merged too, so that final cleanup will happen too and take care of
> the discrepancy above, so no need for extra churn to address it specifically.

FYI the IOMMU changes are now queued in linux-next, so if all goes well 
you might be able to sneak that final patch in too.

Robin.

> 
>> Would you update some of patches?
>>
>> # mm/sl*b: Differentiate struct slab fields by sl*b implementations
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Works SL[AUO]B on my machine and makes code much better.
>>
>> # mm/slob: Convert SLOB to use struct slab and struct folio
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> It still works fine on SLOB.
>>
>> # mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>>
>> # mm/slub: Convert __free_slab() to use struct slab
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>>
>> Thanks,
>> Hyeonggon.
> 
> Thanks again,
> Vlastimil

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-20 23:58     ` Vlastimil Babka
  2021-12-21 17:25       ` Robin Murphy
@ 2021-12-22  7:36       ` Hyeonggon Yoo
  1 sibling, 0 replies; 25+ messages in thread
From: Hyeonggon Yoo @ 2021-12-22  7:36 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, Matthew Wilcox, Ingo Molnar,
	Vladimir Davydov, David Rientjes, Nitin Gupta, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse

On Tue, Dec 21, 2021 at 12:58:14AM +0100, Vlastimil Babka wrote:
> On 12/16/21 16:00, Hyeonggon Yoo wrote:
> > On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> >> On 12/1/21 19:14, Vlastimil Babka wrote:
> >> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> > this cover letter.
> >> > 
> >> > Series also available in git, based on 5.16-rc3:
> >> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> >> 
> >> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> >> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> > 
> > Reviewing the whole patch series takes longer than I thought.
> > I'll try to review and test rest of patches when I have time.
> > 
> > I added Tested-by if kernel builds okay and kselftests
> > does not break the kernel on my machine.
> > (with CONFIG_SLAB/SLUB/SLOB depending on the patch),
> 
> Thanks!
>

:)

> > Let me know if you know a better way to test a patch.
> 
> Testing on your machine is just fine.
>

Good!

> > # mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
> > 
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > 
> > Comment:
> > Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
> > btw, do we need slabs_cpu_partial attribute when we don't use
> > cpu partials? (!SLUB_CPU_PARTIAL)
> 
> The sysfs attribute? Yeah we should be consistent to userspace expecting to
> read it (even with zeroes), regardless of config.
> 

I thought entirely disabling the attribute would be simpler,
but okay, if it should be exposed even when it's always zero.

> > # mm/slub: Simplify struct slab slabs field definition
> > Comment:
> > 
> > This is how struct page looks on the top of v3r3 branch:
> > struct page {
> > [...]
> >                 struct {        /* slab, slob and slub */
> >                         union {
> >                                 struct list_head slab_list;
> >                                 struct {        /* Partial pages */
> >                                         struct page *next;
> > #ifdef CONFIG_64BIT
> >                                         int pages;      /* Nr of pages left */
> > #else
> >                                         short int pages;
> > #endif
> >                                 };
> >                         };
> > [...]
> > 
> > It's not consistent with struct slab.
> 
> Hm right. But as we don't actually use the struct page version anymore, and
> it's not one of the fields checked by SLAB_MATCH(), we can ignore this.
>

Yeah, this is not a big problem. I just mentioned it because it looked
weird and I didn't know when the patch "mm: Remove slab from struct page"
would come back.

> > I think this is because "mm: Remove slab from struct page" was dropped.
>
> That was just postponed until iommu changes are in. Matthew mentioned those
> might be merged too, so that final cleanup will happen too and take care of
> the discrepancy above, so no need for extra churn to address it specifically.
> 

Okay, it seems no extra work is needed until the iommu changes are in!

BTW, the patch I sent ("mm/slob: Remove unnecessary page_mapcount_reset()
function call") refers to commit 4525180926f9 ("mm/sl*b: Differentiate struct
slab fields by sl*b implementations"), but that commit hash changed after the
tree was rebased.

It would be nice to have a script to handle situations like this.
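A minimal sketch of such a script: look up the current hash of a commit by its subject line, for refreshing stale commit references after a rebase (this assumes the subject line itself survived the rebase unchanged, which is not guaranteed):

```shell
#!/bin/sh
# Print the current hash of the newest commit whose subject contains
# the given string, searching all refs. Useful when a rebase has
# invalidated commit hashes quoted in patch descriptions.
find_commit_by_subject() {
	git log --all --fixed-strings --grep="$1" --format='%H' -n 1
}

# Example (subject from the series discussed above):
# find_commit_by_subject "mm/sl*b: Differentiate struct slab fields"
```

`--fixed-strings` avoids treating characters like `*` in the subject as a regex, and `-n 1` keeps only the most recent match.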

> > Would you update some of patches?
> > 
> > # mm/sl*b: Differentiate struct slab fields by sl*b implementations
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Works SL[AUO]B on my machine and makes code much better.
> > 
> > # mm/slob: Convert SLOB to use struct slab and struct folio
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > It still works fine on SLOB.
> > 
> > # mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> >
> > # mm/slub: Convert __free_slab() to use struct slab
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > 
> > Thanks,
> > Hyeonggon.
> 
> Thanks again,
> Vlastimil

Have a nice day, thanks!
Hyeonggon.

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-14 12:57 ` Vlastimil Babka
                     ` (2 preceding siblings ...)
  2021-12-16 15:00   ` Hyeonggon Yoo
@ 2021-12-22 16:56   ` Vlastimil Babka
  2021-12-25  9:16     ` Hyeonggon Yoo
  2021-12-29 11:22     ` Hyeonggon Yoo
  3 siblings, 2 replies; 25+ messages in thread
From: Vlastimil Babka @ 2021-12-22 16:56 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, H. Peter Anvin,
	Hyeonggon Yoo, Will Deacon, Sergey Senozhatsky, x86,
	Luis Chamberlain, kasan-dev, Ingo Molnar, Vladimir Davydov,
	Nitin Gupta, Marco Elver, Borislav Petkov, Andy Lutomirski,
	cgroups, Thomas Gleixner, Dmitry Vyukov, Andrey Konovalov,
	patches, Julia Lawall, Minchan Kim, iommu, Johannes Weiner,
	Andrew Morton, David Woodhouse, Roman Gushchin

On 12/14/21 13:57, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
>> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> this cover letter.
>>
>> Series also available in git, based on 5.16-rc3:
>> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
-next. I've shortened git commit log lines to make checkpatch happier,
so no range-diff as it would be too long. I believe it would be useless
spam to post the whole series now, shortly before xmas, so I will do it
at rc8 time, to hopefully collect remaining reviews. But if anyone wants
a mailed version, I can do that.

Changes in v4:
- rebase to 5.16-rc6 to avoid a conflict with mainline
- collect acks/reviews/tested-by from Johannes, Roman, Hyeonggon Yoo -
thanks!
- in patch "mm/slub: Convert detached_freelist to use a struct slab"
renamed free_nonslab_page() to free_large_kmalloc() and use folio there,
as suggested by Roman
- in "mm/memcg: Convert slab objcgs from struct page to struct slab"
change one caller of slab_objcgs_check() to slab_objcgs() as suggested
by Johannes, realize the other caller should be also changed, and remove
slab_objcgs_check() completely.

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-22 16:56   ` Vlastimil Babka
@ 2021-12-25  9:16     ` Hyeonggon Yoo
  2021-12-25 17:53       ` Matthew Wilcox
  2021-12-29 11:22     ` Hyeonggon Yoo
  1 sibling, 1 reply; 25+ messages in thread
From: Hyeonggon Yoo @ 2021-12-25  9:16 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, Matthew Wilcox, Ingo Molnar,
	Vladimir Davydov, David Rientjes, Nitin Gupta, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse, Roman Gushchin

On Wed, Dec 22, 2021 at 05:56:50PM +0100, Vlastimil Babka wrote:
> On 12/14/21 13:57, Vlastimil Babka wrote:
> > On 12/1/21 19:14, Vlastimil Babka wrote:
> >> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> this cover letter.
> >>
> >> Series also available in git, based on 5.16-rc3:
> >> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> > 
> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
> -next. I've shortened git commit log lines to make checkpatch happier,
> so no range-diff as it would be too long. I believe it would be useless
> spam to post the whole series now, shortly before xmas, so I will do it
> at rc8 time, to hopefully collect remaining reviews. But if anyone wants
> a mailed version, I can do that.

Hello Vlastimil, Merry Christmas!
This is part 2 of reviewing/testing patches.

# mm/kasan: Convert to struct folio and struct slab
I'm not familiar with kasan yet, but kasan runs well on my machine and
kasan's bug reporting functionality works fine too.
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

# mm: Convert struct page to struct slab in functions used by other subsystems
I'm not familiar with kasan, but to ask:
Does ____kasan_slab_free detect invalid free if someone frees
an object that is not allocated from slab?

@@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
-       if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
+       if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
            object)) {
                kasan_report_invalid_free(tagged_object, ip);
                return true;

I'm asking this because virt_to_slab() will return NULL if folio_test_slab()
returns false. That will cause NULL pointer dereference in nearest_obj.
I don't think this change is intended.

This makes me think some of the virt_to_head_page() -> virt_to_slab()
conversions need to be reviewed with caution.

# mm/slab: Finish struct page to struct slab conversion
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

# mm/slab: Convert most struct page to struct slab by spatch
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

I'll come back with part 3 :)
Enjoy your Christmas!
Hyeonggon

> Changes in v4:
> - rebase to 5.16-rc6 to avoid a conflict with mainline
> - collect acks/reviews/tested-by from Johannes, Roman, Hyeonggon Yoo -
> thanks!
> - in patch "mm/slub: Convert detached_freelist to use a struct slab"
> renamed free_nonslab_page() to free_large_kmalloc() and use folio there,
> as suggested by Roman
> - in "mm/memcg: Convert slab objcgs from struct page to struct slab"
> change one caller of slab_objcgs_check() to slab_objcgs() as suggested
> by Johannes, realize the other caller should be also changed, and remove
> slab_objcgs_check() completely.

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-25  9:16     ` Hyeonggon Yoo
@ 2021-12-25 17:53       ` Matthew Wilcox
  2021-12-27  2:43         ` Hyeonggon Yoo
  0 siblings, 1 reply; 25+ messages in thread
From: Matthew Wilcox @ 2021-12-25 17:53 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, kasan-dev, Ingo Molnar, Vlastimil Babka,
	David Rientjes, Nitin Gupta, Vladimir Davydov, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse, Roman Gushchin

On Sat, Dec 25, 2021 at 09:16:55AM +0000, Hyeonggon Yoo wrote:
> # mm: Convert struct page to struct slab in functions used by other subsystems
> I'm not familiar with kasan, but to ask:
> Does ____kasan_slab_free detect invalid free if someone frees
> an object that is not allocated from slab?
> 
> @@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> -       if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
> +       if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
>             object)) {
>                 kasan_report_invalid_free(tagged_object, ip);
>                 return true;
> 
> I'm asking this because virt_to_slab() will return NULL if folio_test_slab()
> returns false. That will cause NULL pointer dereference in nearest_obj.
> I don't think this change is intended.

You need to track down how this could happen.  As far as I can tell,
it's always called when we know the object is part of a slab.  That's
where the cachep pointer is deduced from.


* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-25 17:53       ` Matthew Wilcox
@ 2021-12-27  2:43         ` Hyeonggon Yoo
  0 siblings, 0 replies; 25+ messages in thread
From: Hyeonggon Yoo @ 2021-12-27  2:43 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, kasan-dev, Ingo Molnar, Vlastimil Babka,
	David Rientjes, Nitin Gupta, Vladimir Davydov, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse, Roman Gushchin

On Sat, Dec 25, 2021 at 05:53:23PM +0000, Matthew Wilcox wrote:
> On Sat, Dec 25, 2021 at 09:16:55AM +0000, Hyeonggon Yoo wrote:
> > # mm: Convert struct page to struct slab in functions used by other subsystems
> > I'm not familiar with kasan, but to ask:
> > Does ____kasan_slab_free detect invalid free if someone frees
> > an object that is not allocated from slab?
> > 
> > @@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> > -       if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
> > +       if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
> >             object)) {
> >                 kasan_report_invalid_free(tagged_object, ip);
> >                 return true;
> > 
> > I'm asking this because virt_to_slab() will return NULL if folio_test_slab()
> > returns false. That will cause NULL pointer dereference in nearest_obj.
> > I don't think this change is intended.
> 
> You need to track down how this could happen.  As far as I can tell,
> it's always called when we know the object is part of a slab.  That's
> where the cachep pointer is deduced from.

Thank you Matthew, you are right. I read the code too narrowly.
When we call the kasan hooks, we know that the object was allocated from
the slab cache (through cache_from_obj()).

I'll review that patch again in part 3!

Thanks,
Hyeonggon

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-22 16:56   ` Vlastimil Babka
  2021-12-25  9:16     ` Hyeonggon Yoo
@ 2021-12-29 11:22     ` Hyeonggon Yoo
  2022-01-03 17:56       ` Vlastimil Babka
  1 sibling, 1 reply; 25+ messages in thread
From: Hyeonggon Yoo @ 2021-12-29 11:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, Matthew Wilcox, Ingo Molnar,
	Vladimir Davydov, David Rientjes, Nitin Gupta, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse, Roman Gushchin

On Wed, Dec 22, 2021 at 05:56:50PM +0100, Vlastimil Babka wrote:
> On 12/14/21 13:57, Vlastimil Babka wrote:
> > On 12/1/21 19:14, Vlastimil Babka wrote:
> >> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> this cover letter.
> >>
> >> Series also available in git, based on 5.16-rc3:
> >> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> > 
> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
> -next. I've shortened git commit log lines to make checkpatch happier,
> so no range-diff as it would be too long. I believe it would be useless
> spam to post the whole series now, shortly before xmas, so I will do it
> at rc8 time, to hopefully collect remaining reviews. But if anyone wants
> a mailed version, I can do that.
>

Hello Matthew and Vlastimil,
this is part 3 of the review.

# mm: Convert struct page to struct slab in functions used by other subsystems
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>


# mm/slub: Convert most struct page to struct slab by spatch
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
with a question below.

-static int check_slab(struct kmem_cache *s, struct page *page)
+static int check_slab(struct kmem_cache *s, struct slab *slab)
 {
        int maxobj;
 
-       if (!PageSlab(page)) {
-               slab_err(s, page, "Not a valid slab page");
+       if (!folio_test_slab(slab_folio(slab))) {
+               slab_err(s, slab, "Not a valid slab page");
                return 0;
        }

Can't we guarantee that struct slab * always points to a slab?

For a struct page *, !PageSlab(page) is possible because a struct page *
can describe something other than a slab, but unlike struct page, a
struct slab * can only be a slab. The code would be simpler if we
guaranteed that a struct slab * always points to a slab (or NULL).


# mm/slub: Convert pfmemalloc_match() to take a struct slab
It's confusing to me because the original pfmemalloc_match() is removed
and pfmemalloc_match_unsafe() was renamed to pfmemalloc_match() and
converted to use slab_test_pfmemalloc() helper.

But I agree with the resulting code. so:
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>


# mm/slub: Convert alloc_slab_page() to return a struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>


# mm/slub: Convert print_page_info() to print_slab_info()
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

I hope to review rest of patches in a week.

Thanks,
Hyeonggon

> Changes in v4:
> - rebase to 5.16-rc6 to avoid a conflict with mainline
> - collect acks/reviews/tested-by from Johannes, Roman, Hyeonggon Yoo -
> thanks!
> - in patch "mm/slub: Convert detached_freelist to use a struct slab"
> renamed free_nonslab_page() to free_large_kmalloc() and use folio there,
> as suggested by Roman
> - in "mm/memcg: Convert slab objcgs from struct page to struct slab"
> change one caller of slab_objcgs_check() to slab_objcgs() as suggested
> by Johannes, realize the other caller should be also changed, and remove
> slab_objcgs_check() completely.

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-29 11:22     ` Hyeonggon Yoo
@ 2022-01-03 17:56       ` Vlastimil Babka
  0 siblings, 0 replies; 25+ messages in thread
From: Vlastimil Babka @ 2022-01-03 17:56 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko, linux-mm,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, H. Peter Anvin,
	Christoph Lameter, Will Deacon, Julia Lawall, Sergey Senozhatsky,
	x86, Luis Chamberlain, Matthew Wilcox, Ingo Molnar,
	Vladimir Davydov, David Rientjes, Nitin Gupta, Marco Elver,
	Borislav Petkov, Andy Lutomirski, cgroups, Thomas Gleixner,
	Joonsoo Kim, Dmitry Vyukov, Andrey Konovalov, patches,
	Pekka Enberg, Minchan Kim, iommu, Johannes Weiner, Andrew Morton,
	David Woodhouse, Roman Gushchin

On 12/29/21 12:22, Hyeonggon Yoo wrote:
> On Wed, Dec 22, 2021 at 05:56:50PM +0100, Vlastimil Babka wrote:
>> On 12/14/21 13:57, Vlastimil Babka wrote:
>> > On 12/1/21 19:14, Vlastimil Babka wrote:
>> >> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> >> this cover letter.
>> >>
>> >> Series also available in git, based on 5.16-rc3:
>> >> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> > 
>> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>> 
>> Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
>> -next. I've shortened git commit log lines to make checkpatch happier,
>> so no range-diff as it would be too long. I believe it would be useless
>> spam to post the whole series now, shortly before xmas, so I will do it
>> at rc8 time, to hopefully collect remaining reviews. But if anyone wants
>> a mailed version, I can do that.
>>
> 
> Hello Matthew and Vlastimil.
> it's part 3 of review.
> 
> # mm: Convert struct page to struct slab in functions used by other subsystems
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> 
> # mm/slub: Convert most struct page to struct slab by spatch
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> with a question below.
> 
> -static int check_slab(struct kmem_cache *s, struct page *page)
> +static int check_slab(struct kmem_cache *s, struct slab *slab)
>  {
>         int maxobj;
>  
> -       if (!PageSlab(page)) {
> -               slab_err(s, page, "Not a valid slab page");
> +       if (!folio_test_slab(slab_folio(slab))) {
> +               slab_err(s, slab, "Not a valid slab page");
>                 return 0;
>         }
> 
> Can't we guarantee that struct slab * always points to a slab?

Normally, yes.

> for struct page * it can be !PageSlab(page) because struct page *
> can be other than slab. but struct slab * can only be slab
> unlike struct page. code will be simpler if we guarantee that
> struct slab * always points to a slab (or NULL).

That's what the code does indeed. But check_slab() is called as part of
various consistency checks, and there we deliberately question all
assumptions in order to find bugs (e.g. memory corruption), such as a page
that is still on the list of slabs even though it was already freed and
reused, and thus lacks the slab page flag.

But it's nice how using struct slab makes such a check immediately stand out
as suspicious, right?

> # mm/slub: Convert pfmemalloc_match() to take a struct slab
> It's confusing to me because the original pfmemalloc_match() is removed
> and pfmemalloc_match_unsafe() was renamed to pfmemalloc_match() and
> converted to use slab_test_pfmemalloc() helper.
> 
> But I agree with the resulting code. so:
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> 
> # mm/slub: Convert alloc_slab_page() to return a struct slab
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> 
> # mm/slub: Convert print_page_info() to print_slab_info()
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> I hope to review rest of patches in a week.

Thanks for your reviews/tests!

end of thread, other threads:[~2022-01-03 17:56 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-12-01 18:14 [PATCH v2 00/33] Separate struct slab from struct page Vlastimil Babka
2021-12-01 18:15 ` [PATCH v2 29/33] iommu: Use put_pages_list Vlastimil Babka
2021-12-01 19:07   ` Matthew Wilcox
2021-12-01 19:45     ` Robin Murphy
2021-12-02 12:25 ` [PATCH v2 00/33] Separate struct slab from struct page Vlastimil Babka
2021-12-14 12:57 ` Vlastimil Babka
2021-12-14 14:38   ` Hyeonggon Yoo
2021-12-14 14:43     ` Vlastimil Babka
2021-12-15  3:47       ` Hyeonggon Yoo
2021-12-15  1:03   ` Roman Gushchin via iommu
2021-12-15 23:38     ` Roman Gushchin via iommu
2021-12-16  9:19       ` Vlastimil Babka
2021-12-20  0:47       ` Vlastimil Babka
2021-12-20  1:42         ` Matthew Wilcox
2021-12-20  0:24     ` Vlastimil Babka
2021-12-16 15:00   ` Hyeonggon Yoo
2021-12-20 23:58     ` Vlastimil Babka
2021-12-21 17:25       ` Robin Murphy
2021-12-22  7:36       ` Hyeonggon Yoo
2021-12-22 16:56   ` Vlastimil Babka
2021-12-25  9:16     ` Hyeonggon Yoo
2021-12-25 17:53       ` Matthew Wilcox
2021-12-27  2:43         ` Hyeonggon Yoo
2021-12-29 11:22     ` Hyeonggon Yoo
2022-01-03 17:56       ` Vlastimil Babka

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox