* [PATCH 3/3] s390: supplement for ptdesc conversion
       [not found] <cover.1709541697.git.zhengqi.arch@bytedance.com>
@ 2024-03-04 11:07 ` Qi Zheng
  2024-03-05  7:21   ` [PATCH v2 " Qi Zheng
  2024-03-26 19:48   ` [PATCH " Vishal Moola
  0 siblings, 2 replies; 5+ messages in thread
From: Qi Zheng @ 2024-03-04 11:07 UTC (permalink / raw)
  To: akpm, vishal.moola, hughd, david, rppt, willy, muchun.song
  Cc: linux-mm, linux-kernel, Qi Zheng, Christian Borntraeger,
        Janosch Frank, Claudio Imbrenda, kvm, linux-s390

After commit 6326c26c1514 ("s390: convert various pgalloc functions to
use ptdescs"), there are still some places that use page->{lru, index}
instead of ptdesc->{pt_list, pt_index}. To make the use of
ptdesc->{pt_list, pt_index} clearer, convert these remaining places as
well.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: kvm@vger.kernel.org
Cc: linux-s390@vger.kernel.org
---
 arch/s390/include/asm/pgalloc.h |  4 ++--
 arch/s390/mm/gmap.c             | 38 +++++++++++++++++----------------
 arch/s390/mm/pgalloc.c          |  8 +++----
 3 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 502d655fe6ae..7b84ef6dc4b6 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -23,9 +23,9 @@ unsigned long *crst_table_alloc(struct mm_struct *);
 void crst_table_free(struct mm_struct *, unsigned long *);
 
 unsigned long *page_table_alloc(struct mm_struct *);
-struct page *page_table_alloc_pgste(struct mm_struct *mm);
+struct ptdesc *page_table_alloc_pgste(struct mm_struct *mm);
 void page_table_free(struct mm_struct *, unsigned long *);
-void page_table_free_pgste(struct page *page);
+void page_table_free_pgste(struct ptdesc *ptdesc);
 extern int page_table_allocate_pgste;
 
 static inline void crst_table_init(unsigned long *crst, unsigned long entry)
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 8da39deb56ca..4d2674f89322 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -206,9 +206,11 @@ static void gmap_free(struct gmap *gmap)
 
 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
+		struct ptdesc *ptdesc;
+
 		/* Free all page tables. */
-		list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
-			page_table_free_pgste(page);
+		list_for_each_entry_safe(ptdesc, next, &gmap->pt_list, pt_list)
+			page_table_free_pgste(ptdesc);
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
 		gmap_put(gmap->parent);
@@ -1348,7 +1350,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 {
 	unsigned long *ste;
 	phys_addr_t sto, pgt;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	ste = gmap_table_walk(sg, raddr, 1); /* get segment pointer */
@@ -1361,9 +1363,9 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	*ste = _SEGMENT_ENTRY_EMPTY;
 	__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 	/* Free page table */
-	page = phys_to_page(pgt);
-	list_del(&page->lru);
-	page_table_free_pgste(page);
+	ptdesc = page_ptdesc(phys_to_page(pgt));
+	list_del(&ptdesc->pt_list);
+	page_table_free_pgste(ptdesc);
 }
 
 /**
@@ -1377,7 +1379,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 				unsigned long *sgt)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	phys_addr_t pgt;
 	int i;
 
@@ -1389,9 +1391,9 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 		sgt[i] = _SEGMENT_ENTRY_EMPTY;
 		__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 		/* Free page table */
-		page = phys_to_page(pgt);
-		list_del(&page->lru);
-		page_table_free_pgste(page);
+		ptdesc = page_ptdesc(phys_to_page(pgt));
+		list_del(&ptdesc->pt_list);
+		page_table_free_pgste(ptdesc);
 	}
 }
 
@@ -2058,19 +2060,19 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 {
 	unsigned long raddr, origin;
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	phys_addr_t s_pgt;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg) || (pgt & _SEGMENT_ENTRY_LARGE));
 	/* Allocate a shadow page table */
-	page = page_table_alloc_pgste(sg->mm);
-	if (!page)
+	ptdesc = page_table_alloc_pgste(sg->mm);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->index = pgt & _SEGMENT_ENTRY_ORIGIN;
+	ptdesc->pt_index = pgt & _SEGMENT_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
-	s_pgt = page_to_phys(page);
+		ptdesc->pt_index |= GMAP_SHADOW_FAKE_TABLE;
+	s_pgt = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow page table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */
@@ -2088,7 +2090,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	/* mark as invalid as long as the parent table is not protected */
 	*table = (unsigned long) s_pgt | _SEGMENT_ENTRY |
 		 (pgt & _SEGMENT_ENTRY_PROTECT) | _SEGMENT_ENTRY_INVALID;
-	list_add(&page->lru, &sg->pt_list);
+	list_add(&ptdesc->pt_list, &sg->pt_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_SEGMENT_ENTRY_INVALID;
@@ -2114,7 +2116,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page_table_free_pgste(page);
+	page_table_free_pgste(ptdesc);
 	return rc;
 }
 
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 008e487c94a6..abb629d7e131 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -135,7 +135,7 @@ int crst_table_upgrade(struct mm_struct *mm, unsigned long end)
 
 #ifdef CONFIG_PGSTE
 
-struct page *page_table_alloc_pgste(struct mm_struct *mm)
+struct ptdesc *page_table_alloc_pgste(struct mm_struct *mm)
 {
 	struct ptdesc *ptdesc;
 	u64 *table;
@@ -147,12 +147,12 @@ struct page *page_table_alloc_pgste(struct mm_struct *mm)
 		memset64(table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	}
-	return ptdesc_page(ptdesc);
+	return ptdesc;
 }
 
-void page_table_free_pgste(struct page *page)
+void page_table_free_pgste(struct ptdesc *ptdesc)
 {
-	pagetable_free(page_ptdesc(page));
+	pagetable_free(ptdesc);
 }
 
 #endif /* CONFIG_PGSTE */
-- 
2.30.2
* [PATCH v2 3/3] s390: supplement for ptdesc conversion
  2024-03-04 11:07 ` [PATCH 3/3] s390: supplement for ptdesc conversion Qi Zheng
@ 2024-03-05  7:21 ` Qi Zheng
  2024-03-26  7:46   ` Heiko Carstens
  1 sibling, 1 reply; 5+ messages in thread
From: Qi Zheng @ 2024-03-05 7:21 UTC (permalink / raw)
  To: akpm, vishal.moola, hughd, david, rppt, willy, muchun.song
  Cc: linux-mm, linux-kernel, Qi Zheng, Christian Borntraeger,
        Janosch Frank, Claudio Imbrenda, kvm, linux-s390

After commit 6326c26c1514 ("s390: convert various pgalloc functions to
use ptdescs"), there are still some places that use page->{lru, index}
instead of ptdesc->{pt_list, pt_index}. To make the use of
ptdesc->{pt_list, pt_index} clearer, convert these remaining places as
well.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: kvm@vger.kernel.org
Cc: linux-s390@vger.kernel.org
---
v1 -> v2: fix build failure (cross compilation successful)

 arch/s390/include/asm/pgalloc.h |  4 ++--
 arch/s390/mm/gmap.c             | 38 +++++++++++++++++----------------
 arch/s390/mm/pgalloc.c          |  8 +++----
 3 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 502d655fe6ae..7b84ef6dc4b6 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -23,9 +23,9 @@ unsigned long *crst_table_alloc(struct mm_struct *);
 void crst_table_free(struct mm_struct *, unsigned long *);
 
 unsigned long *page_table_alloc(struct mm_struct *);
-struct page *page_table_alloc_pgste(struct mm_struct *mm);
+struct ptdesc *page_table_alloc_pgste(struct mm_struct *mm);
 void page_table_free(struct mm_struct *, unsigned long *);
-void page_table_free_pgste(struct page *page);
+void page_table_free_pgste(struct ptdesc *ptdesc);
 extern int page_table_allocate_pgste;
 
 static inline void crst_table_init(unsigned long *crst, unsigned long entry)
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 8da39deb56ca..e43a5a3befd4 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -206,9 +206,11 @@ static void gmap_free(struct gmap *gmap)
 
 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
+		struct ptdesc *ptdesc, *n;
+
 		/* Free all page tables. */
-		list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
-			page_table_free_pgste(page);
+		list_for_each_entry_safe(ptdesc, n, &gmap->pt_list, pt_list)
+			page_table_free_pgste(ptdesc);
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
 		gmap_put(gmap->parent);
@@ -1348,7 +1350,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 {
 	unsigned long *ste;
 	phys_addr_t sto, pgt;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	ste = gmap_table_walk(sg, raddr, 1); /* get segment pointer */
@@ -1361,9 +1363,9 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	*ste = _SEGMENT_ENTRY_EMPTY;
 	__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 	/* Free page table */
-	page = phys_to_page(pgt);
-	list_del(&page->lru);
-	page_table_free_pgste(page);
+	ptdesc = page_ptdesc(phys_to_page(pgt));
+	list_del(&ptdesc->pt_list);
+	page_table_free_pgste(ptdesc);
 }
 
 /**
@@ -1377,7 +1379,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 				unsigned long *sgt)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	phys_addr_t pgt;
 	int i;
 
@@ -1389,9 +1391,9 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 		sgt[i] = _SEGMENT_ENTRY_EMPTY;
 		__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 		/* Free page table */
-		page = phys_to_page(pgt);
-		list_del(&page->lru);
-		page_table_free_pgste(page);
+		ptdesc = page_ptdesc(phys_to_page(pgt));
+		list_del(&ptdesc->pt_list);
+		page_table_free_pgste(ptdesc);
 	}
 }
 
@@ -2058,19 +2060,19 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 {
 	unsigned long raddr, origin;
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	phys_addr_t s_pgt;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg) || (pgt & _SEGMENT_ENTRY_LARGE));
 	/* Allocate a shadow page table */
-	page = page_table_alloc_pgste(sg->mm);
-	if (!page)
+	ptdesc = page_table_alloc_pgste(sg->mm);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->index = pgt & _SEGMENT_ENTRY_ORIGIN;
+	ptdesc->pt_index = pgt & _SEGMENT_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
-	s_pgt = page_to_phys(page);
+		ptdesc->pt_index |= GMAP_SHADOW_FAKE_TABLE;
+	s_pgt = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow page table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */
@@ -2088,7 +2090,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	/* mark as invalid as long as the parent table is not protected */
 	*table = (unsigned long) s_pgt | _SEGMENT_ENTRY |
 		 (pgt & _SEGMENT_ENTRY_PROTECT) | _SEGMENT_ENTRY_INVALID;
-	list_add(&page->lru, &sg->pt_list);
+	list_add(&ptdesc->pt_list, &sg->pt_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_SEGMENT_ENTRY_INVALID;
@@ -2114,7 +2116,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page_table_free_pgste(page);
+	page_table_free_pgste(ptdesc);
 	return rc;
 }
 
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 008e487c94a6..abb629d7e131 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -135,7 +135,7 @@ int crst_table_upgrade(struct mm_struct *mm, unsigned long end)
 
 #ifdef CONFIG_PGSTE
 
-struct page *page_table_alloc_pgste(struct mm_struct *mm)
+struct ptdesc *page_table_alloc_pgste(struct mm_struct *mm)
 {
 	struct ptdesc *ptdesc;
 	u64 *table;
@@ -147,12 +147,12 @@ struct page *page_table_alloc_pgste(struct mm_struct *mm)
 		memset64(table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	}
-	return ptdesc_page(ptdesc);
+	return ptdesc;
 }
 
-void page_table_free_pgste(struct page *page)
+void page_table_free_pgste(struct ptdesc *ptdesc)
 {
-	pagetable_free(page_ptdesc(page));
+	pagetable_free(ptdesc);
 }
 
 #endif /* CONFIG_PGSTE */
-- 
2.30.2
* Re: [PATCH v2 3/3] s390: supplement for ptdesc conversion
  2024-03-05  7:21 ` [PATCH v2 " Qi Zheng
@ 2024-03-26  7:46   ` Heiko Carstens
  0 siblings, 0 replies; 5+ messages in thread
From: Heiko Carstens @ 2024-03-26 7:46 UTC (permalink / raw)
  To: Qi Zheng
  Cc: akpm, vishal.moola, hughd, david, rppt, willy, muchun.song,
        linux-mm, linux-kernel, Christian Borntraeger, Janosch Frank,
        Claudio Imbrenda, kvm, linux-s390, Alexander Gordeev,
        Vasily Gorbik

On Tue, Mar 05, 2024 at 03:21:54PM +0800, Qi Zheng wrote:
> After commit 6326c26c1514 ("s390: convert various pgalloc functions to
> use ptdescs"), there are still some places that use page->{lru, index}
> instead of ptdesc->{pt_list, pt_index}. To make the use of
> ptdesc->{pt_list, pt_index} clearer, convert these remaining places as
> well.
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
> Cc: Janosch Frank <frankja@linux.ibm.com>
> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: kvm@vger.kernel.org
> Cc: linux-s390@vger.kernel.org
> ---
> v1 -> v2: fix build failure (cross compilation successful)
>
>  arch/s390/include/asm/pgalloc.h |  4 ++--
>  arch/s390/mm/gmap.c             | 38 +++++++++++++++++----------------
>  arch/s390/mm/pgalloc.c          |  8 +++----
>  3 files changed, 26 insertions(+), 24 deletions(-)

Same here: Christian, Janosch, or Claudio, please have a look and
provide an ACK if this is ok with you.
* Re: [PATCH 3/3] s390: supplement for ptdesc conversion
  2024-03-04 11:07 ` [PATCH 3/3] s390: supplement for ptdesc conversion Qi Zheng
  2024-03-05  7:21 ` [PATCH v2 " Qi Zheng
@ 2024-03-26 19:48 ` Vishal Moola
  2024-03-27  2:11   ` Qi Zheng
  1 sibling, 1 reply; 5+ messages in thread
From: Vishal Moola @ 2024-03-26 19:48 UTC (permalink / raw)
  To: Qi Zheng
  Cc: akpm, hughd, david, rppt, willy, muchun.song, linux-mm,
        linux-kernel, Christian Borntraeger, Janosch Frank,
        Claudio Imbrenda, kvm, linux-s390

On Mon, Mar 04, 2024 at 07:07:20PM +0800, Qi Zheng wrote:
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -206,9 +206,11 @@ static void gmap_free(struct gmap *gmap)
>
>  	/* Free additional data for a shadow gmap */
>  	if (gmap_is_shadow(gmap)) {
> +		struct ptdesc *ptdesc;
> +
>  		/* Free all page tables. */
> -		list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
> -			page_table_free_pgste(page);
> +		list_for_each_entry_safe(ptdesc, next, &gmap->pt_list, pt_list)
> +			page_table_free_pgste(ptdesc);

An important note: ptdesc allocation/freeing is different from the
standard alloc_pages()/free_pages() routines architectures are used to.
Are we sure we don't have memory leaks here?

We always allocate and free ptdescs as compound pages; for page table
struct pages, most architectures do not. s390 has CRST_ALLOC_ORDER
page tables, meaning if we free anything using the ptdesc api, we better
be sure it was allocated using the ptdesc api as well.

Like you, I don't have a s390 to test on, so hopefully some s390 expert
can chime in to let us know if we need a fix for this.
* Re: [PATCH 3/3] s390: supplement for ptdesc conversion
  2024-03-26 19:48 ` [PATCH " Vishal Moola
@ 2024-03-27  2:11   ` Qi Zheng
  0 siblings, 0 replies; 5+ messages in thread
From: Qi Zheng @ 2024-03-27 2:11 UTC (permalink / raw)
  To: Vishal Moola
  Cc: akpm, hughd, david, rppt, willy, muchun.song, linux-mm,
        linux-kernel, Christian Borntraeger, Janosch Frank,
        Claudio Imbrenda, kvm, linux-s390

On 2024/3/27 03:48, Vishal Moola wrote:
> On Mon, Mar 04, 2024 at 07:07:20PM +0800, Qi Zheng wrote:
>> --- a/arch/s390/mm/gmap.c
>> +++ b/arch/s390/mm/gmap.c
>> @@ -206,9 +206,11 @@ static void gmap_free(struct gmap *gmap)
>>
>>  	/* Free additional data for a shadow gmap */
>>  	if (gmap_is_shadow(gmap)) {
>> +		struct ptdesc *ptdesc;
>> +
>>  		/* Free all page tables. */
>> -		list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
>> -			page_table_free_pgste(page);
>> +		list_for_each_entry_safe(ptdesc, next, &gmap->pt_list, pt_list)
>> +			page_table_free_pgste(ptdesc);
>
> An important note: ptdesc allocation/freeing is different from the
> standard alloc_pages()/free_pages() routines architectures are used to.
> Are we sure we don't have memory leaks here?
>
> We always allocate and free ptdescs as compound pages; for page table
> struct pages, most architectures do not. s390 has CRST_ALLOC_ORDER
> page tables, meaning if we free anything using the ptdesc api, we better
> be sure it was allocated using the ptdesc api as well.

According to code inspection, all ptdescs added to gmap->pt_list are
allocated via page_table_alloc_pgste().

> Like you, I don't have a s390 to test on, so hopefully some s390 expert
> can chime in to let us know if we need a fix for this.

Yes, hope so!