* [PATCH AUTOSEL 4.4 04/56] hugetlbfs: on restore reserve error path retain subpool reservation
From: Sasha Levin @ 2019-06-01 13:25 UTC
To: linux-kernel, stable
Cc: Mike Kravetz, Naoya Horiguchi, Davidlohr Bueso, Joonsoo Kim,
Michal Hocko, Kirill A . Shutemov, Andrew Morton, Linus Torvalds,
Sasha Levin, linux-mm
From: Mike Kravetz <mike.kravetz@oracle.com>
[ Upstream commit 0919e1b69ab459e06df45d3ba6658d281962db80 ]
When a huge page is allocated, PagePrivate() is set if the allocation
consumed a reservation. When freeing a huge page, PagePrivate is checked.
If set, it indicates the reservation should be restored. PagePrivate
being set at free huge page time mostly happens on error paths.
When huge page reservations are created, a check is made to determine if
the mapping is associated with an explicitly mounted filesystem. If so,
pages are also reserved within the filesystem. The default action when
freeing a huge page is to decrement the usage count in any associated
explicitly mounted filesystem. However, if the reservation is to be
restored, the reservation/use count within the filesystem should not be
decremented. Otherwise, a subsequent page allocation and free for the same
mapping location will cause the filesystem usage to go 'negative'.
Filesystem Size Used Avail Use% Mounted on
nodev 4.0G -4.0M 4.1G - /opt/hugepool
To fix, when freeing a huge page do not adjust filesystem usage if
PagePrivate() is set to indicate the reservation should be restored.
I did not cc stable as the problem has been around since reserves were
added to hugetlbfs and nobody has noticed.
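For illustration, here is a drastically simplified userspace model of the
accounting problem (not the kernel code; the struct, helpers and counter
below are made up for the example and only mirror the idea of the
subpool's usage count):

#include <stdbool.h>
#include <stdio.h>

/* stand-in for the subpool's usage counter (the "Used" column in df) */
struct subpool {
	long used_hpages;
};

/* reserving a region charges the subpool once, up front */
static void reserve_pages(struct subpool *spool, long n)
{
	spool->used_hpages += n;
}

/* an allocation that consumes an existing reservation does not re-charge */
static void alloc_from_reservation(struct subpool *spool)
{
	(void)spool;
}

/*
 * Free path: before the patch the subpool was always uncharged; with the
 * patch the charge is kept when the reservation is being restored.
 */
static void free_page_sketch(struct subpool *spool, bool restore_reserve,
			     bool patched)
{
	if (patched && restore_reserve)
		return;		/* keep the subpool reservation */
	spool->used_hpages--;
}

int main(void)
{
	struct subpool spool = { .used_hpages = 0 };

	reserve_pages(&spool, 1);		/* mmap(): Used = 1 */

	/* error path: the reservation is consumed, then restored on free */
	alloc_from_reservation(&spool);
	free_page_sketch(&spool, true, false);	/* unpatched: Used = 0 */

	/* retry at the same offset, followed by a normal free */
	alloc_from_reservation(&spool);
	free_page_sketch(&spool, false, false);	/* unpatched: Used = -1 */

	printf("Used after retry: %ld\n", spool.used_hpages);
	return 0;
}

With patched=true in both free calls the counter ends at 0 instead of -1,
which is what retaining the subpool reservation on the restore path achieves.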
Link: http://lkml.kernel.org/r/20190328234704.27083-2-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
mm/hugetlb.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 324b2953e57e9..0357ad53af368 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1221,12 +1221,23 @@ void free_huge_page(struct page *page)
 	ClearPagePrivate(page);
 
 	/*
-	 * A return code of zero implies that the subpool will be under its
-	 * minimum size if the reservation is not restored after page is free.
-	 * Therefore, force restore_reserve operation.
+	 * If PagePrivate() was set on page, page allocation consumed a
+	 * reservation. If the page was associated with a subpool, there
+	 * would have been a page reserved in the subpool before allocation
+	 * via hugepage_subpool_get_pages(). Since we are 'restoring' the
+	 * reservation, do not call hugepage_subpool_put_pages() as this will
+	 * remove the reserved page from the subpool.
 	 */
-	if (hugepage_subpool_put_pages(spool, 1) == 0)
-		restore_reserve = true;
+	if (!restore_reserve) {
+		/*
+		 * A return code of zero implies that the subpool will be
+		 * under its minimum size if the reservation is not restored
+		 * after page is free. Therefore, force restore_reserve
+		 * operation.
+		 */
+		if (hugepage_subpool_put_pages(spool, 1) == 0)
+			restore_reserve = true;
+	}
 
 	spin_lock(&hugetlb_lock);
 	clear_page_huge_active(page);
--
2.20.1
* [PATCH AUTOSEL 4.4 05/56] mm/cma.c: fix crash on CMA allocation if bitmap allocation fails
From: Sasha Levin @ 2019-06-01 13:25 UTC
To: linux-kernel, stable
Cc: Yue Hu, Anshuman Khandual, Joonsoo Kim, Laura Abbott,
Mike Rapoport, Randy Dunlap, Andrew Morton, Linus Torvalds,
Sasha Levin, linux-mm
From: Yue Hu <huyue2@yulong.com>
[ Upstream commit 1df3a339074e31db95c4790ea9236874b13ccd87 ]
f022d8cb7ec7 ("mm: cma: Don't crash on allocation if CMA area can't be
activated") fixes the crash issue when activation fails via setting
cma->count as 0, same logic exists if bitmap allocation fails.
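As context, a minimal userspace sketch (simplified, not the kernel
implementation; struct cma_sketch and cma_alloc_sketch are made-up names)
of why zeroing cma->count avoids the crash: the allocation path checks the
count before it ever touches the bitmap, so a failed area with count == 0
is simply skipped:

#include <stddef.h>
#include <stdio.h>

struct cma_sketch {
	unsigned long count;	/* pages in the area; 0 marks it unusable */
	unsigned long *bitmap;	/* NULL if the bitmap allocation failed */
};

/* simplified stand-in for the entry check in cma_alloc() */
static void *cma_alloc_sketch(struct cma_sketch *cma, size_t nr_pages)
{
	if (!cma || !cma->count || !nr_pages)
		return NULL;	/* count == 0: bail out before using bitmap */

	/* the real code scans cma->bitmap here; with a NULL bitmap but a
	 * non-zero count this is where the crash would happen */
	return (void *)cma->bitmap;
}

int main(void)
{
	/* area whose bitmap allocation failed during activation */
	struct cma_sketch failed = { .count = 0, .bitmap = NULL };

	printf("alloc from failed area: %p\n", cma_alloc_sketch(&failed, 1));
	return 0;
}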
Link: http://lkml.kernel.org/r/20190325081309.6004-1-zbestahu@gmail.com
Signed-off-by: Yue Hu <huyue2@yulong.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
mm/cma.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/cma.c b/mm/cma.c
index f0d91aca5a4cd..5ae4452656cdf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -100,8 +100,10 @@ static int __init cma_activate_area(struct cma *cma)
 
 	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
 
-	if (!cma->bitmap)
+	if (!cma->bitmap) {
+		cma->count = 0;
 		return -ENOMEM;
+	}
 
 	WARN_ON_ONCE(!pfn_valid(pfn));
 	zone = page_zone(pfn_to_page(pfn));
--
2.20.1
* [PATCH AUTOSEL 4.4 06/56] mm/cma_debug.c: fix the break condition in cma_maxchunk_get()
From: Sasha Levin @ 2019-06-01 13:25 UTC
To: linux-kernel, stable
Cc: Yue Hu, Andrew Morton, Michal Hocko, Joe Perches, David Rientjes,
Dmitry Safonov, Joonsoo Kim, Linus Torvalds, Sasha Levin,
linux-mm
From: Yue Hu <huyue2@yulong.com>
[ Upstream commit f0fd50504a54f5548eb666dc16ddf8394e44e4b7 ]
If find_next_zero_bit() does not find a zero bit, it returns the size
parameter passed in, so the start bit should be compared with bitmap_maxno
rather than cma->count. Getting maxchunk currently happens to work only
because order_per_bit is zero; the loop will get stuck if order_per_bit is
set to a non-zero value.
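To see why, a small standalone sketch of the loop (the *_sketch helpers
below are simplified re-implementations of the kernel's
find_next_zero_bit()/find_next_bit() contract, where 'size' is returned
when no matching bit is found; the values are made up for the example):

#include <stdio.h>

static unsigned long find_next_zero_bit_sketch(const unsigned char *map,
					       unsigned long size,
					       unsigned long off)
{
	for (; off < size; off++)
		if (!map[off])
			return off;
	return size;	/* not found: returns 'size', never cma->count */
}

static unsigned long find_next_bit_sketch(const unsigned char *map,
					  unsigned long size,
					  unsigned long off)
{
	for (; off < size; off++)
		if (map[off])
			return off;
	return size;
}

int main(void)
{
	/* order_per_bit = 2: count = 16 pages but only 4 bitmap bits */
	unsigned long count = 16, order_per_bit = 2;
	unsigned long bitmap_maxno = count >> order_per_bit;
	unsigned char bitmap[4] = { 1, 1, 1, 1 };	/* fully allocated */
	unsigned long start, end = 0, maxchunk = 0;

	for (;;) {
		start = find_next_zero_bit_sketch(bitmap, bitmap_maxno, end);
		/*
		 * With the old "start >= cma->count" test this would compare
		 * 4 against 16, never break, and spin forever; comparing
		 * against bitmap_maxno terminates.
		 */
		if (start >= bitmap_maxno)
			break;
		end = find_next_bit_sketch(bitmap, bitmap_maxno, start);
		maxchunk = end - start > maxchunk ? end - start : maxchunk;
	}
	printf("maxchunk = %lu bits (count = %lu pages)\n", maxchunk, count);
	return 0;
}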
Link: http://lkml.kernel.org/r/20190319092734.276-1-zbestahu@gmail.com
Signed-off-by: Yue Hu <huyue2@yulong.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Joe Perches <joe@perches.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dmitry Safonov <d.safonov@partner.samsung.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
mm/cma_debug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/cma_debug.c b/mm/cma_debug.c
index f8e4b60db1672..da50dab56b700 100644
--- a/mm/cma_debug.c
+++ b/mm/cma_debug.c
@@ -57,7 +57,7 @@ static int cma_maxchunk_get(void *data, u64 *val)
 	mutex_lock(&cma->lock);
 	for (;;) {
 		start = find_next_zero_bit(cma->bitmap, bitmap_maxno, end);
-		if (start >= cma->count)
+		if (start >= bitmap_maxno)
 			break;
 		end = find_next_bit(cma->bitmap, bitmap_maxno, start);
 		maxchunk = max(end - start, maxchunk);
--
2.20.1