From: kernel test robot <lkp@intel.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: kbuild-all@lists.01.org,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>
Subject: Re: [PATCH v3 10/18] mm/memcg: Convert mem_cgroup_uncharge() to take a folio
Date: Wed, 30 Jun 2021 16:46:35 +0800 [thread overview]
Message-ID: <202106301631.E9FoPgkk-lkp@intel.com> (raw)
In-Reply-To: <20210630040034.1155892-11-willy@infradead.org>
[-- Attachment #1: Type: text/plain, Size: 15464 bytes --]
Hi Matthew,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on linus/master]
[also build test WARNING on next-20210630]
[cannot apply to hnaz-linux-mm/master tip/perf/core cgroup/for-next v5.13]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Matthew-Wilcox-Oracle/Folio-conversion-of-memcg/20210630-121408
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 007b350a58754a93ca9fe50c498cc27780171153
config: sparc64-randconfig-r002-20210628 (attached as .config)
compiler: sparc64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/b527e805e7d7066a8fea14ff4a49f53454c355a1
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Matthew-Wilcox-Oracle/Folio-conversion-of-memcg/20210630-121408
git checkout b527e805e7d7066a8fea14ff4a49f53454c355a1
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=sparc64
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
403 | VM_BUG_ON_FOLIO(folio_slab(folio), folio);
| ^~~~~~~~~~~~~~~
| VM_BUG_ON_MM
include/linux/memcontrol.h:403:18: error: implicit declaration of function 'folio_slab' [-Werror=implicit-function-declaration]
403 | VM_BUG_ON_FOLIO(folio_slab(folio), folio);
| ^~~~~~~~~~
include/linux/memcontrol.h: At top level:
include/linux/memcontrol.h:420:55: warning: 'struct folio' declared inside parameter list will not be visible outside of this definition or declaration
420 | static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
| ^~~~~
include/linux/memcontrol.h: In function '__folio_objcg':
include/linux/memcontrol.h:422:34: error: dereferencing pointer to incomplete type 'struct folio'
422 | unsigned long memcg_data = folio->memcg_data;
| ^~
include/linux/memcontrol.h: At top level:
include/linux/memcontrol.h:451:53: warning: 'struct folio' declared inside parameter list will not be visible outside of this definition or declaration
451 | static inline struct mem_cgroup *folio_memcg(struct folio *folio)
| ^~~~~
include/linux/memcontrol.h: In function 'folio_memcg':
include/linux/memcontrol.h:453:23: error: passing argument 1 of 'folio_memcg_kmem' from incompatible pointer type [-Werror=incompatible-pointer-types]
453 | if (folio_memcg_kmem(folio))
| ^~~~~
| |
| struct folio *
include/linux/memcontrol.h:375:51: note: expected 'struct folio *' but argument is of type 'struct folio *'
375 | static inline bool folio_memcg_kmem(struct folio *folio);
| ~~~~~~~~~~~~~~^~~~~
include/linux/memcontrol.h:454:41: error: passing argument 1 of '__folio_objcg' from incompatible pointer type [-Werror=incompatible-pointer-types]
454 | return obj_cgroup_memcg(__folio_objcg(folio));
| ^~~~~
| |
| struct folio *
include/linux/memcontrol.h:420:62: note: expected 'struct folio *' but argument is of type 'struct folio *'
420 | static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
| ~~~~~~~~~~~~~~^~~~~
include/linux/memcontrol.h:456:24: error: passing argument 1 of '__folio_memcg' from incompatible pointer type [-Werror=incompatible-pointer-types]
456 | return __folio_memcg(folio);
| ^~~~~
| |
| struct folio *
include/linux/memcontrol.h:399:62: note: expected 'struct folio *' but argument is of type 'struct folio *'
399 | static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
| ~~~~~~~~~~~~~~^~~~~
include/linux/memcontrol.h: In function 'page_memcg':
include/linux/memcontrol.h:461:21: error: implicit declaration of function 'page_folio' [-Werror=implicit-function-declaration]
461 | return folio_memcg(page_folio(page));
| ^~~~~~~~~~
include/linux/memcontrol.h:461:21: warning: passing argument 1 of 'folio_memcg' makes pointer from integer without a cast [-Wint-conversion]
461 | return folio_memcg(page_folio(page));
| ^~~~~~~~~~~~~~~~
| |
| int
include/linux/memcontrol.h:451:60: note: expected 'struct folio *' but argument is of type 'int'
451 | static inline struct mem_cgroup *folio_memcg(struct folio *folio)
| ~~~~~~~~~~~~~~^~~~~
include/linux/memcontrol.h: At top level:
include/linux/memcontrol.h:589:44: warning: 'struct folio' declared inside parameter list will not be visible outside of this definition or declaration
589 | static inline bool folio_memcg_kmem(struct folio *folio)
| ^~~~~
include/linux/memcontrol.h:589:20: error: conflicting types for 'folio_memcg_kmem'
589 | static inline bool folio_memcg_kmem(struct folio *folio)
| ^~~~~~~~~~~~~~~~
include/linux/memcontrol.h:375:20: note: previous declaration of 'folio_memcg_kmem' was here
375 | static inline bool folio_memcg_kmem(struct folio *folio);
| ^~~~~~~~~~~~~~~~
include/linux/memcontrol.h: In function 'PageMemcgKmem':
include/linux/memcontrol.h:607:26: warning: passing argument 1 of 'folio_memcg_kmem' makes pointer from integer without a cast [-Wint-conversion]
607 | return folio_memcg_kmem(page_folio(page));
| ^~~~~~~~~~~~~~~~
| |
| int
include/linux/memcontrol.h:589:51: note: expected 'struct folio *' but argument is of type 'int'
589 | static inline bool folio_memcg_kmem(struct folio *folio)
| ~~~~~~~~~~~~~~^~~~~
include/linux/memcontrol.h: At top level:
include/linux/memcontrol.h:708:30: warning: 'struct folio' declared inside parameter list will not be visible outside of this definition or declaration
708 | int mem_cgroup_charge(struct folio *, struct mm_struct *, gfp_t);
| ^~~~~
include/linux/memcontrol.h:713:33: warning: 'struct folio' declared inside parameter list will not be visible outside of this definition or declaration
713 | void mem_cgroup_uncharge(struct folio *);
| ^~~~~
In file included from arch/sparc/include/asm/bug.h:6,
from include/linux/bug.h:5,
from include/linux/mmdebug.h:5,
from include/linux/mm.h:9,
from mm/khugepaged.c:4:
mm/khugepaged.c: In function 'collapse_huge_page':
mm/khugepaged.c:1091:33: warning: passing argument 1 of 'mem_cgroup_charge' makes pointer from integer without a cast [-Wint-conversion]
1091 | if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) {
| ^~~~~~~~~~~~~~~~~~~~
| |
| int
include/linux/compiler.h:78:42: note: in definition of macro 'unlikely'
78 | # define unlikely(x) __builtin_expect(!!(x), 0)
| ^
In file included from include/linux/rmap.h:12,
from mm/khugepaged.c:9:
include/linux/memcontrol.h:708:23: note: expected 'struct folio *' but argument is of type 'int'
708 | int mem_cgroup_charge(struct folio *, struct mm_struct *, gfp_t);
| ^~~~~~~~~~~~~~
>> mm/khugepaged.c:1215:23: warning: passing argument 1 of 'mem_cgroup_uncharge' makes pointer from integer without a cast [-Wint-conversion]
1215 | mem_cgroup_uncharge(page_folio(*hpage));
| ^~~~~~~~~~~~~~~~~~
| |
| int
In file included from include/linux/rmap.h:12,
from mm/khugepaged.c:9:
include/linux/memcontrol.h:713:26: note: expected 'struct folio *' but argument is of type 'int'
713 | void mem_cgroup_uncharge(struct folio *);
| ^~~~~~~~~~~~~~
mm/khugepaged.c: At top level:
include/linux/memcontrol.h:375:20: warning: 'folio_memcg_kmem' used but never defined
375 | static inline bool folio_memcg_kmem(struct folio *folio);
| ^~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors
vim +/mem_cgroup_uncharge +1215 mm/khugepaged.c
1056
1057 static void collapse_huge_page(struct mm_struct *mm,
1058 unsigned long address,
1059 struct page **hpage,
1060 int node, int referenced, int unmapped)
1061 {
1062 LIST_HEAD(compound_pagelist);
1063 pmd_t *pmd, _pmd;
1064 pte_t *pte;
1065 pgtable_t pgtable;
1066 struct page *new_page;
1067 spinlock_t *pmd_ptl, *pte_ptl;
1068 int isolated = 0, result = 0;
1069 struct vm_area_struct *vma;
1070 struct mmu_notifier_range range;
1071 gfp_t gfp;
1072
1073 VM_BUG_ON(address & ~HPAGE_PMD_MASK);
1074
1075 /* Only allocate from the target node */
1076 gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
1077
1078 /*
1079 * Before allocating the hugepage, release the mmap_lock read lock.
1080 * The allocation can take potentially a long time if it involves
1081 * sync compaction, and we do not need to hold the mmap_lock during
1082 * that. We will recheck the vma after taking it again in write mode.
1083 */
1084 mmap_read_unlock(mm);
1085 new_page = khugepaged_alloc_page(hpage, gfp, node);
1086 if (!new_page) {
1087 result = SCAN_ALLOC_HUGE_PAGE_FAIL;
1088 goto out_nolock;
1089 }
1090
1091 if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) {
1092 result = SCAN_CGROUP_CHARGE_FAIL;
1093 goto out_nolock;
1094 }
1095 count_memcg_page_event(new_page, THP_COLLAPSE_ALLOC);
1096
1097 mmap_read_lock(mm);
1098 result = hugepage_vma_revalidate(mm, address, &vma);
1099 if (result) {
1100 mmap_read_unlock(mm);
1101 goto out_nolock;
1102 }
1103
1104 pmd = mm_find_pmd(mm, address);
1105 if (!pmd) {
1106 result = SCAN_PMD_NULL;
1107 mmap_read_unlock(mm);
1108 goto out_nolock;
1109 }
1110
1111 /*
1112 * __collapse_huge_page_swapin always returns with mmap_lock locked.
1113 * If it fails, we release mmap_lock and jump out_nolock.
1114 * Continuing to collapse causes inconsistency.
1115 */
1116 if (unmapped && !__collapse_huge_page_swapin(mm, vma, address,
1117 pmd, referenced)) {
1118 mmap_read_unlock(mm);
1119 goto out_nolock;
1120 }
1121
1122 mmap_read_unlock(mm);
1123 /*
1124 * Prevent all access to pagetables with the exception of
1125 * gup_fast later handled by the ptep_clear_flush and the VM
1126 * handled by the anon_vma lock + PG_lock.
1127 */
1128 mmap_write_lock(mm);
1129 result = hugepage_vma_revalidate(mm, address, &vma);
1130 if (result)
1131 goto out_up_write;
1132 /* check if the pmd is still valid */
1133 if (mm_find_pmd(mm, address) != pmd)
1134 goto out_up_write;
1135
1136 anon_vma_lock_write(vma->anon_vma);
1137
1138 mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
1139 address, address + HPAGE_PMD_SIZE);
1140 mmu_notifier_invalidate_range_start(&range);
1141
1142 pte = pte_offset_map(pmd, address);
1143 pte_ptl = pte_lockptr(mm, pmd);
1144
1145 pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
1146 /*
1147 * After this gup_fast can't run anymore. This also removes
1148 * any huge TLB entry from the CPU so we won't allow
1149 * huge and small TLB entries for the same virtual address
1150 * to avoid the risk of CPU bugs in that area.
1151 */
1152 _pmd = pmdp_collapse_flush(vma, address, pmd);
1153 spin_unlock(pmd_ptl);
1154 mmu_notifier_invalidate_range_end(&range);
1155
1156 spin_lock(pte_ptl);
1157 isolated = __collapse_huge_page_isolate(vma, address, pte,
1158 &compound_pagelist);
1159 spin_unlock(pte_ptl);
1160
1161 if (unlikely(!isolated)) {
1162 pte_unmap(pte);
1163 spin_lock(pmd_ptl);
1164 BUG_ON(!pmd_none(*pmd));
1165 /*
1166 * We can only use set_pmd_at when establishing
1167 * hugepmds and never for establishing regular pmds that
1168 * points to regular pagetables. Use pmd_populate for that
1169 */
1170 pmd_populate(mm, pmd, pmd_pgtable(_pmd));
1171 spin_unlock(pmd_ptl);
1172 anon_vma_unlock_write(vma->anon_vma);
1173 result = SCAN_FAIL;
1174 goto out_up_write;
1175 }
1176
1177 /*
1178 * All pages are isolated and locked so anon_vma rmap
1179 * can't run anymore.
1180 */
1181 anon_vma_unlock_write(vma->anon_vma);
1182
1183 __collapse_huge_page_copy(pte, new_page, vma, address, pte_ptl,
1184 &compound_pagelist);
1185 pte_unmap(pte);
1186 /*
1187 * spin_lock() below is not the equivalent of smp_wmb(), but
1188 * the smp_wmb() inside __SetPageUptodate() can be reused to
1189 * avoid the copy_huge_page writes to become visible after
1190 * the set_pmd_at() write.
1191 */
1192 __SetPageUptodate(new_page);
1193 pgtable = pmd_pgtable(_pmd);
1194
1195 _pmd = mk_huge_pmd(new_page, vma->vm_page_prot);
1196 _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
1197
1198 spin_lock(pmd_ptl);
1199 BUG_ON(!pmd_none(*pmd));
1200 page_add_new_anon_rmap(new_page, vma, address, true);
1201 lru_cache_add_inactive_or_unevictable(new_page, vma);
1202 pgtable_trans_huge_deposit(mm, pmd, pgtable);
1203 set_pmd_at(mm, address, pmd, _pmd);
1204 update_mmu_cache_pmd(vma, address, pmd);
1205 spin_unlock(pmd_ptl);
1206
1207 *hpage = NULL;
1208
1209 khugepaged_pages_collapsed++;
1210 result = SCAN_SUCCEED;
1211 out_up_write:
1212 mmap_write_unlock(mm);
1213 out_nolock:
1214 if (!IS_ERR_OR_NULL(*hpage))
> 1215 mem_cgroup_uncharge(page_folio(*hpage));
1216 trace_mm_collapse_huge_page(mm, isolated, result);
1217 return;
1218 }
1219
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 26865 bytes --]