* [akpm-mm:mm-new 166/188] arch/arm/mm/flush.c:307:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *'
@ 2025-08-18 10:17 kernel test robot
2025-08-18 12:15 ` Matthew Wilcox
2025-08-18 12:44 ` Matthew Wilcox
0 siblings, 2 replies; 4+ messages in thread
From: kernel test robot @ 2025-08-18 10:17 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: llvm, oe-kbuild-all, Andrew Morton, Linux Memory Management List
tree: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-new
head: dd1510cefdfec9dc3fa2bae40e0f16f1bf825567
commit: a2004477b3282318589bf43a786d952856adcaee [166/188] mm: introduce memdesc_flags_t
config: arm-randconfig-002-20250818 (https://download.01.org/0day-ci/archive/20250818/202508181807.zhZF5TQ0-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 93d24b6b7b148c47a2fa228a4ef31524fa1d9f3f)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250818/202508181807.zhZF5TQ0-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202508181807.zhZF5TQ0-lkp@intel.com/
All errors (new ones prefixed by >>):
>> arch/arm/mm/flush.c:307:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
307 | if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
| ^~~~~~~~~~~~~
arch/arm/include/asm/bitops.h:194:66: note: expanded from macro 'test_and_set_bit'
194 | #define test_and_set_bit(nr,p) ATOMIC_BITOP(test_and_set_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:52: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:73:71: note: passing argument to parameter 'p' here
73 | ____atomic_test_and_set_bit(unsigned int bit, volatile unsigned long *p)
| ^
>> arch/arm/mm/flush.c:307:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
307 | if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
| ^~~~~~~~~~~~~
arch/arm/include/asm/bitops.h:194:66: note: expanded from macro 'test_and_set_bit'
194 | #define test_and_set_bit(nr,p) ATOMIC_BITOP(test_and_set_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:68: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:156:63: note: passing argument to parameter 'p' here
156 | extern int _test_and_set_bit(int nr, volatile unsigned long * p);
| ^
>> arch/arm/mm/flush.c:346:33: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'const volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
346 | if (test_bit(PG_dcache_clean, &folio->flags))
| ^~~~~~~~~~~~~
include/linux/bitops.h:60:50: note: expanded from macro 'test_bit'
60 | #define test_bit(nr, addr) bitop(_test_bit, nr, addr)
| ^~~~
include/linux/bitops.h:47:17: note: expanded from macro 'bitop'
47 | const##op(nr, addr) : op(nr, addr))
| ^~~~
include/asm-generic/bitops/generic-non-atomic.h:166:64: note: passing argument to parameter 'addr' here
166 | const_test_bit(unsigned long nr, const volatile unsigned long *addr)
| ^
>> arch/arm/mm/flush.c:346:33: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'const volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
346 | if (test_bit(PG_dcache_clean, &folio->flags))
| ^~~~~~~~~~~~~
include/linux/bitops.h:60:50: note: expanded from macro 'test_bit'
60 | #define test_bit(nr, addr) bitop(_test_bit, nr, addr)
| ^~~~
include/linux/bitops.h:47:32: note: expanded from macro 'bitop'
47 | const##op(nr, addr) : op(nr, addr))
| ^~~~
include/asm-generic/bitops/generic-non-atomic.h:121:66: note: passing argument to parameter 'addr' here
121 | generic_test_bit(unsigned long nr, const volatile unsigned long *addr)
| ^
arch/arm/mm/flush.c:347:31: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
347 | clear_bit(PG_dcache_clean, &folio->flags);
| ^~~~~~~~~~~~~
arch/arm/include/asm/bitops.h:192:53: note: expanded from macro 'clear_bit'
192 | #define clear_bit(nr,p) ATOMIC_BITOP(clear_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:52: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:48:83: note: passing argument to parameter 'p' here
48 | static inline void ____atomic_clear_bit(unsigned int bit, volatile unsigned long *p)
| ^
arch/arm/mm/flush.c:347:31: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
347 | clear_bit(PG_dcache_clean, &folio->flags);
| ^~~~~~~~~~~~~
arch/arm/include/asm/bitops.h:192:53: note: expanded from macro 'clear_bit'
192 | #define clear_bit(nr,p) ATOMIC_BITOP(clear_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:68: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:154:57: note: passing argument to parameter 'p' here
154 | extern void _clear_bit(int nr, volatile unsigned long * p);
| ^
arch/arm/mm/flush.c:355:30: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
355 | clear_bit(PG_dcache_clean, &folio->flags);
| ^~~~~~~~~~~~~
arch/arm/include/asm/bitops.h:192:53: note: expanded from macro 'clear_bit'
192 | #define clear_bit(nr,p) ATOMIC_BITOP(clear_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:52: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:48:83: note: passing argument to parameter 'p' here
48 | static inline void ____atomic_clear_bit(unsigned int bit, volatile unsigned long *p)
| ^
arch/arm/mm/flush.c:355:30: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
355 | clear_bit(PG_dcache_clean, &folio->flags);
| ^~~~~~~~~~~~~
arch/arm/include/asm/bitops.h:192:53: note: expanded from macro 'clear_bit'
192 | #define clear_bit(nr,p) ATOMIC_BITOP(clear_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:68: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:154:57: note: passing argument to parameter 'p' here
154 | extern void _clear_bit(int nr, volatile unsigned long * p);
| ^
arch/arm/mm/flush.c:362:28: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
362 | set_bit(PG_dcache_clean, &folio->flags);
| ^~~~~~~~~~~~~
arch/arm/include/asm/bitops.h:191:49: note: expanded from macro 'set_bit'
191 | #define set_bit(nr,p) ATOMIC_BITOP(set_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:52: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:36:81: note: passing argument to parameter 'p' here
36 | static inline void ____atomic_set_bit(unsigned int bit, volatile unsigned long *p)
| ^
arch/arm/mm/flush.c:362:28: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
362 | set_bit(PG_dcache_clean, &folio->flags);
| ^~~~~~~~~~~~~
arch/arm/include/asm/bitops.h:191:49: note: expanded from macro 'set_bit'
191 | #define set_bit(nr,p) ATOMIC_BITOP(set_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:68: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:153:55: note: passing argument to parameter 'p' here
153 | extern void _set_bit(int nr, volatile unsigned long * p);
| ^
10 errors generated.
--
>> arch/arm/mm/copypage-v6.c:76:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
76 | if (!test_and_set_bit(PG_dcache_clean, &src->flags))
| ^~~~~~~~~~~
arch/arm/include/asm/bitops.h:194:66: note: expanded from macro 'test_and_set_bit'
194 | #define test_and_set_bit(nr,p) ATOMIC_BITOP(test_and_set_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:52: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:73:71: note: passing argument to parameter 'p' here
73 | ____atomic_test_and_set_bit(unsigned int bit, volatile unsigned long *p)
| ^
>> arch/arm/mm/copypage-v6.c:76:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
76 | if (!test_and_set_bit(PG_dcache_clean, &src->flags))
| ^~~~~~~~~~~
arch/arm/include/asm/bitops.h:194:66: note: expanded from macro 'test_and_set_bit'
194 | #define test_and_set_bit(nr,p) ATOMIC_BITOP(test_and_set_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:68: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:156:63: note: passing argument to parameter 'p' here
156 | extern int _test_and_set_bit(int nr, volatile unsigned long * p);
| ^
2 errors generated.
--
>> arch/arm/mm/dma-mapping.c:721:30: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
721 | set_bit(PG_dcache_clean, &folio->flags);
| ^~~~~~~~~~~~~
arch/arm/include/asm/bitops.h:191:49: note: expanded from macro 'set_bit'
191 | #define set_bit(nr,p) ATOMIC_BITOP(set_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:52: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:36:81: note: passing argument to parameter 'p' here
36 | static inline void ____atomic_set_bit(unsigned int bit, volatile unsigned long *p)
| ^
>> arch/arm/mm/dma-mapping.c:721:30: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
721 | set_bit(PG_dcache_clean, &folio->flags);
| ^~~~~~~~~~~~~
arch/arm/include/asm/bitops.h:191:49: note: expanded from macro 'set_bit'
191 | #define set_bit(nr,p) ATOMIC_BITOP(set_bit,nr,p)
| ^
arch/arm/include/asm/bitops.h:183:68: note: expanded from macro 'ATOMIC_BITOP'
183 | (__builtin_constant_p(nr) ? ____atomic_##name(nr, p) : _##name(nr,p))
| ^
arch/arm/include/asm/bitops.h:153:55: note: passing argument to parameter 'p' here
153 | extern void _set_bit(int nr, volatile unsigned long * p);
| ^
2 errors generated.
vim +307 arch/arm/mm/flush.c
^1da177e4c3f415 Linus Torvalds 2005-04-16 283
6012191aa9c6fff Catalin Marinas 2010-09-13 284 #if __LINUX_ARM_ARCH__ >= 6
6012191aa9c6fff Catalin Marinas 2010-09-13 285 void __sync_icache_dcache(pte_t pteval)
6012191aa9c6fff Catalin Marinas 2010-09-13 286 {
6012191aa9c6fff Catalin Marinas 2010-09-13 287 unsigned long pfn;
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 288) struct folio *folio;
6012191aa9c6fff Catalin Marinas 2010-09-13 289 struct address_space *mapping;
6012191aa9c6fff Catalin Marinas 2010-09-13 290
6012191aa9c6fff Catalin Marinas 2010-09-13 291 if (cache_is_vipt_nonaliasing() && !pte_exec(pteval))
6012191aa9c6fff Catalin Marinas 2010-09-13 292 /* only flush non-aliasing VIPT caches for exec mappings */
6012191aa9c6fff Catalin Marinas 2010-09-13 293 return;
6012191aa9c6fff Catalin Marinas 2010-09-13 294 pfn = pte_pfn(pteval);
6012191aa9c6fff Catalin Marinas 2010-09-13 295 if (!pfn_valid(pfn))
6012191aa9c6fff Catalin Marinas 2010-09-13 296 return;
6012191aa9c6fff Catalin Marinas 2010-09-13 297
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 298) folio = page_folio(pfn_to_page(pfn));
0c66c6f4e21cb22 Yongqiang Liu 2024-03-07 299 if (folio_test_reserved(folio))
0c66c6f4e21cb22 Yongqiang Liu 2024-03-07 300 return;
0c66c6f4e21cb22 Yongqiang Liu 2024-03-07 301
6012191aa9c6fff Catalin Marinas 2010-09-13 302 if (cache_is_vipt_aliasing())
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 303) mapping = folio_flush_mapping(folio);
6012191aa9c6fff Catalin Marinas 2010-09-13 304 else
6012191aa9c6fff Catalin Marinas 2010-09-13 305 mapping = NULL;
6012191aa9c6fff Catalin Marinas 2010-09-13 306
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 @307) if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 308) __flush_dcache_folio(mapping, folio);
8373dc38ca8d491 saeed bishara 2011-05-16 309
8373dc38ca8d491 saeed bishara 2011-05-16 310 if (pte_exec(pteval))
6012191aa9c6fff Catalin Marinas 2010-09-13 311 __flush_icache_all();
6012191aa9c6fff Catalin Marinas 2010-09-13 312 }
6012191aa9c6fff Catalin Marinas 2010-09-13 313 #endif
6012191aa9c6fff Catalin Marinas 2010-09-13 314
^1da177e4c3f415 Linus Torvalds 2005-04-16 315 /*
^1da177e4c3f415 Linus Torvalds 2005-04-16 316 * Ensure cache coherency between kernel mapping and userspace mapping
^1da177e4c3f415 Linus Torvalds 2005-04-16 317 * of this page.
^1da177e4c3f415 Linus Torvalds 2005-04-16 318 *
^1da177e4c3f415 Linus Torvalds 2005-04-16 319 * We have three cases to consider:
^1da177e4c3f415 Linus Torvalds 2005-04-16 320 * - VIPT non-aliasing cache: fully coherent so nothing required.
^1da177e4c3f415 Linus Torvalds 2005-04-16 321 * - VIVT: fully aliasing, so we need to handle every alias in our
^1da177e4c3f415 Linus Torvalds 2005-04-16 322 * current VM view.
^1da177e4c3f415 Linus Torvalds 2005-04-16 323 * - VIPT aliasing: need to handle one alias in our current VM view.
^1da177e4c3f415 Linus Torvalds 2005-04-16 324 *
^1da177e4c3f415 Linus Torvalds 2005-04-16 325 * If we need to handle aliasing:
^1da177e4c3f415 Linus Torvalds 2005-04-16 326 * If the page only exists in the page cache and there are no user
^1da177e4c3f415 Linus Torvalds 2005-04-16 327 * space mappings, we can be lazy and remember that we may have dirty
^1da177e4c3f415 Linus Torvalds 2005-04-16 328 * kernel cache lines for later. Otherwise, we assume we have
^1da177e4c3f415 Linus Torvalds 2005-04-16 329 * aliasing mappings.
df2f5e721ed36e2 Russell King 2005-11-30 330 *
31bee4cf0e74e9c saeed bishara 2011-05-16 331 * Note that we disable the lazy flush for SMP configurations where
31bee4cf0e74e9c saeed bishara 2011-05-16 332 * the cache maintenance operations are not automatically broadcasted.
^1da177e4c3f415 Linus Torvalds 2005-04-16 333 */
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 334) void flush_dcache_folio(struct folio *folio)
^1da177e4c3f415 Linus Torvalds 2005-04-16 335 {
421fe93cc4b06b2 Russell King 2009-10-25 336 struct address_space *mapping;
421fe93cc4b06b2 Russell King 2009-10-25 337
421fe93cc4b06b2 Russell King 2009-10-25 338 /*
421fe93cc4b06b2 Russell King 2009-10-25 339 * The zero page is never written to, so never has any dirty
421fe93cc4b06b2 Russell King 2009-10-25 340 * cache lines, and therefore never needs to be flushed.
421fe93cc4b06b2 Russell King 2009-10-25 341 */
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 342) if (is_zero_pfn(folio_pfn(folio)))
421fe93cc4b06b2 Russell King 2009-10-25 343 return;
421fe93cc4b06b2 Russell King 2009-10-25 344
00a19f3e25c0c40 Rabin Vincent 2016-11-08 345 if (!cache_ops_need_broadcast() && cache_is_vipt_nonaliasing()) {
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 @346) if (test_bit(PG_dcache_clean, &folio->flags))
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 347) clear_bit(PG_dcache_clean, &folio->flags);
00a19f3e25c0c40 Rabin Vincent 2016-11-08 348 return;
00a19f3e25c0c40 Rabin Vincent 2016-11-08 349 }
00a19f3e25c0c40 Rabin Vincent 2016-11-08 350
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 351) mapping = folio_flush_mapping(folio);
^1da177e4c3f415 Linus Torvalds 2005-04-16 352
85848dd7ab75fce Catalin Marinas 2010-09-13 353 if (!cache_ops_need_broadcast() &&
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 354) mapping && !folio_mapped(folio))
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 355) clear_bit(PG_dcache_clean, &folio->flags);
85848dd7ab75fce Catalin Marinas 2010-09-13 356 else {
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 357) __flush_dcache_folio(mapping, folio);
8830f04a092b47f Russell King 2005-06-20 358 if (mapping && cache_is_vivt())
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 359) __flush_dcache_aliases(mapping, folio);
826cbdaff29764b Catalin Marinas 2008-06-13 360 else if (mapping)
826cbdaff29764b Catalin Marinas 2008-06-13 361 __flush_icache_all();
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 362) set_bit(PG_dcache_clean, &folio->flags);
8830f04a092b47f Russell King 2005-06-20 363 }
^1da177e4c3f415 Linus Torvalds 2005-04-16 364 }
8b5989f33337172 Matthew Wilcox (Oracle 2023-08-02 365) EXPORT_SYMBOL(flush_dcache_folio);
6020dff09252e36 Russell King 2006-12-30 366
:::::: The code at line 307 was first introduced by commit
:::::: 8b5989f3333717273d02ab87ba8781f72a6783ab arm: implement the new page table range API
:::::: TO: Matthew Wilcox (Oracle) <willy@infradead.org>
:::::: CC: Andrew Morton <akpm@linux-foundation.org>
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [akpm-mm:mm-new 166/188] arch/arm/mm/flush.c:307:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *'
2025-08-18 10:17 [akpm-mm:mm-new 166/188] arch/arm/mm/flush.c:307:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' kernel test robot
@ 2025-08-18 12:15 ` Matthew Wilcox
2025-08-30 12:59 ` Philip Li
2025-08-18 12:44 ` Matthew Wilcox
1 sibling, 1 reply; 4+ messages in thread
From: Matthew Wilcox @ 2025-08-18 12:15 UTC (permalink / raw)
To: kernel test robot
Cc: llvm, oe-kbuild-all, Andrew Morton, Linux Memory Management List
On Mon, Aug 18, 2025 at 06:17:13PM +0800, kernel test robot wrote:
> tree: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-new
> head: dd1510cefdfec9dc3fa2bae40e0f16f1bf825567
> commit: a2004477b3282318589bf43a786d952856adcaee [166/188] mm: introduce memdesc_flags_t
I've had this commit sitting in
https://git.infradead.org/?p=users/willy/pagecache.git;a=shortlog;h=refs/heads/folio-page-split
for over two weeks now. Are you no longer testing pagecache.git?
(the fix is trivial, I'll send it)
* Re: [akpm-mm:mm-new 166/188] arch/arm/mm/flush.c:307:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *'
2025-08-18 10:17 [akpm-mm:mm-new 166/188] arch/arm/mm/flush.c:307:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' kernel test robot
2025-08-18 12:15 ` Matthew Wilcox
@ 2025-08-18 12:44 ` Matthew Wilcox
1 sibling, 0 replies; 4+ messages in thread
From: Matthew Wilcox @ 2025-08-18 12:44 UTC (permalink / raw)
To: kernel test robot
Cc: llvm, oe-kbuild-all, Andrew Morton, Linux Memory Management List
On Mon, Aug 18, 2025 at 06:17:13PM +0800, kernel test robot wrote:
> >> arch/arm/mm/flush.c:307:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *' [-Werror,-Wincompatible-pointer-types]
There are a lot of similar things to fix across many architectures,
and I don't have cross-compilers for all of them. You're going to get
a lot of errors here, so if you can apply this patch to the tree before
running the dozens of builds you're going to try, you'll save
us both a lot of time.
From c3b882bf4a14d6fcf6cf8be533bee4251ade8040 Mon Sep 17 00:00:00 2001
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Date: Mon, 18 Aug 2025 08:42:22 -0400
Subject: [PATCH] fixes
---
arch/arc/mm/cache.c | 8 ++++----
arch/arc/mm/tlb.c | 2 +-
arch/arm/include/asm/hugetlb.h | 2 +-
arch/arm/mm/copypage-v4mc.c | 2 +-
arch/arm/mm/copypage-v6.c | 2 +-
arch/arm/mm/copypage-xscale.c | 2 +-
arch/arm/mm/dma-mapping.c | 2 +-
arch/arm/mm/fault-armv.c | 2 +-
arch/arm/mm/flush.c | 10 +++++-----
arch/arm64/include/asm/hugetlb.h | 6 +++---
arch/arm64/include/asm/mte.h | 16 ++++++++--------
arch/arm64/mm/flush.c | 8 ++++----
arch/csky/abiv1/cacheflush.c | 6 +++---
arch/nios2/mm/cacheflush.c | 6 +++---
arch/openrisc/include/asm/cacheflush.h | 2 +-
arch/openrisc/mm/cache.c | 2 +-
arch/parisc/kernel/cache.c | 6 +++---
arch/powerpc/include/asm/cacheflush.h | 4 ++--
arch/powerpc/include/asm/kvm_ppc.h | 4 ++--
arch/powerpc/mm/book3s64/hash_utils.c | 4 ++--
arch/powerpc/mm/pgtable.c | 12 ++++++------
arch/riscv/include/asm/cacheflush.h | 4 ++--
arch/riscv/include/asm/hugetlb.h | 2 +-
arch/riscv/mm/cacheflush.c | 4 ++--
arch/s390/include/asm/hugetlb.h | 2 +-
arch/s390/kernel/uv.c | 12 ++++++------
arch/s390/mm/gmap.c | 2 +-
arch/s390/mm/hugetlbpage.c | 2 +-
arch/sh/include/asm/hugetlb.h | 2 +-
arch/sh/mm/cache-sh4.c | 2 +-
arch/sh/mm/cache-sh7705.c | 2 +-
arch/sh/mm/cache.c | 14 +++++++-------
arch/sh/mm/kmap.c | 2 +-
arch/sparc/mm/init_64.c | 10 +++++-----
arch/xtensa/mm/cache.c | 12 ++++++------
35 files changed, 90 insertions(+), 90 deletions(-)
diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
index 9106ceac323c..7d2f93dc1e91 100644
--- a/arch/arc/mm/cache.c
+++ b/arch/arc/mm/cache.c
@@ -704,7 +704,7 @@ static inline void arc_slc_enable(void)
void flush_dcache_folio(struct folio *folio)
{
- clear_bit(PG_dc_clean, &folio->flags);
+ clear_bit(PG_dc_clean, &folio->flags.f);
return;
}
EXPORT_SYMBOL(flush_dcache_folio);
@@ -889,8 +889,8 @@ void copy_user_highpage(struct page *to, struct page *from,
copy_page(kto, kfrom);
- clear_bit(PG_dc_clean, &dst->flags);
- clear_bit(PG_dc_clean, &src->flags);
+ clear_bit(PG_dc_clean, &dst->flags.f);
+ clear_bit(PG_dc_clean, &src->flags.f);
kunmap_atomic(kto);
kunmap_atomic(kfrom);
@@ -900,7 +900,7 @@ void clear_user_page(void *to, unsigned long u_vaddr, struct page *page)
{
struct folio *folio = page_folio(page);
clear_page(to);
- clear_bit(PG_dc_clean, &folio->flags);
+ clear_bit(PG_dc_clean, &folio->flags.f);
}
EXPORT_SYMBOL(clear_user_page);
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index cae4a7aae0ed..ed6915ba76ec 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -488,7 +488,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
*/
if (vma->vm_flags & VM_EXEC) {
struct folio *folio = page_folio(page);
- int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags);
+ int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags.f);
if (dirty) {
unsigned long offset = offset_in_folio(folio, paddr);
nr = folio_nr_pages(folio);
diff --git a/arch/arm/include/asm/hugetlb.h b/arch/arm/include/asm/hugetlb.h
index b766c4b373f6..700055b1ccb3 100644
--- a/arch/arm/include/asm/hugetlb.h
+++ b/arch/arm/include/asm/hugetlb.h
@@ -17,7 +17,7 @@
static inline void arch_clear_hugetlb_flags(struct folio *folio)
{
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
}
#define arch_clear_hugetlb_flags arch_clear_hugetlb_flags
diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c
index 7ddd82b9fe8b..ed843bb22020 100644
--- a/arch/arm/mm/copypage-v4mc.c
+++ b/arch/arm/mm/copypage-v4mc.c
@@ -67,7 +67,7 @@ void v4_mc_copy_user_highpage(struct page *to, struct page *from,
struct folio *src = page_folio(from);
void *kto = kmap_atomic(to);
- if (!test_and_set_bit(PG_dcache_clean, &src->flags))
+ if (!test_and_set_bit(PG_dcache_clean, &src->flags.f))
__flush_dcache_folio(folio_flush_mapping(src), src);
raw_spin_lock(&minicache_lock);
diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c
index a1a71f36d850..0710dba5c0bf 100644
--- a/arch/arm/mm/copypage-v6.c
+++ b/arch/arm/mm/copypage-v6.c
@@ -73,7 +73,7 @@ static void v6_copy_user_highpage_aliasing(struct page *to,
unsigned int offset = CACHE_COLOUR(vaddr);
unsigned long kfrom, kto;
- if (!test_and_set_bit(PG_dcache_clean, &src->flags))
+ if (!test_and_set_bit(PG_dcache_clean, &src->flags.f))
__flush_dcache_folio(folio_flush_mapping(src), src);
/* FIXME: not highmem safe */
diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c
index f1e29d3e8193..e16af68d709f 100644
--- a/arch/arm/mm/copypage-xscale.c
+++ b/arch/arm/mm/copypage-xscale.c
@@ -87,7 +87,7 @@ void xscale_mc_copy_user_highpage(struct page *to, struct page *from,
struct folio *src = page_folio(from);
void *kto = kmap_atomic(to);
- if (!test_and_set_bit(PG_dcache_clean, &src->flags))
+ if (!test_and_set_bit(PG_dcache_clean, &src->flags.f))
__flush_dcache_folio(folio_flush_mapping(src), src);
raw_spin_lock(&minicache_lock);
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 88c2d68a69c9..08641a936394 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -718,7 +718,7 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
if (size < sz)
break;
if (!offset)
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
offset = 0;
size -= sz;
if (!size)
diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
index 39fd5df73317..91e488767783 100644
--- a/arch/arm/mm/fault-armv.c
+++ b/arch/arm/mm/fault-armv.c
@@ -203,7 +203,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
folio = page_folio(pfn_to_page(pfn));
mapping = folio_flush_mapping(folio);
- if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
+ if (!test_and_set_bit(PG_dcache_clean, &folio->flags.f))
__flush_dcache_folio(mapping, folio);
if (mapping) {
if (cache_is_vivt())
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 5219158d54cf..19470d938b23 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -304,7 +304,7 @@ void __sync_icache_dcache(pte_t pteval)
else
mapping = NULL;
- if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
+ if (!test_and_set_bit(PG_dcache_clean, &folio->flags.f))
__flush_dcache_folio(mapping, folio);
if (pte_exec(pteval))
@@ -343,8 +343,8 @@ void flush_dcache_folio(struct folio *folio)
return;
if (!cache_ops_need_broadcast() && cache_is_vipt_nonaliasing()) {
- if (test_bit(PG_dcache_clean, &folio->flags))
- clear_bit(PG_dcache_clean, &folio->flags);
+ if (test_bit(PG_dcache_clean, &folio->flags.f))
+ clear_bit(PG_dcache_clean, &folio->flags.f);
return;
}
@@ -352,14 +352,14 @@ void flush_dcache_folio(struct folio *folio)
if (!cache_ops_need_broadcast() &&
mapping && !folio_mapped(folio))
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
else {
__flush_dcache_folio(mapping, folio);
if (mapping && cache_is_vivt())
__flush_dcache_aliases(mapping, folio);
else if (mapping)
__flush_icache_all();
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
}
}
EXPORT_SYMBOL(flush_dcache_folio);
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 2a8155c4a882..44c1f757bfcf 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -21,12 +21,12 @@ extern bool arch_hugetlb_migration_supported(struct hstate *h);
static inline void arch_clear_hugetlb_flags(struct folio *folio)
{
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
#ifdef CONFIG_ARM64_MTE
if (system_supports_mte()) {
- clear_bit(PG_mte_tagged, &folio->flags);
- clear_bit(PG_mte_lock, &folio->flags);
+ clear_bit(PG_mte_tagged, &folio->flags.f);
+ clear_bit(PG_mte_lock, &folio->flags.f);
}
#endif
}
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 6567df8ec8ca..3b5069f4683d 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -48,12 +48,12 @@ static inline void set_page_mte_tagged(struct page *page)
* before the page flags update.
*/
smp_wmb();
- set_bit(PG_mte_tagged, &page->flags);
+ set_bit(PG_mte_tagged, &page->flags.f);
}
static inline bool page_mte_tagged(struct page *page)
{
- bool ret = test_bit(PG_mte_tagged, &page->flags);
+ bool ret = test_bit(PG_mte_tagged, &page->flags.f);
VM_WARN_ON_ONCE(folio_test_hugetlb(page_folio(page)));
@@ -82,7 +82,7 @@ static inline bool try_page_mte_tagging(struct page *page)
{
VM_WARN_ON_ONCE(folio_test_hugetlb(page_folio(page)));
- if (!test_and_set_bit(PG_mte_lock, &page->flags))
+ if (!test_and_set_bit(PG_mte_lock, &page->flags.f))
return true;
/*
@@ -90,7 +90,7 @@ static inline bool try_page_mte_tagging(struct page *page)
* already. Check if the PG_mte_tagged flag has been set or wait
* otherwise.
*/
- smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));
+ smp_cond_load_acquire(&page->flags.f, VAL & (1UL << PG_mte_tagged));
return false;
}
@@ -173,13 +173,13 @@ static inline void folio_set_hugetlb_mte_tagged(struct folio *folio)
* before the folio flags update.
*/
smp_wmb();
- set_bit(PG_mte_tagged, &folio->flags);
+ set_bit(PG_mte_tagged, &folio->flags.f);
}
static inline bool folio_test_hugetlb_mte_tagged(struct folio *folio)
{
- bool ret = test_bit(PG_mte_tagged, &folio->flags);
+ bool ret = test_bit(PG_mte_tagged, &folio->flags.f);
VM_WARN_ON_ONCE(!folio_test_hugetlb(folio));
@@ -196,7 +196,7 @@ static inline bool folio_try_hugetlb_mte_tagging(struct folio *folio)
{
VM_WARN_ON_ONCE(!folio_test_hugetlb(folio));
- if (!test_and_set_bit(PG_mte_lock, &folio->flags))
+ if (!test_and_set_bit(PG_mte_lock, &folio->flags.f))
return true;
/*
@@ -204,7 +204,7 @@ static inline bool folio_try_hugetlb_mte_tagging(struct folio *folio)
* already. Check if the PG_mte_tagged flag has been set or wait
* otherwise.
*/
- smp_cond_load_acquire(&folio->flags, VAL & (1UL << PG_mte_tagged));
+ smp_cond_load_acquire(&folio->flags.f, VAL & (1UL << PG_mte_tagged));
return false;
}
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 013eead9b695..fbf08b543c3f 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -53,11 +53,11 @@ void __sync_icache_dcache(pte_t pte)
{
struct folio *folio = page_folio(pte_page(pte));
- if (!test_bit(PG_dcache_clean, &folio->flags)) {
+ if (!test_bit(PG_dcache_clean, &folio->flags.f)) {
sync_icache_aliases((unsigned long)folio_address(folio),
(unsigned long)folio_address(folio) +
folio_size(folio));
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
}
}
EXPORT_SYMBOL_GPL(__sync_icache_dcache);
@@ -69,8 +69,8 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache);
*/
void flush_dcache_folio(struct folio *folio)
{
- if (test_bit(PG_dcache_clean, &folio->flags))
- clear_bit(PG_dcache_clean, &folio->flags);
+ if (test_bit(PG_dcache_clean, &folio->flags.f))
+ clear_bit(PG_dcache_clean, &folio->flags.f);
}
EXPORT_SYMBOL(flush_dcache_folio);
diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c
index 171e8fb32285..4bc0aad3cf8a 100644
--- a/arch/csky/abiv1/cacheflush.c
+++ b/arch/csky/abiv1/cacheflush.c
@@ -25,12 +25,12 @@ void flush_dcache_folio(struct folio *folio)
mapping = folio_flush_mapping(folio);
if (mapping && !folio_mapped(folio))
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
else {
dcache_wbinv_all();
if (mapping)
icache_inv_all();
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
}
}
EXPORT_SYMBOL(flush_dcache_folio);
@@ -56,7 +56,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
return;
folio = page_folio(pfn_to_page(pfn));
- if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
+ if (!test_and_set_bit(PG_dcache_clean, &folio->flags.f))
dcache_wbinv_all();
if (folio_flush_mapping(folio)) {
diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
index 0ee9c5f02e08..8321182eb927 100644
--- a/arch/nios2/mm/cacheflush.c
+++ b/arch/nios2/mm/cacheflush.c
@@ -187,7 +187,7 @@ void flush_dcache_folio(struct folio *folio)
/* Flush this page if there are aliases. */
if (mapping && !mapping_mapped(mapping)) {
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
} else {
__flush_dcache_folio(folio);
if (mapping) {
@@ -195,7 +195,7 @@ void flush_dcache_folio(struct folio *folio)
flush_aliases(mapping, folio);
flush_icache_range(start, start + folio_size(folio));
}
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
}
}
EXPORT_SYMBOL(flush_dcache_folio);
@@ -227,7 +227,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
return;
folio = page_folio(pfn_to_page(pfn));
- if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
+ if (!test_and_set_bit(PG_dcache_clean, &folio->flags.f))
__flush_dcache_folio(folio);
mapping = folio_flush_mapping(folio);
diff --git a/arch/openrisc/include/asm/cacheflush.h b/arch/openrisc/include/asm/cacheflush.h
index 0e60af486ec1..cd8f971c0fec 100644
--- a/arch/openrisc/include/asm/cacheflush.h
+++ b/arch/openrisc/include/asm/cacheflush.h
@@ -75,7 +75,7 @@ static inline void sync_icache_dcache(struct page *page)
static inline void flush_dcache_folio(struct folio *folio)
{
- clear_bit(PG_dc_clean, &folio->flags);
+ clear_bit(PG_dc_clean, &folio->flags.f);
}
#define flush_dcache_folio flush_dcache_folio
diff --git a/arch/openrisc/mm/cache.c b/arch/openrisc/mm/cache.c
index 0f265b8e73ec..f33df46dae4e 100644
--- a/arch/openrisc/mm/cache.c
+++ b/arch/openrisc/mm/cache.c
@@ -83,7 +83,7 @@ void update_cache(struct vm_area_struct *vma, unsigned long address,
{
unsigned long pfn = pte_val(*pte) >> PAGE_SHIFT;
struct folio *folio = page_folio(pfn_to_page(pfn));
- int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags);
+ int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags.f);
/*
* Since icaches do not snoop for updated data on OpenRISC, we
diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index 37ca484cc495..4c5240d3a3c7 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -122,10 +122,10 @@ void __update_cache(pte_t pte)
pfn = folio_pfn(folio);
nr = folio_nr_pages(folio);
if (folio_flush_mapping(folio) &&
- test_bit(PG_dcache_dirty, &folio->flags)) {
+ test_bit(PG_dcache_dirty, &folio->flags.f)) {
while (nr--)
flush_kernel_dcache_page_addr(pfn_va(pfn + nr));
- clear_bit(PG_dcache_dirty, &folio->flags);
+ clear_bit(PG_dcache_dirty, &folio->flags.f);
} else if (parisc_requires_coherency())
while (nr--)
flush_kernel_dcache_page_addr(pfn_va(pfn + nr));
@@ -481,7 +481,7 @@ void flush_dcache_folio(struct folio *folio)
pgoff_t pgoff;
if (mapping && !mapping_mapped(mapping)) {
- set_bit(PG_dcache_dirty, &folio->flags);
+ set_bit(PG_dcache_dirty, &folio->flags.f);
return;
}
diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h
index f2656774aaa9..1fea42928f64 100644
--- a/arch/powerpc/include/asm/cacheflush.h
+++ b/arch/powerpc/include/asm/cacheflush.h
@@ -40,8 +40,8 @@ static inline void flush_dcache_folio(struct folio *folio)
if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
return;
/* avoid an atomic op if possible */
- if (test_bit(PG_dcache_clean, &folio->flags))
- clear_bit(PG_dcache_clean, &folio->flags);
+ if (test_bit(PG_dcache_clean, &folio->flags.f))
+ clear_bit(PG_dcache_clean, &folio->flags.f);
}
#define flush_dcache_folio flush_dcache_folio
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index ca3829d47ab7..0953f2daa466 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -939,9 +939,9 @@ static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn)
/* Clear i-cache for new pages */
folio = page_folio(pfn_to_page(pfn));
- if (!test_bit(PG_dcache_clean, &folio->flags)) {
+ if (!test_bit(PG_dcache_clean, &folio->flags.f)) {
flush_dcache_icache_folio(folio);
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
}
}
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 4693c464fc5a..3aee3af614af 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1562,11 +1562,11 @@ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
folio = page_folio(pte_page(pte));
/* page is dirty */
- if (!test_bit(PG_dcache_clean, &folio->flags) &&
+ if (!test_bit(PG_dcache_clean, &folio->flags.f) &&
!folio_test_reserved(folio)) {
if (trap == INTERRUPT_INST_STORAGE) {
flush_dcache_icache_folio(folio);
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
} else
pp |= HPTE_R_N;
}
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index dfaa9fd86f7e..56d7e8960e77 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -87,9 +87,9 @@ static pte_t set_pte_filter_hash(pte_t pte, unsigned long addr)
struct folio *folio = maybe_pte_to_folio(pte);
if (!folio)
return pte;
- if (!test_bit(PG_dcache_clean, &folio->flags)) {
+ if (!test_bit(PG_dcache_clean, &folio->flags.f)) {
flush_dcache_icache_folio(folio);
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
}
}
return pte;
@@ -127,13 +127,13 @@ static inline pte_t set_pte_filter(pte_t pte, unsigned long addr)
return pte;
/* If the page clean, we move on */
- if (test_bit(PG_dcache_clean, &folio->flags))
+ if (test_bit(PG_dcache_clean, &folio->flags.f))
return pte;
/* If it's an exec fault, we flush the cache and make it clean */
if (is_exec_fault()) {
flush_dcache_icache_folio(folio);
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
return pte;
}
@@ -175,12 +175,12 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
goto bail;
/* If the page is already clean, we move on */
- if (test_bit(PG_dcache_clean, &folio->flags))
+ if (test_bit(PG_dcache_clean, &folio->flags.f))
goto bail;
/* Clean the page and set PG_dcache_clean */
flush_dcache_icache_folio(folio);
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
bail:
return pte_mkexec(pte);
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
index 6086b38d5427..0092513c3376 100644
--- a/arch/riscv/include/asm/cacheflush.h
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -23,8 +23,8 @@ static inline void local_flush_icache_range(unsigned long start,
static inline void flush_dcache_folio(struct folio *folio)
{
- if (test_bit(PG_dcache_clean, &folio->flags))
- clear_bit(PG_dcache_clean, &folio->flags);
+ if (test_bit(PG_dcache_clean, &folio->flags.f))
+ clear_bit(PG_dcache_clean, &folio->flags.f);
}
#define flush_dcache_folio flush_dcache_folio
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index 446126497768..0872d43fc0c0 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -7,7 +7,7 @@
static inline void arch_clear_hugetlb_flags(struct folio *folio)
{
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
}
#define arch_clear_hugetlb_flags arch_clear_hugetlb_flags
diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
index 4ca5aafce22e..d83a612464f6 100644
--- a/arch/riscv/mm/cacheflush.c
+++ b/arch/riscv/mm/cacheflush.c
@@ -101,9 +101,9 @@ void flush_icache_pte(struct mm_struct *mm, pte_t pte)
{
struct folio *folio = page_folio(pte_page(pte));
- if (!test_bit(PG_dcache_clean, &folio->flags)) {
+ if (!test_bit(PG_dcache_clean, &folio->flags.f)) {
flush_icache_mm(mm, false);
- set_bit(PG_dcache_clean, &folio->flags);
+ set_bit(PG_dcache_clean, &folio->flags.f);
}
}
#endif /* CONFIG_MMU */
diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
index 931fcc413598..69131736daaa 100644
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -39,7 +39,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
static inline void arch_clear_hugetlb_flags(struct folio *folio)
{
- clear_bit(PG_arch_1, &folio->flags);
+ clear_bit(PG_arch_1, &folio->flags.f);
}
#define arch_clear_hugetlb_flags arch_clear_hugetlb_flags
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index 47f574cd1728..93b2a01bae40 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -144,7 +144,7 @@ int uv_destroy_folio(struct folio *folio)
folio_get(folio);
rc = uv_destroy(folio_to_phys(folio));
if (!rc)
- clear_bit(PG_arch_1, &folio->flags);
+ clear_bit(PG_arch_1, &folio->flags.f);
folio_put(folio);
return rc;
}
@@ -193,7 +193,7 @@ int uv_convert_from_secure_folio(struct folio *folio)
folio_get(folio);
rc = uv_convert_from_secure(folio_to_phys(folio));
if (!rc)
- clear_bit(PG_arch_1, &folio->flags);
+ clear_bit(PG_arch_1, &folio->flags.f);
folio_put(folio);
return rc;
}
@@ -289,7 +289,7 @@ static int __make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
expected = expected_folio_refs(folio) + 1;
if (!folio_ref_freeze(folio, expected))
return -EBUSY;
- set_bit(PG_arch_1, &folio->flags);
+ set_bit(PG_arch_1, &folio->flags.f);
/*
* If the UVC does not succeed or fail immediately, we don't want to
* loop for long, or we might get stall notifications.
@@ -483,18 +483,18 @@ int arch_make_folio_accessible(struct folio *folio)
* convert_to_secure.
* As secure pages are never large folios, both variants can co-exists.
*/
- if (!test_bit(PG_arch_1, &folio->flags))
+ if (!test_bit(PG_arch_1, &folio->flags.f))
return 0;
rc = uv_pin_shared(folio_to_phys(folio));
if (!rc) {
- clear_bit(PG_arch_1, &folio->flags);
+ clear_bit(PG_arch_1, &folio->flags.f);
return 0;
}
rc = uv_convert_from_secure(folio_to_phys(folio));
if (!rc) {
- clear_bit(PG_arch_1, &folio->flags);
+ clear_bit(PG_arch_1, &folio->flags.f);
return 0;
}
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index c7defe4ed1f6..8ff6bba107e8 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2272,7 +2272,7 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
start = pmd_val(*pmd) & HPAGE_MASK;
end = start + HPAGE_SIZE;
__storage_key_init_range(start, end);
- set_bit(PG_arch_1, &folio->flags);
+ set_bit(PG_arch_1, &folio->flags.f);
cond_resched();
return 0;
}
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index e88c02c9e642..72e8fa136af5 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -155,7 +155,7 @@ static void clear_huge_pte_skeys(struct mm_struct *mm, unsigned long rste)
paddr = rste & PMD_MASK;
}
- if (!test_and_set_bit(PG_arch_1, &folio->flags))
+ if (!test_and_set_bit(PG_arch_1, &folio->flags.f))
__storage_key_init_range(paddr, paddr + size);
}
diff --git a/arch/sh/include/asm/hugetlb.h b/arch/sh/include/asm/hugetlb.h
index 4a92e6e4d627..974512f359f0 100644
--- a/arch/sh/include/asm/hugetlb.h
+++ b/arch/sh/include/asm/hugetlb.h
@@ -14,7 +14,7 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
static inline void arch_clear_hugetlb_flags(struct folio *folio)
{
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
}
#define arch_clear_hugetlb_flags arch_clear_hugetlb_flags
diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c
index 46393b00137e..83fb34b39ca7 100644
--- a/arch/sh/mm/cache-sh4.c
+++ b/arch/sh/mm/cache-sh4.c
@@ -114,7 +114,7 @@ static void sh4_flush_dcache_folio(void *arg)
struct address_space *mapping = folio_flush_mapping(folio);
if (mapping && !mapping_mapped(mapping))
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
else
#endif
{
diff --git a/arch/sh/mm/cache-sh7705.c b/arch/sh/mm/cache-sh7705.c
index b509a407588f..71f8be9fc8e0 100644
--- a/arch/sh/mm/cache-sh7705.c
+++ b/arch/sh/mm/cache-sh7705.c
@@ -138,7 +138,7 @@ static void sh7705_flush_dcache_folio(void *arg)
struct address_space *mapping = folio_flush_mapping(folio);
if (mapping && !mapping_mapped(mapping))
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
else {
unsigned long pfn = folio_pfn(folio);
unsigned int i, nr = folio_nr_pages(folio);
diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c
index 6ebdeaff3021..c3f028bed049 100644
--- a/arch/sh/mm/cache.c
+++ b/arch/sh/mm/cache.c
@@ -64,14 +64,14 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
struct folio *folio = page_folio(page);
if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) &&
- test_bit(PG_dcache_clean, &folio->flags)) {
+ test_bit(PG_dcache_clean, &folio->flags.f)) {
void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
memcpy(vto, src, len);
kunmap_coherent(vto);
} else {
memcpy(dst, src, len);
if (boot_cpu_data.dcache.n_aliases)
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
}
if (vma->vm_flags & VM_EXEC)
@@ -85,14 +85,14 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
struct folio *folio = page_folio(page);
if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) &&
- test_bit(PG_dcache_clean, &folio->flags)) {
+ test_bit(PG_dcache_clean, &folio->flags.f)) {
void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
memcpy(dst, vfrom, len);
kunmap_coherent(vfrom);
} else {
memcpy(dst, src, len);
if (boot_cpu_data.dcache.n_aliases)
- clear_bit(PG_dcache_clean, &folio->flags);
+ clear_bit(PG_dcache_clean, &folio->flags.f);
}
}
@@ -105,7 +105,7 @@ void copy_user_highpage(struct page *to, struct page *from,
vto = kmap_atomic(to);
if (boot_cpu_data.dcache.n_aliases && folio_mapped(src) &&
- test_bit(PG_dcache_clean, &src->flags)) {
+ test_bit(PG_dcache_clean, &src->flags.f)) {
vfrom = kmap_coherent(from, vaddr);
copy_page(vto, vfrom);
kunmap_coherent(vfrom);
@@ -148,7 +148,7 @@ void __update_cache(struct vm_area_struct *vma,
if (pfn_valid(pfn)) {
struct folio *folio = page_folio(pfn_to_page(pfn));
- int dirty = !test_and_set_bit(PG_dcache_clean, &folio->flags);
+ int dirty = !test_and_set_bit(PG_dcache_clean, &folio->flags.f);
if (dirty)
__flush_purge_region(folio_address(folio),
folio_size(folio));
@@ -162,7 +162,7 @@ void __flush_anon_page(struct page *page, unsigned long vmaddr)
if (pages_do_alias(addr, vmaddr)) {
if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) &&
- test_bit(PG_dcache_clean, &folio->flags)) {
+ test_bit(PG_dcache_clean, &folio->flags.f)) {
void *kaddr;
kaddr = kmap_coherent(page, vmaddr);
diff --git a/arch/sh/mm/kmap.c b/arch/sh/mm/kmap.c
index fa50e8f6e7a9..c9f32d5a54b8 100644
--- a/arch/sh/mm/kmap.c
+++ b/arch/sh/mm/kmap.c
@@ -31,7 +31,7 @@ void *kmap_coherent(struct page *page, unsigned long addr)
enum fixed_addresses idx;
unsigned long vaddr;
- BUG_ON(!test_bit(PG_dcache_clean, &folio->flags));
+ BUG_ON(!test_bit(PG_dcache_clean, &folio->flags.f));
preempt_disable();
pagefault_disable();
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 7ed58bf3aaca..df9f7c444c39 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -224,7 +224,7 @@ inline void flush_dcache_folio_impl(struct folio *folio)
((1UL<<ilog2(roundup_pow_of_two(NR_CPUS)))-1UL)
#define dcache_dirty_cpu(folio) \
- (((folio)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask)
+ (((folio)->flags.f >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask)
static inline void set_dcache_dirty(struct folio *folio, int this_cpu)
{
@@ -243,7 +243,7 @@ static inline void set_dcache_dirty(struct folio *folio, int this_cpu)
"bne,pn %%xcc, 1b\n\t"
" nop"
: /* no outputs */
- : "r" (mask), "r" (non_cpu_bits), "r" (&folio->flags)
+ : "r" (mask), "r" (non_cpu_bits), "r" (&folio->flags.f)
: "g1", "g7");
}
@@ -265,7 +265,7 @@ static inline void clear_dcache_dirty_cpu(struct folio *folio, unsigned long cpu
" nop\n"
"2:"
: /* no outputs */
- : "r" (cpu), "r" (mask), "r" (&folio->flags),
+ : "r" (cpu), "r" (mask), "r" (&folio->flags.f),
"i" (PG_dcache_cpu_mask),
"i" (PG_dcache_cpu_shift)
: "g1", "g7");
@@ -292,7 +292,7 @@ static void flush_dcache(unsigned long pfn)
struct folio *folio = page_folio(page);
unsigned long pg_flags;
- pg_flags = folio->flags;
+ pg_flags = folio->flags.f;
if (pg_flags & (1UL << PG_dcache_dirty)) {
int cpu = ((pg_flags >> PG_dcache_cpu_shift) &
PG_dcache_cpu_mask);
@@ -480,7 +480,7 @@ void flush_dcache_folio(struct folio *folio)
mapping = folio_flush_mapping(folio);
if (mapping && !mapping_mapped(mapping)) {
- bool dirty = test_bit(PG_dcache_dirty, &folio->flags);
+ bool dirty = test_bit(PG_dcache_dirty, &folio->flags.f);
if (dirty) {
int dirty_cpu = dcache_dirty_cpu(folio);
diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
index 23be0e7516ce..5354df52d61f 100644
--- a/arch/xtensa/mm/cache.c
+++ b/arch/xtensa/mm/cache.c
@@ -134,8 +134,8 @@ void flush_dcache_folio(struct folio *folio)
*/
if (mapping && !mapping_mapped(mapping)) {
- if (!test_bit(PG_arch_1, &folio->flags))
- set_bit(PG_arch_1, &folio->flags);
+ if (!test_bit(PG_arch_1, &folio->flags.f))
+ set_bit(PG_arch_1, &folio->flags.f);
return;
} else {
@@ -232,7 +232,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
#if (DCACHE_WAY_SIZE > PAGE_SIZE)
- if (!folio_test_reserved(folio) && test_bit(PG_arch_1, &folio->flags)) {
+ if (!folio_test_reserved(folio) && test_bit(PG_arch_1, &folio->flags.f)) {
unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
unsigned long tmp;
@@ -247,10 +247,10 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
}
preempt_enable();
- clear_bit(PG_arch_1, &folio->flags);
+ clear_bit(PG_arch_1, &folio->flags.f);
}
#else
- if (!folio_test_reserved(folio) && !test_bit(PG_arch_1, &folio->flags)
+ if (!folio_test_reserved(folio) && !test_bit(PG_arch_1, &folio->flags.f)
&& (vma->vm_flags & VM_EXEC) != 0) {
for (i = 0; i < nr; i++) {
void *paddr = kmap_local_folio(folio, i * PAGE_SIZE);
@@ -258,7 +258,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
__invalidate_icache_page((unsigned long)paddr);
kunmap_local(paddr);
}
- set_bit(PG_arch_1, &folio->flags);
+ set_bit(PG_arch_1, &folio->flags.f);
}
#endif
}
--
2.47.2
* Re: [akpm-mm:mm-new 166/188] arch/arm/mm/flush.c:307:41: error: incompatible pointer types passing 'memdesc_flags_t *' to parameter of type 'volatile unsigned long *'
2025-08-18 12:15 ` Matthew Wilcox
@ 2025-08-30 12:59 ` Philip Li
0 siblings, 0 replies; 4+ messages in thread
From: Philip Li @ 2025-08-30 12:59 UTC (permalink / raw)
To: Matthew Wilcox
Cc: kernel test robot, llvm, oe-kbuild-all, Andrew Morton,
Linux Memory Management List
On Mon, Aug 18, 2025 at 01:15:42PM +0100, Matthew Wilcox wrote:
> On Mon, Aug 18, 2025 at 06:17:13PM +0800, kernel test robot wrote:
> > tree: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-new
> > head: dd1510cefdfec9dc3fa2bae40e0f16f1bf825567
> > commit: a2004477b3282318589bf43a786d952856adcaee [166/188] mm: introduce memdesc_flags_t
>
> I've had this commit sitting in
> https://git.infradead.org/?p=users/willy/pagecache.git;a=shortlog;h=refs/heads/folio-page-split
> for over two weeks now. Are you no longer testing pagecache.git?
Sorry for the late response. An issue in the bot caused fetches from
git.infradead.org to fail for quite a long time, and it went unnoticed.
It is fixed now, and we will monitor the status of each possible issue
more carefully. Sorry about this.
>
> (the fix is trivial, I'll send it)
>