sparclinux.vger.kernel.org archive mirror
* [PATCH v6 00/12] mm: establish const-correctness for pointer parameters
@ 2025-09-01 20:50 Max Kellermann
  2025-09-01 20:50 ` [PATCH v6 01/12] mm: constify shmem related test functions for improved const-correctness Max Kellermann
                   ` (14 more replies)
  0 siblings, 15 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

This series improves const-correctness in the low-level memory-management
subsystem, providing a basis for const-ification further up the call
stack (e.g. in filesystems).

This series was initially posted as a single large patch and has since
been split into smaller patches:

 https://lore.kernel.org/lkml/20250827192233.447920-1-max.kellermann@ionos.com/

I started this work when I tried to constify the Ceph filesystem code,
but found that to be impossible because many "mm" functions accept
non-const pointers, even though they modify nothing.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
---
v1 -> v2:
- made several parameter values const (i.e. the pointer itself, not
  just the pointed-to memory), as suggested by Andrew Morton and
  Yuanchu Xie
- drop existing+obsolete "extern" keywords on lines modified by these
  patches (suggested by Vishal Moola)
- add missing parameter names on lines modified by these patches
  (suggested by Vishal Moola)
- more "const" pointers (e.g. the task_struct passed to
  process_shares_mm())
- add missing "const" to s390, fixing s390 build failure
- moved the mmap_is_legacy() change in arch/s390/mm/mmap.c from 08/12
  to 06/12 (suggested by Vishal Moola)

v2 -> v3:
- remove garbage from 06/12
- changed tags on subject line (suggested by Matthew Wilcox)

v3 -> v4:
- more verbose commit messages including a listing of function names
  (suggested by David Hildenbrand and Lorenzo Stoakes)

v4 -> v5:
- back to shorter commit messages after David Hildenbrand and Lorenzo
  Stoakes reached an agreement

v5 -> v6:
- fix inconsistent constness of assert_fault_locked()
- revert the const parameter value change from v2 (requested by
  Lorenzo Stoakes)
- revert the long cover letter, removing long explanations again
  (requested by Lorenzo Stoakes)

Max Kellermann (12):
  mm: constify shmem related test functions for improved
    const-correctness
  mm: constify pagemap related test/getter functions
  mm: constify zone related test/getter functions
  fs: constify mapping related test functions for improved
    const-correctness
  mm: constify process_shares_mm() for improved const-correctness
  mm, s390: constify mapping related test/getter functions
  parisc: constify mmap_upper_limit() parameter
  mm: constify arch_pick_mmap_layout() for improved const-correctness
  mm: constify ptdesc_pmd_pts_count() and folio_get_private()
  mm: constify various inline functions for improved const-correctness
  mm: constify assert/test functions in mm.h
  mm: constify highmem related functions for improved const-correctness

 arch/arm/include/asm/highmem.h      |  6 +--
 arch/parisc/include/asm/processor.h |  2 +-
 arch/parisc/kernel/sys_parisc.c     |  2 +-
 arch/s390/mm/mmap.c                 |  6 +--
 arch/sparc/kernel/sys_sparc_64.c    |  2 +-
 arch/x86/mm/mmap.c                  |  6 +--
 arch/xtensa/include/asm/highmem.h   |  2 +-
 include/linux/fs.h                  |  6 +--
 include/linux/highmem-internal.h    | 36 +++++++++---------
 include/linux/highmem.h             |  8 ++--
 include/linux/mm.h                  | 56 +++++++++++++--------------
 include/linux/mm_inline.h           | 25 ++++++------
 include/linux/mm_types.h            |  4 +-
 include/linux/mmzone.h              | 42 ++++++++++----------
 include/linux/pagemap.h             | 59 +++++++++++++++--------------
 include/linux/sched/mm.h            |  4 +-
 include/linux/shmem_fs.h            |  4 +-
 mm/highmem.c                        | 10 ++---
 mm/oom_kill.c                       |  6 +--
 mm/shmem.c                          |  6 +--
 mm/util.c                           | 16 ++++----
 21 files changed, 155 insertions(+), 153 deletions(-)

-- 
2.47.2


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v6 01/12] mm: constify shmem related test functions for improved const-correctness
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-01 20:50 ` [PATCH v6 02/12] mm: constify pagemap related test/getter functions Max Kellermann
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

We select certain test functions which either invoke each other, invoke
functions that are already const-ified, or invoke no further functions.

It is therefore relatively trivial to const-ify them, which provides a
basis for const-ification further up the call stack.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h       | 8 ++++----
 include/linux/shmem_fs.h | 4 ++--
 mm/shmem.c               | 6 +++---
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index cd14298bb958..18deb14cb1f5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -979,11 +979,11 @@ static inline void vma_iter_set(struct vma_iterator *vmi, unsigned long addr)
  * The vma_is_shmem is not inline because it is used only by slow
  * paths in userfault.
  */
-bool vma_is_shmem(struct vm_area_struct *vma);
-bool vma_is_anon_shmem(struct vm_area_struct *vma);
+bool vma_is_shmem(const struct vm_area_struct *vma);
+bool vma_is_anon_shmem(const struct vm_area_struct *vma);
 #else
-static inline bool vma_is_shmem(struct vm_area_struct *vma) { return false; }
-static inline bool vma_is_anon_shmem(struct vm_area_struct *vma) { return false; }
+static inline bool vma_is_shmem(const struct vm_area_struct *vma) { return false; }
+static inline bool vma_is_anon_shmem(const struct vm_area_struct *vma) { return false; }
 #endif
 
 int vma_is_stack_for_current(struct vm_area_struct *vma);
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 6d0f9c599ff7..0e47465ef0fd 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -99,9 +99,9 @@ extern unsigned long shmem_get_unmapped_area(struct file *, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 extern int shmem_lock(struct file *file, int lock, struct ucounts *ucounts);
 #ifdef CONFIG_SHMEM
-bool shmem_mapping(struct address_space *mapping);
+bool shmem_mapping(const struct address_space *mapping);
 #else
-static inline bool shmem_mapping(struct address_space *mapping)
+static inline bool shmem_mapping(const struct address_space *mapping)
 {
 	return false;
 }
diff --git a/mm/shmem.c b/mm/shmem.c
index 640fecc42f60..2df26f4d6e60 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -275,18 +275,18 @@ static const struct vm_operations_struct shmem_vm_ops;
 static const struct vm_operations_struct shmem_anon_vm_ops;
 static struct file_system_type shmem_fs_type;
 
-bool shmem_mapping(struct address_space *mapping)
+bool shmem_mapping(const struct address_space *mapping)
 {
 	return mapping->a_ops == &shmem_aops;
 }
 EXPORT_SYMBOL_GPL(shmem_mapping);
 
-bool vma_is_anon_shmem(struct vm_area_struct *vma)
+bool vma_is_anon_shmem(const struct vm_area_struct *vma)
 {
 	return vma->vm_ops == &shmem_anon_vm_ops;
 }
 
-bool vma_is_shmem(struct vm_area_struct *vma)
+bool vma_is_shmem(const struct vm_area_struct *vma)
 {
 	return vma_is_anon_shmem(vma) || vma->vm_ops == &shmem_vm_ops;
 }
-- 
2.47.2



* [PATCH v6 02/12] mm: constify pagemap related test/getter functions
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
  2025-09-01 20:50 ` [PATCH v6 01/12] mm: constify shmem related test functions for improved const-correctness Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-01 20:50 ` [PATCH v6 03/12] mm: constify zone " Max Kellermann
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

For improved const-correctness.

We select certain test functions which either invoke each other, invoke
functions that are already const-ified, or invoke no further functions.

It is therefore relatively trivial to const-ify them, which provides a
basis for const-ification further up the call stack.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 include/linux/pagemap.h | 57 +++++++++++++++++++++--------------------
 1 file changed, 29 insertions(+), 28 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a3e16d74792f..1d3803c397e9 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -140,7 +140,7 @@ static inline int inode_drain_writes(struct inode *inode)
 	return filemap_write_and_wait(inode->i_mapping);
 }
 
-static inline bool mapping_empty(struct address_space *mapping)
+static inline bool mapping_empty(const struct address_space *mapping)
 {
 	return xa_empty(&mapping->i_pages);
 }
@@ -166,7 +166,7 @@ static inline bool mapping_empty(struct address_space *mapping)
  * refcount and the referenced bit, which will be elevated or set in
  * the process of adding new cache pages to an inode.
  */
-static inline bool mapping_shrinkable(struct address_space *mapping)
+static inline bool mapping_shrinkable(const struct address_space *mapping)
 {
 	void *head;
 
@@ -267,7 +267,7 @@ static inline void mapping_clear_unevictable(struct address_space *mapping)
 	clear_bit(AS_UNEVICTABLE, &mapping->flags);
 }
 
-static inline bool mapping_unevictable(struct address_space *mapping)
+static inline bool mapping_unevictable(const struct address_space *mapping)
 {
 	return mapping && test_bit(AS_UNEVICTABLE, &mapping->flags);
 }
@@ -277,7 +277,7 @@ static inline void mapping_set_exiting(struct address_space *mapping)
 	set_bit(AS_EXITING, &mapping->flags);
 }
 
-static inline int mapping_exiting(struct address_space *mapping)
+static inline int mapping_exiting(const struct address_space *mapping)
 {
 	return test_bit(AS_EXITING, &mapping->flags);
 }
@@ -287,7 +287,7 @@ static inline void mapping_set_no_writeback_tags(struct address_space *mapping)
 	set_bit(AS_NO_WRITEBACK_TAGS, &mapping->flags);
 }
 
-static inline int mapping_use_writeback_tags(struct address_space *mapping)
+static inline int mapping_use_writeback_tags(const struct address_space *mapping)
 {
 	return !test_bit(AS_NO_WRITEBACK_TAGS, &mapping->flags);
 }
@@ -333,7 +333,7 @@ static inline void mapping_set_inaccessible(struct address_space *mapping)
 	set_bit(AS_INACCESSIBLE, &mapping->flags);
 }
 
-static inline bool mapping_inaccessible(struct address_space *mapping)
+static inline bool mapping_inaccessible(const struct address_space *mapping)
 {
 	return test_bit(AS_INACCESSIBLE, &mapping->flags);
 }
@@ -343,18 +343,18 @@ static inline void mapping_set_writeback_may_deadlock_on_reclaim(struct address_
 	set_bit(AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM, &mapping->flags);
 }
 
-static inline bool mapping_writeback_may_deadlock_on_reclaim(struct address_space *mapping)
+static inline bool mapping_writeback_may_deadlock_on_reclaim(const struct address_space *mapping)
 {
 	return test_bit(AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM, &mapping->flags);
 }
 
-static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
+static inline gfp_t mapping_gfp_mask(const struct address_space *mapping)
 {
 	return mapping->gfp_mask;
 }
 
 /* Restricts the given gfp_mask to what the mapping allows. */
-static inline gfp_t mapping_gfp_constraint(struct address_space *mapping,
+static inline gfp_t mapping_gfp_constraint(const struct address_space *mapping,
 		gfp_t gfp_mask)
 {
 	return mapping_gfp_mask(mapping) & gfp_mask;
@@ -477,13 +477,13 @@ mapping_min_folio_order(const struct address_space *mapping)
 }
 
 static inline unsigned long
-mapping_min_folio_nrpages(struct address_space *mapping)
+mapping_min_folio_nrpages(const struct address_space *mapping)
 {
 	return 1UL << mapping_min_folio_order(mapping);
 }
 
 static inline unsigned long
-mapping_min_folio_nrbytes(struct address_space *mapping)
+mapping_min_folio_nrbytes(const struct address_space *mapping)
 {
 	return mapping_min_folio_nrpages(mapping) << PAGE_SHIFT;
 }
@@ -497,7 +497,7 @@ mapping_min_folio_nrbytes(struct address_space *mapping)
  * new folio to the page cache and need to know what index to give it,
  * call this function.
  */
-static inline pgoff_t mapping_align_index(struct address_space *mapping,
+static inline pgoff_t mapping_align_index(const struct address_space *mapping,
 					  pgoff_t index)
 {
 	return round_down(index, mapping_min_folio_nrpages(mapping));
@@ -507,7 +507,7 @@ static inline pgoff_t mapping_align_index(struct address_space *mapping,
  * Large folio support currently depends on THP.  These dependencies are
  * being worked on but are not yet fixed.
  */
-static inline bool mapping_large_folio_support(struct address_space *mapping)
+static inline bool mapping_large_folio_support(const struct address_space *mapping)
 {
 	/* AS_FOLIO_ORDER is only reasonable for pagecache folios */
 	VM_WARN_ONCE((unsigned long)mapping & FOLIO_MAPPING_ANON,
@@ -522,7 +522,7 @@ static inline size_t mapping_max_folio_size(const struct address_space *mapping)
 	return PAGE_SIZE << mapping_max_folio_order(mapping);
 }
 
-static inline int filemap_nr_thps(struct address_space *mapping)
+static inline int filemap_nr_thps(const struct address_space *mapping)
 {
 #ifdef CONFIG_READ_ONLY_THP_FOR_FS
 	return atomic_read(&mapping->nr_thps);
@@ -936,7 +936,7 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
  *
  * Return: The index of the folio which follows this folio in the file.
  */
-static inline pgoff_t folio_next_index(struct folio *folio)
+static inline pgoff_t folio_next_index(const struct folio *folio)
 {
 	return folio->index + folio_nr_pages(folio);
 }
@@ -965,7 +965,7 @@ static inline struct page *folio_file_page(struct folio *folio, pgoff_t index)
  * e.g., shmem did not move this folio to the swap cache.
  * Return: true or false.
  */
-static inline bool folio_contains(struct folio *folio, pgoff_t index)
+static inline bool folio_contains(const struct folio *folio, pgoff_t index)
 {
 	VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio);
 	return index - folio->index < folio_nr_pages(folio);
@@ -1042,13 +1042,13 @@ static inline loff_t page_offset(struct page *page)
 /*
  * Get the offset in PAGE_SIZE (even for hugetlb folios).
  */
-static inline pgoff_t folio_pgoff(struct folio *folio)
+static inline pgoff_t folio_pgoff(const struct folio *folio)
 {
 	return folio->index;
 }
 
-static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
-					unsigned long address)
+static inline pgoff_t linear_page_index(const struct vm_area_struct *vma,
+					const unsigned long address)
 {
 	pgoff_t pgoff;
 	pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
@@ -1468,7 +1468,7 @@ static inline unsigned int __readahead_batch(struct readahead_control *rac,
  * readahead_pos - The byte offset into the file of this readahead request.
  * @rac: The readahead request.
  */
-static inline loff_t readahead_pos(struct readahead_control *rac)
+static inline loff_t readahead_pos(const struct readahead_control *rac)
 {
 	return (loff_t)rac->_index * PAGE_SIZE;
 }
@@ -1477,7 +1477,7 @@ static inline loff_t readahead_pos(struct readahead_control *rac)
  * readahead_length - The number of bytes in this readahead request.
  * @rac: The readahead request.
  */
-static inline size_t readahead_length(struct readahead_control *rac)
+static inline size_t readahead_length(const struct readahead_control *rac)
 {
 	return rac->_nr_pages * PAGE_SIZE;
 }
@@ -1486,7 +1486,7 @@ static inline size_t readahead_length(struct readahead_control *rac)
  * readahead_index - The index of the first page in this readahead request.
  * @rac: The readahead request.
  */
-static inline pgoff_t readahead_index(struct readahead_control *rac)
+static inline pgoff_t readahead_index(const struct readahead_control *rac)
 {
 	return rac->_index;
 }
@@ -1495,7 +1495,7 @@ static inline pgoff_t readahead_index(struct readahead_control *rac)
  * readahead_count - The number of pages in this readahead request.
  * @rac: The readahead request.
  */
-static inline unsigned int readahead_count(struct readahead_control *rac)
+static inline unsigned int readahead_count(const struct readahead_control *rac)
 {
 	return rac->_nr_pages;
 }
@@ -1504,12 +1504,12 @@ static inline unsigned int readahead_count(struct readahead_control *rac)
  * readahead_batch_length - The number of bytes in the current batch.
  * @rac: The readahead request.
  */
-static inline size_t readahead_batch_length(struct readahead_control *rac)
+static inline size_t readahead_batch_length(const struct readahead_control *rac)
 {
 	return rac->_batch_count * PAGE_SIZE;
 }
 
-static inline unsigned long dir_pages(struct inode *inode)
+static inline unsigned long dir_pages(const struct inode *inode)
 {
 	return (unsigned long)(inode->i_size + PAGE_SIZE - 1) >>
 			       PAGE_SHIFT;
@@ -1523,8 +1523,8 @@ static inline unsigned long dir_pages(struct inode *inode)
  * Return: the number of bytes in the folio up to EOF,
  * or -EFAULT if the folio was truncated.
  */
-static inline ssize_t folio_mkwrite_check_truncate(struct folio *folio,
-					      struct inode *inode)
+static inline ssize_t folio_mkwrite_check_truncate(const struct folio *folio,
+						   const struct inode *inode)
 {
 	loff_t size = i_size_read(inode);
 	pgoff_t index = size >> PAGE_SHIFT;
@@ -1555,7 +1555,8 @@ static inline ssize_t folio_mkwrite_check_truncate(struct folio *folio,
  * Return: The number of filesystem blocks covered by this folio.
  */
 static inline
-unsigned int i_blocks_per_folio(struct inode *inode, struct folio *folio)
+unsigned int i_blocks_per_folio(const struct inode *inode,
+				const struct folio *folio)
 {
 	return folio_size(folio) >> inode->i_blkbits;
 }
-- 
2.47.2



* [PATCH v6 03/12] mm: constify zone related test/getter functions
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
  2025-09-01 20:50 ` [PATCH v6 01/12] mm: constify shmem related test functions for improved const-correctness Max Kellermann
  2025-09-01 20:50 ` [PATCH v6 02/12] mm: constify pagemap related test/getter functions Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-01 20:50 ` [PATCH v6 04/12] fs: constify mapping related test functions for improved const-correctness Max Kellermann
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

For improved const-correctness.

We select certain test functions which either invoke each other, invoke
functions that are already const-ified, or invoke no further functions.

It is therefore relatively trivial to const-ify them, which provides a
basis for const-ification further up the call stack.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mmzone.h | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f3272ef5131b..6c4eae96160d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1104,7 +1104,7 @@ static inline unsigned long promo_wmark_pages(const struct zone *z)
 	return wmark_pages(z, WMARK_PROMO);
 }
 
-static inline unsigned long zone_managed_pages(struct zone *zone)
+static inline unsigned long zone_managed_pages(const struct zone *zone)
 {
 	return (unsigned long)atomic_long_read(&zone->managed_pages);
 }
@@ -1128,12 +1128,12 @@ static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
 	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
 }
 
-static inline bool zone_is_initialized(struct zone *zone)
+static inline bool zone_is_initialized(const struct zone *zone)
 {
 	return zone->initialized;
 }
 
-static inline bool zone_is_empty(struct zone *zone)
+static inline bool zone_is_empty(const struct zone *zone)
 {
 	return zone->spanned_pages == 0;
 }
@@ -1273,7 +1273,7 @@ static inline bool folio_is_zone_movable(const struct folio *folio)
  * Return true if [start_pfn, start_pfn + nr_pages) range has a non-empty
  * intersection with the given zone
  */
-static inline bool zone_intersects(struct zone *zone,
+static inline bool zone_intersects(const struct zone *zone,
 		unsigned long start_pfn, unsigned long nr_pages)
 {
 	if (zone_is_empty(zone))
@@ -1581,12 +1581,12 @@ static inline int local_memory_node(int node_id) { return node_id; };
 #define zone_idx(zone)		((zone) - (zone)->zone_pgdat->node_zones)
 
 #ifdef CONFIG_ZONE_DEVICE
-static inline bool zone_is_zone_device(struct zone *zone)
+static inline bool zone_is_zone_device(const struct zone *zone)
 {
 	return zone_idx(zone) == ZONE_DEVICE;
 }
 #else
-static inline bool zone_is_zone_device(struct zone *zone)
+static inline bool zone_is_zone_device(const struct zone *zone)
 {
 	return false;
 }
@@ -1598,19 +1598,19 @@ static inline bool zone_is_zone_device(struct zone *zone)
  * populated_zone(). If the whole zone is reserved then we can easily
  * end up with populated_zone() && !managed_zone().
  */
-static inline bool managed_zone(struct zone *zone)
+static inline bool managed_zone(const struct zone *zone)
 {
 	return zone_managed_pages(zone);
 }
 
 /* Returns true if a zone has memory */
-static inline bool populated_zone(struct zone *zone)
+static inline bool populated_zone(const struct zone *zone)
 {
 	return zone->present_pages;
 }
 
 #ifdef CONFIG_NUMA
-static inline int zone_to_nid(struct zone *zone)
+static inline int zone_to_nid(const struct zone *zone)
 {
 	return zone->node;
 }
@@ -1620,7 +1620,7 @@ static inline void zone_set_nid(struct zone *zone, int nid)
 	zone->node = nid;
 }
 #else
-static inline int zone_to_nid(struct zone *zone)
+static inline int zone_to_nid(const struct zone *zone)
 {
 	return 0;
 }
@@ -1647,7 +1647,7 @@ static inline int is_highmem_idx(enum zone_type idx)
  * @zone: pointer to struct zone variable
  * Return: 1 for a highmem zone, 0 otherwise
  */
-static inline int is_highmem(struct zone *zone)
+static inline int is_highmem(const struct zone *zone)
 {
 	return is_highmem_idx(zone_idx(zone));
 }
@@ -1713,12 +1713,12 @@ static inline struct zone *zonelist_zone(struct zoneref *zoneref)
 	return zoneref->zone;
 }
 
-static inline int zonelist_zone_idx(struct zoneref *zoneref)
+static inline int zonelist_zone_idx(const struct zoneref *zoneref)
 {
 	return zoneref->zone_idx;
 }
 
-static inline int zonelist_node_idx(struct zoneref *zoneref)
+static inline int zonelist_node_idx(const struct zoneref *zoneref)
 {
 	return zone_to_nid(zoneref->zone);
 }
@@ -2021,7 +2021,7 @@ static inline struct page *__section_mem_map_addr(struct mem_section *section)
 	return (struct page *)map;
 }
 
-static inline int present_section(struct mem_section *section)
+static inline int present_section(const struct mem_section *section)
 {
 	return (section && (section->section_mem_map & SECTION_MARKED_PRESENT));
 }
@@ -2031,12 +2031,12 @@ static inline int present_section_nr(unsigned long nr)
 	return present_section(__nr_to_section(nr));
 }
 
-static inline int valid_section(struct mem_section *section)
+static inline int valid_section(const struct mem_section *section)
 {
 	return (section && (section->section_mem_map & SECTION_HAS_MEM_MAP));
 }
 
-static inline int early_section(struct mem_section *section)
+static inline int early_section(const struct mem_section *section)
 {
 	return (section && (section->section_mem_map & SECTION_IS_EARLY));
 }
@@ -2046,27 +2046,27 @@ static inline int valid_section_nr(unsigned long nr)
 	return valid_section(__nr_to_section(nr));
 }
 
-static inline int online_section(struct mem_section *section)
+static inline int online_section(const struct mem_section *section)
 {
 	return (section && (section->section_mem_map & SECTION_IS_ONLINE));
 }
 
 #ifdef CONFIG_ZONE_DEVICE
-static inline int online_device_section(struct mem_section *section)
+static inline int online_device_section(const struct mem_section *section)
 {
 	unsigned long flags = SECTION_IS_ONLINE | SECTION_TAINT_ZONE_DEVICE;
 
 	return section && ((section->section_mem_map & flags) == flags);
 }
 #else
-static inline int online_device_section(struct mem_section *section)
+static inline int online_device_section(const struct mem_section *section)
 {
 	return 0;
 }
 #endif
 
 #ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT
-static inline int preinited_vmemmap_section(struct mem_section *section)
+static inline int preinited_vmemmap_section(const struct mem_section *section)
 {
 	return (section &&
 		(section->section_mem_map & SECTION_IS_VMEMMAP_PREINIT));
@@ -2076,7 +2076,7 @@ void sparse_vmemmap_init_nid_early(int nid);
 void sparse_vmemmap_init_nid_late(int nid);
 
 #else
-static inline int preinited_vmemmap_section(struct mem_section *section)
+static inline int preinited_vmemmap_section(const struct mem_section *section)
 {
 	return 0;
 }
-- 
2.47.2



* [PATCH v6 04/12] fs: constify mapping related test functions for improved const-correctness
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (2 preceding siblings ...)
  2025-09-01 20:50 ` [PATCH v6 03/12] mm: constify zone " Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-02 10:42   ` Jan Kara
  2025-09-02 10:57   ` Christian Brauner
  2025-09-01 20:50 ` [PATCH v6 05/12] mm: constify process_shares_mm() " Max Kellermann
                   ` (10 subsequent siblings)
  14 siblings, 2 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

We select certain test functions which either invoke each other, invoke
functions that are already const-ified, or invoke no further functions.

It is therefore relatively trivial to const-ify them, which provides a
basis for const-ification further up the call stack.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 include/linux/fs.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 3b9f54446db0..0b43edb33be2 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -537,7 +537,7 @@ struct address_space {
 /*
  * Returns true if any of the pages in the mapping are marked with the tag.
  */
-static inline bool mapping_tagged(struct address_space *mapping, xa_mark_t tag)
+static inline bool mapping_tagged(const struct address_space *mapping, xa_mark_t tag)
 {
 	return xa_marked(&mapping->i_pages, tag);
 }
@@ -585,7 +585,7 @@ static inline void i_mmap_assert_write_locked(struct address_space *mapping)
 /*
  * Might pages of this file be mapped into userspace?
  */
-static inline int mapping_mapped(struct address_space *mapping)
+static inline int mapping_mapped(const struct address_space *mapping)
 {
 	return	!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root);
 }
@@ -599,7 +599,7 @@ static inline int mapping_mapped(struct address_space *mapping)
  * If i_mmap_writable is negative, no new writable mappings are allowed. You
  * can only deny writable mappings, if none exists right now.
  */
-static inline int mapping_writably_mapped(struct address_space *mapping)
+static inline int mapping_writably_mapped(const struct address_space *mapping)
 {
 	return atomic_read(&mapping->i_mmap_writable) > 0;
 }
-- 
2.47.2



* [PATCH v6 05/12] mm: constify process_shares_mm() for improved const-correctness
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (3 preceding siblings ...)
  2025-09-01 20:50 ` [PATCH v6 04/12] fs: constify mapping related test functions for improved const-correctness Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-02  8:03   ` David Hildenbrand
  2025-09-01 20:50 ` [PATCH v6 06/12] mm, s390: constify mapping related test/getter functions Max Kellermann
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

This function only reads from the pointer arguments.

Local (loop) variables are also annotated with `const` to clarify that
they will not be written to.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 include/linux/mm.h | 2 +-
 mm/oom_kill.c      | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 18deb14cb1f5..f70c6b4d5f80 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3840,7 +3840,7 @@ static inline int in_gate_area(struct mm_struct *mm, unsigned long addr)
 }
 #endif	/* __HAVE_ARCH_GATE_AREA */
 
-extern bool process_shares_mm(struct task_struct *p, struct mm_struct *mm);
+bool process_shares_mm(const struct task_struct *p, const struct mm_struct *mm);
 
 void drop_slab(void);
 
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 17650f0b516e..58bd4cf71d52 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -490,12 +490,12 @@ static bool oom_killer_disabled __read_mostly;
  * task's threads: if one of those is using this mm then this task was also
  * using it.
  */
-bool process_shares_mm(struct task_struct *p, struct mm_struct *mm)
+bool process_shares_mm(const struct task_struct *p, const struct mm_struct *mm)
 {
-	struct task_struct *t;
+	const struct task_struct *t;
 
 	for_each_thread(p, t) {
-		struct mm_struct *t_mm = READ_ONCE(t->mm);
+		const struct mm_struct *t_mm = READ_ONCE(t->mm);
 		if (t_mm)
 			return t_mm == mm;
 	}
-- 
2.47.2



* [PATCH v6 06/12] mm, s390: constify mapping related test/getter functions
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (4 preceding siblings ...)
  2025-09-01 20:50 ` [PATCH v6 05/12] mm: constify process_shares_mm() " Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-02  6:13   ` Lorenzo Stoakes
  2025-09-02  8:04   ` David Hildenbrand
  2025-09-01 20:50 ` [PATCH v6 07/12] parisc: constify mmap_upper_limit() parameter Max Kellermann
                   ` (8 subsequent siblings)
  14 siblings, 2 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

For improved const-correctness.

We select certain test functions which call only each other,
functions that are already const-ified, or no other functions.

It is therefore relatively trivial to const-ify them, which provides
a basis for const-ification further up the call stack.

(Even though seemingly unrelated, this also constifies the pointer
parameter of mmap_is_legacy() in arch/s390/mm/mmap.c because a copy of
the function exists in mm/util.c.)

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/mmap.c     |  2 +-
 include/linux/mm.h      |  6 +++---
 include/linux/pagemap.h |  2 +-
 mm/util.c               | 10 +++++-----
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index 547104ccc22a..e188cb6d4946 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -27,7 +27,7 @@ static unsigned long stack_maxrandom_size(void)
 	return STACK_RND_MASK << PAGE_SHIFT;
 }
 
-static inline int mmap_is_legacy(struct rlimit *rlim_stack)
+static inline int mmap_is_legacy(const struct rlimit *rlim_stack)
 {
 	if (current->personality & ADDR_COMPAT_LAYOUT)
 		return 1;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f70c6b4d5f80..23864c3519d6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -986,7 +986,7 @@ static inline bool vma_is_shmem(const struct vm_area_struct *vma) { return false
 static inline bool vma_is_anon_shmem(const struct vm_area_struct *vma) { return false; }
 #endif
 
-int vma_is_stack_for_current(struct vm_area_struct *vma);
+int vma_is_stack_for_current(const struct vm_area_struct *vma);
 
 /* flush_tlb_range() takes a vma, not a mm, and can care about flags */
 #define TLB_FLUSH_VMA(mm,flags) { .vm_mm = (mm), .vm_flags = (flags) }
@@ -2585,7 +2585,7 @@ void folio_add_pin(struct folio *folio);
 
 int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc);
 int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
-			struct task_struct *task, bool bypass_rlim);
+			const struct task_struct *task, bool bypass_rlim);
 
 struct kvec;
 struct page *get_dump_page(unsigned long addr, int *locked);
@@ -3348,7 +3348,7 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
 	     avc; avc = anon_vma_interval_tree_iter_next(avc, start, last))
 
 /* mmap.c */
-extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
+extern int __vm_enough_memory(const struct mm_struct *mm, long pages, int cap_sys_admin);
 extern int insert_vm_struct(struct mm_struct *, struct vm_area_struct *);
 extern void exit_mmap(struct mm_struct *);
 bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct vm_area_struct *vma,
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 1d3803c397e9..185644e288ea 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -551,7 +551,7 @@ static inline void filemap_nr_thps_dec(struct address_space *mapping)
 #endif
 }
 
-struct address_space *folio_mapping(struct folio *);
+struct address_space *folio_mapping(const struct folio *folio);
 
 /**
  * folio_flush_mapping - Find the file mapping this folio belongs to.
diff --git a/mm/util.c b/mm/util.c
index d235b74f7aff..241d2eaf26ca 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -315,7 +315,7 @@ void *memdup_user_nul(const void __user *src, size_t len)
 EXPORT_SYMBOL(memdup_user_nul);
 
 /* Check if the vma is being used as a stack by this task */
-int vma_is_stack_for_current(struct vm_area_struct *vma)
+int vma_is_stack_for_current(const struct vm_area_struct *vma)
 {
 	struct task_struct * __maybe_unused t = current;
 
@@ -410,7 +410,7 @@ unsigned long arch_mmap_rnd(void)
 	return rnd << PAGE_SHIFT;
 }
 
-static int mmap_is_legacy(struct rlimit *rlim_stack)
+static int mmap_is_legacy(const struct rlimit *rlim_stack)
 {
 	if (current->personality & ADDR_COMPAT_LAYOUT)
 		return 1;
@@ -504,7 +504,7 @@ EXPORT_SYMBOL_IF_KUNIT(arch_pick_mmap_layout);
  * * -ENOMEM if RLIMIT_MEMLOCK would be exceeded.
  */
 int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
-			struct task_struct *task, bool bypass_rlim)
+			const struct task_struct *task, bool bypass_rlim)
 {
 	unsigned long locked_vm, limit;
 	int ret = 0;
@@ -688,7 +688,7 @@ struct anon_vma *folio_anon_vma(const struct folio *folio)
  * You can call this for folios which aren't in the swap cache or page
  * cache and it will return NULL.
  */
-struct address_space *folio_mapping(struct folio *folio)
+struct address_space *folio_mapping(const struct folio *folio)
 {
 	struct address_space *mapping;
 
@@ -926,7 +926,7 @@ EXPORT_SYMBOL_GPL(vm_memory_committed);
  * Note this is a helper function intended to be used by LSMs which
  * wish to use this logic.
  */
-int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
+int __vm_enough_memory(const struct mm_struct *mm, long pages, int cap_sys_admin)
 {
 	long allowed;
 	unsigned long bytes_failed;
-- 
2.47.2



* [PATCH v6 07/12] parisc: constify mmap_upper_limit() parameter
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (5 preceding siblings ...)
  2025-09-01 20:50 ` [PATCH v6 06/12] mm, s390: constify mapping related test/getter functions Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-02  6:13   ` Lorenzo Stoakes
  2025-09-02  8:04   ` David Hildenbrand
  2025-09-01 20:50 ` [PATCH v6 08/12] mm: constify arch_pick_mmap_layout() for improved const-correctness Max Kellermann
                   ` (7 subsequent siblings)
  14 siblings, 2 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

For improved const-correctness.

This piece is necessary to make the `rlim_stack` parameter to
mmap_base() const.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/parisc/include/asm/processor.h | 2 +-
 arch/parisc/kernel/sys_parisc.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/parisc/include/asm/processor.h b/arch/parisc/include/asm/processor.h
index 4c14bde39aac..dd0b5e199559 100644
--- a/arch/parisc/include/asm/processor.h
+++ b/arch/parisc/include/asm/processor.h
@@ -48,7 +48,7 @@
 #ifndef __ASSEMBLER__
 
 struct rlimit;
-unsigned long mmap_upper_limit(struct rlimit *rlim_stack);
+unsigned long mmap_upper_limit(const struct rlimit *rlim_stack);
 unsigned long calc_max_stack_size(unsigned long stack_max);
 
 /*
diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index f852fe274abe..b2cdbb8a12b1 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -77,7 +77,7 @@ unsigned long calc_max_stack_size(unsigned long stack_max)
  * indicating that "current" should be used instead of a passed-in
  * value from the exec bprm as done with arch_pick_mmap_layout().
  */
-unsigned long mmap_upper_limit(struct rlimit *rlim_stack)
+unsigned long mmap_upper_limit(const struct rlimit *rlim_stack)
 {
 	unsigned long stack_base;
 
-- 
2.47.2



* [PATCH v6 08/12] mm: constify arch_pick_mmap_layout() for improved const-correctness
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (6 preceding siblings ...)
  2025-09-01 20:50 ` [PATCH v6 07/12] parisc: constify mmap_upper_limit() parameter Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-02  6:15   ` Lorenzo Stoakes
  2025-09-02  8:05   ` David Hildenbrand
  2025-09-01 20:50 ` [PATCH v6 09/12] mm: constify ptdesc_pmd_pts_count() and folio_get_private() Max Kellermann
                   ` (6 subsequent siblings)
  14 siblings, 2 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

This function only reads from the rlimit pointer; it writes to the
mm_struct pointer, which therefore remains non-const.

All callees are either already const-ified or, in the case of
internal functions, are const-ified by this patch.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/mmap.c              | 4 ++--
 arch/sparc/kernel/sys_sparc_64.c | 2 +-
 arch/x86/mm/mmap.c               | 6 +++---
 include/linux/sched/mm.h         | 4 ++--
 mm/util.c                        | 6 +++---
 5 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index e188cb6d4946..197c1d9497a7 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -47,7 +47,7 @@ static unsigned long mmap_base_legacy(unsigned long rnd)
 }
 
 static inline unsigned long mmap_base(unsigned long rnd,
-				      struct rlimit *rlim_stack)
+				      const struct rlimit *rlim_stack)
 {
 	unsigned long gap = rlim_stack->rlim_cur;
 	unsigned long pad = stack_maxrandom_size() + stack_guard_gap;
@@ -169,7 +169,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
  * This function, called very early during the creation of a new
  * process VM image, sets up which VM layout function to use:
  */
-void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
+void arch_pick_mmap_layout(struct mm_struct *mm, const struct rlimit *rlim_stack)
 {
 	unsigned long random_factor = 0UL;
 
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 785e9909340f..55faf2effa46 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -294,7 +294,7 @@ static unsigned long mmap_rnd(void)
 	return rnd << PAGE_SHIFT;
 }
 
-void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
+void arch_pick_mmap_layout(struct mm_struct *mm, const struct rlimit *rlim_stack)
 {
 	unsigned long random_factor = mmap_rnd();
 	unsigned long gap;
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index 708f85dc9380..82f3a987f7cf 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -80,7 +80,7 @@ unsigned long arch_mmap_rnd(void)
 }
 
 static unsigned long mmap_base(unsigned long rnd, unsigned long task_size,
-			       struct rlimit *rlim_stack)
+			       const struct rlimit *rlim_stack)
 {
 	unsigned long gap = rlim_stack->rlim_cur;
 	unsigned long pad = stack_maxrandom_size(task_size) + stack_guard_gap;
@@ -110,7 +110,7 @@ static unsigned long mmap_legacy_base(unsigned long rnd,
  */
 static void arch_pick_mmap_base(unsigned long *base, unsigned long *legacy_base,
 		unsigned long random_factor, unsigned long task_size,
-		struct rlimit *rlim_stack)
+		const struct rlimit *rlim_stack)
 {
 	*legacy_base = mmap_legacy_base(random_factor, task_size);
 	if (mmap_is_legacy())
@@ -119,7 +119,7 @@ static void arch_pick_mmap_base(unsigned long *base, unsigned long *legacy_base,
 		*base = mmap_base(random_factor, task_size, rlim_stack);
 }
 
-void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
+void arch_pick_mmap_layout(struct mm_struct *mm, const struct rlimit *rlim_stack)
 {
 	if (mmap_is_legacy())
 		mm_flags_clear(MMF_TOPDOWN, mm);
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 2201da0afecc..0232d983b715 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -178,7 +178,7 @@ static inline void mm_update_next_owner(struct mm_struct *mm)
 #endif
 
 extern void arch_pick_mmap_layout(struct mm_struct *mm,
-				  struct rlimit *rlim_stack);
+				  const struct rlimit *rlim_stack);
 
 unsigned long
 arch_get_unmapped_area(struct file *filp, unsigned long addr,
@@ -211,7 +211,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 				  unsigned long flags, vm_flags_t vm_flags);
 #else
 static inline void arch_pick_mmap_layout(struct mm_struct *mm,
-					 struct rlimit *rlim_stack) {}
+					 const struct rlimit *rlim_stack) {}
 #endif
 
 static inline bool in_vfork(struct task_struct *tsk)
diff --git a/mm/util.c b/mm/util.c
index 241d2eaf26ca..77462027ad24 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -431,7 +431,7 @@ static int mmap_is_legacy(const struct rlimit *rlim_stack)
 #define MIN_GAP		(SZ_128M)
 #define MAX_GAP		(STACK_TOP / 6 * 5)
 
-static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
+static unsigned long mmap_base(const unsigned long rnd, const struct rlimit *rlim_stack)
 {
 #ifdef CONFIG_STACK_GROWSUP
 	/*
@@ -462,7 +462,7 @@ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
 #endif
 }
 
-void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
+void arch_pick_mmap_layout(struct mm_struct *mm, const struct rlimit *rlim_stack)
 {
 	unsigned long random_factor = 0UL;
 
@@ -478,7 +478,7 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 	}
 }
 #elif defined(CONFIG_MMU) && !defined(HAVE_ARCH_PICK_MMAP_LAYOUT)
-void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
+void arch_pick_mmap_layout(struct mm_struct *mm, const struct rlimit *rlim_stack)
 {
 	mm->mmap_base = TASK_UNMAPPED_BASE;
 	mm_flags_clear(MMF_TOPDOWN, mm);
-- 
2.47.2



* [PATCH v6 09/12] mm: constify ptdesc_pmd_pts_count() and folio_get_private()
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (7 preceding siblings ...)
  2025-09-01 20:50 ` [PATCH v6 08/12] mm: constify arch_pick_mmap_layout() for improved const-correctness Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-01 20:50 ` [PATCH v6 10/12] mm: constify various inline functions for improved const-correctness Max Kellermann
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

These functions from mm_types.h are trivial getters that should never
write to the given pointers.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm_types.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index d934a3a5b443..275e8060d918 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -632,7 +632,7 @@ static inline void ptdesc_pmd_pts_dec(struct ptdesc *ptdesc)
 	atomic_dec(&ptdesc->pt_share_count);
 }
 
-static inline int ptdesc_pmd_pts_count(struct ptdesc *ptdesc)
+static inline int ptdesc_pmd_pts_count(const struct ptdesc *ptdesc)
 {
 	return atomic_read(&ptdesc->pt_share_count);
 }
@@ -660,7 +660,7 @@ static inline void set_page_private(struct page *page, unsigned long private)
 	page->private = private;
 }
 
-static inline void *folio_get_private(struct folio *folio)
+static inline void *folio_get_private(const struct folio *folio)
 {
 	return folio->private;
 }
-- 
2.47.2



* [PATCH v6 10/12] mm: constify various inline functions for improved const-correctness
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (8 preceding siblings ...)
  2025-09-01 20:50 ` [PATCH v6 09/12] mm: constify ptdesc_pmd_pts_count() and folio_get_private() Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-02  6:16   ` Lorenzo Stoakes
  2025-09-02  8:05   ` David Hildenbrand
  2025-09-01 20:50 ` [PATCH v6 11/12] mm: constify assert/test functions in mm.h Max Kellermann
                   ` (4 subsequent siblings)
  14 siblings, 2 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

We select certain test functions, plus folio_migrate_refs(), from
mm_inline.h which call only each other, functions that are already
const-ified, or no other functions.

It is therefore relatively trivial to const-ify them, which provides
a basis for const-ification further up the call stack.

The exception is folio_migrate_refs(), which does write to the "new"
folio pointer; there, only the "old" folio pointer is constified: its
"flags" field is read, but nothing is written.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm_inline.h | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 150302b4a905..d6c1011b38f2 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -25,7 +25,7 @@
  * 0 if @folio is a normal anonymous folio, a tmpfs folio or otherwise
  * ram or swap backed folio.
  */
-static inline int folio_is_file_lru(struct folio *folio)
+static inline int folio_is_file_lru(const struct folio *folio)
 {
 	return !folio_test_swapbacked(folio);
 }
@@ -84,7 +84,7 @@ static __always_inline void __folio_clear_lru_flags(struct folio *folio)
  * Return: The LRU list a folio should be on, as an index
  * into the array of LRU lists.
  */
-static __always_inline enum lru_list folio_lru_list(struct folio *folio)
+static __always_inline enum lru_list folio_lru_list(const struct folio *folio)
 {
 	enum lru_list lru;
 
@@ -141,7 +141,7 @@ static inline int lru_tier_from_refs(int refs, bool workingset)
 	return workingset ? MAX_NR_TIERS - 1 : order_base_2(refs);
 }
 
-static inline int folio_lru_refs(struct folio *folio)
+static inline int folio_lru_refs(const struct folio *folio)
 {
 	unsigned long flags = READ_ONCE(folio->flags.f);
 
@@ -154,14 +154,14 @@ static inline int folio_lru_refs(struct folio *folio)
 	return ((flags & LRU_REFS_MASK) >> LRU_REFS_PGOFF) + 1;
 }
 
-static inline int folio_lru_gen(struct folio *folio)
+static inline int folio_lru_gen(const struct folio *folio)
 {
 	unsigned long flags = READ_ONCE(folio->flags.f);
 
 	return ((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
 }
 
-static inline bool lru_gen_is_active(struct lruvec *lruvec, int gen)
+static inline bool lru_gen_is_active(const struct lruvec *lruvec, int gen)
 {
 	unsigned long max_seq = lruvec->lrugen.max_seq;
 
@@ -217,12 +217,13 @@ static inline void lru_gen_update_size(struct lruvec *lruvec, struct folio *foli
 	VM_WARN_ON_ONCE(lru_gen_is_active(lruvec, old_gen) && !lru_gen_is_active(lruvec, new_gen));
 }
 
-static inline unsigned long lru_gen_folio_seq(struct lruvec *lruvec, struct folio *folio,
+static inline unsigned long lru_gen_folio_seq(const struct lruvec *lruvec,
+					      const struct folio *folio,
 					      bool reclaiming)
 {
 	int gen;
 	int type = folio_is_file_lru(folio);
-	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+	const struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	/*
 	 * +-----------------------------------+-----------------------------------+
@@ -302,7 +303,7 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
 	return true;
 }
 
-static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+static inline void folio_migrate_refs(struct folio *new, const struct folio *old)
 {
 	unsigned long refs = READ_ONCE(old->flags.f) & LRU_REFS_MASK;
 
@@ -330,7 +331,7 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
 	return false;
 }
 
-static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+static inline void folio_migrate_refs(struct folio *new, const struct folio *old)
 {
 
 }
@@ -508,7 +509,7 @@ static inline void dec_tlb_flush_pending(struct mm_struct *mm)
 	atomic_dec(&mm->tlb_flush_pending);
 }
 
-static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
+static inline bool mm_tlb_flush_pending(const struct mm_struct *mm)
 {
 	/*
 	 * Must be called after having acquired the PTL; orders against that
@@ -521,7 +522,7 @@ static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
 	return atomic_read(&mm->tlb_flush_pending);
 }
 
-static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
+static inline bool mm_tlb_flush_nested(const struct mm_struct *mm)
 {
 	/*
 	 * Similar to mm_tlb_flush_pending(), we must have acquired the PTL
@@ -605,7 +606,7 @@ pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
 	return false;
 }
 
-static inline bool vma_has_recency(struct vm_area_struct *vma)
+static inline bool vma_has_recency(const struct vm_area_struct *vma)
 {
 	if (vma->vm_flags & (VM_SEQ_READ | VM_RAND_READ))
 		return false;
-- 
2.47.2



* [PATCH v6 11/12] mm: constify assert/test functions in mm.h
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (9 preceding siblings ...)
  2025-09-01 20:50 ` [PATCH v6 10/12] mm: constify various inline functions for improved const-correctness Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-02  6:17   ` Lorenzo Stoakes
  2025-09-02  8:06   ` David Hildenbrand
  2025-09-01 20:50 ` [PATCH v6 12/12] mm: constify highmem related functions for improved const-correctness Max Kellermann
                   ` (3 subsequent siblings)
  14 siblings, 2 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

For improved const-correctness.

We select certain assert and test functions which call only each
other, functions that are already const-ified, or no other functions.

It is therefore relatively trivial to const-ify them, which provides
a basis for const-ification further up the call stack.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
---
 include/linux/mm.h | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 23864c3519d6..c3767688771c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -703,7 +703,7 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 		mmap_read_unlock(vmf->vma->vm_mm);
 }
 
-static inline void assert_fault_locked(struct vm_fault *vmf)
+static inline void assert_fault_locked(const struct vm_fault *vmf)
 {
 	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
 		vma_assert_locked(vmf->vma);
@@ -716,7 +716,7 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 	mmap_read_unlock(vmf->vma->vm_mm);
 }
 
-static inline void assert_fault_locked(struct vm_fault *vmf)
+static inline void assert_fault_locked(const struct vm_fault *vmf)
 {
 	mmap_assert_locked(vmf->vma->vm_mm);
 }
@@ -859,7 +859,7 @@ static inline bool vma_is_initial_stack(const struct vm_area_struct *vma)
 		vma->vm_end >= vma->vm_mm->start_stack;
 }
 
-static inline bool vma_is_temporary_stack(struct vm_area_struct *vma)
+static inline bool vma_is_temporary_stack(const struct vm_area_struct *vma)
 {
 	int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
 
@@ -873,7 +873,7 @@ static inline bool vma_is_temporary_stack(struct vm_area_struct *vma)
 	return false;
 }
 
-static inline bool vma_is_foreign(struct vm_area_struct *vma)
+static inline bool vma_is_foreign(const struct vm_area_struct *vma)
 {
 	if (!current->mm)
 		return true;
@@ -884,7 +884,7 @@ static inline bool vma_is_foreign(struct vm_area_struct *vma)
 	return false;
 }
 
-static inline bool vma_is_accessible(struct vm_area_struct *vma)
+static inline bool vma_is_accessible(const struct vm_area_struct *vma)
 {
 	return vma->vm_flags & VM_ACCESS_FLAGS;
 }
@@ -895,7 +895,7 @@ static inline bool is_shared_maywrite(vm_flags_t vm_flags)
 		(VM_SHARED | VM_MAYWRITE);
 }
 
-static inline bool vma_is_shared_maywrite(struct vm_area_struct *vma)
+static inline bool vma_is_shared_maywrite(const struct vm_area_struct *vma)
 {
 	return is_shared_maywrite(vma->vm_flags);
 }
@@ -1839,7 +1839,7 @@ static inline struct folio *pfn_folio(unsigned long pfn)
 }
 
 #ifdef CONFIG_MMU
-static inline pte_t mk_pte(struct page *page, pgprot_t pgprot)
+static inline pte_t mk_pte(const struct page *page, pgprot_t pgprot)
 {
 	return pfn_pte(page_to_pfn(page), pgprot);
 }
@@ -1854,7 +1854,7 @@ static inline pte_t mk_pte(struct page *page, pgprot_t pgprot)
  *
  * Return: A page table entry suitable for mapping this folio.
  */
-static inline pte_t folio_mk_pte(struct folio *folio, pgprot_t pgprot)
+static inline pte_t folio_mk_pte(const struct folio *folio, pgprot_t pgprot)
 {
 	return pfn_pte(folio_pfn(folio), pgprot);
 }
@@ -1870,7 +1870,7 @@ static inline pte_t folio_mk_pte(struct folio *folio, pgprot_t pgprot)
  *
  * Return: A page table entry suitable for mapping this folio.
  */
-static inline pmd_t folio_mk_pmd(struct folio *folio, pgprot_t pgprot)
+static inline pmd_t folio_mk_pmd(const struct folio *folio, pgprot_t pgprot)
 {
 	return pmd_mkhuge(pfn_pmd(folio_pfn(folio), pgprot));
 }
@@ -1886,7 +1886,7 @@ static inline pmd_t folio_mk_pmd(struct folio *folio, pgprot_t pgprot)
  *
  * Return: A page table entry suitable for mapping this folio.
  */
-static inline pud_t folio_mk_pud(struct folio *folio, pgprot_t pgprot)
+static inline pud_t folio_mk_pud(const struct folio *folio, pgprot_t pgprot)
 {
 	return pud_mkhuge(pfn_pud(folio_pfn(folio), pgprot));
 }
@@ -3488,7 +3488,7 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 	return mtree_load(&mm->mm_mt, addr);
 }
 
-static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+static inline unsigned long stack_guard_start_gap(const struct vm_area_struct *vma)
 {
 	if (vma->vm_flags & VM_GROWSDOWN)
 		return stack_guard_gap;
@@ -3500,7 +3500,7 @@ static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
 	return 0;
 }
 
-static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
+static inline unsigned long vm_start_gap(const struct vm_area_struct *vma)
 {
 	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
@@ -3511,7 +3511,7 @@ static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 	return vm_start;
 }
 
-static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
+static inline unsigned long vm_end_gap(const struct vm_area_struct *vma)
 {
 	unsigned long vm_end = vma->vm_end;
 
@@ -3523,7 +3523,7 @@ static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
 	return vm_end;
 }
 
-static inline unsigned long vma_pages(struct vm_area_struct *vma)
+static inline unsigned long vma_pages(const struct vm_area_struct *vma)
 {
 	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
 }
@@ -3540,7 +3540,7 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
 	return vma;
 }
 
-static inline bool range_in_vma(struct vm_area_struct *vma,
+static inline bool range_in_vma(const struct vm_area_struct *vma,
 				unsigned long start, unsigned long end)
 {
 	return (vma && vma->vm_start <= start && end <= vma->vm_end);
@@ -3656,7 +3656,7 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
  * Indicates whether GUP can follow a PROT_NONE mapped page, or whether
  * a (NUMA hinting) fault is required.
  */
-static inline bool gup_can_follow_protnone(struct vm_area_struct *vma,
+static inline bool gup_can_follow_protnone(const struct vm_area_struct *vma,
 					   unsigned int flags)
 {
 	/*
@@ -3786,7 +3786,7 @@ static inline bool debug_guardpage_enabled(void)
 	return static_branch_unlikely(&_debug_guardpage_enabled);
 }
 
-static inline bool page_is_guard(struct page *page)
+static inline bool page_is_guard(const struct page *page)
 {
 	if (!debug_guardpage_enabled())
 		return false;
@@ -3817,7 +3817,7 @@ static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
 static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
 static inline unsigned int debug_guardpage_minorder(void) { return 0; }
 static inline bool debug_guardpage_enabled(void) { return false; }
-static inline bool page_is_guard(struct page *page) { return false; }
+static inline bool page_is_guard(const struct page *page) { return false; }
 static inline bool set_page_guard(struct zone *zone, struct page *page,
 			unsigned int order) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
@@ -3899,7 +3899,7 @@ void vmemmap_free(unsigned long start, unsigned long end,
 #endif
 
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-static inline unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
+static inline unsigned long vmem_altmap_offset(const struct vmem_altmap *altmap)
 {
 	/* number of pfns from base where pfn_to_page() is valid */
 	if (altmap)
@@ -3913,7 +3913,7 @@ static inline void vmem_altmap_free(struct vmem_altmap *altmap,
 	altmap->alloc -= nr_pfns;
 }
 #else
-static inline unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
+static inline unsigned long vmem_altmap_offset(const struct vmem_altmap *altmap)
 {
 	return 0;
 }
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 12/12] mm: constify highmem related functions for improved const-correctness
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (10 preceding siblings ...)
  2025-09-01 20:50 ` [PATCH v6 11/12] mm: constify assert/test functions in mm.h Max Kellermann
@ 2025-09-01 20:50 ` Max Kellermann
  2025-09-02  6:17   ` Lorenzo Stoakes
  2025-09-02  8:11   ` David Hildenbrand
  2025-09-01 21:34 ` [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Vlastimil Babka
                   ` (2 subsequent siblings)
  14 siblings, 2 replies; 32+ messages in thread
From: Max Kellermann @ 2025-09-01 20:50 UTC (permalink / raw)
  To: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, max.kellermann, thuth, broonie, osalvador,
	jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc, linux-s390,
	sparclinux, linux-fsdevel

Many functions in mm/highmem.c neither write to the given pointers
nor call functions that take non-const pointers; they can therefore
be constified.

This includes functions like kunmap(), which might conceivably be
implemented in a way that writes to the pointer (e.g. to update
reference counters or mapping fields), but currently is not.

kmap(), on the other hand, cannot be made const because it calls
set_page_address(), which is non-const on some
architectures/configurations.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
---
 arch/arm/include/asm/highmem.h    |  6 +++---
 arch/xtensa/include/asm/highmem.h |  2 +-
 include/linux/highmem-internal.h  | 36 +++++++++++++++----------------
 include/linux/highmem.h           |  8 +++----
 mm/highmem.c                      | 10 ++++-----
 5 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index b4b66220952d..bdb209e002a4 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -46,9 +46,9 @@ extern pte_t *pkmap_page_table;
 #endif
 
 #ifdef ARCH_NEEDS_KMAP_HIGH_GET
-extern void *kmap_high_get(struct page *page);
+extern void *kmap_high_get(const struct page *page);
 
-static inline void *arch_kmap_local_high_get(struct page *page)
+static inline void *arch_kmap_local_high_get(const struct page *page)
 {
 	if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !cache_is_vivt())
 		return NULL;
@@ -57,7 +57,7 @@ static inline void *arch_kmap_local_high_get(struct page *page)
 #define arch_kmap_local_high_get arch_kmap_local_high_get
 
 #else /* ARCH_NEEDS_KMAP_HIGH_GET */
-static inline void *kmap_high_get(struct page *page)
+static inline void *kmap_high_get(const struct page *page)
 {
 	return NULL;
 }
diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
index 34b8b620e7f1..b55235f4adac 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -29,7 +29,7 @@
 
 #if DCACHE_WAY_SIZE > PAGE_SIZE
 #define get_pkmap_color get_pkmap_color
-static inline int get_pkmap_color(struct page *page)
+static inline int get_pkmap_color(const struct page *page)
 {
 	return DCACHE_ALIAS(page_to_phys(page));
 }
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 36053c3d6d64..0574c21ca45d 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -7,7 +7,7 @@
  */
 #ifdef CONFIG_KMAP_LOCAL
 void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot);
-void *__kmap_local_page_prot(struct page *page, pgprot_t prot);
+void *__kmap_local_page_prot(const struct page *page, pgprot_t prot);
 void kunmap_local_indexed(const void *vaddr);
 void kmap_local_fork(struct task_struct *tsk);
 void __kmap_local_sched_out(void);
@@ -33,7 +33,7 @@ static inline void kmap_flush_tlb(unsigned long addr) { }
 #endif
 
 void *kmap_high(struct page *page);
-void kunmap_high(struct page *page);
+void kunmap_high(const struct page *page);
 void __kmap_flush_unused(void);
 struct page *__kmap_to_page(void *addr);
 
@@ -50,7 +50,7 @@ static inline void *kmap(struct page *page)
 	return addr;
 }
 
-static inline void kunmap(struct page *page)
+static inline void kunmap(const struct page *page)
 {
 	might_sleep();
 	if (!PageHighMem(page))
@@ -68,12 +68,12 @@ static inline void kmap_flush_unused(void)
 	__kmap_flush_unused();
 }
 
-static inline void *kmap_local_page(struct page *page)
+static inline void *kmap_local_page(const struct page *page)
 {
 	return __kmap_local_page_prot(page, kmap_prot);
 }
 
-static inline void *kmap_local_page_try_from_panic(struct page *page)
+static inline void *kmap_local_page_try_from_panic(const struct page *page)
 {
 	if (!PageHighMem(page))
 		return page_address(page);
@@ -81,13 +81,13 @@ static inline void *kmap_local_page_try_from_panic(struct page *page)
 	return NULL;
 }
 
-static inline void *kmap_local_folio(struct folio *folio, size_t offset)
+static inline void *kmap_local_folio(const struct folio *folio, size_t offset)
 {
-	struct page *page = folio_page(folio, offset / PAGE_SIZE);
+	const struct page *page = folio_page(folio, offset / PAGE_SIZE);
 	return __kmap_local_page_prot(page, kmap_prot) + offset % PAGE_SIZE;
 }
 
-static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
+static inline void *kmap_local_page_prot(const struct page *page, pgprot_t prot)
 {
 	return __kmap_local_page_prot(page, prot);
 }
@@ -102,7 +102,7 @@ static inline void __kunmap_local(const void *vaddr)
 	kunmap_local_indexed(vaddr);
 }
 
-static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+static inline void *kmap_atomic_prot(const struct page *page, pgprot_t prot)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		migrate_disable();
@@ -113,7 +113,7 @@ static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 	return __kmap_local_page_prot(page, prot);
 }
 
-static inline void *kmap_atomic(struct page *page)
+static inline void *kmap_atomic(const struct page *page)
 {
 	return kmap_atomic_prot(page, kmap_prot);
 }
@@ -173,32 +173,32 @@ static inline void *kmap(struct page *page)
 	return page_address(page);
 }
 
-static inline void kunmap_high(struct page *page) { }
+static inline void kunmap_high(const struct page *page) { }
 static inline void kmap_flush_unused(void) { }
 
-static inline void kunmap(struct page *page)
+static inline void kunmap(const struct page *page)
 {
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
 }
 
-static inline void *kmap_local_page(struct page *page)
+static inline void *kmap_local_page(const struct page *page)
 {
 	return page_address(page);
 }
 
-static inline void *kmap_local_page_try_from_panic(struct page *page)
+static inline void *kmap_local_page_try_from_panic(const struct page *page)
 {
 	return page_address(page);
 }
 
-static inline void *kmap_local_folio(struct folio *folio, size_t offset)
+static inline void *kmap_local_folio(const struct folio *folio, size_t offset)
 {
 	return folio_address(folio) + offset;
 }
 
-static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
+static inline void *kmap_local_page_prot(const struct page *page, pgprot_t prot)
 {
 	return kmap_local_page(page);
 }
@@ -215,7 +215,7 @@ static inline void __kunmap_local(const void *addr)
 #endif
 }
 
-static inline void *kmap_atomic(struct page *page)
+static inline void *kmap_atomic(const struct page *page)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		migrate_disable();
@@ -225,7 +225,7 @@ static inline void *kmap_atomic(struct page *page)
 	return page_address(page);
 }
 
-static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+static inline void *kmap_atomic_prot(const struct page *page, pgprot_t prot)
 {
 	return kmap_atomic(page);
 }
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 6234f316468c..105cc4c00cc3 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -43,7 +43,7 @@ static inline void *kmap(struct page *page);
  * Counterpart to kmap(). A NOOP for CONFIG_HIGHMEM=n and for mappings of
  * pages in the low memory area.
  */
-static inline void kunmap(struct page *page);
+static inline void kunmap(const struct page *page);
 
 /**
  * kmap_to_page - Get the page for a kmap'ed address
@@ -93,7 +93,7 @@ static inline void kmap_flush_unused(void);
  * disabling migration in order to keep the virtual address stable across
  * preemption. No caller of kmap_local_page() can rely on this side effect.
  */
-static inline void *kmap_local_page(struct page *page);
+static inline void *kmap_local_page(const struct page *page);
 
 /**
  * kmap_local_folio - Map a page in this folio for temporary usage
@@ -129,7 +129,7 @@ static inline void *kmap_local_page(struct page *page);
  * Context: Can be invoked from any context.
  * Return: The virtual address of @offset.
  */
-static inline void *kmap_local_folio(struct folio *folio, size_t offset);
+static inline void *kmap_local_folio(const struct folio *folio, size_t offset);
 
 /**
  * kmap_atomic - Atomically map a page for temporary usage - Deprecated!
@@ -176,7 +176,7 @@ static inline void *kmap_local_folio(struct folio *folio, size_t offset);
  * kunmap_atomic(vaddr2);
  * kunmap_atomic(vaddr1);
  */
-static inline void *kmap_atomic(struct page *page);
+static inline void *kmap_atomic(const struct page *page);
 
 /* Highmem related interfaces for management code */
 static inline unsigned long nr_free_highpages(void);
diff --git a/mm/highmem.c b/mm/highmem.c
index ef3189b36cad..b5c8e4c2d5d4 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -61,7 +61,7 @@ static inline int kmap_local_calc_idx(int idx)
 /*
  * Determine color of virtual address where the page should be mapped.
  */
-static inline unsigned int get_pkmap_color(struct page *page)
+static inline unsigned int get_pkmap_color(const struct page *page)
 {
 	return 0;
 }
@@ -334,7 +334,7 @@ EXPORT_SYMBOL(kmap_high);
  *
  * This can be called from any context.
  */
-void *kmap_high_get(struct page *page)
+void *kmap_high_get(const struct page *page)
 {
 	unsigned long vaddr, flags;
 
@@ -356,7 +356,7 @@ void *kmap_high_get(struct page *page)
  * If ARCH_NEEDS_KMAP_HIGH_GET is not defined then this may be called
  * only from user context.
  */
-void kunmap_high(struct page *page)
+void kunmap_high(const struct page *page)
 {
 	unsigned long vaddr;
 	unsigned long nr;
@@ -508,7 +508,7 @@ static inline void kmap_local_idx_pop(void)
 #endif
 
 #ifndef arch_kmap_local_high_get
-static inline void *arch_kmap_local_high_get(struct page *page)
+static inline void *arch_kmap_local_high_get(const struct page *page)
 {
 	return NULL;
 }
@@ -572,7 +572,7 @@ void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot)
 }
 EXPORT_SYMBOL_GPL(__kmap_local_pfn_prot);
 
-void *__kmap_local_page_prot(struct page *page, pgprot_t prot)
+void *__kmap_local_page_prot(const struct page *page, pgprot_t prot)
 {
 	void *kmap;
 
-- 
2.47.2


* Re: [PATCH v6 00/12] mm: establish const-correctness for pointer parameters
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (11 preceding siblings ...)
  2025-09-01 20:50 ` [PATCH v6 12/12] mm: constify highmem related functions for improved const-correctness Max Kellermann
@ 2025-09-01 21:34 ` Vlastimil Babka
  2025-09-02  6:19 ` Lorenzo Stoakes
  2025-09-02 10:02 ` Mike Rapoport
  14 siblings, 0 replies; 32+ messages in thread
From: Vlastimil Babka @ 2025-09-01 21:34 UTC (permalink / raw)
  To: Max Kellermann, akpm, david, axelrasmussen, yuanchu, willy, hughd,
	mhocko, linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe,
	nysal, linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On 9/1/25 22:50, Max Kellermann wrote:
> For improved const-correctness in the low-level memory-management
> subsystem, which provides a basis for further const-ification further
> up the call stack (e.g. filesystems).
> 
> This patch series, split into smaller patches, was initially posted
> as a single large patch:
> 
>  https://lore.kernel.org/lkml/20250827192233.447920-1-max.kellermann@ionos.com/
> 
> I started this work when I tried to constify the Ceph filesystem code,
> but found that to be impossible because many "mm" functions accept
> non-const pointers, even though they modify nothing.

I think (and tried to verify with a lore search) this is the first time
you have mentioned this motivation, and it's very useful to state, thanks!

> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>



* Re: [PATCH v6 06/12] mm, s390: constify mapping related test/getter functions
  2025-09-01 20:50 ` [PATCH v6 06/12] mm, s390: constify mapping related test/getter functions Max Kellermann
@ 2025-09-02  6:13   ` Lorenzo Stoakes
  2025-09-02  8:04   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: Lorenzo Stoakes @ 2025-09-02  6:13 UTC (permalink / raw)
  To: Max Kellermann
  Cc: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, Liam.Howlett, vbabka, rppt, surenb,
	vishal.moola, linux, James.Bottomley, deller, agordeev,
	gerald.schaefer, hca, gor, borntraeger, svens, davem, andreas,
	dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa, chris,
	jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang, rientjes,
	shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe, nysal,
	linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On Mon, Sep 01, 2025 at 10:50:15PM +0200, Max Kellermann wrote:
> For improved const-correctness.
>
> We select certain test functions which invoke only each other,
> already const-ified functions, or no further functions at all.
>
> It is therefore relatively trivial to const-ify them, which
> provides a basis for const-ification further up the call
> stack.
>
> (Even though seemingly unrelated, this also constifies the pointer
> parameter of mmap_is_legacy() in arch/s390/mm/mmap.c because a copy of
> the function exists in mm/util.c.)
>
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  arch/s390/mm/mmap.c     |  2 +-
>  include/linux/mm.h      |  6 +++---
>  include/linux/pagemap.h |  2 +-
>  mm/util.c               | 10 +++++-----
>  4 files changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
> index 547104ccc22a..e188cb6d4946 100644
> --- a/arch/s390/mm/mmap.c
> +++ b/arch/s390/mm/mmap.c
> @@ -27,7 +27,7 @@ static unsigned long stack_maxrandom_size(void)
>  	return STACK_RND_MASK << PAGE_SHIFT;
>  }
>
> -static inline int mmap_is_legacy(struct rlimit *rlim_stack)
> +static inline int mmap_is_legacy(const struct rlimit *rlim_stack)
>  {
>  	if (current->personality & ADDR_COMPAT_LAYOUT)
>  		return 1;
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f70c6b4d5f80..23864c3519d6 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -986,7 +986,7 @@ static inline bool vma_is_shmem(const struct vm_area_struct *vma) { return false
>  static inline bool vma_is_anon_shmem(const struct vm_area_struct *vma) { return false; }
>  #endif
>
> -int vma_is_stack_for_current(struct vm_area_struct *vma);
> +int vma_is_stack_for_current(const struct vm_area_struct *vma);
>
>  /* flush_tlb_range() takes a vma, not a mm, and can care about flags */
>  #define TLB_FLUSH_VMA(mm,flags) { .vm_mm = (mm), .vm_flags = (flags) }
> @@ -2585,7 +2585,7 @@ void folio_add_pin(struct folio *folio);
>
>  int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc);
>  int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
> -			struct task_struct *task, bool bypass_rlim);
> +			const struct task_struct *task, bool bypass_rlim);
>
>  struct kvec;
>  struct page *get_dump_page(unsigned long addr, int *locked);
> @@ -3348,7 +3348,7 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
>  	     avc; avc = anon_vma_interval_tree_iter_next(avc, start, last))
>
>  /* mmap.c */
> -extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
> +extern int __vm_enough_memory(const struct mm_struct *mm, long pages, int cap_sys_admin);
>  extern int insert_vm_struct(struct mm_struct *, struct vm_area_struct *);
>  extern void exit_mmap(struct mm_struct *);
>  bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct vm_area_struct *vma,
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 1d3803c397e9..185644e288ea 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -551,7 +551,7 @@ static inline void filemap_nr_thps_dec(struct address_space *mapping)
>  #endif
>  }
>
> -struct address_space *folio_mapping(struct folio *);
> +struct address_space *folio_mapping(const struct folio *folio);
>
>  /**
>   * folio_flush_mapping - Find the file mapping this folio belongs to.
> diff --git a/mm/util.c b/mm/util.c
> index d235b74f7aff..241d2eaf26ca 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -315,7 +315,7 @@ void *memdup_user_nul(const void __user *src, size_t len)
>  EXPORT_SYMBOL(memdup_user_nul);
>
>  /* Check if the vma is being used as a stack by this task */
> -int vma_is_stack_for_current(struct vm_area_struct *vma)
> +int vma_is_stack_for_current(const struct vm_area_struct *vma)
>  {
>  	struct task_struct * __maybe_unused t = current;
>
> @@ -410,7 +410,7 @@ unsigned long arch_mmap_rnd(void)
>  	return rnd << PAGE_SHIFT;
>  }
>
> -static int mmap_is_legacy(struct rlimit *rlim_stack)
> +static int mmap_is_legacy(const struct rlimit *rlim_stack)
>  {
>  	if (current->personality & ADDR_COMPAT_LAYOUT)
>  		return 1;
> @@ -504,7 +504,7 @@ EXPORT_SYMBOL_IF_KUNIT(arch_pick_mmap_layout);
>   * * -ENOMEM if RLIMIT_MEMLOCK would be exceeded.
>   */
>  int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
> -			struct task_struct *task, bool bypass_rlim)
> +			const struct task_struct *task, bool bypass_rlim)
>  {
>  	unsigned long locked_vm, limit;
>  	int ret = 0;
> @@ -688,7 +688,7 @@ struct anon_vma *folio_anon_vma(const struct folio *folio)
>   * You can call this for folios which aren't in the swap cache or page
>   * cache and it will return NULL.
>   */
> -struct address_space *folio_mapping(struct folio *folio)
> +struct address_space *folio_mapping(const struct folio *folio)
>  {
>  	struct address_space *mapping;
>
> @@ -926,7 +926,7 @@ EXPORT_SYMBOL_GPL(vm_memory_committed);
>   * Note this is a helper function intended to be used by LSMs which
>   * wish to use this logic.
>   */
> -int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
> +int __vm_enough_memory(const struct mm_struct *mm, long pages, int cap_sys_admin)
>  {
>  	long allowed;
>  	unsigned long bytes_failed;
> --
> 2.47.2
>

* Re: [PATCH v6 07/12] parisc: constify mmap_upper_limit() parameter
  2025-09-01 20:50 ` [PATCH v6 07/12] parisc: constify mmap_upper_limit() parameter Max Kellermann
@ 2025-09-02  6:13   ` Lorenzo Stoakes
  2025-09-02  8:04   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: Lorenzo Stoakes @ 2025-09-02  6:13 UTC (permalink / raw)
  To: Max Kellermann
  Cc: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, Liam.Howlett, vbabka, rppt, surenb,
	vishal.moola, linux, James.Bottomley, deller, agordeev,
	gerald.schaefer, hca, gor, borntraeger, svens, davem, andreas,
	dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa, chris,
	jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang, rientjes,
	shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe, nysal,
	linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On Mon, Sep 01, 2025 at 10:50:16PM +0200, Max Kellermann wrote:
> For improved const-correctness.
>
> This piece is necessary to make the `rlim_stack` parameter to
> mmap_base() const.
>
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  arch/parisc/include/asm/processor.h | 2 +-
>  arch/parisc/kernel/sys_parisc.c     | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/parisc/include/asm/processor.h b/arch/parisc/include/asm/processor.h
> index 4c14bde39aac..dd0b5e199559 100644
> --- a/arch/parisc/include/asm/processor.h
> +++ b/arch/parisc/include/asm/processor.h
> @@ -48,7 +48,7 @@
>  #ifndef __ASSEMBLER__
>
>  struct rlimit;
> -unsigned long mmap_upper_limit(struct rlimit *rlim_stack);
> +unsigned long mmap_upper_limit(const struct rlimit *rlim_stack);
>  unsigned long calc_max_stack_size(unsigned long stack_max);
>
>  /*
> diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
> index f852fe274abe..b2cdbb8a12b1 100644
> --- a/arch/parisc/kernel/sys_parisc.c
> +++ b/arch/parisc/kernel/sys_parisc.c
> @@ -77,7 +77,7 @@ unsigned long calc_max_stack_size(unsigned long stack_max)
>   * indicating that "current" should be used instead of a passed-in
>   * value from the exec bprm as done with arch_pick_mmap_layout().
>   */
> -unsigned long mmap_upper_limit(struct rlimit *rlim_stack)
> +unsigned long mmap_upper_limit(const struct rlimit *rlim_stack)
>  {
>  	unsigned long stack_base;
>
> --
> 2.47.2
>

* Re: [PATCH v6 08/12] mm: constify arch_pick_mmap_layout() for improved const-correctness
  2025-09-01 20:50 ` [PATCH v6 08/12] mm: constify arch_pick_mmap_layout() for improved const-correctness Max Kellermann
@ 2025-09-02  6:15   ` Lorenzo Stoakes
  2025-09-02  8:05   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: Lorenzo Stoakes @ 2025-09-02  6:15 UTC (permalink / raw)
  To: Max Kellermann
  Cc: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, Liam.Howlett, vbabka, rppt, surenb,
	vishal.moola, linux, James.Bottomley, deller, agordeev,
	gerald.schaefer, hca, gor, borntraeger, svens, davem, andreas,
	dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa, chris,
	jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang, rientjes,
	shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe, nysal,
	linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On Mon, Sep 01, 2025 at 10:50:17PM +0200, Max Kellermann wrote:
> This function only reads from the rlimit pointer (but writes to the
> mm_struct pointer which is kept without `const`).
>
> All callees are already const-ified or (internal functions) are being
> constified by this patch.
>
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  arch/s390/mm/mmap.c              | 4 ++--
>  arch/sparc/kernel/sys_sparc_64.c | 2 +-
>  arch/x86/mm/mmap.c               | 6 +++---
>  include/linux/sched/mm.h         | 4 ++--
>  mm/util.c                        | 6 +++---
>  5 files changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
> index e188cb6d4946..197c1d9497a7 100644
> --- a/arch/s390/mm/mmap.c
> +++ b/arch/s390/mm/mmap.c
> @@ -47,7 +47,7 @@ static unsigned long mmap_base_legacy(unsigned long rnd)
>  }
>
>  static inline unsigned long mmap_base(unsigned long rnd,
> -				      struct rlimit *rlim_stack)
> +				      const struct rlimit *rlim_stack)
>  {
>  	unsigned long gap = rlim_stack->rlim_cur;
>  	unsigned long pad = stack_maxrandom_size() + stack_guard_gap;
> @@ -169,7 +169,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
>   * This function, called very early during the creation of a new
>   * process VM image, sets up which VM layout function to use:
>   */
> -void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
> +void arch_pick_mmap_layout(struct mm_struct *mm, const struct rlimit *rlim_stack)
>  {
>  	unsigned long random_factor = 0UL;
>
> diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
> index 785e9909340f..55faf2effa46 100644
> --- a/arch/sparc/kernel/sys_sparc_64.c
> +++ b/arch/sparc/kernel/sys_sparc_64.c
> @@ -294,7 +294,7 @@ static unsigned long mmap_rnd(void)
>  	return rnd << PAGE_SHIFT;
>  }
>
> -void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
> +void arch_pick_mmap_layout(struct mm_struct *mm, const struct rlimit *rlim_stack)
>  {
>  	unsigned long random_factor = mmap_rnd();
>  	unsigned long gap;
> diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
> index 708f85dc9380..82f3a987f7cf 100644
> --- a/arch/x86/mm/mmap.c
> +++ b/arch/x86/mm/mmap.c
> @@ -80,7 +80,7 @@ unsigned long arch_mmap_rnd(void)
>  }
>
>  static unsigned long mmap_base(unsigned long rnd, unsigned long task_size,
> -			       struct rlimit *rlim_stack)
> +			       const struct rlimit *rlim_stack)
>  {
>  	unsigned long gap = rlim_stack->rlim_cur;
>  	unsigned long pad = stack_maxrandom_size(task_size) + stack_guard_gap;
> @@ -110,7 +110,7 @@ static unsigned long mmap_legacy_base(unsigned long rnd,
>   */
>  static void arch_pick_mmap_base(unsigned long *base, unsigned long *legacy_base,
>  		unsigned long random_factor, unsigned long task_size,
> -		struct rlimit *rlim_stack)
> +		const struct rlimit *rlim_stack)
>  {
>  	*legacy_base = mmap_legacy_base(random_factor, task_size);
>  	if (mmap_is_legacy())
> @@ -119,7 +119,7 @@ static void arch_pick_mmap_base(unsigned long *base, unsigned long *legacy_base,
>  		*base = mmap_base(random_factor, task_size, rlim_stack);
>  }
>
> -void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
> +void arch_pick_mmap_layout(struct mm_struct *mm, const struct rlimit *rlim_stack)
>  {
>  	if (mmap_is_legacy())
>  		mm_flags_clear(MMF_TOPDOWN, mm);
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index 2201da0afecc..0232d983b715 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -178,7 +178,7 @@ static inline void mm_update_next_owner(struct mm_struct *mm)
>  #endif
>
>  extern void arch_pick_mmap_layout(struct mm_struct *mm,
> -				  struct rlimit *rlim_stack);
> +				  const struct rlimit *rlim_stack);
>
>  unsigned long
>  arch_get_unmapped_area(struct file *filp, unsigned long addr,
> @@ -211,7 +211,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
>  				  unsigned long flags, vm_flags_t vm_flags);
>  #else
>  static inline void arch_pick_mmap_layout(struct mm_struct *mm,
> -					 struct rlimit *rlim_stack) {}
> +					 const struct rlimit *rlim_stack) {}
>  #endif
>
>  static inline bool in_vfork(struct task_struct *tsk)
> diff --git a/mm/util.c b/mm/util.c
> index 241d2eaf26ca..77462027ad24 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -431,7 +431,7 @@ static int mmap_is_legacy(const struct rlimit *rlim_stack)
>  #define MIN_GAP		(SZ_128M)
>  #define MAX_GAP		(STACK_TOP / 6 * 5)
>
> -static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
> +static unsigned long mmap_base(const unsigned long rnd, const struct rlimit *rlim_stack)
>  {
>  #ifdef CONFIG_STACK_GROWSUP
>  	/*
> @@ -462,7 +462,7 @@ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
>  #endif
>  }
>
> -void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
> +void arch_pick_mmap_layout(struct mm_struct *mm, const struct rlimit *rlim_stack)
>  {
>  	unsigned long random_factor = 0UL;
>
> @@ -478,7 +478,7 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
>  	}
>  }
>  #elif defined(CONFIG_MMU) && !defined(HAVE_ARCH_PICK_MMAP_LAYOUT)
> -void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
> +void arch_pick_mmap_layout(struct mm_struct *mm, const struct rlimit *rlim_stack)
>  {
>  	mm->mmap_base = TASK_UNMAPPED_BASE;
>  	mm_flags_clear(MMF_TOPDOWN, mm);
> --
> 2.47.2
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 10/12] mm: constify various inline functions for improved const-correctness
  2025-09-01 20:50 ` [PATCH v6 10/12] mm: constify various inline functions for improved const-correctness Max Kellermann
@ 2025-09-02  6:16   ` Lorenzo Stoakes
  2025-09-02  8:05   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: Lorenzo Stoakes @ 2025-09-02  6:16 UTC (permalink / raw)
  To: Max Kellermann
  Cc: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, Liam.Howlett, vbabka, rppt, surenb,
	vishal.moola, linux, James.Bottomley, deller, agordeev,
	gerald.schaefer, hca, gor, borntraeger, svens, davem, andreas,
	dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa, chris,
	jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang, rientjes,
	shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe, nysal,
	linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On Mon, Sep 01, 2025 at 10:50:19PM +0200, Max Kellermann wrote:
> We select certain test functions, plus folio_migrate_refs(), from
> mm_inline.h, which invoke only each other, functions that are
> already const-ified, or no further functions at all.
>
> It is therefore relatively trivial to const-ify them, which
> provides a basis for further const-ification higher up the call
> stack.
>
> One exception is folio_migrate_refs(), which does write through the
> "new" folio pointer; there, only the "old" folio pointer is
> constified, since its "flags" field is only read, never written.
>
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  include/linux/mm_inline.h | 25 +++++++++++++------------
>  1 file changed, 13 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index 150302b4a905..d6c1011b38f2 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -25,7 +25,7 @@
>   * 0 if @folio is a normal anonymous folio, a tmpfs folio or otherwise
>   * ram or swap backed folio.
>   */
> -static inline int folio_is_file_lru(struct folio *folio)
> +static inline int folio_is_file_lru(const struct folio *folio)
>  {
>  	return !folio_test_swapbacked(folio);
>  }
> @@ -84,7 +84,7 @@ static __always_inline void __folio_clear_lru_flags(struct folio *folio)
>   * Return: The LRU list a folio should be on, as an index
>   * into the array of LRU lists.
>   */
> -static __always_inline enum lru_list folio_lru_list(struct folio *folio)
> +static __always_inline enum lru_list folio_lru_list(const struct folio *folio)
>  {
>  	enum lru_list lru;
>
> @@ -141,7 +141,7 @@ static inline int lru_tier_from_refs(int refs, bool workingset)
>  	return workingset ? MAX_NR_TIERS - 1 : order_base_2(refs);
>  }
>
> -static inline int folio_lru_refs(struct folio *folio)
> +static inline int folio_lru_refs(const struct folio *folio)
>  {
>  	unsigned long flags = READ_ONCE(folio->flags.f);
>
> @@ -154,14 +154,14 @@ static inline int folio_lru_refs(struct folio *folio)
>  	return ((flags & LRU_REFS_MASK) >> LRU_REFS_PGOFF) + 1;
>  }
>
> -static inline int folio_lru_gen(struct folio *folio)
> +static inline int folio_lru_gen(const struct folio *folio)
>  {
>  	unsigned long flags = READ_ONCE(folio->flags.f);
>
>  	return ((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
>  }
>
> -static inline bool lru_gen_is_active(struct lruvec *lruvec, int gen)
> +static inline bool lru_gen_is_active(const struct lruvec *lruvec, int gen)
>  {
>  	unsigned long max_seq = lruvec->lrugen.max_seq;
>
> @@ -217,12 +217,13 @@ static inline void lru_gen_update_size(struct lruvec *lruvec, struct folio *foli
>  	VM_WARN_ON_ONCE(lru_gen_is_active(lruvec, old_gen) && !lru_gen_is_active(lruvec, new_gen));
>  }
>
> -static inline unsigned long lru_gen_folio_seq(struct lruvec *lruvec, struct folio *folio,
> +static inline unsigned long lru_gen_folio_seq(const struct lruvec *lruvec,
> +					      const struct folio *folio,
>  					      bool reclaiming)
>  {
>  	int gen;
>  	int type = folio_is_file_lru(folio);
> -	struct lru_gen_folio *lrugen = &lruvec->lrugen;
> +	const struct lru_gen_folio *lrugen = &lruvec->lrugen;
>
>  	/*
>  	 * +-----------------------------------+-----------------------------------+
> @@ -302,7 +303,7 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
>  	return true;
>  }
>
> -static inline void folio_migrate_refs(struct folio *new, struct folio *old)
> +static inline void folio_migrate_refs(struct folio *new, const struct folio *old)
>  {
>  	unsigned long refs = READ_ONCE(old->flags.f) & LRU_REFS_MASK;
>
> @@ -330,7 +331,7 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
>  	return false;
>  }
>
> -static inline void folio_migrate_refs(struct folio *new, struct folio *old)
> +static inline void folio_migrate_refs(struct folio *new, const struct folio *old)
>  {
>
>  }
> @@ -508,7 +509,7 @@ static inline void dec_tlb_flush_pending(struct mm_struct *mm)
>  	atomic_dec(&mm->tlb_flush_pending);
>  }
>
> -static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
> +static inline bool mm_tlb_flush_pending(const struct mm_struct *mm)
>  {
>  	/*
>  	 * Must be called after having acquired the PTL; orders against that
> @@ -521,7 +522,7 @@ static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
>  	return atomic_read(&mm->tlb_flush_pending);
>  }
>
> -static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
> +static inline bool mm_tlb_flush_nested(const struct mm_struct *mm)
>  {
>  	/*
>  	 * Similar to mm_tlb_flush_pending(), we must have acquired the PTL
> @@ -605,7 +606,7 @@ pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
>  	return false;
>  }
>
> -static inline bool vma_has_recency(struct vm_area_struct *vma)
> +static inline bool vma_has_recency(const struct vm_area_struct *vma)
>  {
>  	if (vma->vm_flags & (VM_SEQ_READ | VM_RAND_READ))
>  		return false;
> --
> 2.47.2
>


* Re: [PATCH v6 11/12] mm: constify assert/test functions in mm.h
  2025-09-01 20:50 ` [PATCH v6 11/12] mm: constify assert/test functions in mm.h Max Kellermann
@ 2025-09-02  6:17   ` Lorenzo Stoakes
  2025-09-02  8:06   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: Lorenzo Stoakes @ 2025-09-02  6:17 UTC (permalink / raw)
  To: Max Kellermann
  Cc: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, Liam.Howlett, vbabka, rppt, surenb,
	vishal.moola, linux, James.Bottomley, deller, agordeev,
	gerald.schaefer, hca, gor, borntraeger, svens, davem, andreas,
	dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa, chris,
	jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang, rientjes,
	shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe, nysal,
	linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On Mon, Sep 01, 2025 at 10:50:20PM +0200, Max Kellermann wrote:
> For improved const-correctness.
>
> We select certain assert and test functions which invoke only each
> other, functions that are already const-ified, or no further
> functions.
>
> It is therefore relatively trivial to const-ify them, which
> provides a basis for further const-ification higher up the call
> stack.
>
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>

LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  include/linux/mm.h | 40 ++++++++++++++++++++--------------------
>  1 file changed, 20 insertions(+), 20 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 23864c3519d6..c3767688771c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -703,7 +703,7 @@ static inline void release_fault_lock(struct vm_fault *vmf)
>  		mmap_read_unlock(vmf->vma->vm_mm);
>  }
>
> -static inline void assert_fault_locked(struct vm_fault *vmf)
> +static inline void assert_fault_locked(const struct vm_fault *vmf)
>  {
>  	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
>  		vma_assert_locked(vmf->vma);
> @@ -716,7 +716,7 @@ static inline void release_fault_lock(struct vm_fault *vmf)
>  	mmap_read_unlock(vmf->vma->vm_mm);
>  }
>
> -static inline void assert_fault_locked(struct vm_fault *vmf)
> +static inline void assert_fault_locked(const struct vm_fault *vmf)
>  {
>  	mmap_assert_locked(vmf->vma->vm_mm);
>  }
> @@ -859,7 +859,7 @@ static inline bool vma_is_initial_stack(const struct vm_area_struct *vma)
>  		vma->vm_end >= vma->vm_mm->start_stack;
>  }
>
> -static inline bool vma_is_temporary_stack(struct vm_area_struct *vma)
> +static inline bool vma_is_temporary_stack(const struct vm_area_struct *vma)
>  {
>  	int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
>
> @@ -873,7 +873,7 @@ static inline bool vma_is_temporary_stack(struct vm_area_struct *vma)
>  	return false;
>  }
>
> -static inline bool vma_is_foreign(struct vm_area_struct *vma)
> +static inline bool vma_is_foreign(const struct vm_area_struct *vma)
>  {
>  	if (!current->mm)
>  		return true;
> @@ -884,7 +884,7 @@ static inline bool vma_is_foreign(struct vm_area_struct *vma)
>  	return false;
>  }
>
> -static inline bool vma_is_accessible(struct vm_area_struct *vma)
> +static inline bool vma_is_accessible(const struct vm_area_struct *vma)
>  {
>  	return vma->vm_flags & VM_ACCESS_FLAGS;
>  }
> @@ -895,7 +895,7 @@ static inline bool is_shared_maywrite(vm_flags_t vm_flags)
>  		(VM_SHARED | VM_MAYWRITE);
>  }
>
> -static inline bool vma_is_shared_maywrite(struct vm_area_struct *vma)
> +static inline bool vma_is_shared_maywrite(const struct vm_area_struct *vma)
>  {
>  	return is_shared_maywrite(vma->vm_flags);
>  }
> @@ -1839,7 +1839,7 @@ static inline struct folio *pfn_folio(unsigned long pfn)
>  }
>
>  #ifdef CONFIG_MMU
> -static inline pte_t mk_pte(struct page *page, pgprot_t pgprot)
> +static inline pte_t mk_pte(const struct page *page, pgprot_t pgprot)
>  {
>  	return pfn_pte(page_to_pfn(page), pgprot);
>  }
> @@ -1854,7 +1854,7 @@ static inline pte_t mk_pte(struct page *page, pgprot_t pgprot)
>   *
>   * Return: A page table entry suitable for mapping this folio.
>   */
> -static inline pte_t folio_mk_pte(struct folio *folio, pgprot_t pgprot)
> +static inline pte_t folio_mk_pte(const struct folio *folio, pgprot_t pgprot)
>  {
>  	return pfn_pte(folio_pfn(folio), pgprot);
>  }
> @@ -1870,7 +1870,7 @@ static inline pte_t folio_mk_pte(struct folio *folio, pgprot_t pgprot)
>   *
>   * Return: A page table entry suitable for mapping this folio.
>   */
> -static inline pmd_t folio_mk_pmd(struct folio *folio, pgprot_t pgprot)
> +static inline pmd_t folio_mk_pmd(const struct folio *folio, pgprot_t pgprot)
>  {
>  	return pmd_mkhuge(pfn_pmd(folio_pfn(folio), pgprot));
>  }
> @@ -1886,7 +1886,7 @@ static inline pmd_t folio_mk_pmd(struct folio *folio, pgprot_t pgprot)
>   *
>   * Return: A page table entry suitable for mapping this folio.
>   */
> -static inline pud_t folio_mk_pud(struct folio *folio, pgprot_t pgprot)
> +static inline pud_t folio_mk_pud(const struct folio *folio, pgprot_t pgprot)
>  {
>  	return pud_mkhuge(pfn_pud(folio_pfn(folio), pgprot));
>  }
> @@ -3488,7 +3488,7 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
>  	return mtree_load(&mm->mm_mt, addr);
>  }
>
> -static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
> +static inline unsigned long stack_guard_start_gap(const struct vm_area_struct *vma)
>  {
>  	if (vma->vm_flags & VM_GROWSDOWN)
>  		return stack_guard_gap;
> @@ -3500,7 +3500,7 @@ static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
>  	return 0;
>  }
>
> -static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
> +static inline unsigned long vm_start_gap(const struct vm_area_struct *vma)
>  {
>  	unsigned long gap = stack_guard_start_gap(vma);
>  	unsigned long vm_start = vma->vm_start;
> @@ -3511,7 +3511,7 @@ static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
>  	return vm_start;
>  }
>
> -static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
> +static inline unsigned long vm_end_gap(const struct vm_area_struct *vma)
>  {
>  	unsigned long vm_end = vma->vm_end;
>
> @@ -3523,7 +3523,7 @@ static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
>  	return vm_end;
>  }
>
> -static inline unsigned long vma_pages(struct vm_area_struct *vma)
> +static inline unsigned long vma_pages(const struct vm_area_struct *vma)
>  {
>  	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
>  }
> @@ -3540,7 +3540,7 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
>  	return vma;
>  }
>
> -static inline bool range_in_vma(struct vm_area_struct *vma,
> +static inline bool range_in_vma(const struct vm_area_struct *vma,
>  				unsigned long start, unsigned long end)
>  {
>  	return (vma && vma->vm_start <= start && end <= vma->vm_end);
> @@ -3656,7 +3656,7 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
>   * Indicates whether GUP can follow a PROT_NONE mapped page, or whether
>   * a (NUMA hinting) fault is required.
>   */
> -static inline bool gup_can_follow_protnone(struct vm_area_struct *vma,
> +static inline bool gup_can_follow_protnone(const struct vm_area_struct *vma,
>  					   unsigned int flags)
>  {
>  	/*
> @@ -3786,7 +3786,7 @@ static inline bool debug_guardpage_enabled(void)
>  	return static_branch_unlikely(&_debug_guardpage_enabled);
>  }
>
> -static inline bool page_is_guard(struct page *page)
> +static inline bool page_is_guard(const struct page *page)
>  {
>  	if (!debug_guardpage_enabled())
>  		return false;
> @@ -3817,7 +3817,7 @@ static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
>  static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
>  static inline unsigned int debug_guardpage_minorder(void) { return 0; }
>  static inline bool debug_guardpage_enabled(void) { return false; }
> -static inline bool page_is_guard(struct page *page) { return false; }
> +static inline bool page_is_guard(const struct page *page) { return false; }
>  static inline bool set_page_guard(struct zone *zone, struct page *page,
>  			unsigned int order) { return false; }
>  static inline void clear_page_guard(struct zone *zone, struct page *page,
> @@ -3899,7 +3899,7 @@ void vmemmap_free(unsigned long start, unsigned long end,
>  #endif
>
>  #ifdef CONFIG_SPARSEMEM_VMEMMAP
> -static inline unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
> +static inline unsigned long vmem_altmap_offset(const struct vmem_altmap *altmap)
>  {
>  	/* number of pfns from base where pfn_to_page() is valid */
>  	if (altmap)
> @@ -3913,7 +3913,7 @@ static inline void vmem_altmap_free(struct vmem_altmap *altmap,
>  	altmap->alloc -= nr_pfns;
>  }
>  #else
> -static inline unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
> +static inline unsigned long vmem_altmap_offset(const struct vmem_altmap *altmap)
>  {
>  	return 0;
>  }
> --
> 2.47.2
>


* Re: [PATCH v6 12/12] mm: constify highmem related functions for improved const-correctness
  2025-09-01 20:50 ` [PATCH v6 12/12] mm: constify highmem related functions for improved const-correctness Max Kellermann
@ 2025-09-02  6:17   ` Lorenzo Stoakes
  2025-09-02  8:11   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: Lorenzo Stoakes @ 2025-09-02  6:17 UTC (permalink / raw)
  To: Max Kellermann
  Cc: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, Liam.Howlett, vbabka, rppt, surenb,
	vishal.moola, linux, James.Bottomley, deller, agordeev,
	gerald.schaefer, hca, gor, borntraeger, svens, davem, andreas,
	dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa, chris,
	jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang, rientjes,
	shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe, nysal,
	linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On Mon, Sep 01, 2025 at 10:50:21PM +0200, Max Kellermann wrote:
> Lots of functions in mm/highmem.c do not write to the given pointers
> and do not call functions that take non-const pointers; they can
> therefore be constified.
>
> This includes functions like kunmap() which might be implemented in a
> way that writes through the pointer (e.g. to update reference counters
> or mapping fields), but currently are not.
>
> kmap(), on the other hand, cannot be made const because it calls
> set_page_address(), which is non-const on some
> architectures/configurations.
>
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>

LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  arch/arm/include/asm/highmem.h    |  6 +++---
>  arch/xtensa/include/asm/highmem.h |  2 +-
>  include/linux/highmem-internal.h  | 36 +++++++++++++++----------------
>  include/linux/highmem.h           |  8 +++----
>  mm/highmem.c                      | 10 ++++-----
>  5 files changed, 31 insertions(+), 31 deletions(-)
>
> diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
> index b4b66220952d..bdb209e002a4 100644
> --- a/arch/arm/include/asm/highmem.h
> +++ b/arch/arm/include/asm/highmem.h
> @@ -46,9 +46,9 @@ extern pte_t *pkmap_page_table;
>  #endif
>
>  #ifdef ARCH_NEEDS_KMAP_HIGH_GET
> -extern void *kmap_high_get(struct page *page);
> +extern void *kmap_high_get(const struct page *page);
>
> -static inline void *arch_kmap_local_high_get(struct page *page)
> +static inline void *arch_kmap_local_high_get(const struct page *page)
>  {
>  	if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !cache_is_vivt())
>  		return NULL;
> @@ -57,7 +57,7 @@ static inline void *arch_kmap_local_high_get(struct page *page)
>  #define arch_kmap_local_high_get arch_kmap_local_high_get
>
>  #else /* ARCH_NEEDS_KMAP_HIGH_GET */
> -static inline void *kmap_high_get(struct page *page)
> +static inline void *kmap_high_get(const struct page *page)
>  {
>  	return NULL;
>  }
> diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
> index 34b8b620e7f1..b55235f4adac 100644
> --- a/arch/xtensa/include/asm/highmem.h
> +++ b/arch/xtensa/include/asm/highmem.h
> @@ -29,7 +29,7 @@
>
>  #if DCACHE_WAY_SIZE > PAGE_SIZE
>  #define get_pkmap_color get_pkmap_color
> -static inline int get_pkmap_color(struct page *page)
> +static inline int get_pkmap_color(const struct page *page)
>  {
>  	return DCACHE_ALIAS(page_to_phys(page));
>  }
> diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
> index 36053c3d6d64..0574c21ca45d 100644
> --- a/include/linux/highmem-internal.h
> +++ b/include/linux/highmem-internal.h
> @@ -7,7 +7,7 @@
>   */
>  #ifdef CONFIG_KMAP_LOCAL
>  void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot);
> -void *__kmap_local_page_prot(struct page *page, pgprot_t prot);
> +void *__kmap_local_page_prot(const struct page *page, pgprot_t prot);
>  void kunmap_local_indexed(const void *vaddr);
>  void kmap_local_fork(struct task_struct *tsk);
>  void __kmap_local_sched_out(void);
> @@ -33,7 +33,7 @@ static inline void kmap_flush_tlb(unsigned long addr) { }
>  #endif
>
>  void *kmap_high(struct page *page);
> -void kunmap_high(struct page *page);
> +void kunmap_high(const struct page *page);
>  void __kmap_flush_unused(void);
>  struct page *__kmap_to_page(void *addr);
>
> @@ -50,7 +50,7 @@ static inline void *kmap(struct page *page)
>  	return addr;
>  }
>
> -static inline void kunmap(struct page *page)
> +static inline void kunmap(const struct page *page)
>  {
>  	might_sleep();
>  	if (!PageHighMem(page))
> @@ -68,12 +68,12 @@ static inline void kmap_flush_unused(void)
>  	__kmap_flush_unused();
>  }
>
> -static inline void *kmap_local_page(struct page *page)
> +static inline void *kmap_local_page(const struct page *page)
>  {
>  	return __kmap_local_page_prot(page, kmap_prot);
>  }
>
> -static inline void *kmap_local_page_try_from_panic(struct page *page)
> +static inline void *kmap_local_page_try_from_panic(const struct page *page)
>  {
>  	if (!PageHighMem(page))
>  		return page_address(page);
> @@ -81,13 +81,13 @@ static inline void *kmap_local_page_try_from_panic(struct page *page)
>  	return NULL;
>  }
>
> -static inline void *kmap_local_folio(struct folio *folio, size_t offset)
> +static inline void *kmap_local_folio(const struct folio *folio, size_t offset)
>  {
> -	struct page *page = folio_page(folio, offset / PAGE_SIZE);
> +	const struct page *page = folio_page(folio, offset / PAGE_SIZE);
>  	return __kmap_local_page_prot(page, kmap_prot) + offset % PAGE_SIZE;
>  }
>
> -static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
> +static inline void *kmap_local_page_prot(const struct page *page, pgprot_t prot)
>  {
>  	return __kmap_local_page_prot(page, prot);
>  }
> @@ -102,7 +102,7 @@ static inline void __kunmap_local(const void *vaddr)
>  	kunmap_local_indexed(vaddr);
>  }
>
> -static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> +static inline void *kmap_atomic_prot(const struct page *page, pgprot_t prot)
>  {
>  	if (IS_ENABLED(CONFIG_PREEMPT_RT))
>  		migrate_disable();
> @@ -113,7 +113,7 @@ static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
>  	return __kmap_local_page_prot(page, prot);
>  }
>
> -static inline void *kmap_atomic(struct page *page)
> +static inline void *kmap_atomic(const struct page *page)
>  {
>  	return kmap_atomic_prot(page, kmap_prot);
>  }
> @@ -173,32 +173,32 @@ static inline void *kmap(struct page *page)
>  	return page_address(page);
>  }
>
> -static inline void kunmap_high(struct page *page) { }
> +static inline void kunmap_high(const struct page *page) { }
>  static inline void kmap_flush_unused(void) { }
>
> -static inline void kunmap(struct page *page)
> +static inline void kunmap(const struct page *page)
>  {
>  #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
>  	kunmap_flush_on_unmap(page_address(page));
>  #endif
>  }
>
> -static inline void *kmap_local_page(struct page *page)
> +static inline void *kmap_local_page(const struct page *page)
>  {
>  	return page_address(page);
>  }
>
> -static inline void *kmap_local_page_try_from_panic(struct page *page)
> +static inline void *kmap_local_page_try_from_panic(const struct page *page)
>  {
>  	return page_address(page);
>  }
>
> -static inline void *kmap_local_folio(struct folio *folio, size_t offset)
> +static inline void *kmap_local_folio(const struct folio *folio, size_t offset)
>  {
>  	return folio_address(folio) + offset;
>  }
>
> -static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
> +static inline void *kmap_local_page_prot(const struct page *page, pgprot_t prot)
>  {
>  	return kmap_local_page(page);
>  }
> @@ -215,7 +215,7 @@ static inline void __kunmap_local(const void *addr)
>  #endif
>  }
>
> -static inline void *kmap_atomic(struct page *page)
> +static inline void *kmap_atomic(const struct page *page)
>  {
>  	if (IS_ENABLED(CONFIG_PREEMPT_RT))
>  		migrate_disable();
> @@ -225,7 +225,7 @@ static inline void *kmap_atomic(struct page *page)
>  	return page_address(page);
>  }
>
> -static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> +static inline void *kmap_atomic_prot(const struct page *page, pgprot_t prot)
>  {
>  	return kmap_atomic(page);
>  }
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index 6234f316468c..105cc4c00cc3 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -43,7 +43,7 @@ static inline void *kmap(struct page *page);
>   * Counterpart to kmap(). A NOOP for CONFIG_HIGHMEM=n and for mappings of
>   * pages in the low memory area.
>   */
> -static inline void kunmap(struct page *page);
> +static inline void kunmap(const struct page *page);
>
>  /**
>   * kmap_to_page - Get the page for a kmap'ed address
> @@ -93,7 +93,7 @@ static inline void kmap_flush_unused(void);
>   * disabling migration in order to keep the virtual address stable across
>   * preemption. No caller of kmap_local_page() can rely on this side effect.
>   */
> -static inline void *kmap_local_page(struct page *page);
> +static inline void *kmap_local_page(const struct page *page);
>
>  /**
>   * kmap_local_folio - Map a page in this folio for temporary usage
> @@ -129,7 +129,7 @@ static inline void *kmap_local_page(struct page *page);
>   * Context: Can be invoked from any context.
>   * Return: The virtual address of @offset.
>   */
> -static inline void *kmap_local_folio(struct folio *folio, size_t offset);
> +static inline void *kmap_local_folio(const struct folio *folio, size_t offset);
>
>  /**
>   * kmap_atomic - Atomically map a page for temporary usage - Deprecated!
> @@ -176,7 +176,7 @@ static inline void *kmap_local_folio(struct folio *folio, size_t offset);
>   * kunmap_atomic(vaddr2);
>   * kunmap_atomic(vaddr1);
>   */
> -static inline void *kmap_atomic(struct page *page);
> +static inline void *kmap_atomic(const struct page *page);
>
>  /* Highmem related interfaces for management code */
>  static inline unsigned long nr_free_highpages(void);
> diff --git a/mm/highmem.c b/mm/highmem.c
> index ef3189b36cad..b5c8e4c2d5d4 100644
> --- a/mm/highmem.c
> +++ b/mm/highmem.c
> @@ -61,7 +61,7 @@ static inline int kmap_local_calc_idx(int idx)
>  /*
>   * Determine color of virtual address where the page should be mapped.
>   */
> -static inline unsigned int get_pkmap_color(struct page *page)
> +static inline unsigned int get_pkmap_color(const struct page *page)
>  {
>  	return 0;
>  }
> @@ -334,7 +334,7 @@ EXPORT_SYMBOL(kmap_high);
>   *
>   * This can be called from any context.
>   */
> -void *kmap_high_get(struct page *page)
> +void *kmap_high_get(const struct page *page)
>  {
>  	unsigned long vaddr, flags;
>
> @@ -356,7 +356,7 @@ void *kmap_high_get(struct page *page)
>   * If ARCH_NEEDS_KMAP_HIGH_GET is not defined then this may be called
>   * only from user context.
>   */
> -void kunmap_high(struct page *page)
> +void kunmap_high(const struct page *page)
>  {
>  	unsigned long vaddr;
>  	unsigned long nr;
> @@ -508,7 +508,7 @@ static inline void kmap_local_idx_pop(void)
>  #endif
>
>  #ifndef arch_kmap_local_high_get
> -static inline void *arch_kmap_local_high_get(struct page *page)
> +static inline void *arch_kmap_local_high_get(const struct page *page)
>  {
>  	return NULL;
>  }
> @@ -572,7 +572,7 @@ void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot)
>  }
>  EXPORT_SYMBOL_GPL(__kmap_local_pfn_prot);
>
> -void *__kmap_local_page_prot(struct page *page, pgprot_t prot)
> +void *__kmap_local_page_prot(const struct page *page, pgprot_t prot)
>  {
>  	void *kmap;
>
> --
> 2.47.2
>


* Re: [PATCH v6 00/12] mm: establish const-correctness for pointer parameters
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (12 preceding siblings ...)
  2025-09-01 21:34 ` [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Vlastimil Babka
@ 2025-09-02  6:19 ` Lorenzo Stoakes
  2025-09-02  8:12   ` David Hildenbrand
  2025-09-02 10:02 ` Mike Rapoport
  14 siblings, 1 reply; 32+ messages in thread
From: Lorenzo Stoakes @ 2025-09-02  6:19 UTC (permalink / raw)
  To: Max Kellermann
  Cc: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, Liam.Howlett, vbabka, rppt, surenb,
	vishal.moola, linux, James.Bottomley, deller, agordeev,
	gerald.schaefer, hca, gor, borntraeger, svens, davem, andreas,
	dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa, chris,
	jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang, rientjes,
	shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe, nysal,
	linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On Mon, Sep 01, 2025 at 10:50:09PM +0200, Max Kellermann wrote:
> For improved const-correctness in the low-level memory-management
> subsystem, which provides a basis for further const-ification higher
> up the call stack (e.g. filesystems).

Great, this succinctly expresses what you want!

>
> This patch series, now split into smaller patches, was initially
> posted as a single large patch:
>
>  https://lore.kernel.org/lkml/20250827192233.447920-1-max.kellermann@ionos.com/
>
> I started this work when I tried to constify the Ceph filesystem code,
> but found that to be impossible because many "mm" functions accept
> non-const pointers, even though they modify nothing.

And as Vlasta said, this is great context.

>
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>

Glad we got there in the end :) hopefully having more const-ification will
have a knock-on effect anyway for future development, both by:

a. people needing to ensure it downstream of this stuff, and
b. people seeing the pattern and adopting it.

So this should have a positive impact I think :)

Cheers, Lorenzo

> ---
> v1 -> v2:
> - made several parameter values const (i.e. the pointer address, not
>   just the pointed-to memory), as suggested by Andrew Morton and
>   Yuanchu Xie
> - drop existing+obsolete "extern" keywords on lines modified by these
>   patches (suggested by Vishal Moola)
> - add missing parameter names on lines modified by these patches
>   (suggested by Vishal Moola)
> - more "const" pointers (e.g. the task_struct passed to
>   process_shares_mm())
> - add missing "const" to s390, fixing s390 build failure
> - moved the mmap_is_legacy() change in arch/s390/mm/mmap.c from 08/12
>   to 06/12 (suggested by Vishal Moola)
>
> v2 -> v3:
> - remove garbage from 06/12
> - changed tags on subject line (suggested by Matthew Wilcox)
>
> v3 -> v4:
> - more verbose commit messages including a listing of function names
>   (suggested by David Hildenbrand and Lorenzo Stoakes)
>
> v4 -> v5:
> - back to shorter commit messages after an agreement between David
>   Hildenbrand and Lorenzo Stoakes was found
>
> v5 -> v6:
> - fix inconsistent constness of assert_fault_locked()
> - revert the const parameter value change from v2 (requested by
>   Lorenzo Stoakes)
> - revert the long cover letter, removing long explanations again
>   (requested by Lorenzo Stoakes)
>
> Max Kellermann (12):
>   mm: constify shmem related test functions for improved
>     const-correctness
>   mm: constify pagemap related test/getter functions
>   mm: constify zone related test/getter functions
>   fs: constify mapping related test functions for improved
>     const-correctness
>   mm: constify process_shares_mm() for improved const-correctness
>   mm, s390: constify mapping related test/getter functions
>   parisc: constify mmap_upper_limit() parameter
>   mm: constify arch_pick_mmap_layout() for improved const-correctness
>   mm: constify ptdesc_pmd_pts_count() and folio_get_private()
>   mm: constify various inline functions for improved const-correctness
>   mm: constify assert/test functions in mm.h
>   mm: constify highmem related functions for improved const-correctness
>
>  arch/arm/include/asm/highmem.h      |  6 +--
>  arch/parisc/include/asm/processor.h |  2 +-
>  arch/parisc/kernel/sys_parisc.c     |  2 +-
>  arch/s390/mm/mmap.c                 |  6 +--
>  arch/sparc/kernel/sys_sparc_64.c    |  2 +-
>  arch/x86/mm/mmap.c                  |  6 +--
>  arch/xtensa/include/asm/highmem.h   |  2 +-
>  include/linux/fs.h                  |  6 +--
>  include/linux/highmem-internal.h    | 36 +++++++++---------
>  include/linux/highmem.h             |  8 ++--
>  include/linux/mm.h                  | 56 +++++++++++++--------------
>  include/linux/mm_inline.h           | 25 ++++++------
>  include/linux/mm_types.h            |  4 +-
>  include/linux/mmzone.h              | 42 ++++++++++----------
>  include/linux/pagemap.h             | 59 +++++++++++++++--------------
>  include/linux/sched/mm.h            |  4 +-
>  include/linux/shmem_fs.h            |  4 +-
>  mm/highmem.c                        | 10 ++---
>  mm/oom_kill.c                       |  6 +--
>  mm/shmem.c                          |  6 +--
>  mm/util.c                           | 16 ++++----
>  21 files changed, 155 insertions(+), 153 deletions(-)
>
> --
> 2.47.2
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 05/12] mm: constify process_shares_mm() for improved const-correctness
  2025-09-01 20:50 ` [PATCH v6 05/12] mm: constify process_shares_mm() " Max Kellermann
@ 2025-09-02  8:03   ` David Hildenbrand
  0 siblings, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2025-09-02  8:03 UTC (permalink / raw)
  To: Max Kellermann, akpm, axelrasmussen, yuanchu, willy, hughd,
	mhocko, linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett,
	vbabka, rppt, surenb, vishal.moola, linux, James.Bottomley,
	deller, agordeev, gerald.schaefer, hca, gor, borntraeger, svens,
	davem, andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86,
	hpa, chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe,
	nysal, linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On 01.09.25 22:50, Max Kellermann wrote:
> This function only reads from the pointer arguments.
> 
> Local (loop) variables are also annotated with `const` to clarify that
> these will not be written to.
> 
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb



* Re: [PATCH v6 06/12] mm, s390: constify mapping related test/getter functions
  2025-09-01 20:50 ` [PATCH v6 06/12] mm, s390: constify mapping related test/getter functions Max Kellermann
  2025-09-02  6:13   ` Lorenzo Stoakes
@ 2025-09-02  8:04   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2025-09-02  8:04 UTC (permalink / raw)
  To: Max Kellermann, akpm, axelrasmussen, yuanchu, willy, hughd,
	mhocko, linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett,
	vbabka, rppt, surenb, vishal.moola, linux, James.Bottomley,
	deller, agordeev, gerald.schaefer, hca, gor, borntraeger, svens,
	davem, andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86,
	hpa, chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe,
	nysal, linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On 01.09.25 22:50, Max Kellermann wrote:
> For improved const-correctness.
> 
> We select certain test functions which invoke either each other,
> functions that are already const-ified, or no further functions.
> 
> It is therefore relatively trivial to const-ify them, which
> provides a basis for further const-ification further up the call
> stack.
> 
> (Even though seemingly unrelated, this also constifies the pointer
> parameter of mmap_is_legacy() in arch/s390/mm/mmap.c because a copy of
> the function exists in mm/util.c.)
> 
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb



* Re: [PATCH v6 07/12] parisc: constify mmap_upper_limit() parameter
  2025-09-01 20:50 ` [PATCH v6 07/12] parisc: constify mmap_upper_limit() parameter Max Kellermann
  2025-09-02  6:13   ` Lorenzo Stoakes
@ 2025-09-02  8:04   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2025-09-02  8:04 UTC (permalink / raw)
  To: Max Kellermann, akpm, axelrasmussen, yuanchu, willy, hughd,
	mhocko, linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett,
	vbabka, rppt, surenb, vishal.moola, linux, James.Bottomley,
	deller, agordeev, gerald.schaefer, hca, gor, borntraeger, svens,
	davem, andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86,
	hpa, chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe,
	nysal, linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On 01.09.25 22:50, Max Kellermann wrote:
> For improved const-correctness.
> 
> This piece is necessary to make the `rlim_stack` parameter to
> mmap_base() const.
> 
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb



* Re: [PATCH v6 08/12] mm: constify arch_pick_mmap_layout() for improved const-correctness
  2025-09-01 20:50 ` [PATCH v6 08/12] mm: constify arch_pick_mmap_layout() for improved const-correctness Max Kellermann
  2025-09-02  6:15   ` Lorenzo Stoakes
@ 2025-09-02  8:05   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2025-09-02  8:05 UTC (permalink / raw)
  To: Max Kellermann, akpm, axelrasmussen, yuanchu, willy, hughd,
	mhocko, linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett,
	vbabka, rppt, surenb, vishal.moola, linux, James.Bottomley,
	deller, agordeev, gerald.schaefer, hca, gor, borntraeger, svens,
	davem, andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86,
	hpa, chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe,
	nysal, linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On 01.09.25 22:50, Max Kellermann wrote:
> This function only reads from the rlimit pointer (but writes to the
> mm_struct pointer which is kept without `const`).
> 
> All callees are already const-ified or (internal functions) are being
> constified by this patch.
> 
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb



* Re: [PATCH v6 10/12] mm: constify various inline functions for improved const-correctness
  2025-09-01 20:50 ` [PATCH v6 10/12] mm: constify various inline functions for improved const-correctness Max Kellermann
  2025-09-02  6:16   ` Lorenzo Stoakes
@ 2025-09-02  8:05   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2025-09-02  8:05 UTC (permalink / raw)
  To: Max Kellermann, akpm, axelrasmussen, yuanchu, willy, hughd,
	mhocko, linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett,
	vbabka, rppt, surenb, vishal.moola, linux, James.Bottomley,
	deller, agordeev, gerald.schaefer, hca, gor, borntraeger, svens,
	davem, andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86,
	hpa, chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe,
	nysal, linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On 01.09.25 22:50, Max Kellermann wrote:
> We select certain test functions plus folio_migrate_refs() from
> mm_inline.h which invoke either each other, functions that are already
> const-ified, or no further functions.
> 
> It is therefore relatively trivial to const-ify them, which
> provides a basis for further const-ification further up the call
> stack.
> 
> One exception is the function folio_migrate_refs() which does write to
> the "new" folio pointer; there, only the "old" folio pointer is being
> constified; only its "flags" field is read, but nothing written.
> 
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
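The exception described above can be sketched with hypothetical mocks (the
folio layout and flag bit here are illustrative stand-ins, not the real
mm_inline.h code): only the "old" folio, whose flags are merely read,
becomes const, while the "new" folio is written to.

```c
/* Hypothetical mock; the real struct folio is kernel-internal. */
struct folio {
	unsigned long flags;
};

/*
 * Sketch of the folio_migrate_refs() constness split: "old" is only
 * read (its flags), "new" is written, so only "old" can be const.
 */
static void folio_migrate_refs_sketch(struct folio *new,
				      const struct folio *old)
{
	new->flags |= old->flags & 1UL;	/* copy one illustrative flag bit */
}
```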

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb



* Re: [PATCH v6 11/12] mm: constify assert/test functions in mm.h
  2025-09-01 20:50 ` [PATCH v6 11/12] mm: constify assert/test functions in mm.h Max Kellermann
  2025-09-02  6:17   ` Lorenzo Stoakes
@ 2025-09-02  8:06   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2025-09-02  8:06 UTC (permalink / raw)
  To: Max Kellermann, akpm, axelrasmussen, yuanchu, willy, hughd,
	mhocko, linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett,
	vbabka, rppt, surenb, vishal.moola, linux, James.Bottomley,
	deller, agordeev, gerald.schaefer, hca, gor, borntraeger, svens,
	davem, andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86,
	hpa, chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe,
	nysal, linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On 01.09.25 22:50, Max Kellermann wrote:
> For improved const-correctness.
> 
> We select certain assert and test functions which invoke either each
> other, functions that are already const-ified, or no further
> functions.
> 
> It is therefore relatively trivial to const-ify them, which
> provides a basis for further const-ification further up the call
> stack.
> 
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb



* Re: [PATCH v6 12/12] mm: constify highmem related functions for improved const-correctness
  2025-09-01 20:50 ` [PATCH v6 12/12] mm: constify highmem related functions for improved const-correctness Max Kellermann
  2025-09-02  6:17   ` Lorenzo Stoakes
@ 2025-09-02  8:11   ` David Hildenbrand
  1 sibling, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2025-09-02  8:11 UTC (permalink / raw)
  To: Max Kellermann, akpm, axelrasmussen, yuanchu, willy, hughd,
	mhocko, linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett,
	vbabka, rppt, surenb, vishal.moola, linux, James.Bottomley,
	deller, agordeev, gerald.schaefer, hca, gor, borntraeger, svens,
	davem, andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86,
	hpa, chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe,
	nysal, linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On 01.09.25 22:50, Max Kellermann wrote:
> Lots of functions in mm/highmem.c do not write to the given pointers
> and do not call functions that take non-const pointers and can
> therefore be constified.
> 
> This includes functions like kunmap() which might be implemented in a
> way that writes to the pointer (e.g. to update reference counters or
> mapping fields), but currently are not.
> 
> kmap() on the other hand cannot be made const because it calls
> set_page_address() which is non-const in some
> architectures/configurations.

Right, we store the address in page->virtual.

It's interesting that kunmap() won't set that field to NULL. That seems 
to happen in flush_all_zero_pkmaps(), and we seem to flush during kmap() 
when there are no empty slots left.
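The distinction can be sketched with mock types (hypothetical stand-ins,
not the real kernel definitions): kmap() must stay non-const because it may
store into page->virtual via set_page_address(), while kunmap() as
currently implemented performs no write-back.

```c
#include <stddef.h>

/* Hypothetical mock of struct page; the real layout is kernel-internal. */
struct page {
	void *virtual;	/* present with CONFIG_HIGHMEM + WANT_PAGE_VIRTUAL */
};

/* set_page_address() writes to the page, so kmap() cannot take const. */
static void set_page_address(struct page *page, void *address)
{
	page->virtual = address;
}

static void *kmap_sketch(struct page *page)
{
	static char slot[4096];		/* stand-in for a pkmap slot */

	set_page_address(page, slot);	/* the write that blocks const */
	return slot;
}

/*
 * kunmap() only reads here; page->virtual is cleared later, in
 * flush_all_zero_pkmaps().  Hence the parameter can be const.
 */
static void kunmap_sketch(const struct page *page)
{
	(void)page;			/* no write-back */
}
```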

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb



* Re: [PATCH v6 00/12] mm: establish const-correctness for pointer parameters
  2025-09-02  6:19 ` Lorenzo Stoakes
@ 2025-09-02  8:12   ` David Hildenbrand
  0 siblings, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2025-09-02  8:12 UTC (permalink / raw)
  To: Lorenzo Stoakes, Max Kellermann
  Cc: akpm, axelrasmussen, yuanchu, willy, hughd, mhocko, linux-kernel,
	linux-mm, Liam.Howlett, vbabka, rppt, surenb, vishal.moola, linux,
	James.Bottomley, deller, agordeev, gerald.schaefer, hca, gor,
	borntraeger, svens, davem, andreas, dave.hansen, luto, peterz,
	tglx, mingo, bp, x86, hpa, chris, jcmvbkbc, viro, brauner, jack,
	weixugc, baolin.wang, rientjes, shakeel.butt, thuth, broonie,
	osalvador, jfalempe, mpe, nysal, linux-arm-kernel, linux-parisc,
	linux-s390, sparclinux, linux-fsdevel

On 02.09.25 08:19, Lorenzo Stoakes wrote:
> On Mon, Sep 01, 2025 at 10:50:09PM +0200, Max Kellermann wrote:
>> For improved const-correctness in the low-level memory-management
>> subsystem, which provides a basis for further const-ification further
>> up the call stack (e.g. filesystems).
> 
> Great, this succinctly expresses what you want!
> 
>>
>> This patch series, split into smaller patches, was initially posted
>> as a single large patch:
>>
>>   https://lore.kernel.org/lkml/20250827192233.447920-1-max.kellermann@ionos.com/
>>
>> I started this work when I tried to constify the Ceph filesystem code,
>> but found that to be impossible because many "mm" functions accept
>> non-const pointers, even though they modify nothing.
> 
> And as Vlasta said, this is great context.

Yes, that's valuable information, and the series is looking lovely now.
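The motivation quoted above, that a read-only filesystem helper cannot take
a const pointer as long as the mm accessors it calls do not, can be
sketched in plain C. The types and helper names below are illustrative
mocks, not the kernel's real definitions.

```c
#include <stdbool.h>

/* Hypothetical mock; not the real struct address_space. */
struct address_space {
	int i_mmap_writable;
};

/* Before the series: non-const parameter even though nothing is written. */
static bool mapping_writably_mapped_old(struct address_space *mapping)
{
	return mapping->i_mmap_writable > 0;
}

/* After the series: the read-only contract is part of the signature. */
static bool mapping_writably_mapped_new(const struct address_space *mapping)
{
	return mapping->i_mmap_writable > 0;
}

/* A filesystem helper that only reads can now take const itself ... */
static bool fs_is_shared_writable(const struct address_space *mapping)
{
	/*
	 * ... but calling the old variant would discard the const
	 * qualifier and draw a compiler warning:
	 *
	 *	return mapping_writably_mapped_old(mapping);
	 */
	return mapping_writably_mapped_new(mapping);
}
```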

-- 
Cheers

David / dhildenb



* Re: [PATCH v6 00/12] mm: establish const-correctness for pointer parameters
  2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
                   ` (13 preceding siblings ...)
  2025-09-02  6:19 ` Lorenzo Stoakes
@ 2025-09-02 10:02 ` Mike Rapoport
  14 siblings, 0 replies; 32+ messages in thread
From: Mike Rapoport @ 2025-09-02 10:02 UTC (permalink / raw)
  To: Max Kellermann
  Cc: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	surenb, vishal.moola, linux, James.Bottomley, deller, agordeev,
	gerald.schaefer, hca, gor, borntraeger, svens, davem, andreas,
	dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa, chris,
	jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang, rientjes,
	shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe, nysal,
	linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On Mon, Sep 01, 2025 at 10:50:09PM +0200, Max Kellermann wrote:
> For improved const-correctness in the low-level memory-management
> subsystem, which provides a basis for further const-ification further
> up the call stack (e.g. filesystems).
> 
> This patch series, split into smaller patches, was initially posted
> as a single large patch:
> 
>  https://lore.kernel.org/lkml/20250827192233.447920-1-max.kellermann@ionos.com/
> 
> I started this work when I tried to constify the Ceph filesystem code,
> but found that to be impossible because many "mm" functions accept
> non-const pointers, even though they modify nothing.
> 
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>

Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

-- 
Sincerely yours,
Mike.


* Re: [PATCH v6 04/12] fs: constify mapping related test functions for improved const-correctness
  2025-09-01 20:50 ` [PATCH v6 04/12] fs: constify mapping related test functions for improved const-correctness Max Kellermann
@ 2025-09-02 10:42   ` Jan Kara
  2025-09-02 10:57   ` Christian Brauner
  1 sibling, 0 replies; 32+ messages in thread
From: Jan Kara @ 2025-09-02 10:42 UTC (permalink / raw)
  To: Max Kellermann
  Cc: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, brauner, jack, weixugc, baolin.wang,
	rientjes, shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe,
	nysal, linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On Mon 01-09-25 22:50:13, Max Kellermann wrote:
> We select certain test functions which invoke either each other,
> functions that are already const-ified, or no further functions.
> 
> It is therefore relatively trivial to const-ify them, which
> provides a basis for further const-ification further up the call
> stack.
> 
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  include/linux/fs.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 3b9f54446db0..0b43edb33be2 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -537,7 +537,7 @@ struct address_space {
>  /*
>   * Returns true if any of the pages in the mapping are marked with the tag.
>   */
> -static inline bool mapping_tagged(struct address_space *mapping, xa_mark_t tag)
> +static inline bool mapping_tagged(const struct address_space *mapping, xa_mark_t tag)
>  {
>  	return xa_marked(&mapping->i_pages, tag);
>  }
> @@ -585,7 +585,7 @@ static inline void i_mmap_assert_write_locked(struct address_space *mapping)
>  /*
>   * Might pages of this file be mapped into userspace?
>   */
> -static inline int mapping_mapped(struct address_space *mapping)
> +static inline int mapping_mapped(const struct address_space *mapping)
>  {
>  	return	!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root);
>  }
> @@ -599,7 +599,7 @@ static inline int mapping_mapped(struct address_space *mapping)
>   * If i_mmap_writable is negative, no new writable mappings are allowed. You
>   * can only deny writable mappings, if none exists right now.
>   */
> -static inline int mapping_writably_mapped(struct address_space *mapping)
> +static inline int mapping_writably_mapped(const struct address_space *mapping)
>  {
>  	return atomic_read(&mapping->i_mmap_writable) > 0;
>  }
> -- 
> 2.47.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH v6 04/12] fs: constify mapping related test functions for improved const-correctness
  2025-09-01 20:50 ` [PATCH v6 04/12] fs: constify mapping related test functions for improved const-correctness Max Kellermann
  2025-09-02 10:42   ` Jan Kara
@ 2025-09-02 10:57   ` Christian Brauner
  1 sibling, 0 replies; 32+ messages in thread
From: Christian Brauner @ 2025-09-02 10:57 UTC (permalink / raw)
  To: Max Kellermann
  Cc: akpm, david, axelrasmussen, yuanchu, willy, hughd, mhocko,
	linux-kernel, linux-mm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, vishal.moola, linux, James.Bottomley, deller,
	agordeev, gerald.schaefer, hca, gor, borntraeger, svens, davem,
	andreas, dave.hansen, luto, peterz, tglx, mingo, bp, x86, hpa,
	chris, jcmvbkbc, viro, jack, weixugc, baolin.wang, rientjes,
	shakeel.butt, thuth, broonie, osalvador, jfalempe, mpe, nysal,
	linux-arm-kernel, linux-parisc, linux-s390, sparclinux,
	linux-fsdevel

On Mon, Sep 01, 2025 at 10:50:13PM +0200, Max Kellermann wrote:
> We select certain test functions which invoke either each other,
> functions that are already const-ified, or no further functions.
> 
> It is therefore relatively trivial to const-ify them, which
> provides a basis for further const-ification further up the call
> stack.
> 
> Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> ---

Reviewed-by: Christian Brauner <brauner@kernel.org>


end of thread, other threads:[~2025-09-02 10:57 UTC | newest]

Thread overview: 32+ messages
2025-09-01 20:50 [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Max Kellermann
2025-09-01 20:50 ` [PATCH v6 01/12] mm: constify shmem related test functions for improved const-correctness Max Kellermann
2025-09-01 20:50 ` [PATCH v6 02/12] mm: constify pagemap related test/getter functions Max Kellermann
2025-09-01 20:50 ` [PATCH v6 03/12] mm: constify zone " Max Kellermann
2025-09-01 20:50 ` [PATCH v6 04/12] fs: constify mapping related test functions for improved const-correctness Max Kellermann
2025-09-02 10:42   ` Jan Kara
2025-09-02 10:57   ` Christian Brauner
2025-09-01 20:50 ` [PATCH v6 05/12] mm: constify process_shares_mm() " Max Kellermann
2025-09-02  8:03   ` David Hildenbrand
2025-09-01 20:50 ` [PATCH v6 06/12] mm, s390: constify mapping related test/getter functions Max Kellermann
2025-09-02  6:13   ` Lorenzo Stoakes
2025-09-02  8:04   ` David Hildenbrand
2025-09-01 20:50 ` [PATCH v6 07/12] parisc: constify mmap_upper_limit() parameter Max Kellermann
2025-09-02  6:13   ` Lorenzo Stoakes
2025-09-02  8:04   ` David Hildenbrand
2025-09-01 20:50 ` [PATCH v6 08/12] mm: constify arch_pick_mmap_layout() for improved const-correctness Max Kellermann
2025-09-02  6:15   ` Lorenzo Stoakes
2025-09-02  8:05   ` David Hildenbrand
2025-09-01 20:50 ` [PATCH v6 09/12] mm: constify ptdesc_pmd_pts_count() and folio_get_private() Max Kellermann
2025-09-01 20:50 ` [PATCH v6 10/12] mm: constify various inline functions for improved const-correctness Max Kellermann
2025-09-02  6:16   ` Lorenzo Stoakes
2025-09-02  8:05   ` David Hildenbrand
2025-09-01 20:50 ` [PATCH v6 11/12] mm: constify assert/test functions in mm.h Max Kellermann
2025-09-02  6:17   ` Lorenzo Stoakes
2025-09-02  8:06   ` David Hildenbrand
2025-09-01 20:50 ` [PATCH v6 12/12] mm: constify highmem related functions for improved const-correctness Max Kellermann
2025-09-02  6:17   ` Lorenzo Stoakes
2025-09-02  8:11   ` David Hildenbrand
2025-09-01 21:34 ` [PATCH v6 00/12] mm: establish const-correctness for pointer parameters Vlastimil Babka
2025-09-02  6:19 ` Lorenzo Stoakes
2025-09-02  8:12   ` David Hildenbrand
2025-09-02 10:02 ` Mike Rapoport
