From: Max Kellermann <max.kellermann@ionos.com>
To: akpm@linux-foundation.org, david@redhat.com, axelrasmussen@google.com,
	yuanchu@google.com, willy@infradead.org, hughd@google.com,
	mhocko@suse.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@suse.cz,
	rppt@kernel.org, surenb@google.com, vishal.moola@gmail.com,
	linux@armlinux.org.uk, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, agordeev@linux.ibm.com, gerald.schaefer@linux.ibm.com,
	hca@linux.ibm.com, gor@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, davem@davemloft.net, andreas@gaisler.com,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
	hpa@zytor.com, chris@zankel.net, jcmvbkbc@gmail.com,
	viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
	weixugc@google.com, baolin.wang@linux.alibaba.com,
	rientjes@google.com, shakeel.butt@linux.dev,
	max.kellermann@ionos.com, thuth@redhat.com, broonie@kernel.org,
	osalvador@suse.de, jfalempe@redhat.com, mpe@ellerman.id.au,
	nysal@linux.ibm.com, linux-arm-kernel@lists.infradead.org,
	linux-parisc@vger.kernel.org, linux-s390@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v5 02/12] mm: constify pagemap related test functions for
 improved const-correctness
Date: Mon, 1 Sep 2025 14:30:18 +0200
Message-ID: <20250901123028.3383461-3-max.kellermann@ionos.com>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250901123028.3383461-1-max.kellermann@ionos.com>
References: <20250901123028.3383461-1-max.kellermann@ionos.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We select test functions that invoke only each other, functions that
are already const-ified, or no further functions at all.  They are
therefore trivial to const-ify, and doing so provides a basis for
const-ifying callers further up the stack.
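
For illustration, a read-only caller can itself take a const pointer
once these helpers do.  A minimal hypothetical sketch (not part of this
patch; mapping_is_idle is an invented name):

	/* example only: assumes <linux/pagemap.h> is included */
	static inline bool mapping_is_idle(const struct address_space *mapping)
	{
		/* compiles without casting away const only because
		 * mapping_empty() and mapping_exiting() accept
		 * const pointers after this patch */
		return mapping_empty(mapping) && !mapping_exiting(mapping);
	}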
Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/pagemap.h | 57 +++++++++++++++++++++--------------------
 1 file changed, 29 insertions(+), 28 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a3e16d74792f..1d35f9e1416e 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -140,7 +140,7 @@ static inline int inode_drain_writes(struct inode *inode)
 	return filemap_write_and_wait(inode->i_mapping);
 }
 
-static inline bool mapping_empty(struct address_space *mapping)
+static inline bool mapping_empty(const struct address_space *const mapping)
 {
 	return xa_empty(&mapping->i_pages);
 }
@@ -166,7 +166,7 @@ static inline bool mapping_empty(struct address_space *mapping)
  * refcount and the referenced bit, which will be elevated or set in
  * the process of adding new cache pages to an inode.
  */
-static inline bool mapping_shrinkable(struct address_space *mapping)
+static inline bool mapping_shrinkable(const struct address_space *const mapping)
 {
 	void *head;
 
@@ -267,7 +267,7 @@ static inline void mapping_clear_unevictable(struct address_space *mapping)
 	clear_bit(AS_UNEVICTABLE, &mapping->flags);
 }
 
-static inline bool mapping_unevictable(struct address_space *mapping)
+static inline bool mapping_unevictable(const struct address_space *const mapping)
 {
 	return mapping && test_bit(AS_UNEVICTABLE, &mapping->flags);
 }
@@ -277,7 +277,7 @@ static inline void mapping_set_exiting(struct address_space *mapping)
 	set_bit(AS_EXITING, &mapping->flags);
 }
 
-static inline int mapping_exiting(struct address_space *mapping)
+static inline int mapping_exiting(const struct address_space *const mapping)
 {
 	return test_bit(AS_EXITING, &mapping->flags);
 }
@@ -287,7 +287,7 @@ static inline void mapping_set_no_writeback_tags(struct address_space *mapping)
 	set_bit(AS_NO_WRITEBACK_TAGS, &mapping->flags);
 }
 
-static inline int mapping_use_writeback_tags(struct address_space *mapping)
+static inline int mapping_use_writeback_tags(const struct address_space *const mapping)
 {
 	return !test_bit(AS_NO_WRITEBACK_TAGS, &mapping->flags);
 }
@@ -333,7 +333,7 @@ static inline void mapping_set_inaccessible(struct address_space *mapping)
 	set_bit(AS_INACCESSIBLE, &mapping->flags);
 }
 
-static inline bool mapping_inaccessible(struct address_space *mapping)
+static inline bool mapping_inaccessible(const struct address_space *const mapping)
 {
 	return test_bit(AS_INACCESSIBLE, &mapping->flags);
 }
@@ -343,18 +343,18 @@ static inline void mapping_set_writeback_may_deadlock_on_reclaim(struct address_
 	set_bit(AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM, &mapping->flags);
 }
 
-static inline bool mapping_writeback_may_deadlock_on_reclaim(struct address_space *mapping)
+static inline bool mapping_writeback_may_deadlock_on_reclaim(const struct address_space *const mapping)
 {
 	return test_bit(AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM, &mapping->flags);
 }
 
-static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
+static inline gfp_t mapping_gfp_mask(const struct address_space *const mapping)
 {
 	return mapping->gfp_mask;
 }
 
 /* Restricts the given gfp_mask to what the mapping allows. */
-static inline gfp_t mapping_gfp_constraint(struct address_space *mapping,
+static inline gfp_t mapping_gfp_constraint(const struct address_space *mapping,
 		gfp_t gfp_mask)
 {
 	return mapping_gfp_mask(mapping) & gfp_mask;
@@ -477,13 +477,13 @@ mapping_min_folio_order(const struct address_space *mapping)
 }
 
 static inline unsigned long
-mapping_min_folio_nrpages(struct address_space *mapping)
+mapping_min_folio_nrpages(const struct address_space *const mapping)
 {
 	return 1UL << mapping_min_folio_order(mapping);
 }
 
 static inline unsigned long
-mapping_min_folio_nrbytes(struct address_space *mapping)
+mapping_min_folio_nrbytes(const struct address_space *const mapping)
 {
 	return mapping_min_folio_nrpages(mapping) << PAGE_SHIFT;
 }
@@ -497,7 +497,7 @@ mapping_min_folio_nrbytes(struct address_space *mapping)
  * new folio to the page cache and need to know what index to give it,
  * call this function.
  */
-static inline pgoff_t mapping_align_index(struct address_space *mapping,
+static inline pgoff_t mapping_align_index(const struct address_space *const mapping,
 		pgoff_t index)
 {
 	return round_down(index, mapping_min_folio_nrpages(mapping));
@@ -507,7 +507,7 @@ static inline pgoff_t mapping_align_index(struct address_space *mapping,
 /*
  * Large folio support currently depends on THP.  These dependencies are
  * being worked on but are not yet fixed.
  */
-static inline bool mapping_large_folio_support(struct address_space *mapping)
+static inline bool mapping_large_folio_support(const struct address_space *mapping)
 {
 	/* AS_FOLIO_ORDER is only reasonable for pagecache folios */
 	VM_WARN_ONCE((unsigned long)mapping & FOLIO_MAPPING_ANON,
@@ -522,7 +522,7 @@ static inline size_t mapping_max_folio_size(const struct address_space *mapping)
 	return PAGE_SIZE << mapping_max_folio_order(mapping);
 }
 
-static inline int filemap_nr_thps(struct address_space *mapping)
+static inline int filemap_nr_thps(const struct address_space *const mapping)
 {
 #ifdef CONFIG_READ_ONLY_THP_FOR_FS
 	return atomic_read(&mapping->nr_thps);
@@ -936,7 +936,7 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
  *
  * Return: The index of the folio which follows this folio in the file.
  */
-static inline pgoff_t folio_next_index(struct folio *folio)
+static inline pgoff_t folio_next_index(const struct folio *const folio)
 {
 	return folio->index + folio_nr_pages(folio);
 }
@@ -965,7 +965,7 @@ static inline struct page *folio_file_page(struct folio *folio, pgoff_t index)
  * e.g., shmem did not move this folio to the swap cache.
  * Return: true or false.
  */
-static inline bool folio_contains(struct folio *folio, pgoff_t index)
+static inline bool folio_contains(const struct folio *const folio, pgoff_t index)
 {
 	VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio);
 	return index - folio->index < folio_nr_pages(folio);
@@ -1042,13 +1042,13 @@ static inline loff_t page_offset(struct page *page)
 /*
  * Get the offset in PAGE_SIZE (even for hugetlb folios).
  */
-static inline pgoff_t folio_pgoff(struct folio *folio)
+static inline pgoff_t folio_pgoff(const struct folio *const folio)
 {
 	return folio->index;
 }
 
-static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
-					unsigned long address)
+static inline pgoff_t linear_page_index(const struct vm_area_struct *const vma,
+					const unsigned long address)
 {
 	pgoff_t pgoff;
 	pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
@@ -1468,7 +1468,7 @@ static inline unsigned int __readahead_batch(struct readahead_control *rac,
  * readahead_pos - The byte offset into the file of this readahead request.
 * @rac: The readahead request.
  */
-static inline loff_t readahead_pos(struct readahead_control *rac)
+static inline loff_t readahead_pos(const struct readahead_control *const rac)
 {
 	return (loff_t)rac->_index * PAGE_SIZE;
 }
@@ -1477,7 +1477,7 @@ static inline loff_t readahead_pos(struct readahead_control *rac)
  * readahead_length - The number of bytes in this readahead request.
 * @rac: The readahead request.
  */
-static inline size_t readahead_length(struct readahead_control *rac)
+static inline size_t readahead_length(const struct readahead_control *const rac)
 {
 	return rac->_nr_pages * PAGE_SIZE;
 }
@@ -1486,7 +1486,7 @@ static inline size_t readahead_length(struct readahead_control *rac)
  * readahead_index - The index of the first page in this readahead request.
 * @rac: The readahead request.
  */
-static inline pgoff_t readahead_index(struct readahead_control *rac)
+static inline pgoff_t readahead_index(const struct readahead_control *const rac)
 {
 	return rac->_index;
 }
@@ -1495,7 +1495,7 @@ static inline pgoff_t readahead_index(struct readahead_control *rac)
  * readahead_count - The number of pages in this readahead request.
 * @rac: The readahead request.
  */
-static inline unsigned int readahead_count(struct readahead_control *rac)
+static inline unsigned int readahead_count(const struct readahead_control *const rac)
 {
 	return rac->_nr_pages;
 }
@@ -1504,12 +1504,12 @@ static inline unsigned int readahead_count(struct readahead_control *rac)
  * readahead_batch_length - The number of bytes in the current batch.
 * @rac: The readahead request.
  */
-static inline size_t readahead_batch_length(struct readahead_control *rac)
+static inline size_t readahead_batch_length(const struct readahead_control *const rac)
 {
 	return rac->_batch_count * PAGE_SIZE;
 }
 
-static inline unsigned long dir_pages(struct inode *inode)
+static inline unsigned long dir_pages(const struct inode *const inode)
 {
 	return (unsigned long)(inode->i_size + PAGE_SIZE - 1) >>
 			       PAGE_SHIFT;
@@ -1523,8 +1523,8 @@ static inline unsigned long dir_pages(struct inode *inode)
  * Return: the number of bytes in the folio up to EOF,
  * or -EFAULT if the folio was truncated.
  */
-static inline ssize_t folio_mkwrite_check_truncate(struct folio *folio,
-						struct inode *inode)
+static inline ssize_t folio_mkwrite_check_truncate(const struct folio *const folio,
+						const struct inode *const inode)
 {
 	loff_t size = i_size_read(inode);
 	pgoff_t index = size >> PAGE_SHIFT;
@@ -1555,7 +1555,8 @@ static inline ssize_t folio_mkwrite_check_truncate(struct folio *folio,
  * Return: The number of filesystem blocks covered by this folio.
  */
 static inline
-unsigned int i_blocks_per_folio(struct inode *inode, struct folio *folio)
+unsigned int i_blocks_per_folio(const struct inode *const inode,
+				const struct folio *const folio)
 {
 	return folio_size(folio) >> inode->i_blkbits;
 }
-- 
2.47.2