From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 03 Jun 2020 16:01:18 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: a.sahrawat@samsung.com, akpm@linux-foundation.org, linux-mm@kvack.org,
 maninder1.s@samsung.com, mgorman@suse.de, mhocko@suse.com,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org,
 v.narang@samsung.com, vbabka@suse.cz
Subject: [patch 081/131] mm/vmscan.c: change prototype for shrink_page_list
Message-ID: <20200603230118.R13utbptC%akpm@linux-foundation.org>
In-Reply-To: <20200603155549.e041363450869eaae4c7f05b@linux-foundation.org>
User-Agent: s-nail v14.8.16
Sender: owner-linux-mm@kvack.org
Precedence: bulk

From: Maninder Singh <maninder1.s@samsung.com>
Subject: mm/vmscan.c: change prototype for shrink_page_list

Commit 3c710c1ad11b ("mm, vmscan: extract shrink_page_list reclaim counters
into a struct") changed the data type used by the function, so change the
return type of the function and its callers to match.

Link: http://lkml.kernel.org/r/1588168259-25604-1-git-send-email-maninder1.s@samsung.com
Signed-off-by: Vaneet Narang <v.narang@samsung.com>
Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Amit Sahrawat <a.sahrawat@samsung.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/internal.h   |    2 +-
 mm/page_alloc.c |    2 +-
 mm/vmscan.c     |   24 ++++++++++++------------
 3 files changed, 14 insertions(+), 14 deletions(-)

--- a/mm/internal.h~mm-vmscanc-change-prototype-for-shrink_page_list
+++ a/mm/internal.h
@@ -538,7 +538,7 @@ extern unsigned long __must_check vm_mm
 			unsigned long, unsigned long);
 
 extern void set_pageblock_order(void);
-unsigned long reclaim_clean_pages_from_list(struct zone *zone,
+unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 					    struct list_head *page_list);
 /* The ALLOC_WMARK bits are used as an index to zone->watermark */
 #define ALLOC_WMARK_MIN		WMARK_MIN
--- a/mm/page_alloc.c~mm-vmscanc-change-prototype-for-shrink_page_list
+++ a/mm/page_alloc.c
@@ -8355,7 +8355,7 @@ static int __alloc_contig_migrate_range(
 					unsigned long start, unsigned long end)
 {
 	/* This function is based on compact_zone() from compaction.c. */
-	unsigned long nr_reclaimed;
+	unsigned int nr_reclaimed;
 	unsigned long pfn = start;
 	unsigned int tries = 0;
 	int ret = 0;
--- a/mm/vmscan.c~mm-vmscanc-change-prototype-for-shrink_page_list
+++ a/mm/vmscan.c
@@ -1066,17 +1066,17 @@ static void page_check_dirty_writeback(s
 /*
  * shrink_page_list() returns the number of reclaimed pages
  */
-static unsigned long shrink_page_list(struct list_head *page_list,
-				      struct pglist_data *pgdat,
-				      struct scan_control *sc,
-				      enum ttu_flags ttu_flags,
-				      struct reclaim_stat *stat,
-				      bool ignore_references)
+static unsigned int shrink_page_list(struct list_head *page_list,
+				     struct pglist_data *pgdat,
+				     struct scan_control *sc,
+				     enum ttu_flags ttu_flags,
+				     struct reclaim_stat *stat,
+				     bool ignore_references)
 {
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
-	unsigned nr_reclaimed = 0;
-	unsigned pgactivate = 0;
+	unsigned int nr_reclaimed = 0;
+	unsigned int pgactivate = 0;
 
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
@@ -1487,7 +1487,7 @@ keep:
 	return nr_reclaimed;
 }
 
-unsigned long reclaim_clean_pages_from_list(struct zone *zone,
+unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 					    struct list_head *page_list)
 {
 	struct scan_control sc = {
@@ -1496,7 +1496,7 @@ unsigned long reclaim_clean_pages_from_l
 		.may_unmap = 1,
 	};
 	struct reclaim_stat stat;
-	unsigned long nr_reclaimed;
+	unsigned int nr_reclaimed;
 	struct page *page, *next;
 	LIST_HEAD(clean_pages);
 
@@ -1910,7 +1910,7 @@ shrink_inactive_list(unsigned long nr_to
 {
 	LIST_HEAD(page_list);
 	unsigned long nr_scanned;
-	unsigned long nr_reclaimed = 0;
+	unsigned int nr_reclaimed = 0;
 	unsigned long nr_taken;
 	struct reclaim_stat stat;
 	int file = is_file_lru(lru);
@@ -2106,7 +2106,7 @@ static void shrink_active_list(unsigned
 unsigned long reclaim_pages(struct list_head *page_list)
 {
 	int nid = NUMA_NO_NODE;
-	unsigned long nr_reclaimed = 0;
+	unsigned int nr_reclaimed = 0;
 	LIST_HEAD(node_page_list);
 	struct reclaim_stat dummy_stat;
 	struct page *page;
_
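
[Not part of the patch: a minimal stand-alone C sketch with hypothetical names
(reclaim_stat_demo, demo_shrink_page_list) illustrating the type-consistency
point in the changelog. Once the per-list counters are unsigned int, a function
that only aggregates them gains nothing from an unsigned long return type, and
callers can hold the result in a matching unsigned int without truncation.]

/*
 * Hypothetical demo, not kernel code: the callee derives its result from
 * unsigned int counters, so unsigned int is the natural return type and
 * the caller's variable can use the same width.
 */
#include <stdio.h>

struct reclaim_stat_demo {		/* stands in for struct reclaim_stat */
	unsigned int nr_dirty;
	unsigned int nr_writeback;
};

/* stands in for shrink_page_list(): result is a sum of unsigned int counters */
static unsigned int demo_shrink_page_list(struct reclaim_stat_demo *stat)
{
	stat->nr_dirty = 3;
	stat->nr_writeback = 1;
	return stat->nr_dirty + stat->nr_writeback;	/* already fits in unsigned int */
}

int main(void)
{
	struct reclaim_stat_demo stat;
	/* caller uses the same width as the callee, mirroring the patch */
	unsigned int nr_reclaimed = demo_shrink_page_list(&stat);

	printf("reclaimed %u pages\n", nr_reclaimed);
	return 0;
}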