* Re: [PATCH v1 3/4] mm: account nr_isolated_xxx in [isolate|putback]_lru_page
From: Hillf Danton @ 2019-06-04  4:20 UTC
To: Minchan Kim
Cc: Andrew Morton, linux-mm, LKML, linux-api@vger.kernel.org, Michal Hocko,
    Johannes Weiner, Tim Murray, Joel Fernandes, Suren Baghdasaryan,
    Daniel Colascione, Shakeel Butt, Sonny Rao, Brian Geffon,
    jannh@google.com, oleg@redhat.com, christian@brauner.io,
    oleksandr@redhat.com, hdanton@sina.com

Hi Minchan

On Mon, 3 Jun 2019 13:37:27 +0800 Minchan Kim wrote:
> @@ -1181,10 +1179,17 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
> 		return -ENOMEM;
>
> 	if (page_count(page) == 1) {
> +		bool is_lru = !__PageMovable(page);
> +
> 		/* page was freed from under us. So we are done. */
> 		ClearPageActive(page);
> 		ClearPageUnevictable(page);
> -		if (unlikely(__PageMovable(page))) {
> +		if (likely(is_lru))
> +			mod_node_page_state(page_pgdat(page),
> +						NR_ISOLATED_ANON +
> +						page_is_file_cache(page),
> +						hpage_nr_pages(page));
> +		else {
> 			lock_page(page);
> 			if (!PageMovable(page))
> 				__ClearPageIsolated(page);

As this page goes down this path only through the MIGRATEPAGE_SUCCESS
branches, with no putback ahead of it, the current code is, I think, doing
the right thing here to keep the isolated stats balanced.

> @@ -1210,15 +1215,6 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
> 		 * restored.
> 		 */
> 		list_del(&page->lru);
> -
> -		/*
> -		 * Compaction can migrate also non-LRU pages which are
> -		 * not accounted to NR_ISOLATED_*. They can be recognized
> -		 * as __PageMovable
> -		 */
> -		if (likely(!__PageMovable(page)))
> -			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
> -					page_is_file_cache(page), -hpage_nr_pages(page));
> 	}

BR
Hillf
* Re: [PATCH v1 3/4] mm: account nr_isolated_xxx in [isolate|putback]_lru_page
From: Minchan Kim @ 2019-06-04  4:51 UTC
To: Hillf Danton
Cc: Andrew Morton, linux-mm, LKML, linux-api@vger.kernel.org, Michal Hocko,
    Johannes Weiner, Tim Murray, Joel Fernandes, Suren Baghdasaryan,
    Daniel Colascione, Shakeel Butt, Sonny Rao, Brian Geffon,
    jannh@google.com, oleg@redhat.com, christian@brauner.io,
    oleksandr@redhat.com

Hi Hillf,

On Tue, Jun 04, 2019 at 12:20:47PM +0800, Hillf Danton wrote:
>
> Hi Minchan
>
> On Mon, 3 Jun 2019 13:37:27 +0800 Minchan Kim wrote:
> > @@ -1181,10 +1179,17 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
> > 		return -ENOMEM;
> >
> > 	if (page_count(page) == 1) {
> > +		bool is_lru = !__PageMovable(page);
> > +
> > 		/* page was freed from under us. So we are done. */
> > 		ClearPageActive(page);
> > 		ClearPageUnevictable(page);
> > -		if (unlikely(__PageMovable(page))) {
> > +		if (likely(is_lru))
> > +			mod_node_page_state(page_pgdat(page),
> > +						NR_ISOLATED_ANON +
> > +						page_is_file_cache(page),
> > +						hpage_nr_pages(page));

That should be -hpage_nr_pages(page). It's a bug.

> > +		else {
> > 			lock_page(page);
> > 			if (!PageMovable(page))
> > 				__ClearPageIsolated(page);
>
> As this page goes down this path only through the MIGRATEPAGE_SUCCESS
> branches, with no putback ahead of it, the current code is, I think, doing
> the right thing here to keep the isolated stats balanced.

I guess that's the one you pointed out, right? Thanks for the review!

> > @@ -1210,15 +1215,6 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
> > 		 * restored.
> > 		 */
> > 		list_del(&page->lru);
> > -
> > -		/*
> > -		 * Compaction can migrate also non-LRU pages which are
> > -		 * not accounted to NR_ISOLATED_*. They can be recognized
> > -		 * as __PageMovable
> > -		 */
> > -		if (likely(!__PageMovable(page)))
> > -			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
> > -					page_is_file_cache(page), -hpage_nr_pages(page));
> > 	}
>
> BR
> Hillf
* [PATCH v1 0/4] Introduce MADV_COLD and MADV_PAGEOUT
From: Minchan Kim @ 2019-06-03  5:36 UTC
To: Andrew Morton
Cc: linux-mm, LKML, linux-api, Michal Hocko, Johannes Weiner, Tim Murray,
    Joel Fernandes, Suren Baghdasaryan, Daniel Colascione, Shakeel Butt,
    Sonny Rao, Brian Geffon, jannh, oleg, christian, oleksandr, hdanton,
    Minchan Kim

This patch is part of a previous series:
https://lore.kernel.org/lkml/20190531064313.193437-1-minchan@kernel.org/T/#u

Originally, it was created for the external madvise hinting feature:
https://lkml.org/lkml/2019/5/31/463

Michal wanted to separate that discussion from the external hinting
interface, so this patchset includes only the first part of my entire
patchset - introducing the MADV_COLD and MADV_PAGEOUT hints for madvise.
However, I keep the entire description below so it is easier for others to
understand why this kind of hint was born.

Thanks.

This patchset is against next-20190530.

Below is the description of the previous entire patchset.

================= &< =====================

- Background

The Android terminology used for forking a new process and starting an app
from scratch is a cold start, while resuming an existing app is a hot start.
While we continually try to improve the performance of cold starts, hot
starts will always be significantly less power hungry as well as faster, so
we are trying to make a hot start more likely than a cold start.

To increase hot starts, Android userspace manages the order in which apps
should be killed in a process called ActivityManagerService.
ActivityManagerService tracks every Android app or service that the user
could be interacting with at any time and translates that into a ranked list
for lmkd (the low memory killer daemon). They are likely to be killed by
lmkd if the system has to reclaim memory. In that sense they are similar to
entries in any other cache. Those apps are kept alive for opportunistic
performance improvements, but those improvements will vary based on the
memory requirements of individual workloads.

- Problem

Naturally, cached apps were dominant consumers of memory on the system.
However, they were not significant consumers of swap even though they are
good candidates for swap. Under investigation, swapping out only begins once
the low zone watermark is hit and kswapd wakes up, but the overall
allocation rate in the system might trip lmkd thresholds and cause a cached
process to be killed (we measured the performance of swapping out vs.
zapping the memory by killing a process; unsurprisingly, zapping is 10x
faster even though we use zram, which is much faster than real storage), so
a kill from lmkd will often satisfy the high zone watermark, resulting in
very few pages actually being moved to swap.

- Approach

The approach we chose was to use a new interface to allow userspace to
proactively reclaim entire processes by leveraging platform information.
This allowed us to bypass the inaccuracy of the kernel's LRUs for pages that
are known to be cold from userspace and to avoid races with lmkd by
reclaiming apps as soon as they entered the cached state. Additionally, it
could give the platform many chances to use its information to optimize
memory efficiency.

To achieve the goal, the patchset introduces two new options for madvise.
One is MADV_COLD, which will deactivate activated pages, and the other is
MADV_PAGEOUT, which will reclaim private pages instantly. These new options
complement MADV_DONTNEED and MADV_FREE by adding non-destructive ways to
gain some free memory space. MADV_PAGEOUT is similar to MADV_DONTNEED in
that it hints the kernel that the memory region is not currently needed and
should be reclaimed immediately; MADV_COLD is similar to MADV_FREE in that
it hints the kernel that the memory region is not currently needed and
should be reclaimed when memory pressure rises. (A minimal usage sketch of
these hints follows the experiment numbers below.)

This approach is similar in spirit to madvise(MADV_WONTNEED), but the
information required to make the reclaim decision is not known to the app.
Instead, it is known to a centralized userspace daemon, and that daemon must
be able to initiate reclaim on its own without any app involvement.

To solve that concern, this patch introduces a new syscall -

	struct pr_madvise_param {
		int size;		/* the size of this structure */
		int cookie;		/* reserved to support atomicity */
		int nr_elem;		/* count of below array fields */
		int __user *hints;	/* hints for each range */
		/* to store result of each operation */
		const struct iovec __user *results;
		/* input address ranges */
		const struct iovec __user *ranges;
	};

	int process_madvise(int pidfd, struct pr_madvise_param *u_param,
				unsigned long flags);

The syscall takes a pidfd to give hints to an external process and provides
a pair of result/ranges vector arguments so that it can give several hints,
one per address range, all at once. It also has a cookie variable to support
atomicity of the API for address range operations. IOW, if the target
process has changed its address space since the monitor process parsed the
address ranges via map_files or maps, the API can detect the race and cancel
the entire address space operation. That part is not implemented yet; Daniel
Colascione suggested an idea (please read the description in patch [6/6])
and this patchset adds the cookie variable for the future.

- Experiment

We did a bunch of testing with several hundreds of real users, not an
artificial benchmark, on Android. We saw about a 17% reduction in cold
starts without any significant battery or app startup latency issues. And
with an artificial benchmark which launches and switches apps, we saw on
average a 7% app launching improvement, 18% fewer lmkd kills and good stats
from vmstat.

A is vanilla and B is process_madvise.
                                       A            B        delta   ratio(%)
allocstall_dma                         0            0            0       0.00
allocstall_movable                  1464          457        -1007     -69.00
allocstall_normal                 263210       190763       -72447     -28.00
allocstall_total                  264674       191220       -73454     -28.00
compact_daemon_wake                26912        25294        -1618      -7.00
compact_fail                       17885        14151        -3734     -21.00
compact_free_scanned          4204766409   3835994922   -368771487      -9.00
compact_isolated                 3446484      2967618      -478866     -14.00
compact_migrate_scanned       1621336411   1324695710   -296640701     -19.00
compact_stall                      19387        15343        -4044     -21.00
compact_success                     1502         1192         -310     -21.00
kswapd_high_wmark_hit_quickly        234          184          -50     -22.00
kswapd_inodesteal                 221635       233093        11458       5.00
kswapd_low_wmark_hit_quickly       66065        54009       -12056     -19.00
nr_dirtied                        259934       296476        36542      14.00
nr_vmscan_immediate_reclaim         2587         2356         -231      -9.00
nr_vmscan_write                  1274232      2661733      1387501     108.00
nr_written                       1514060      2937560      1423500      94.00
pageoutrun                         67561        55133       -12428     -19.00
pgactivate                       2335060      1984882      -350178     -15.00
pgalloc_dma                     13743011     14096463       353452       2.00
pgalloc_movable                        0            0            0       0.00
pgalloc_normal                  18742440     16802065     -1940375     -11.00
pgalloc_total                   32485451     30898528     -1586923      -5.00
pgdeactivate                     4262210      2930670     -1331540     -32.00
pgfault                         30812334     31085065       272731       0.00
pgfree                          33553970     31765164     -1788806      -6.00
pginodesteal                       33411        15084       -18327     -55.00
pglazyfreed                            0            0            0       0.00
pgmajfault                        551312      1508299       956987     173.00
pgmigrate_fail                     43927        29330       -14597     -34.00
pgmigrate_success                1399851      1203922      -195929     -14.00
pgpgin                          24141776     19032156     -5109620     -22.00
pgpgout                           959344      1103316       143972      15.00
pgpgoutclean                     4639732      3765868      -873864     -19.00
pgrefill                         4884560      3006938     -1877622     -39.00
pgrotated                          37828        25897       -11931     -32.00
pgscan_direct                    1456037       957567      -498470     -35.00
pgscan_direct_throttle                 0            0            0       0.00
pgscan_kswapd                    6667767      5047360     -1620407     -25.00
pgscan_total                     8123804      6004927     -2118877     -27.00
pgskip_dma                             0            0            0       0.00
pgskip_movable                         0            0            0       0.00
pgskip_normal                      14907        25382        10475      70.00
pgskip_total                       14907        25382        10475      70.00
pgsteal_direct                   1118986       690215      -428771     -39.00
pgsteal_kswapd                   4750223      3657107     -1093116     -24.00
pgsteal_total                    5869209      4347322     -1521887     -26.00
pswpin                            417613      1392647       975034     233.00
pswpout                          1274224      2661731      1387507     108.00
slabs_scanned                   13686905     10807200     -2879705     -22.00
workingset_activate               668966       569444       -99522     -15.00
workingset_nodereclaim             38957        32621        -6336     -17.00
workingset_refault               2816795      2179782      -637013     -23.00
workingset_restore                294320       168601      -125719     -43.00

pgmajfault is increased by 173% because swap-in is increased by 200% by the
process_madvise hint. However, a swap read backed by zram is much cheaper
than file IO from a performance point of view, and an app hot start via
swap-in is also cheaper than a cold start from the beginning of the app,
which needs a lot of IO from storage plus initialization steps.

Brian Geffon in the ChromeOS team ran an experiment with process_madvise(2).
Quote from him: "What I found is that by using process_madvise after a tab
has been back grounded for more than 45 seconds reduced the average tab
switch times by 25%! This is a huge result and very obvious validation that
process_madvise hints works well for the ChromeOS use case."

This patchset is against next-20190530.
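For illustration, here is a minimal userspace sketch of how the two new
hints are meant to be used. It is only a sketch, not part of the series: the
MADV_COLD/MADV_PAGEOUT values below are the ones that were eventually merged
upstream and may not match this v1 posting, so take the real definitions
from the uapi headers of a tree that carries these patches. The key point is
that both hints are non-destructive: unlike MADV_DONTNEED, the contents of
the range stay valid and are merely deactivated or paged out.

	/* Hypothetical example; the MADV_COLD/MADV_PAGEOUT values are assumptions. */
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#ifndef MADV_COLD
	#define MADV_COLD	20	/* deactivate: reclaim on memory pressure */
	#endif
	#ifndef MADV_PAGEOUT
	#define MADV_PAGEOUT	21	/* reclaim (page out) immediately */
	#endif

	int main(void)
	{
		size_t len = 64UL << 12;	/* 64 pages */
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return 1;
		memset(buf, 1, len);		/* fault the pages in */

		/* Mark the range cold; it will be reclaimed under pressure. */
		if (madvise(buf, len, MADV_COLD))
			perror("madvise(MADV_COLD)");

		/* Ask for the private pages to be reclaimed right away. */
		if (madvise(buf, len, MADV_PAGEOUT))
			perror("madvise(MADV_PAGEOUT)");

		/* Data is still there; touching it faults/swaps it back in. */
		printf("buf[0] after hints: %d\n", buf[0]);

		munmap(buf, len);
		return 0;
	}
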
Minchan Kim (4):
  mm: introduce MADV_COLD
  mm: change PAGEREF_RECLAIM_CLEAN with PAGE_REFRECLAIM
  mm: account nr_isolated_xxx in [isolate|putback]_lru_page
  mm: introduce MADV_PAGEOUT

 include/linux/page-flags.h             |   1 +
 include/linux/page_idle.h              |  15 ++
 include/linux/swap.h                   |   2 +
 include/uapi/asm-generic/mman-common.h |   2 +
 mm/compaction.c                        |   2 -
 mm/gup.c                               |   7 +-
 mm/internal.h                          |   2 +-
 mm/khugepaged.c                        |   3 -
 mm/madvise.c                           | 241 ++++++++++++++++++++++++-
 mm/memory-failure.c                    |   3 -
 mm/memory_hotplug.c                    |   4 -
 mm/mempolicy.c                         |   6 +-
 mm/migrate.c                           |  37 +---
 mm/oom_kill.c                          |   2 +-
 mm/swap.c                              |  43 +++++
 mm/vmscan.c                            |  62 ++++++-
 16 files changed, 367 insertions(+), 65 deletions(-)

-- 
2.22.0.rc1.311.g5d7573a151-goog
* [PATCH v1 3/4] mm: account nr_isolated_xxx in [isolate|putback]_lru_page
From: Minchan Kim @ 2019-06-03  5:36 UTC
To: Andrew Morton
Cc: linux-mm, LKML, linux-api, Michal Hocko, Johannes Weiner, Tim Murray,
    Joel Fernandes, Suren Baghdasaryan, Daniel Colascione, Shakeel Butt,
    Sonny Rao, Brian Geffon, jannh, oleg, christian, oleksandr, hdanton,
    Minchan Kim

The isolated-page counters are percpu counters, so batching the updates
would not be a huge gain. Rather than complicating the code to batch them,
let's make it more straightforward by adding the counting logic to the
[isolate|putback]_lru_page API.

Link: http://lkml.kernel.org/r/20190531165927.GA20067@cmpxchg.org
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/compaction.c     |  2 --
 mm/gup.c            |  7 +------
 mm/khugepaged.c     |  3 ---
 mm/memory-failure.c |  3 ---
 mm/memory_hotplug.c |  4 ----
 mm/mempolicy.c      |  6 +-----
 mm/migrate.c        | 37 ++++++++-----------------------------
 mm/vmscan.c         | 22 ++++++++++++++++------
 8 files changed, 26 insertions(+), 58 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 9e1b9acb116b..c6591682deda 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -982,8 +982,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		/* Successfully isolated */
 		del_page_from_lru_list(page, lruvec, page_lru(page));
-		inc_node_page_state(page,
-				NR_ISOLATED_ANON + page_is_file_cache(page));
 
 isolate_success:
 		list_add(&page->lru, &cc->migratepages);
diff --git a/mm/gup.c b/mm/gup.c
index 63ac50e48072..2d9a9bc358c7 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1360,13 +1360,8 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 					drain_allow = false;
 				}
 
-				if (!isolate_lru_page(head)) {
+				if (!isolate_lru_page(head))
 					list_add_tail(&head->lru, &cma_page_list);
-					mod_node_page_state(page_pgdat(head),
-							    NR_ISOLATED_ANON +
-							    page_is_file_cache(head),
-							    hpage_nr_pages(head));
-				}
 			}
 		}
 	}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a335f7c1fac4..3359df994fb4 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -503,7 +503,6 @@ void __khugepaged_exit(struct mm_struct *mm)
 
 static void release_pte_page(struct page *page)
 {
-	dec_node_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page));
 	unlock_page(page);
 	putback_lru_page(page);
 }
@@ -602,8 +601,6 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			result = SCAN_DEL_PAGE_LRU;
 			goto out;
 		}
-		inc_node_page_state(page,
-				NR_ISOLATED_ANON + page_is_file_cache(page));
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index bc749265a8f3..2187bad7ceff 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1796,9 +1796,6 @@ static int __soft_offline_page(struct page *page, int flags)
 		 * so use !__PageMovable instead for LRU page's mapping
 		 * cannot have PAGE_MAPPING_MOVABLE.
 		 */
-		if (!__PageMovable(page))
-			inc_node_page_state(page, NR_ISOLATED_ANON +
-						page_is_file_cache(page));
 		list_add(&page->lru, &pagelist);
 		ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
 					MIGRATE_SYNC, MR_MEMORY_FAILURE);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a88c5f334e5a..a41bea24d0c9 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1390,10 +1390,6 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
 		if (!ret) { /* Success */
 			list_add_tail(&page->lru, &source);
-			if (!__PageMovable(page))
-				inc_node_page_state(page, NR_ISOLATED_ANON +
-						    page_is_file_cache(page));
-
 		} else {
 			pr_warn("failed to isolate pfn %lx\n", pfn);
 			dump_page(page, "isolation failed");
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 5b3bf1747c19..cfb0590f69bb 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -948,12 +948,8 @@ static void migrate_page_add(struct page *page, struct list_head *pagelist,
 	 * Avoid migrating a page that is shared with others.
 	 */
 	if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(head) == 1) {
-		if (!isolate_lru_page(head)) {
+		if (!isolate_lru_page(head))
 			list_add_tail(&head->lru, pagelist);
-			mod_node_page_state(page_pgdat(head),
-				NR_ISOLATED_ANON + page_is_file_cache(head),
-				hpage_nr_pages(head));
-		}
 	}
 }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 572b4bc85d76..39b95ba04d3e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -190,8 +190,6 @@ void putback_movable_pages(struct list_head *l)
 			unlock_page(page);
 			put_page(page);
 		} else {
-			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-					page_is_file_cache(page), -hpage_nr_pages(page));
 			putback_lru_page(page);
 		}
 	}
@@ -1181,10 +1179,17 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		return -ENOMEM;
 
 	if (page_count(page) == 1) {
+		bool is_lru = !__PageMovable(page);
+
 		/* page was freed from under us. So we are done. */
 		ClearPageActive(page);
 		ClearPageUnevictable(page);
-		if (unlikely(__PageMovable(page))) {
+		if (likely(is_lru))
+			mod_node_page_state(page_pgdat(page),
+						NR_ISOLATED_ANON +
+						page_is_file_cache(page),
+						hpage_nr_pages(page));
+		else {
 			lock_page(page);
 			if (!PageMovable(page))
 				__ClearPageIsolated(page);
@@ -1210,15 +1215,6 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		 * restored.
 		 */
 		list_del(&page->lru);
-
-		/*
-		 * Compaction can migrate also non-LRU pages which are
-		 * not accounted to NR_ISOLATED_*. They can be recognized
-		 * as __PageMovable
-		 */
-		if (likely(!__PageMovable(page)))
-			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-					page_is_file_cache(page), -hpage_nr_pages(page));
 	}
 
 	/*
@@ -1572,9 +1568,6 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 
 		err = 0;
 		list_add_tail(&head->lru, pagelist);
-		mod_node_page_state(page_pgdat(head),
-			NR_ISOLATED_ANON + page_is_file_cache(head),
-			hpage_nr_pages(head));
 	}
 out_putpage:
 	/*
@@ -1890,8 +1883,6 @@ static struct page *alloc_misplaced_dst_page(struct page *page,
 
 static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
-	int page_lru;
-
 	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
 
 	/* Avoid migrating to a node that is nearly full */
@@ -1913,10 +1904,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 		return 0;
 	}
 
-	page_lru = page_is_file_cache(page);
-	mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_lru,
-				hpage_nr_pages(page));
-
 	/*
 	 * Isolating the page has taken another reference, so the
 	 * caller's reference can be safely dropped without the page
@@ -1971,8 +1958,6 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
 			list_del(&page->lru);
-			dec_node_page_state(page, NR_ISOLATED_ANON +
-					page_is_file_cache(page));
 			putback_lru_page(page);
 		}
 		isolated = 0;
@@ -2002,7 +1987,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	pg_data_t *pgdat = NODE_DATA(node);
 	int isolated = 0;
 	struct page *new_page = NULL;
-	int page_lru = page_is_file_cache(page);
 	unsigned long start = address & HPAGE_PMD_MASK;
 
 	new_page = alloc_pages_node(node,
@@ -2048,8 +2032,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	/* Retake the callers reference and putback on LRU */
 	get_page(page);
 	putback_lru_page(page);
-	mod_node_page_state(page_pgdat(page),
-			NR_ISOLATED_ANON + page_lru, -HPAGE_PMD_NR);
 
 	goto out_unlock;
 
@@ -2099,9 +2081,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	count_vm_events(PGMIGRATE_SUCCESS, HPAGE_PMD_NR);
 	count_vm_numa_events(NUMA_PAGE_MIGRATE, HPAGE_PMD_NR);
 
-	mod_node_page_state(page_pgdat(page),
-			NR_ISOLATED_ANON + page_lru,
-			-HPAGE_PMD_NR);
 	return isolated;
 
 out_fail:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0973a46a0472..56df55e8afcd 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -999,6 +999,9 @@ int remove_mapping(struct address_space *mapping, struct page *page)
 void putback_lru_page(struct page *page)
 {
 	lru_cache_add(page);
+	mod_node_page_state(page_pgdat(page),
+				NR_ISOLATED_ANON + page_is_file_cache(page),
+				-hpage_nr_pages(page));
 	put_page(page);		/* drop ref from isolate */
 }
 
@@ -1464,6 +1467,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		 */
 		nr_reclaimed += nr_pages;
 
+		mod_node_page_state(pgdat, NR_ISOLATED_ANON +
+				page_is_file_cache(page),
+				-nr_pages);
 		/*
 		 * Is there need to periodically free_page_list? It would
 		 * appear not as the counts should be low
@@ -1539,7 +1545,6 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
 	ret = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc,
 			TTU_IGNORE_ACCESS, &dummy_stat, true);
 	list_splice(&clean_pages, page_list);
-	mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE, -ret);
 	return ret;
 }
 
@@ -1615,6 +1620,9 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
 		 */
 		ClearPageLRU(page);
 		ret = 0;
+		__mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
+					page_is_file_cache(page),
+					hpage_nr_pages(page));
 	}
 
 	return ret;
@@ -1746,6 +1754,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 	trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, nr_to_scan,
 				    total_scan, skipped, nr_taken, mode, lru);
 	update_lru_sizes(lruvec, lru, nr_zone_taken);
+
 	return nr_taken;
 }
 
@@ -1794,6 +1803,9 @@ int isolate_lru_page(struct page *page)
 			ClearPageLRU(page);
 			del_page_from_lru_list(page, lruvec, lru);
 			ret = 0;
+			mod_node_page_state(pgdat, NR_ISOLATED_ANON +
+					page_is_file_cache(page),
+					hpage_nr_pages(page));
 		}
 		spin_unlock_irq(&pgdat->lru_lock);
 	}
@@ -1885,6 +1897,9 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
 		list_move(&page->lru, &lruvec->lists[lru]);
+		__mod_node_page_state(pgdat, NR_ISOLATED_ANON +
+				page_is_file_cache(page),
+				-hpage_nr_pages(page));
 
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
@@ -1962,7 +1977,6 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &page_list,
 				     &nr_scanned, sc, lru);
 
-	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken);
 	reclaim_stat->recent_scanned[file] += nr_taken;
 
 	item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT;
@@ -1988,8 +2002,6 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 	move_pages_to_lru(lruvec, &page_list);
 
-	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
-
 	spin_unlock_irq(&pgdat->lru_lock);
 
 	mem_cgroup_uncharge_list(&page_list);
@@ -2048,7 +2060,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &l_hold,
 				     &nr_scanned, sc, lru);
 
-	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken);
 	reclaim_stat->recent_scanned[file] += nr_taken;
 
 	__count_vm_events(PGREFILL, nr_scanned);
@@ -2117,7 +2128,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	__count_vm_events(PGDEACTIVATE, nr_deactivate);
 	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
 			nr_deactivate);
-	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	spin_unlock_irq(&pgdat->lru_lock);
 
 	mem_cgroup_uncharge_list(&l_active);
-- 
2.22.0.rc1.311.g5d7573a151-goog