From: Gabriel Krisman Bertazi <krisman@suse.de>
To: Michal Hocko <mhocko@suse.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
Mel Gorman <mgorman@suse.de>, Vlastimil Babka <vbabka@suse.cz>,
Baoquan He <bhe@redhat.com>
Subject: Re: [PATCH] Revert "mm/page_alloc.c: don't show protection in zone's ->lowmem_reserve[] for empty zone"
Date: Wed, 26 Feb 2025 11:05:10 -0500 [thread overview]
Message-ID: <87h64gsnd5.fsf@mailhost.krisman.be> (raw)
In-Reply-To: <Z766q9qWtvHA_-kZ@tiehlicka> (Michal Hocko's message of "Wed, 26 Feb 2025 07:54:35 +0100")
[-- Attachment #1: Type: text/plain, Size: 2197 bytes --]
Michal Hocko <mhocko@suse.com> writes:
> On Tue 25-02-25 22:22:58, Gabriel Krisman Bertazi wrote:
>> Commit 96a5c186efff ("mm/page_alloc.c: don't show protection in zone's
>> ->lowmem_reserve[] for empty zone") removes the protection of lower
>> zones from allocations targeting memory-less high zones. This had an
>> unintended impact on the pattern of reclaims because it makes the
>> high-zone-targeted allocation more likely to succeed in lower zones,
>> which adds pressure to said zones. I.e., the following corresponding
>> checks in zone_watermark_ok/zone_watermark_fast are less likely to
>> trigger:
>>
>> if (free_pages <= min + z->lowmem_reserve[highest_zoneidx])
>> return false;
>>
>> As a result, we are observing an increase in reclaim and kswapd scans,
>> due to the increased pressure. This was initially observed as increased
>> latency in filesystem operations when benchmarking with fio on a machine
>> with some memory-less zones, but it has since been associated with
>> increased contention in locks related to memory reclaim. By reverting
>> this patch, the original performance was recovered on that machine.
>
> I think it would be nice to show the memory layout on that machine (is
> there any movable or device zone)?
>
> Exact reclaim patterns are really hard to predict and it is little bit
> surprising the said patch has caused an increased kswapd activity
> because I would expect that there will be more reclaim with the lowmem
> reserves in place. But it is quite possible that the higher zone memory
> pressure is just tipping over and increase the lowmem pressure enough
> that it shows up.
For reference, I collected vmstat with and without this patch on a
freshly booted system running intensive randread I/O against an NVMe
device for 5 minutes. I got:
rpm-6.12.0-slfo.1.2 -> pgscan_kswapd 5629543865
Patched -> pgscan_kswapd 33580844
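(For reference, these counters come from /proc/vmstat; a minimal sketch of
how such a counter can be sampled before and after a run — illustrative
only, the helper name and setup are not from the actual benchmark:)

```python
# Read a single counter from /proc/vmstat (Linux-only); lines have the
# form "name value".
def read_vmstat_counter(name, path="/proc/vmstat"):
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(" ")
            if key == name:
                return int(value)
    raise KeyError(name)

# Sampling before and after the 5-minute fio run and subtracting gives
# the per-run scan count compared above:
#   before = read_vmstat_counter("pgscan_kswapd")
#   ... run the workload ...
#   delta = read_vmstat_counter("pgscan_kswapd") - before
```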
33M scans is similar to what we had in kernels predating this patch.
These numbers are fairly representative of the workload on this machine,
as measured across several runs. So we are talking about a
two-orders-of-magnitude increase.
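(To make the mechanism concrete, here is a minimal sketch of the quoted
watermark check. It is simplified from zone_watermark_ok(), not the kernel
code, and the numbers are taken loosely from the DMA zone in the attached
zoneinfo — ~3840 free pages, min watermark 2, lowmem_reserve 63945 for
allocations targeting the memory-less high zones:)

```python
# Simplified model of the quoted kernel check:
#   if (free_pages <= min + z->lowmem_reserve[highest_zoneidx])
#       return false;
# i.e. the zone is usable only if free pages exceed min + reserve.
def zone_watermark_ok(free_pages, min_wmark, lowmem_reserve):
    return free_pages > min_wmark + lowmem_reserve

free_dma, min_wmark = 3840, 2

# With the reserve restored by the revert, allocations targeting the
# empty high zone are kept out of the tiny DMA zone:
print(zone_watermark_ok(free_dma, min_wmark, 63945))  # False

# With the reserve zeroed by commit 96a5c186efff, the same fallback
# allocation passes the check and adds pressure to the low zone:
print(zone_watermark_ok(free_dma, min_wmark, 0))      # True
```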
Attached is the zoneinfo with my revert patch applied.
[-- Attachment #2: zoneinfo --]
[-- Type: application/octet-stream, Size: 30034 bytes --]
Node 0, zone DMA
per-node stats
nr_inactive_anon 2442
nr_active_anon 92
nr_inactive_file 1178
nr_active_file 5819
nr_unevictable 768
nr_slab_reclaimable 3514
nr_slab_unreclaimable 53499
nr_isolated_anon 0
nr_isolated_file 0
workingset_nodes 79
workingset_refault_anon 0
workingset_refault_file 1226
workingset_activate_anon 0
workingset_activate_file 0
workingset_restore_anon 0
workingset_restore_file 0
workingset_nodereclaim 0
nr_anon_pages 2286
nr_mapped 2457
nr_file_pages 8014
nr_dirty 0
nr_writeback 0
nr_writeback_temp 0
nr_shmem 1017
nr_shmem_hugepages 0
nr_shmem_pmdmapped 0
nr_file_hugepages 0
nr_file_pmdmapped 0
nr_anon_transparent_hugepages 0
nr_vmscan_write 0
nr_vmscan_immediate_reclaim 0
nr_dirtied 62518655
nr_written 62518655
nr_throttled_written 0
nr_kernel_misc_reclaimable 0
nr_foll_pin_acquired 130
nr_foll_pin_released 130
nr_kernel_stack 3428
nr_page_table_pages 200
nr_sec_page_table_pages 1806
nr_iommu_pages 1806
nr_swapcached 0
pgpromote_success 0
pgpromote_candidate 0
pgdemote_kswapd 0
pgdemote_direct 0
pgdemote_khugepaged 0
nr_hugetlb 0
pages free 3840
boost 0
min 2
low 5
high 8
promo 11
spanned 4095
present 3998
managed 3840
cma 0
protection: (0, 1510, 63945, 63945, 63945)
nr_free_pages 3840
nr_zone_inactive_anon 0
nr_zone_active_anon 0
nr_zone_inactive_file 0
nr_zone_active_file 0
nr_zone_unevictable 0
nr_zone_write_pending 0
nr_mlock 0
nr_bounce 0
nr_zspages 0
nr_free_cma 0
nr_unaccepted 0
numa_hit 0
numa_miss 0
numa_foreign 0
numa_interleave 0
numa_local 0
numa_other 0
pagesets
cpu: 0
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 1
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 2
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 3
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 4
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 5
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 6
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 7
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 8
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 9
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 10
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 11
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 12
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 13
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 14
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 15
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 16
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 17
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 18
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 19
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 20
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 21
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 22
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 23
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 24
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 25
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 26
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 27
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 28
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 29
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 30
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
cpu: 31
count: 0
high: 0
batch: 1
high_min: 4
high_max: 30
vm stats threshold: 12
node_unreclaimable: 0
start_pfn: 1
Node 0, zone DMA32
pages free 383047
boost 0
min 265
low 651
high 1037
promo 1423
spanned 1044480
present 403592
managed 386627
cma 0
protection: (0, 0, 62435, 62435, 62435)
nr_free_pages 383047
nr_zone_inactive_anon 0
nr_zone_active_anon 0
nr_zone_inactive_file 0
nr_zone_active_file 0
nr_zone_unevictable 0
nr_zone_write_pending 0
nr_mlock 0
nr_bounce 0
nr_zspages 0
nr_free_cma 0
nr_unaccepted 0
numa_hit 935941
numa_miss 6666
numa_foreign 0
numa_interleave 0
numa_local 935941
numa_other 6666
pagesets
cpu: 0
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 1
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 2
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 3
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 4
count: 190
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 5
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 6
count: 24
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 7
count: 143
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 8
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 9
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 10
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 11
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 12
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 13
count: 0
high: 0
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 14
count: 0
high: 0
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 15
count: 0
high: 0
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 16
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 17
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 18
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 19
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 20
count: 199
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 21
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 22
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 23
count: 252
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 24
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 25
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 26
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 27
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 28
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 29
count: 0
high: 252
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 30
count: 0
high: 0
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
cpu: 31
count: 0
high: 0
batch: 63
high_min: 252
high_max: 3020
vm stats threshold: 60
node_unreclaimable: 0
start_pfn: 4096
Node 0, zone Normal
pages free 15800625
boost 0
min 10958
low 26941
high 42924
promo 58907
spanned 16252928
present 16252928
managed 15983508
cma 0
protection: (0, 0, 0, 0, 0)
nr_free_pages 15800625
nr_zone_inactive_anon 2442
nr_zone_active_anon 92
nr_zone_inactive_file 1178
nr_zone_active_file 5819
nr_zone_unevictable 768
nr_zone_write_pending 0
nr_mlock 0
nr_bounce 0
nr_zspages 0
nr_free_cma 0
nr_unaccepted 0
numa_hit 109927768
numa_miss 698232
numa_foreign 5735015
numa_interleave 6476
numa_local 109925700
numa_other 700300
pagesets
cpu: 0
count: 471
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 1
count: 483
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 2
count: 281
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 3
count: 1616
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 4
count: 1317
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 5
count: 290
high: 1746
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 6
count: 662
high: 1746
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 7
count: 1627
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 8
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 9
count: 1116
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 10
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 11
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 12
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 13
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 14
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 15
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 16
count: 252
high: 1998
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 17
count: 445
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 18
count: 216
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 19
count: 238
high: 1809
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 20
count: 891
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 21
count: 1556
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 22
count: 410
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 23
count: 223
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 24
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 25
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 26
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 27
count: 39
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 28
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 29
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 30
count: 31
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
cpu: 31
count: 0
high: 1683
batch: 63
high_min: 1683
high_max: 124871
vm stats threshold: 120
node_unreclaimable: 0
start_pfn: 1048576
Node 0, zone Movable
pages free 0
boost 0
min 32
low 32
high 32
promo 32
spanned 0
present 0
managed 0
cma 0
protection: (0, 0, 0, 0, 0)
Node 0, zone Device
pages free 0
boost 0
min 0
low 0
high 0
promo 0
spanned 0
present 0
managed 0
cma 0
protection: (0, 0, 0, 0, 0)
Node 1, zone DMA
pages free 0
boost 0
min 0
low 0
high 0
promo 0
spanned 0
present 0
managed 0
cma 0
protection: (0, 0, 0, 0, 0)
Node 1, zone DMA32
pages free 0
boost 0
min 0
low 0
high 0
promo 0
spanned 0
present 0
managed 0
cma 0
protection: (0, 0, 0, 0, 0)
Node 1, zone Normal
per-node stats
nr_inactive_anon 4127
nr_active_anon 121
nr_inactive_file 3690
nr_active_file 10718
nr_unevictable 0
nr_slab_reclaimable 6504
nr_slab_unreclaimable 21331
nr_isolated_anon 0
nr_isolated_file 0
workingset_nodes 146
workingset_refault_anon 0
workingset_refault_file 4050
workingset_activate_anon 0
workingset_activate_file 0
workingset_restore_anon 0
workingset_restore_file 0
workingset_nodereclaim 0
nr_anon_pages 4003
nr_mapped 7490
nr_file_pages 14661
nr_dirty 1
nr_writeback 0
nr_writeback_temp 0
nr_shmem 253
nr_shmem_hugepages 0
nr_shmem_pmdmapped 0
nr_file_hugepages 0
nr_file_pmdmapped 0
nr_anon_transparent_hugepages 0
nr_vmscan_write 0
nr_vmscan_immediate_reclaim 0
nr_dirtied 3345081
nr_written 3345076
nr_throttled_written 0
nr_kernel_misc_reclaimable 0
nr_foll_pin_acquired 0
nr_foll_pin_released 0
nr_kernel_stack 4824
nr_page_table_pages 237
nr_sec_page_table_pages 1806
nr_iommu_pages 1806
nr_swapcached 0
pgpromote_success 0
pgpromote_candidate 0
pgdemote_kswapd 0
pgdemote_direct 0
pgdemote_khugepaged 0
nr_hugetlb 0
pages free 16395139
boost 0
min 11301
low 27783
high 44265
promo 60747
spanned 16777216
present 16777216
managed 16490125
cma 0
protection: (0, 0, 0, 0, 0)
nr_free_pages 16395139
nr_zone_inactive_anon 4127
nr_zone_active_anon 121
nr_zone_inactive_file 3690
nr_zone_active_file 10718
nr_zone_unevictable 0
nr_zone_write_pending 1
nr_mlock 0
nr_bounce 0
nr_zspages 0
nr_free_cma 0
nr_unaccepted 0
numa_hit 32499922
numa_miss 5735015
numa_foreign 704898
numa_interleave 6936
numa_local 32488458
numa_other 5746479
pagesets
cpu: 0
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 1
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 2
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 3
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 4
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 5
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 6
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 7
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 8
count: 308
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 9
count: 238
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 10
count: 127
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 11
count: 435
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 12
count: 202
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 13
count: 288
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 14
count: 215
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 15
count: 300
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 16
count: 11
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 17
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 18
count: 2
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 19
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 20
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 21
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 22
count: 31
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 23
count: 0
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 24
count: 376
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 25
count: 930
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 26
count: 1643
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 27
count: 1700
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 28
count: 1540
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 29
count: 1709
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 30
count: 1629
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
cpu: 31
count: 153
high: 1736
batch: 63
high_min: 1736
high_max: 128773
vm stats threshold: 120
node_unreclaimable: 0
start_pfn: 17301504
Node 1, zone Movable
pages free 0
boost 0
min 32
low 32
high 32
promo 32
spanned 0
present 0
managed 0
cma 0
protection: (0, 0, 0, 0, 0)
Node 1, zone Device
pages free 0
boost 0
min 0
low 0
high 0
promo 0
spanned 0
present 0
managed 0
cma 0
protection: (0, 0, 0, 0, 0)
[-- Attachment #3: Type: text/plain, Size: 29 bytes --]
--
Gabriel Krisman Bertazi
Thread overview: 19+ messages
2025-02-26 3:22 [PATCH] Revert "mm/page_alloc.c: don't show protection in zone's ->lowmem_reserve[] for empty zone" Gabriel Krisman Bertazi
2025-02-26 6:54 ` Michal Hocko
2025-02-26 10:00 ` Baoquan He
2025-02-26 10:52 ` Michal Hocko
2025-02-26 11:00 ` Michal Hocko
2025-02-26 11:51 ` Baoquan He
2025-02-26 12:01 ` Michal Hocko
2025-02-26 15:57 ` Baoquan He
2025-02-26 17:46 ` Michal Hocko
2025-02-27 9:41 ` Baoquan He
2025-02-27 9:16 ` Vlastimil Babka
2025-02-27 10:24 ` Baoquan He
2025-02-27 13:16 ` Vlastimil Babka
2025-02-27 15:53 ` Baoquan He
2025-02-26 13:07 ` Vlastimil Babka
2025-02-26 16:05 ` Gabriel Krisman Bertazi [this message]
2025-02-26 23:00 ` Andrew Morton
2025-02-26 13:00 ` Vlastimil Babka
2025-02-27 11:50 ` Mel Gorman