* Re: [PATCH] mm/mglru: fix cgroup OOM during MGLRU state switching
[not found] <20260228161008.707-1-lenohou@gmail.com>
@ 2026-02-28 19:23 ` kernel test robot
2026-02-28 20:15 ` kernel test robot
1 sibling, 0 replies; 2+ messages in thread
From: kernel test robot @ 2026-02-28 19:23 UTC (permalink / raw)
To: Leno Hou, linux-mm, linux-kernel
Cc: llvm, oe-kbuild-all, Leno Hou, Andrew Morton,
Linux Memory Management List, Axel Rasmussen, Yuanchu Xie, Wei Xu,
Barry Song, Jialing Wang, Yafang Shao, Yu Zhao
Hi Leno,
kernel test robot noticed the following build warnings:
[auto build test WARNING on v7.0-rc1]
[also build test WARNING on linus/master next-20260227]
[cannot apply to akpm-mm/mm-everything]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Leno-Hou/mm-mglru-fix-cgroup-OOM-during-MGLRU-state-switching/20260301-001148
base: v7.0-rc1
patch link: https://lore.kernel.org/r/20260228161008.707-1-lenohou%40gmail.com
patch subject: [PATCH] mm/mglru: fix cgroup OOM during MGLRU state switching
config: um-defconfig (https://download.01.org/0day-ci/archive/20260301/202603010300.t6GYRWjK-lkp@intel.com/config)
compiler: clang version 23.0.0git (https://github.com/llvm/llvm-project 9a109fbb6e184ec9bcce10615949f598f4c974a9)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260301/202603010300.t6GYRWjK-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603010300.t6GYRWjK-lkp@intel.com/
All warnings (new ones prefixed by >>):
mm/vmscan.c:5785:50: error: no member named 'lrugen' in 'struct lruvec'
5785 | bool lrugen_enabled = smp_load_acquire(&lruvec->lrugen.enabled);
| ~~~~~~ ^
mm/vmscan.c:5786:48: error: no member named 'lrugen' in 'struct lruvec'
5786 | bool lru_draining = smp_load_acquire(&lruvec->lrugen.draining);
| ~~~~~~ ^
>> mm/vmscan.c:5788:37: warning: '&&' within '||' [-Wlogical-op-parentheses]
5788 | if (lrugen_enabled || lru_draining && !root_reclaim(sc)) {
| ~~ ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
mm/vmscan.c:5788:37: note: place parentheses around the '&&' expression to silence this warning
5788 | if (lrugen_enabled || lru_draining && !root_reclaim(sc)) {
| ^
| ( )
1 warning and 18 errors generated.
vim +5788 mm/vmscan.c
5774
5775 static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
5776 {
5777 unsigned long nr[NR_LRU_LISTS];
5778 unsigned long targets[NR_LRU_LISTS];
5779 unsigned long nr_to_scan;
5780 enum lru_list lru;
5781 unsigned long nr_reclaimed = 0;
5782 unsigned long nr_to_reclaim = sc->nr_to_reclaim;
5783 bool proportional_reclaim;
5784 struct blk_plug plug;
5785 bool lrugen_enabled = smp_load_acquire(&lruvec->lrugen.enabled);
5786 bool lru_draining = smp_load_acquire(&lruvec->lrugen.draining);
5787
> 5788 if (lrugen_enabled || lru_draining && !root_reclaim(sc)) {
5789 lru_gen_shrink_lruvec(lruvec, sc);
5790
5791 if (!lru_draining)
5792 return;
5793
5794 }
5795
5796 get_scan_count(lruvec, sc, nr);
5797
5798 /* Record the original scan target for proportional adjustments later */
5799 memcpy(targets, nr, sizeof(nr));
5800
5801 /*
5802 * Global reclaiming within direct reclaim at DEF_PRIORITY is a normal
5803 * event that can occur when there is little memory pressure e.g.
5804 * multiple streaming readers/writers. Hence, we do not abort scanning
5805 * when the requested number of pages are reclaimed when scanning at
5806 * DEF_PRIORITY on the assumption that the fact we are direct
5807 * reclaiming implies that kswapd is not keeping up and it is best to
5808 * do a batch of work at once. For memcg reclaim one check is made to
5809 * abort proportional reclaim if either the file or anon lru has already
5810 * dropped to zero at the first pass.
5811 */
5812 proportional_reclaim = (!cgroup_reclaim(sc) && !current_is_kswapd() &&
5813 sc->priority == DEF_PRIORITY);
5814
5815 blk_start_plug(&plug);
5816 while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
5817 nr[LRU_INACTIVE_FILE]) {
5818 unsigned long nr_anon, nr_file, percentage;
5819 unsigned long nr_scanned;
5820
5821 for_each_evictable_lru(lru) {
5822 if (nr[lru]) {
5823 nr_to_scan = min(nr[lru], SWAP_CLUSTER_MAX);
5824 nr[lru] -= nr_to_scan;
5825
5826 nr_reclaimed += shrink_list(lru, nr_to_scan,
5827 lruvec, sc);
5828 }
5829 }
5830
5831 cond_resched();
5832
5833 if (nr_reclaimed < nr_to_reclaim || proportional_reclaim)
5834 continue;
5835
5836 /*
5837 * For kswapd and memcg, reclaim at least the number of pages
5838 * requested. Ensure that the anon and file LRUs are scanned
5839 * proportionally what was requested by get_scan_count(). We
5840 * stop reclaiming one LRU and reduce the amount scanning
5841 * proportional to the original scan target.
5842 */
5843 nr_file = nr[LRU_INACTIVE_FILE] + nr[LRU_ACTIVE_FILE];
5844 nr_anon = nr[LRU_INACTIVE_ANON] + nr[LRU_ACTIVE_ANON];
5845
5846 /*
5847 * It's just vindictive to attack the larger once the smaller
5848 * has gone to zero. And given the way we stop scanning the
5849 * smaller below, this makes sure that we only make one nudge
5850 * towards proportionality once we've got nr_to_reclaim.
5851 */
5852 if (!nr_file || !nr_anon)
5853 break;
5854
5855 if (nr_file > nr_anon) {
5856 unsigned long scan_target = targets[LRU_INACTIVE_ANON] +
5857 targets[LRU_ACTIVE_ANON] + 1;
5858 lru = LRU_BASE;
5859 percentage = nr_anon * 100 / scan_target;
5860 } else {
5861 unsigned long scan_target = targets[LRU_INACTIVE_FILE] +
5862 targets[LRU_ACTIVE_FILE] + 1;
5863 lru = LRU_FILE;
5864 percentage = nr_file * 100 / scan_target;
5865 }
5866
5867 /* Stop scanning the smaller of the LRU */
5868 nr[lru] = 0;
5869 nr[lru + LRU_ACTIVE] = 0;
5870
5871 /*
5872 * Recalculate the other LRU scan count based on its original
5873 * scan target and the percentage scanning already complete
5874 */
5875 lru = (lru == LRU_FILE) ? LRU_BASE : LRU_FILE;
5876 nr_scanned = targets[lru] - nr[lru];
5877 nr[lru] = targets[lru] * (100 - percentage) / 100;
5878 nr[lru] -= min(nr[lru], nr_scanned);
5879
5880 lru += LRU_ACTIVE;
5881 nr_scanned = targets[lru] - nr[lru];
5882 nr[lru] = targets[lru] * (100 - percentage) / 100;
5883 nr[lru] -= min(nr[lru], nr_scanned);
5884 }
5885 blk_finish_plug(&plug);
5886 sc->nr_reclaimed += nr_reclaimed;
5887
5888 /*
5889 * Even if we did not try to evict anon pages at all, we want to
5890 * rebalance the anon lru active/inactive ratio.
5891 */
5892 if (can_age_anon_pages(lruvec, sc) &&
5893 inactive_is_low(lruvec, LRU_INACTIVE_ANON))
5894 shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
5895 sc, LRU_ACTIVE_ANON);
5896 }
5897
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 2+ messages in thread
* Re: [PATCH] mm/mglru: fix cgroup OOM during MGLRU state switching
[not found] <20260228161008.707-1-lenohou@gmail.com>
2026-02-28 19:23 ` [PATCH] mm/mglru: fix cgroup OOM during MGLRU state switching kernel test robot
@ 2026-02-28 20:15 ` kernel test robot
1 sibling, 0 replies; 2+ messages in thread
From: kernel test robot @ 2026-02-28 20:15 UTC (permalink / raw)
To: Leno Hou, linux-mm, linux-kernel
Cc: llvm, oe-kbuild-all, Leno Hou, Andrew Morton,
Linux Memory Management List, Axel Rasmussen, Yuanchu Xie, Wei Xu,
Barry Song, Jialing Wang, Yafang Shao, Yu Zhao
Hi Leno,
kernel test robot noticed the following build errors:
[auto build test ERROR on v7.0-rc1]
[also build test ERROR on linus/master next-20260227]
[cannot apply to akpm-mm/mm-everything]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Leno-Hou/mm-mglru-fix-cgroup-OOM-during-MGLRU-state-switching/20260301-001148
base: v7.0-rc1
patch link: https://lore.kernel.org/r/20260228161008.707-1-lenohou%40gmail.com
patch subject: [PATCH] mm/mglru: fix cgroup OOM during MGLRU state switching
config: um-defconfig (https://download.01.org/0day-ci/archive/20260301/202603010435.MBtvBCTp-lkp@intel.com/config)
compiler: clang version 23.0.0git (https://github.com/llvm/llvm-project 9a109fbb6e184ec9bcce10615949f598f4c974a9)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260301/202603010435.MBtvBCTp-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603010435.MBtvBCTp-lkp@intel.com/
All errors (new ones prefixed by >>):
>> mm/vmscan.c:5785:50: error: no member named 'lrugen' in 'struct lruvec'
5785 | bool lrugen_enabled = smp_load_acquire(&lruvec->lrugen.enabled);
| ~~~~~~ ^
mm/vmscan.c:5786:48: error: no member named 'lrugen' in 'struct lruvec'
5786 | bool lru_draining = smp_load_acquire(&lruvec->lrugen.draining);
| ~~~~~~ ^
mm/vmscan.c:5788:37: warning: '&&' within '||' [-Wlogical-op-parentheses]
5788 | if (lrugen_enabled || lru_draining && !root_reclaim(sc)) {
| ~~ ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
mm/vmscan.c:5788:37: note: place parentheses around the '&&' expression to silence this warning
5788 | if (lrugen_enabled || lru_draining && !root_reclaim(sc)) {
| ^
| ( )
1 warning and 18 errors generated.
vim +5785 mm/vmscan.c
5774
5775 static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
5776 {
5777 unsigned long nr[NR_LRU_LISTS];
5778 unsigned long targets[NR_LRU_LISTS];
5779 unsigned long nr_to_scan;
5780 enum lru_list lru;
5781 unsigned long nr_reclaimed = 0;
5782 unsigned long nr_to_reclaim = sc->nr_to_reclaim;
5783 bool proportional_reclaim;
5784 struct blk_plug plug;
> 5785 bool lrugen_enabled = smp_load_acquire(&lruvec->lrugen.enabled);
5786 bool lru_draining = smp_load_acquire(&lruvec->lrugen.draining);
5787
5788 if (lrugen_enabled || lru_draining && !root_reclaim(sc)) {
5789 lru_gen_shrink_lruvec(lruvec, sc);
5790
5791 if (!lru_draining)
5792 return;
5793
5794 }
5795
5796 get_scan_count(lruvec, sc, nr);
5797
5798 /* Record the original scan target for proportional adjustments later */
5799 memcpy(targets, nr, sizeof(nr));
5800
5801 /*
5802 * Global reclaiming within direct reclaim at DEF_PRIORITY is a normal
5803 * event that can occur when there is little memory pressure e.g.
5804 * multiple streaming readers/writers. Hence, we do not abort scanning
5805 * when the requested number of pages are reclaimed when scanning at
5806 * DEF_PRIORITY on the assumption that the fact we are direct
5807 * reclaiming implies that kswapd is not keeping up and it is best to
5808 * do a batch of work at once. For memcg reclaim one check is made to
5809 * abort proportional reclaim if either the file or anon lru has already
5810 * dropped to zero at the first pass.
5811 */
5812 proportional_reclaim = (!cgroup_reclaim(sc) && !current_is_kswapd() &&
5813 sc->priority == DEF_PRIORITY);
5814
5815 blk_start_plug(&plug);
5816 while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
5817 nr[LRU_INACTIVE_FILE]) {
5818 unsigned long nr_anon, nr_file, percentage;
5819 unsigned long nr_scanned;
5820
5821 for_each_evictable_lru(lru) {
5822 if (nr[lru]) {
5823 nr_to_scan = min(nr[lru], SWAP_CLUSTER_MAX);
5824 nr[lru] -= nr_to_scan;
5825
5826 nr_reclaimed += shrink_list(lru, nr_to_scan,
5827 lruvec, sc);
5828 }
5829 }
5830
5831 cond_resched();
5832
5833 if (nr_reclaimed < nr_to_reclaim || proportional_reclaim)
5834 continue;
5835
5836 /*
5837 * For kswapd and memcg, reclaim at least the number of pages
5838 * requested. Ensure that the anon and file LRUs are scanned
5839 * proportionally what was requested by get_scan_count(). We
5840 * stop reclaiming one LRU and reduce the amount scanning
5841 * proportional to the original scan target.
5842 */
5843 nr_file = nr[LRU_INACTIVE_FILE] + nr[LRU_ACTIVE_FILE];
5844 nr_anon = nr[LRU_INACTIVE_ANON] + nr[LRU_ACTIVE_ANON];
5845
5846 /*
5847 * It's just vindictive to attack the larger once the smaller
5848 * has gone to zero. And given the way we stop scanning the
5849 * smaller below, this makes sure that we only make one nudge
5850 * towards proportionality once we've got nr_to_reclaim.
5851 */
5852 if (!nr_file || !nr_anon)
5853 break;
5854
5855 if (nr_file > nr_anon) {
5856 unsigned long scan_target = targets[LRU_INACTIVE_ANON] +
5857 targets[LRU_ACTIVE_ANON] + 1;
5858 lru = LRU_BASE;
5859 percentage = nr_anon * 100 / scan_target;
5860 } else {
5861 unsigned long scan_target = targets[LRU_INACTIVE_FILE] +
5862 targets[LRU_ACTIVE_FILE] + 1;
5863 lru = LRU_FILE;
5864 percentage = nr_file * 100 / scan_target;
5865 }
5866
5867 /* Stop scanning the smaller of the LRU */
5868 nr[lru] = 0;
5869 nr[lru + LRU_ACTIVE] = 0;
5870
5871 /*
5872 * Recalculate the other LRU scan count based on its original
5873 * scan target and the percentage scanning already complete
5874 */
5875 lru = (lru == LRU_FILE) ? LRU_BASE : LRU_FILE;
5876 nr_scanned = targets[lru] - nr[lru];
5877 nr[lru] = targets[lru] * (100 - percentage) / 100;
5878 nr[lru] -= min(nr[lru], nr_scanned);
5879
5880 lru += LRU_ACTIVE;
5881 nr_scanned = targets[lru] - nr[lru];
5882 nr[lru] = targets[lru] * (100 - percentage) / 100;
5883 nr[lru] -= min(nr[lru], nr_scanned);
5884 }
5885 blk_finish_plug(&plug);
5886 sc->nr_reclaimed += nr_reclaimed;
5887
5888 /*
5889 * Even if we did not try to evict anon pages at all, we want to
5890 * rebalance the anon lru active/inactive ratio.
5891 */
5892 if (can_age_anon_pages(lruvec, sc) &&
5893 inactive_is_low(lruvec, LRU_INACTIVE_ANON))
5894 shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
5895 sc, LRU_ACTIVE_ANON);
5896 }
5897
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 2+ messages in thread
end of thread, other threads:[~2026-02-28 20:15 UTC | newest]
Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <20260228161008.707-1-lenohou@gmail.com>
2026-02-28 19:23 ` [PATCH] mm/mglru: fix cgroup OOM during MGLRU state switching kernel test robot
2026-02-28 20:15 ` kernel test robot
This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox.