From: kernel test robot <lkp@intel.com>
To: "JP Kobryn (Meta)" <jp.kobryn@linux.dev>,
linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@suse.com,
vbabka@suse.cz
Cc: oe-kbuild-all@lists.linux.dev, apopple@nvidia.com,
axelrasmussen@google.com, byungchul@sk.com,
cgroups@vger.kernel.org, david@kernel.org, eperezma@redhat.com,
gourry@gourry.net, jasowang@redhat.com, hannes@cmpxchg.org,
joshua.hahnjy@gmail.com, Liam.Howlett@oracle.com,
linux-kernel@vger.kernel.org, lorenzo.stoakes@oracle.com,
matthew.brost@intel.com, mst@redhat.com, rppt@kernel.org,
muchun.song@linux.dev, zhengqi.arch@bytedance.com,
rakie.kim@sk.com, roman.gushchin@linux.dev,
shakeel.butt@linux.dev, surenb@google.com,
virtualization@lists.linux.dev, weixugc@google.com,
xuanzhuo@linux.alibaba.com, ying.huang@linux.alibaba.com
Subject: Re: [PATCH v2] mm/mempolicy: track page allocations per mempolicy
Date: Sat, 7 Mar 2026 22:32:47 +0800 [thread overview]
Message-ID: <202603072210.TSPUKsyq-lkp@intel.com> (raw)
In-Reply-To: <20260307045520.247998-1-jp.kobryn@linux.dev>
Hi JP,
kernel test robot noticed the following build errors:
[auto build test ERROR on akpm-mm/mm-everything]
url: https://github.com/intel-lab-lkp/linux/commits/JP-Kobryn-Meta/mm-mempolicy-track-page-allocations-per-mempolicy/20260307-125642
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20260307045520.247998-1-jp.kobryn%40linux.dev
patch subject: [PATCH v2] mm/mempolicy: track page allocations per mempolicy
config: x86_64-randconfig-074-20260307 (https://download.01.org/0day-ci/archive/20260307/202603072210.TSPUKsyq-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260307/202603072210.TSPUKsyq-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603072210.TSPUKsyq-lkp@intel.com/
All errors (new ones prefixed by >>):
mm/mempolicy.c: In function 'mpol_count_numa_alloc':
mm/mempolicy.c:2489:17: error: implicit declaration of function 'mem_cgroup_from_task'; did you mean 'mem_cgroup_from_css'? [-Wimplicit-function-declaration]
2489 | memcg = mem_cgroup_from_task(current);
| ^~~~~~~~~~~~~~~~~~~~
| mem_cgroup_from_css
>> mm/mempolicy.c:2489:15: error: assignment to 'struct mem_cgroup *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
2489 | memcg = mem_cgroup_from_task(current);
| ^
vim +2489 mm/mempolicy.c
2429
2430 /*
2431 * Count a mempolicy allocation. Stats are tracked per-node and per-cgroup.
2432 * The following numa_{hit/miss/foreign} pattern is used:
2433 *
2434 * hit
2435 * - for BIND and PREFERRED_MANY, allocation succeeded on node in nodemask
2436 * - for other policies, allocation succeeded on intended node
2437 * - counted on the node of the allocation
2438 * miss
2439 * - allocation intended for other node, but happened on this one
2440 * - counted on other node
2441 * foreign
2442 * - allocation intended on this node, but happened on other node
2443 * - counted on this node
2444 */
2445 static void mpol_count_numa_alloc(struct mempolicy *pol, int intended_nid,
2446 struct page *page, unsigned int order)
2447 {
2448 int actual_nid = page_to_nid(page);
2449 long nr_pages = 1L << order;
2450 enum node_stat_item hit_idx;
2451 struct mem_cgroup *memcg;
2452 struct lruvec *lruvec;
2453 bool is_hit;
2454
2455 if (!root_mem_cgroup || mem_cgroup_disabled())
2456 return;
2457
2458 /*
2459 * Start with hit then use +1 or +2 later on to change to miss or
2460 * foreign respectively if needed.
2461 */
2462 switch (pol->mode) {
2463 case MPOL_PREFERRED:
2464 hit_idx = NUMA_MPOL_PREFERRED_HIT;
2465 break;
2466 case MPOL_PREFERRED_MANY:
2467 hit_idx = NUMA_MPOL_PREFERRED_MANY_HIT;
2468 break;
2469 case MPOL_BIND:
2470 hit_idx = NUMA_MPOL_BIND_HIT;
2471 break;
2472 case MPOL_INTERLEAVE:
2473 hit_idx = NUMA_MPOL_INTERLEAVE_HIT;
2474 break;
2475 case MPOL_WEIGHTED_INTERLEAVE:
2476 hit_idx = NUMA_MPOL_WEIGHTED_INTERLEAVE_HIT;
2477 break;
2478 default:
2479 hit_idx = NUMA_MPOL_LOCAL_HIT;
2480 break;
2481 }
2482
2483 if (pol->mode == MPOL_BIND || pol->mode == MPOL_PREFERRED_MANY)
2484 is_hit = node_isset(actual_nid, pol->nodes);
2485 else
2486 is_hit = (actual_nid == intended_nid);
2487
2488 rcu_read_lock();
> 2489 memcg = mem_cgroup_from_task(current);
2490
2491 if (is_hit) {
2492 lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(actual_nid));
2493 mod_lruvec_state(lruvec, hit_idx, nr_pages);
2494 } else {
2495 /* account for miss on the fallback node */
2496 lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(actual_nid));
2497 mod_lruvec_state(lruvec, hit_idx + 1, nr_pages);
2498
2499 /* account for foreign on the intended node */
2500 lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(intended_nid));
2501 mod_lruvec_state(lruvec, hit_idx + 2, nr_pages);
2502 }
2503
2504 rcu_read_unlock();
2505 }
2506
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki