* Re: [PATCH RFC] memcg: add per-cgroup dirty page controls (dirty_ratio, dirty_min)
[not found] <20260501-rfc-memcg-dirty-v1-v1-1-9a8c80036ec1@uber.com>
@ 2026-05-03 9:55 ` kernel test robot
From: kernel test robot @ 2026-05-03 9:55 UTC (permalink / raw)
To: Alireza Haghdoost via B4 Relay; +Cc: llvm, oe-kbuild-all
Hi Alireza,
[This is a private test report for your RFC patch.]
kernel test robot noticed the following build errors:
[auto build test ERROR on 254f49634ee16a731174d2ae34bc50bd5f45e731]
url: https://github.com/intel-lab-lkp/linux/commits/Alireza-Haghdoost-via-B4-Relay/memcg-add-per-cgroup-dirty-page-controls-dirty_ratio-dirty_min/20260502-235916
base: 254f49634ee16a731174d2ae34bc50bd5f45e731
patch link: https://lore.kernel.org/r/20260501-rfc-memcg-dirty-v1-v1-1-9a8c80036ec1%40uber.com
patch subject: [PATCH RFC] memcg: add per-cgroup dirty page controls (dirty_ratio, dirty_min)
config: um-allnoconfig (https://download.01.org/0day-ci/archive/20260503/202605031710.4QHTfWdf-lkp@intel.com/config)
compiler: clang version 23.0.0git (https://github.com/llvm/llvm-project 5bac06718f502014fade905512f1d26d578a18f3)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260503/202605031710.4QHTfWdf-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202605031710.4QHTfWdf-lkp@intel.com/
All errors (new ones prefixed by >>):
>> mm/page-writeback.c:426:33: error: no member named 'memcg_css' in 'struct bdi_writeback'
426 | mem_cgroup_from_css(dtc->wb->memcg_css);
| ~~~~~~~ ^
>> mm/page-writeback.c:428:19: error: incomplete definition of type 'struct mem_cgroup'
428 | READ_ONCE(memcg->dirty_ratio) : 0;
| ~~~~~^
include/linux/shrinker.h:55:9: note: forward declaration of 'struct mem_cgroup'
55 | struct mem_cgroup *memcg;
| ^
>> mm/page-writeback.c:1912:5: error: call to undeclared function 'mem_cgroup_from_task'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1912 | mem_cgroup_from_task(current);
| ^
mm/page-writeback.c:1912:5: note: did you mean 'mem_cgroup_from_css'?
include/linux/memcontrol.h:1211:20: note: 'mem_cgroup_from_css' declared here
1211 | struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css)
| ^
>> mm/page-writeback.c:1911:23: error: incompatible integer to pointer conversion initializing 'struct mem_cgroup *' with an expression of type 'int' [-Wint-conversion]
1911 | struct mem_cgroup *memcg =
| ^
1912 | mem_cgroup_from_task(current);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mm/page-writeback.c:1915:35: error: incomplete definition of type 'struct mem_cgroup'
1915 | cg_dirty_min = READ_ONCE(memcg->dirty_min);
| ~~~~~^
include/linux/shrinker.h:55:9: note: forward declaration of 'struct mem_cgroup'
55 | struct mem_cgroup *memcg;
| ^
mm/page-writeback.c:1968:36: error: no member named 'memcg_css' in 'struct bdi_writeback'
1968 | mem_cgroup_from_css(mdtc->wb->memcg_css);
| ~~~~~~~~ ^
18 errors generated.
vim +426 mm/page-writeback.c
341
342 /**
343 * domain_dirty_limits - calculate thresh and bg_thresh for a wb_domain
344 * @dtc: dirty_throttle_control of interest
345 *
346 * Calculate @dtc->thresh and ->bg_thresh considering
347 * vm_dirty_{bytes|ratio} and dirty_background_{bytes|ratio}. The caller
348 * must ensure that @dtc->avail is set before calling this function. The
349 * dirty limits will be lifted by 1/4 for real-time tasks.
350 */
351 static void domain_dirty_limits(struct dirty_throttle_control *dtc)
352 {
353 const unsigned long available_memory = dtc->avail;
354 struct dirty_throttle_control *gdtc = mdtc_gdtc(dtc);
355 unsigned long bytes = vm_dirty_bytes;
356 unsigned long bg_bytes = dirty_background_bytes;
357 /* convert ratios to per-PAGE_SIZE for higher precision */
358 unsigned long ratio = (vm_dirty_ratio * PAGE_SIZE) / 100;
359 unsigned long bg_ratio = (dirty_background_ratio * PAGE_SIZE) / 100;
360 unsigned long thresh;
361 unsigned long bg_thresh;
362 struct task_struct *tsk;
363
364 /* gdtc is !NULL iff @dtc is for memcg domain */
365 if (gdtc) {
366 unsigned long global_avail = gdtc->avail;
367
368 /*
369 * The byte settings can't be applied directly to memcg
370 * domains. Convert them to ratios by scaling against
371 * globally available memory. As the ratios are in
372 * per-PAGE_SIZE, they can be obtained by dividing bytes by
373 * number of pages.
374 */
375 if (bytes)
376 ratio = min(DIV_ROUND_UP(bytes, global_avail),
377 PAGE_SIZE);
378 if (bg_bytes)
379 bg_ratio = min(DIV_ROUND_UP(bg_bytes, global_avail),
380 PAGE_SIZE);
381 bytes = bg_bytes = 0;
382 }
383
384 if (bytes)
385 thresh = DIV_ROUND_UP(bytes, PAGE_SIZE);
386 else
387 thresh = (ratio * available_memory) / PAGE_SIZE;
388
389 if (bg_bytes)
390 bg_thresh = DIV_ROUND_UP(bg_bytes, PAGE_SIZE);
391 else
392 bg_thresh = (bg_ratio * available_memory) / PAGE_SIZE;
393
394 tsk = current;
395 if (rt_or_dl_task(tsk)) {
396 bg_thresh += bg_thresh / 4 + global_wb_domain.dirty_limit / 32;
397 thresh += thresh / 4 + global_wb_domain.dirty_limit / 32;
398 }
399
400 /*
401 * Apply the per-memcg dirty_ratio clamp on mdtc (gdtc != NULL
402 * iff @dtc is a memcg dtc). dirty_ratio is scaled against
403 * the memcg's own dirtyable memory (@available_memory), matching
404 * the semantics of vm_dirty_ratio so the two knobs share a base
405 * and compose via a plain min() on thresh. The clamp is keyed
406 * on wb->memcg_css (the inode-owner's memcg) rather than on
407 * current's memcg, so balance_dirty_pages(), wb_over_bg_thresh()
408 * (flusher kworker context), and cgwb_calc_thresh() all see the
409 * same clamped value.
410 *
411 * Published on dtc->cg_dirty_cap as well so hard_dirty_limit()
412 * callers in balance_dirty_pages() can ignore the slower
413 * dom->dirty_limit smoothing when deriving setpoint/
414 * rate-limit from the clamped ceiling.
415 *
416 * Clamp is applied after the rt/dl boost: dirty_ratio is a
417 * strict override, not widened by priority. bg_thresh is
418 * scaled by the same factor we apply to thresh so the
419 * user-configured bg/thresh ratio survives clamping instead
420 * of snapping to thresh/2 via the bg_thresh >= thresh guard
421 * below. mult_frac() preserves precision for small memcgs
422 * where a plain "(avail / 100) * ratio" would collapse to 0.
423 */
424 if (gdtc) {
425 struct mem_cgroup *memcg =
> 426 mem_cgroup_from_css(dtc->wb->memcg_css);
427 unsigned int cg_ratio = memcg ?
> 428 READ_ONCE(memcg->dirty_ratio) : 0;
429
430 /*
431 * dtc is reused across balance_dirty_pages() iterations,
432 * so reset the published clamp every call -- an admin
433 * clearing memory.dirty_ratio mid-flight must take effect
434 * on the next pass.
435 */
436 dtc->cg_dirty_cap = PAGE_COUNTER_MAX;
437
438 if (cg_ratio) {
439 unsigned long cg_thresh = mult_frac(available_memory,
440 cg_ratio, 100);
441
442 if (cg_thresh < thresh) {
443 bg_thresh = mult_frac(bg_thresh, cg_thresh,
444 thresh);
445 thresh = cg_thresh;
446 dtc->cg_dirty_cap = cg_thresh;
447 }
448 }
449 }
450
451 /*
452 * Dirty throttling logic assumes the limits in page units fit into
453 * 32-bits. This gives 16TB dirty limits max which is hopefully enough.
454 */
455 if (thresh > UINT_MAX)
456 thresh = UINT_MAX;
457 /* This makes sure bg_thresh is within 32-bits as well */
458 if (bg_thresh >= thresh)
459 bg_thresh = thresh / 2;
460 dtc->thresh = thresh;
461 dtc->bg_thresh = bg_thresh;
462
463 /* we should eventually report the domain in the TP */
464 if (!gdtc)
465 trace_global_dirty_state(bg_thresh, thresh);
466 }
467
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki