public inbox for llvm@lists.linux.dev
* Re: [RFC PATCH 1/2] sched/fair: Only throttle CFS tasks on return to userspace
       [not found] <20231130161245.3894682-2-vschneid@redhat.com>
@ 2023-12-05 18:50 ` kernel test robot
  2023-12-05 18:50 ` kernel test robot
  1 sibling, 0 replies; 2+ messages in thread
From: kernel test robot @ 2023-12-05 18:50 UTC (permalink / raw)
  To: Valentin Schneider; +Cc: llvm, oe-kbuild-all

Hi Valentin,

[This is a private test report for your RFC patch.]
kernel test robot noticed the following build errors:

[auto build test ERROR on tip/sched/core]
[also build test ERROR on tip/master linus/master v6.7-rc4 next-20231205]
[cannot apply to tip/auto-latest]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting the patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Valentin-Schneider/sched-fair-Only-throttle-CFS-tasks-on-return-to-userspace/20231201-001908
base:   tip/sched/core
patch link:    https://lore.kernel.org/r/20231130161245.3894682-2-vschneid%40redhat.com
patch subject: [RFC PATCH 1/2] sched/fair: Only throttle CFS tasks on return to userspace
config: arm-randconfig-002-20231202 (https://download.01.org/0day-ci/archive/20231206/202312060204.DzviMIAg-lkp@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231206/202312060204.DzviMIAg-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312060204.DzviMIAg-lkp@intel.com/

All errors (new ones prefixed by >>):

   kernel/sched/fair.c:8575:30: error: no member named 'in_throttle_limbo' in 'struct cfs_rq'
    8575 |         if (unlikely(cfs_rq_of(se)->in_throttle_limbo && !task_has_throttle_work(p)))
         |                      ~~~~~~~~~~~~~  ^
   include/linux/compiler.h:77:42: note: expanded from macro 'unlikely'
      77 | # define unlikely(x)    __builtin_expect(!!(x), 0)
         |                                             ^
   kernel/sched/fair.c:8575:52: error: call to undeclared function 'task_has_throttle_work'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    8575 |         if (unlikely(cfs_rq_of(se)->in_throttle_limbo && !task_has_throttle_work(p)))
         |                                                           ^
   kernel/sched/fair.c:8575:52: note: did you mean 'init_cfs_throttle_work'?
   kernel/sched/sched.h:2457:13: note: 'init_cfs_throttle_work' declared here
    2457 | extern void init_cfs_throttle_work(struct task_struct *p);
         |             ^
>> kernel/sched/fair.c:8576:24: error: no member named 'sched_throttle_work' in 'struct task_struct'
    8576 |                 task_work_add(p, &p->sched_throttle_work, TWA_RESUME);
         |                                   ~  ^
   kernel/sched/fair.c:13197:6: warning: no previous prototype for function 'free_fair_sched_group' [-Wmissing-prototypes]
    13197 | void free_fair_sched_group(struct task_group *tg) { }
          |      ^
   kernel/sched/fair.c:13197:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
    13197 | void free_fair_sched_group(struct task_group *tg) { }
          | ^
          | static 
   kernel/sched/fair.c:13199:5: warning: no previous prototype for function 'alloc_fair_sched_group' [-Wmissing-prototypes]
    13199 | int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
          |     ^
   kernel/sched/fair.c:13199:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
    13199 | int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
          | ^
          | static 
   kernel/sched/fair.c:13204:6: warning: no previous prototype for function 'online_fair_sched_group' [-Wmissing-prototypes]
    13204 | void online_fair_sched_group(struct task_group *tg) { }
          |      ^
   kernel/sched/fair.c:13204:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
    13204 | void online_fair_sched_group(struct task_group *tg) { }
          | ^
          | static 
   kernel/sched/fair.c:13206:6: warning: no previous prototype for function 'unregister_fair_sched_group' [-Wmissing-prototypes]
    13206 | void unregister_fair_sched_group(struct task_group *tg) { }
          |      ^
   kernel/sched/fair.c:13206:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
    13206 | void unregister_fair_sched_group(struct task_group *tg) { }
          | ^
          | static 
   4 warnings and 3 errors generated.


vim +8576 kernel/sched/fair.c

  8471	
  8472	struct task_struct *
  8473	pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
  8474	{
  8475		struct cfs_rq *cfs_rq = &rq->cfs;
  8476		struct sched_entity *se;
  8477		struct task_struct *p;
  8478		int new_tasks;
  8479	
  8480	again:
  8481		if (!sched_fair_runnable(rq))
  8482			goto idle;
  8483	
  8484	#ifdef CONFIG_FAIR_GROUP_SCHED
  8485		if (!prev || prev->sched_class != &fair_sched_class)
  8486			goto simple;
  8487	
  8488		/*
  8489		 * Because of the set_next_buddy() in dequeue_task_fair() it is rather
  8490		 * likely that a next task is from the same cgroup as the current.
  8491		 *
  8492		 * Therefore attempt to avoid putting and setting the entire cgroup
  8493		 * hierarchy, only change the part that actually changes.
  8494		 */
  8495	
  8496		do {
  8497			struct sched_entity *curr = cfs_rq->curr;
  8498	
  8499			/*
  8500			 * Since we got here without doing put_prev_entity() we also
  8501			 * have to consider cfs_rq->curr. If it is still a runnable
  8502			 * entity, update_curr() will update its vruntime, otherwise
  8503			 * forget we've ever seen it.
  8504			 */
  8505			if (curr) {
  8506				if (curr->on_rq)
  8507					update_curr(cfs_rq);
  8508				else
  8509					curr = NULL;
  8510	
  8511				/*
  8512				 * This call to check_cfs_rq_runtime() will do the
  8513				 * throttle and dequeue its entity in the parent(s).
  8514				 * Therefore the nr_running test will indeed
  8515				 * be correct.
  8516				 */
  8517				if (unlikely(check_cfs_rq_runtime(cfs_rq))) {
  8518					cfs_rq = &rq->cfs;
  8519	
  8520					if (!cfs_rq->nr_running)
  8521						goto idle;
  8522	
  8523					goto simple;
  8524				}
  8525			}
  8526	
  8527			se = pick_next_entity(cfs_rq);
  8528			cfs_rq = group_cfs_rq(se);
  8529		} while (cfs_rq);
  8530	
  8531		p = task_of(se);
  8532	
  8533		if (unlikely(cfs_rq_of(se)->in_throttle_limbo && !task_has_throttle_work(p)))
  8534			task_work_add(p, &p->sched_throttle_work, TWA_RESUME);
  8535		/*
  8536		 * Since we haven't yet done put_prev_entity and if the selected task
  8537		 * is a different task than we started out with, try and touch the
  8538		 * least amount of cfs_rqs.
  8539		 */
  8540		if (prev != p) {
  8541			struct sched_entity *pse = &prev->se;
  8542	
  8543			while (!(cfs_rq = is_same_group(se, pse))) {
  8544				int se_depth = se->depth;
  8545				int pse_depth = pse->depth;
  8546	
  8547				if (se_depth <= pse_depth) {
  8548					put_prev_entity(cfs_rq_of(pse), pse);
  8549					pse = parent_entity(pse);
  8550				}
  8551				if (se_depth >= pse_depth) {
  8552					set_next_entity(cfs_rq_of(se), se);
  8553					se = parent_entity(se);
  8554				}
  8555			}
  8556	
  8557			put_prev_entity(cfs_rq, pse);
  8558			set_next_entity(cfs_rq, se);
  8559		}
  8560	
  8561		goto done;
  8562	simple:
  8563	#endif
  8564		if (prev)
  8565			put_prev_task(rq, prev);
  8566	
  8567		do {
  8568			se = pick_next_entity(cfs_rq);
  8569			set_next_entity(cfs_rq, se);
  8570			cfs_rq = group_cfs_rq(se);
  8571		} while (cfs_rq);
  8572	
  8573		p = task_of(se);
  8574	
  8575		if (unlikely(cfs_rq_of(se)->in_throttle_limbo && !task_has_throttle_work(p)))
> 8576			task_work_add(p, &p->sched_throttle_work, TWA_RESUME);
  8577	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [RFC PATCH 1/2] sched/fair: Only throttle CFS tasks on return to userspace
       [not found] <20231130161245.3894682-2-vschneid@redhat.com>
  2023-12-05 18:50 ` [RFC PATCH 1/2] sched/fair: Only throttle CFS tasks on return to userspace kernel test robot
@ 2023-12-05 18:50 ` kernel test robot
  1 sibling, 0 replies; 2+ messages in thread
From: kernel test robot @ 2023-12-05 18:50 UTC (permalink / raw)
  To: Valentin Schneider; +Cc: llvm, oe-kbuild-all

Hi Valentin,

[This is a private test report for your RFC patch.]
kernel test robot noticed the following build errors:

[auto build test ERROR on tip/sched/core]
[also build test ERROR on tip/master linus/master v6.7-rc4 next-20231205]
[cannot apply to tip/auto-latest]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting the patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Valentin-Schneider/sched-fair-Only-throttle-CFS-tasks-on-return-to-userspace/20231201-001908
base:   tip/sched/core
patch link:    https://lore.kernel.org/r/20231130161245.3894682-2-vschneid%40redhat.com
patch subject: [RFC PATCH 1/2] sched/fair: Only throttle CFS tasks on return to userspace
config: arm-mvebu_v5_defconfig (https://download.01.org/0day-ci/archive/20231206/202312060231.Bv5FMoPW-lkp@intel.com/config)
compiler: clang version 15.0.7 (https://github.com/llvm/llvm-project.git 8dfdcc7b7bf66834a761bd8de445840ef68e4d1a)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231206/202312060231.Bv5FMoPW-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312060231.Bv5FMoPW-lkp@intel.com/

All errors (new ones prefixed by >>):

   kernel/sched/fair.c:8575:30: error: no member named 'in_throttle_limbo' in 'struct cfs_rq'
           if (unlikely(cfs_rq_of(se)->in_throttle_limbo && !task_has_throttle_work(p)))
                        ~~~~~~~~~~~~~  ^
   include/linux/compiler.h:77:42: note: expanded from macro 'unlikely'
   # define unlikely(x)    __builtin_expect(!!(x), 0)
                                               ^
>> kernel/sched/fair.c:8575:52: error: call to undeclared function 'task_has_throttle_work'; ISO C99 and later do not support implicit function declarations [-Werror,-Wimplicit-function-declaration]
           if (unlikely(cfs_rq_of(se)->in_throttle_limbo && !task_has_throttle_work(p)))
                                                             ^
   kernel/sched/fair.c:8575:52: note: did you mean 'init_cfs_throttle_work'?
   kernel/sched/sched.h:2457:13: note: 'init_cfs_throttle_work' declared here
   extern void init_cfs_throttle_work(struct task_struct *p);
               ^
   kernel/sched/fair.c:8576:24: error: no member named 'sched_throttle_work' in 'struct task_struct'
                   task_work_add(p, &p->sched_throttle_work, TWA_RESUME);
                                     ~  ^
   kernel/sched/fair.c:13197:6: warning: no previous prototype for function 'free_fair_sched_group' [-Wmissing-prototypes]
   void free_fair_sched_group(struct task_group *tg) { }
        ^
   kernel/sched/fair.c:13197:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
   void free_fair_sched_group(struct task_group *tg) { }
   ^
   static 
   kernel/sched/fair.c:13199:5: warning: no previous prototype for function 'alloc_fair_sched_group' [-Wmissing-prototypes]
   int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
       ^
   kernel/sched/fair.c:13199:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
   int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
   ^
   static 
   kernel/sched/fair.c:13204:6: warning: no previous prototype for function 'online_fair_sched_group' [-Wmissing-prototypes]
   void online_fair_sched_group(struct task_group *tg) { }
        ^
   kernel/sched/fair.c:13204:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
   void online_fair_sched_group(struct task_group *tg) { }
   ^
   static 
   kernel/sched/fair.c:13206:6: warning: no previous prototype for function 'unregister_fair_sched_group' [-Wmissing-prototypes]
   void unregister_fair_sched_group(struct task_group *tg) { }
        ^
   kernel/sched/fair.c:13206:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
   void unregister_fair_sched_group(struct task_group *tg) { }
   ^
   static 
   4 warnings and 3 errors generated.


vim +/task_has_throttle_work +8575 kernel/sched/fair.c

  8471	
  8472	struct task_struct *
  8473	pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
  8474	{
  8475		struct cfs_rq *cfs_rq = &rq->cfs;
  8476		struct sched_entity *se;
  8477		struct task_struct *p;
  8478		int new_tasks;
  8479	
  8480	again:
  8481		if (!sched_fair_runnable(rq))
  8482			goto idle;
  8483	
  8484	#ifdef CONFIG_FAIR_GROUP_SCHED
  8485		if (!prev || prev->sched_class != &fair_sched_class)
  8486			goto simple;
  8487	
  8488		/*
  8489		 * Because of the set_next_buddy() in dequeue_task_fair() it is rather
  8490		 * likely that a next task is from the same cgroup as the current.
  8491		 *
  8492		 * Therefore attempt to avoid putting and setting the entire cgroup
  8493		 * hierarchy, only change the part that actually changes.
  8494		 */
  8495	
  8496		do {
  8497			struct sched_entity *curr = cfs_rq->curr;
  8498	
  8499			/*
  8500			 * Since we got here without doing put_prev_entity() we also
  8501			 * have to consider cfs_rq->curr. If it is still a runnable
  8502			 * entity, update_curr() will update its vruntime, otherwise
  8503			 * forget we've ever seen it.
  8504			 */
  8505			if (curr) {
  8506				if (curr->on_rq)
  8507					update_curr(cfs_rq);
  8508				else
  8509					curr = NULL;
  8510	
  8511				/*
  8512				 * This call to check_cfs_rq_runtime() will do the
  8513				 * throttle and dequeue its entity in the parent(s).
  8514				 * Therefore the nr_running test will indeed
  8515				 * be correct.
  8516				 */
  8517				if (unlikely(check_cfs_rq_runtime(cfs_rq))) {
  8518					cfs_rq = &rq->cfs;
  8519	
  8520					if (!cfs_rq->nr_running)
  8521						goto idle;
  8522	
  8523					goto simple;
  8524				}
  8525			}
  8526	
  8527			se = pick_next_entity(cfs_rq);
  8528			cfs_rq = group_cfs_rq(se);
  8529		} while (cfs_rq);
  8530	
  8531		p = task_of(se);
  8532	
  8533		if (unlikely(cfs_rq_of(se)->in_throttle_limbo && !task_has_throttle_work(p)))
  8534			task_work_add(p, &p->sched_throttle_work, TWA_RESUME);
  8535		/*
  8536		 * Since we haven't yet done put_prev_entity and if the selected task
  8537		 * is a different task than we started out with, try and touch the
  8538		 * least amount of cfs_rqs.
  8539		 */
  8540		if (prev != p) {
  8541			struct sched_entity *pse = &prev->se;
  8542	
  8543			while (!(cfs_rq = is_same_group(se, pse))) {
  8544				int se_depth = se->depth;
  8545				int pse_depth = pse->depth;
  8546	
  8547				if (se_depth <= pse_depth) {
  8548					put_prev_entity(cfs_rq_of(pse), pse);
  8549					pse = parent_entity(pse);
  8550				}
  8551				if (se_depth >= pse_depth) {
  8552					set_next_entity(cfs_rq_of(se), se);
  8553					se = parent_entity(se);
  8554				}
  8555			}
  8556	
  8557			put_prev_entity(cfs_rq, pse);
  8558			set_next_entity(cfs_rq, se);
  8559		}
  8560	
  8561		goto done;
  8562	simple:
  8563	#endif
  8564		if (prev)
  8565			put_prev_task(rq, prev);
  8566	
  8567		do {
  8568			se = pick_next_entity(cfs_rq);
  8569			set_next_entity(cfs_rq, se);
  8570			cfs_rq = group_cfs_rq(se);
  8571		} while (cfs_rq);
  8572	
  8573		p = task_of(se);
  8574	
> 8575		if (unlikely(cfs_rq_of(se)->in_throttle_limbo && !task_has_throttle_work(p)))
  8576			task_work_add(p, &p->sched_throttle_work, TWA_RESUME);
  8577	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


end of thread, other threads: [~2023-12-05 18:51 UTC | newest]

Thread overview: 2+ messages
     [not found] <20231130161245.3894682-2-vschneid@redhat.com>
2023-12-05 18:50 ` [RFC PATCH 1/2] sched/fair: Only throttle CFS tasks on return to userspace kernel test robot
2023-12-05 18:50 ` kernel test robot
