From: kernel test robot <lkp@intel.com>
To: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev
Subject: Re: [RFC PATCH 3/5] sched/core: Track blocked tasks retained on rq for proxy
Date: Tue, 18 Nov 2025 09:46:02 +0800 [thread overview]
Message-ID: <202511180828.6BVP2q32-lkp@intel.com> (raw)
In-Reply-To: <20251117185550.365156-4-kprateek.nayak@amd.com>
Hi Prateek,
[This is a private test report for your RFC patch.]
kernel test robot noticed the following build errors:
[auto build test ERROR on 33cf66d88306663d16e4759e9d24766b0aaa2e17]
url: https://github.com/intel-lab-lkp/linux/commits/K-Prateek-Nayak/sched-psi-Make-psi-stubs-consistent-for-CONFIG_PSI/20251118-030832
base: 33cf66d88306663d16e4759e9d24766b0aaa2e17
patch link: https://lore.kernel.org/r/20251117185550.365156-4-kprateek.nayak%40amd.com
patch subject: [RFC PATCH 3/5] sched/core: Track blocked tasks retained on rq for proxy
config: x86_64-allnoconfig (https://download.01.org/0day-ci/archive/20251118/202511180828.6BVP2q32-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251118/202511180828.6BVP2q32-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511180828.6BVP2q32-lkp@intel.com/
All errors (new ones prefixed by >>):
>> kernel/sched/core.c:6681:37: error: parameter 'p' was not declared, defaults to 'int'; ISO C99 and later do not support implicit int [-Wimplicit-int]
6681 | static inline void clear_task_proxy(p) { }
| ^
>> kernel/sched/core.c:6681:20: error: a function definition without a prototype is deprecated in all versions of C and is not supported in C23 [-Werror,-Wdeprecated-non-prototype]
6681 | static inline void clear_task_proxy(p) { }
| ^
>> kernel/sched/core.c:6681:20: error: conflicting types for 'clear_task_proxy'
kernel/sched/core.c:3663:20: note: previous declaration is here
3663 | static inline void clear_task_proxy(struct task_struct *p);
| ^
>> kernel/sched/core.c:6681:20: error: a function definition without a prototype is deprecated in all versions of C and is not supported in C23 [-Werror,-Wdeprecated-non-prototype]
6681 | static inline void clear_task_proxy(p) { }
| ^
kernel/sched/core.c:7764:12: warning: array index -1 is before the beginning of the array [-Warray-bounds]
7764 | preempt_modes[preempt_dynamic_mode] : "undef",
| ^ ~~~~~~~~~~~~~~~~~~~~
kernel/sched/core.c:7739:1: note: array 'preempt_modes' declared here
7739 | const char *preempt_modes[] = {
| ^
1 warning and 4 errors generated.
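All four errors stem from the !SCHED_PROXY_EXEC stub on line 6681, which
dropped the parameter's type and so becomes an implicit-int K&R-style
definition that then conflicts with the prototype clang cites at line 3663.
A minimal sketch of the likely fix (untested; it simply reuses the parameter
type from that earlier declaration) would be:

	/* match the forward declaration at kernel/sched/core.c:3663 */
	static inline void clear_task_proxy(struct task_struct *p) { }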
vim +6681 kernel/sched/core.c
6557
6558 /*
6559 * Find runnable lock owner to proxy for mutex blocked donor
6560 *
6561 * Follow the blocked-on relation:
6562 * task->blocked_on -> mutex->owner -> task...
6563 *
6564 * Lock order:
6565 *
6566 * p->pi_lock
6567 * rq->lock
6568 * mutex->wait_lock
6569 *
6570 * Returns the task that is going to be used as execution context (the one
6571 * that is actually going to be run on cpu_of(rq)).
6572 */
6573 static struct task_struct *
6574 find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
6575 {
6576 struct task_struct *owner = NULL;
6577 int this_cpu = cpu_of(rq);
6578 struct task_struct *p;
6579 struct mutex *mutex;
6580
6581 /* Follow blocked_on chain. */
6582 for (p = donor; task_is_blocked(p); p = owner) {
6583 mutex = p->blocked_on;
6584 /* Something changed in the chain, so pick again */
6585 if (!mutex)
6586 return NULL;
6587 /*
6588 * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
6589 * and ensure @owner sticks around.
6590 */
6591 guard(raw_spinlock)(&mutex->wait_lock);
6592
6593 /* Check again that p is blocked with wait_lock held */
6594 if (mutex != __get_task_blocked_on(p)) {
6595 /*
6596 * Something changed in the blocked_on chain and
6597 * we don't know if only at this level. So, let's
6598 * just bail out completely and let __schedule()
6599 * figure things out (pick_again loop).
6600 */
6601 return NULL;
6602 }
6603
6604 owner = __mutex_owner(mutex);
6605 if (!owner) {
6606 __clear_task_blocked_on(p, mutex);
6607 return p;
6608 }
6609
6610 if (!READ_ONCE(owner->on_rq) || owner->se.sched_delayed) {
6611 /* XXX Don't handle blocked owners/delayed dequeue yet */
6612 return proxy_deactivate(rq, donor);
6613 }
6614
6615 if (task_cpu(owner) != this_cpu) {
6616 /* XXX Don't handle migrations yet */
6617 return proxy_deactivate(rq, donor);
6618 }
6619
6620 if (task_on_rq_migrating(owner)) {
6621 /*
6622 * One of the chain of mutex owners is currently migrating to this
6623 * CPU, but has not yet been enqueued because we are holding the
6624 * rq lock. As a simple solution, just schedule rq->idle to give
6625 * the migration a chance to complete. Much like the migrate_task
6626 * case we should end up back in find_proxy_task(), this time
6627 * hopefully with all relevant tasks already enqueued.
6628 */
6629 return proxy_resched_idle(rq);
6630 }
6631
6632 /*
6633 * It's possible to race where after we check owner->on_rq
6634 * but before we check (owner_cpu != this_cpu) that the
6635 * task on another cpu was migrated back to this cpu. In
6636 * that case it could slip by our checks. So double check
6637 * we are still on this cpu and not migrating. If we get
6638 * inconsistent results, try again.
6639 */
6640 if (!task_on_rq_queued(owner) || task_cpu(owner) != this_cpu)
6641 return NULL;
6642
6643 if (owner == p) {
6644 /*
6645 * It's possible we interleave with mutex_unlock like:
6646 *
6647 * lock(&rq->lock);
6648 * find_proxy_task()
6649 * mutex_unlock()
6650 * lock(&wait_lock);
6651 * donor(owner) = current->blocked_donor;
6652 * unlock(&wait_lock);
6653 *
6654 * wake_up_q();
6655 * ...
6656 * ttwu_runnable()
6657 * __task_rq_lock()
6658 * lock(&wait_lock);
6659 * owner == p
6660 *
6661 * Which leaves us to finish the ttwu_runnable() and make it go.
6662 *
6663 * So schedule rq->idle so that ttwu_runnable() can get the rq
6664 * lock and mark owner as running.
6665 */
6666 return proxy_resched_idle(rq);
6667 }
6668 /*
6669 * OK, now we're absolutely sure @owner is on this
6670 * rq, therefore holding @rq->lock is sufficient to
6671 * guarantee its existence, as per ttwu_remote().
6672 */
6673 }
6674
6675 WARN_ON_ONCE(owner && !owner->on_rq);
6676 return owner;
6677 }
6678 #else /* SCHED_PROXY_EXEC */
6679 bool is_proxy_task(struct task_struct *p) { return false; }
6680 static inline void set_task_proxy(struct task_struct *p) { }
> 6681 static inline void clear_task_proxy(p) { }
6682 static struct task_struct *
6683 find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
6684 {
6685 WARN_ONCE(1, "This should never be called in the !SCHED_PROXY_EXEC case\n");
6686 return donor;
6687 }
6688 #endif /* SCHED_PROXY_EXEC */
6689
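For context on the NULL returns above: the comments describe bailing out so
that __schedule()'s pick_again loop can re-pick. A rough, illustrative sketch
of how such a caller might consume find_proxy_task()'s return value (this is
an assumption drawn from the excerpt's comments, not the actual __schedule()
body):

	pick_again:
		next = pick_next_task(rq, rq->donor, &rf);
		if (unlikely(task_is_blocked(next))) {
			/*
			 * find_proxy_task() returns the execution context to
			 * run on this CPU, or NULL when the blocked_on chain
			 * changed underneath us and the pick must be retried.
			 */
			next = find_proxy_task(rq, next, &rf);
			if (!next)
				goto pick_again;
		}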
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki