public inbox for llvm@lists.linux.dev
From: kernel test robot <lkp@intel.com>
To: Anatoly Stepanov <stepanov.anatoly@huawei.com>
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev
Subject: Re: [RFC PATCH v2 3/5] kernel: introduce general hq-lock support
Date: Sun, 7 Dec 2025 23:39:58 +0800	[thread overview]
Message-ID: <202512072334.p6FnpvnH-lkp@intel.com> (raw)
In-Reply-To: <20251206062106.2109014-4-stepanov.anatoly@huawei.com>

Hi Anatoly,

[This is a private test report for your RFC patch.]
kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/locking/core]
[also build test WARNING on tip/x86/core arnd-asm-generic/master v6.18]
[cannot apply to linus/master next-20251205]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Anatoly-Stepanov/kernel-introduce-Hierarchical-Queued-spinlock/20251206-070428
base:   tip/locking/core
patch link:    https://lore.kernel.org/r/20251206062106.2109014-4-stepanov.anatoly%40huawei.com
patch subject: [RFC PATCH v2 3/5] kernel: introduce general hq-lock support
config: x86_64-allmodconfig (https://download.01.org/0day-ci/archive/20251207/202512072334.p6FnpvnH-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251207/202512072334.p6FnpvnH-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202512072334.p6FnpvnH-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from kernel/locking/qspinlock.c:423:
>> kernel/locking/hqlock_core.h:611:17: warning: variable 'node_tail' set but not used [-Wunused-but-set-variable]
     611 |         u16 node_head, node_tail;
         |                        ^
   1 warning generated.


vim +/node_tail +611 kernel/locking/hqlock_core.h

1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  597  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  598  static inline void handoff_remote(struct qspinlock *lock,
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  599  						struct numa_qnode *qnode,
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  600  						u32 tail, int handoff_info)
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  601  {
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  602  	struct numa_queue *next_queue = NULL;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  603  	struct mcs_spinlock *mcs_head = NULL;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  604  	struct numa_qnode *qhead = NULL;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  605  	u16 lock_id = qnode->lock_id;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  606  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  607  	struct lock_metadata *lock_meta = get_meta(lock_id);
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  608  	struct numa_queue *queue = get_local_queue(qnode);
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  609  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  610  	u16 next_node_id;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06 @611  	u16 node_head, node_tail;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  612  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  613  	node_tail = READ_ONCE(lock_meta->tail_node);
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  614  	node_head = READ_ONCE(lock_meta->head_node);
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  615  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  616  	/*
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  617  	 * 'handoffs_not_head > 0' means the node at the head of the NUMA-level queue
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  618  	 * is heavily loaded and has performed a remote handoff upon reaching the threshold.
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  619  	 *
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  620  	 * Perform the handoff to the head instead of the next node in the NUMA-level queue
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  621  	 * if handoffs_not_head >= nr_online_nodes
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  622  	 * (i.e. every other contended node has taken the lock at least once since the head node did).
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  623  	 */
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  624  	u16 handoffs_not_head = READ_ONCE(queue->handoffs_not_head);
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  625  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  626  	if (handoff_info > 0 && (handoffs_not_head < nr_online_nodes)) {
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  627  		next_node_id = handoff_info;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  628  		if (node_head != qnode->numa_node)
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  629  			handoffs_not_head++;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  630  	} else {
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  631  		if (!node_head) {
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  632  			/* If we're here, we definitely have other node contenders; let's wait */
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  633  			next_node_id = smp_cond_load_relaxed(&lock_meta->head_node, (VAL));
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  634  		} else {
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  635  			next_node_id = node_head;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  636  		}
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  637  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  638  		handoffs_not_head = 0;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  639  	}
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  640  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  641  	next_queue = get_queue(lock_id, next_node_id);
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  642  	WRITE_ONCE(next_queue->handoffs_not_head, handoffs_not_head);
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  643  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  644  	qhead = READ_ONCE(next_queue->head);
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  645  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  646  	mcs_head = (void *) qhead;
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  647  
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  648  	/* arch_mcs_spin_unlock_contended() implies an SMP barrier */
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  649  	arch_mcs_spin_unlock_contended(&mcs_head->locked);
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  650  }
1b1736c3ff9c9f Anatoly Stepanov 2025-12-06  651  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
