From: Nicholas Piggin <npiggin@gmail.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin <npiggin@gmail.com>
Subject: [RFC PATCH 12/14] powerpc/qspinlock: add ability to prod new queue head CPU
Date: Mon, 11 Jul 2022 13:04:51 +1000
Message-ID: <20220711030453.150644-13-npiggin@gmail.com>
In-Reply-To: <20220711030453.150644-1-npiggin@gmail.com>

After the head of the queue acquires the lock, it releases the
next waiter in the queue to become the new head. Add an option to
prod (wake) the new head if its vCPU was preempted. Prodding can
only make a difference if queue waiters are yielding; a waiter that
never yields stays runnable and does not need a wake-up.
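
For illustration, the release path with the option enabled looks
roughly like the sketch below. This is not the exact kernel code,
just a restatement of the hunk against queued_spin_lock_mcs_queue()
further down; vcpu_is_preempted() and prod_cpu() are the existing
powerpc paravirt helpers (prod_cpu() sends an H_PROD-style wakeup
to the target vCPU on pseries):

	/*
	 * Sketch only: hand the queue over to the next waiter and,
	 * when pv_prod_head is set, prod it if its vCPU is preempted.
	 */
	static inline void release_new_head(struct qnode *next, bool paravirt)
	{
		if (paravirt && pv_prod_head) {
			int next_cpu = next->cpu;	/* recorded when the waiter queued */

			WRITE_ONCE(next->locked, 1);	/* new head may take the lock */
			if (vcpu_is_preempted(next_cpu))
				prod_cpu(next_cpu);	/* wake it so it notices promptly */
		} else {
			WRITE_ONCE(next->locked, 1);
		}
	}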

Disable this option by default for now, i.e., no logical change.
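
The option can be flipped at run time through the new debugfs file
qspl_pv_prod_head added below. A minimal, hypothetical userspace
sketch, assuming debugfs is mounted at /sys/kernel/debug and that
arch_debugfs_dir is the usual "powerpc" directory there:

	/* Hypothetical example: enable prodding of the new queue head. */
	#include <fcntl.h>
	#include <unistd.h>

	int enable_pv_prod_head(void)
	{
		/* Path assumes debugfs mounted at /sys/kernel/debug. */
		int fd = open("/sys/kernel/debug/powerpc/qspl_pv_prod_head", O_WRONLY);

		if (fd < 0)
			return -1;
		if (write(fd, "1", 1) != 1) {
			close(fd);
			return -1;
		}
		return close(fd);
	}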
---
 arch/powerpc/lib/qspinlock.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
index c39af19f006e..ce0563c56915 100644
--- a/arch/powerpc/lib/qspinlock.c
+++ b/arch/powerpc/lib/qspinlock.c
@@ -12,6 +12,7 @@
 struct qnode {
 	struct qnode	*next;
 	struct qspinlock *lock;
+	int		cpu;
 	int		yield_cpu;
 	u8		locked; /* 1 if lock acquired */
 };
@@ -30,6 +31,7 @@ static bool pv_yield_owner __read_mostly = true;
 static bool pv_yield_allow_steal __read_mostly = false;
 static bool pv_yield_prev __read_mostly = false;
 static bool pv_yield_propagate_owner __read_mostly = false;
+static bool pv_prod_head __read_mostly = false;
 
 static DEFINE_PER_CPU_ALIGNED(struct qnodes, qnodes);
 
@@ -378,6 +380,7 @@ static __always_inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, b
 	node = &qnodesp->nodes[idx];
 	node->next = NULL;
 	node->lock = lock;
+	node->cpu = smp_processor_id();
 	node->yield_cpu = -1;
 	node->locked = 0;
 
@@ -464,7 +467,14 @@ static __always_inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, b
 	 * this store to locked. The corresponding barrier is the smp_rmb()
 	 * acquire barrier for mcs lock, above.
 	 */
-	WRITE_ONCE(next->locked, 1);
+	if (paravirt && pv_prod_head) {
+		int next_cpu = next->cpu;
+		WRITE_ONCE(next->locked, 1);
+		if (vcpu_is_preempted(next_cpu))
+			prod_cpu(next_cpu);
+	} else {
+		WRITE_ONCE(next->locked, 1);
+	}
 
 release:
 	qnodesp->count--; /* release the node */
@@ -605,6 +615,22 @@ static int pv_yield_propagate_owner_get(void *data, u64 *val)
 
 DEFINE_SIMPLE_ATTRIBUTE(fops_pv_yield_propagate_owner, pv_yield_propagate_owner_get, pv_yield_propagate_owner_set, "%llu\n");
 
+static int pv_prod_head_set(void *data, u64 val)
+{
+	pv_prod_head = !!val;
+
+	return 0;
+}
+
+static int pv_prod_head_get(void *data, u64 *val)
+{
+	*val = pv_prod_head;
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(fops_pv_prod_head, pv_prod_head_get, pv_prod_head_set, "%llu\n");
+
 static __init int spinlock_debugfs_init(void)
 {
 	debugfs_create_file("qspl_steal_spins", 0600, arch_debugfs_dir, NULL, &fops_steal_spins);
@@ -614,6 +640,7 @@ static __init int spinlock_debugfs_init(void)
 		debugfs_create_file("qspl_pv_yield_allow_steal", 0600, arch_debugfs_dir, NULL, &fops_pv_yield_allow_steal);
 		debugfs_create_file("qspl_pv_yield_prev", 0600, arch_debugfs_dir, NULL, &fops_pv_yield_prev);
 		debugfs_create_file("qspl_pv_yield_propagate_owner", 0600, arch_debugfs_dir, NULL, &fops_pv_yield_propagate_owner);
+		debugfs_create_file("qspl_pv_prod_head", 0600, arch_debugfs_dir, NULL, &fops_pv_prod_head);
 	}
 
 	return 0;
-- 
2.35.1

