From mboxrd@z Thu Jan 1 00:00:00 1970
From: Leonardo Bras
To: Marcelo Tosatti
Cc: Leonardo Bras, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Thomas Gleixner, Waiman Long,
	Boqun Feng, Frederic Weisbecker
Subject: Re: [PATCH v3 1/4] Introducing qpw_lock() and per-cpu queue & flush work
Date: Mon, 23 Mar 2026 21:38:37 -0300
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260323180150.242567098@redhat.com>
References: <20260323175544.807534301@redhat.com>
	<20260323180150.242567098@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Mon, Mar 23, 2026 at 02:55:45PM -0300, Marcelo Tosatti wrote:
> Some places in the kernel implement a parallel programming strategy
> consisting of local_locks() for most of the work, while some rare remote
> operations are scheduled on the target cpu. This keeps cache bouncing
> low, since the cacheline tends to stay local, and avoids the cost of
> locks in non-RT kernels, even though the few remote operations will be
> expensive due to scheduling overhead.
>
> On the other hand, for RT workloads this can represent a problem:
> scheduling work on remote cpus that are executing low latency tasks is
> undesired and can introduce unexpected deadline misses.
>
> It's interesting, though, that local_lock()s in RT kernels become
> spinlock()s. We can make use of those to avoid scheduling work on a
> remote cpu, by directly updating another cpu's per_cpu structure while
> holding its spinlock().
>
> In order to do that, it's necessary to introduce a new set of functions
> to make it possible to take another cpu's per-cpu "local" lock
> (qpw_{un,}lock*), and also the corresponding queue_percpu_work_on() and
> flush_percpu_work() helpers to run the remote work.
>
> Users of non-RT kernels who have low latency requirements can select
> similar functionality with the CONFIG_QPW compile time option.
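
Thanks, this is a nice summary. Just to check my understanding of the
intended conversion, here is a minimal sketch of a before/after use
(hypothetical structure and function names, invented for illustration;
the exact lock argument form follows the documentation in this patch):

	/* Before: cpu-local fast path + remote work scheduling */
	struct obj_pcpu {
		local_lock_t lock;
		struct work_struct work;
	};
	static DEFINE_PER_CPU(struct obj_pcpu, obj);

	static void obj_drain(struct work_struct *work)
	{
		/* always runs on the cpu that owns the data */
		local_lock(&obj.lock);
		/* ... drain this cpu's data ... */
		local_unlock(&obj.lock);
	}

	/* After: same fast path, but with qpw=1 the "remote" case can
	 * run directly on the requesting cpu */
	struct obj_pcpu {
		qpw_lock_t lock;
		struct qpw_struct qpw;
	};
	static DEFINE_PER_CPU(struct obj_pcpu, obj);

	static void obj_drain(struct work_struct *work)
	{
		int cpu = qpw_get_cpu(work);

		qpw_lock(&obj.lock, cpu);
		/* ... drain @cpu's data, possibly from another cpu ... */
		qpw_unlock(&obj.lock, cpu);
	}

Is that the expected usage pattern?
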
>
> On CONFIG_QPW disabled kernels, no changes are expected, as each of the
> introduced helpers works exactly the same as the current implementation:
> qpw_{un,}lock*()       -> local_{un,}lock*() (ignores cpu parameter)
> queue_percpu_work_on() -> queue_work_on()
> flush_percpu_work()    -> flush_work()
>
> For QPW enabled kernels, though, qpw_{un,}lock*() will use the extra
> cpu parameter to select the correct per-cpu structure to work on, and
> acquire the spinlock for that cpu.
>
> queue_percpu_work_on() will just call the requested function on the
> current cpu, which will operate on another cpu's per-cpu object. Since
> the local_locks() become spinlock()s in QPW enabled kernels, we are
> safe doing that.
>
> flush_percpu_work() then becomes a no-op since no work is actually
> scheduled on a remote cpu.
>
> Some minimal code rework is needed in order to make this mechanism
> work: the calls to local_{un,}lock*() in the functions that are
> currently scheduled on remote cpus need to be replaced by
> qpw_{un,}lock*(), so in QPW enabled kernels they can reference a
> different cpu. It's also necessary to use a qpw_struct instead of a
> work_struct, but it just contains a work struct and, in CONFIG_QPW,
> the target cpu.
>
> This should have almost no impact on non-CONFIG_QPW kernels: a few
> this_cpu_ptr() calls will become per_cpu_ptr(..., smp_processor_id()).
>
> On CONFIG_QPW kernels, this should avoid deadline misses by removing
> scheduling noise.
>
> Signed-off-by: Leonardo Bras
> Signed-off-by: Marcelo Tosatti
> ---
>  Documentation/admin-guide/kernel-parameters.txt |   10
>  Documentation/locking/qpwlocks.rst              |   70 ++++++
>  MAINTAINERS                                     |    7
>  include/linux/qpw.h                             |  256 ++++++++++++++++++++++++
>  init/Kconfig                                    |   35 +++
>  kernel/Makefile                                 |    2
>  kernel/qpw.c                                    |   26 ++
>  7 files changed, 406 insertions(+)
>  create mode 100644 include/linux/qpw.h
>  create mode 100644 kernel/qpw.c
>
> Index: linux/Documentation/admin-guide/kernel-parameters.txt
> ===================================================================
> --- linux.orig/Documentation/admin-guide/kernel-parameters.txt
> +++ linux/Documentation/admin-guide/kernel-parameters.txt
> @@ -2841,6 +2841,16 @@ Kernel parameters
>
>  			The format of is described above.
>
> +	qpw=		[KNL,SMP] Select the behavior of the per-CPU resource
> +			sharing and remote interference mechanism on a kernel
> +			built with CONFIG_QPW.
> +			Format: { "0" | "1" }
> +			0 - local_lock() + queue_work_on(remote_cpu)
> +			1 - spin_lock() for both local and remote operations
> +
> +			Selecting 1 may be interesting for systems that want
> +			to avoid interruptions & context switches caused by IPIs.
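
One thing maybe worth documenting here: if I read the housekeeping bits
right, with CONFIG_QPW, qpw_init() (in kernel/qpw.c below) auto-enables
behavior 1 when kernel-noise housekeeping is active, so the parameter is
mostly useful to override that default. A usage sketch (hypothetical
command line, cpu list invented for illustration):

	# isolated cpus via nohz_full auto-select qpw behavior 1;
	# pass qpw=0 to keep the local_lock() + queue_work_on() behavior
	linux ... nohz_full=2-7 qpw=0
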
> +
>  	iucv=		[HW,NET]
>
>  	ivrs_ioapic	[HW,X86-64]
>
> Index: linux/MAINTAINERS
> ===================================================================
> --- linux.orig/MAINTAINERS
> +++ linux/MAINTAINERS
> @@ -21536,6 +21536,13 @@ F: Documentation/networking/device_drive
>  F:	drivers/bus/fsl-mc/
>  F:	include/uapi/linux/fsl_mc.h
>
> +QPW
> +M:	Leonardo Bras
> +S:	Supported
> +F:	Documentation/locking/qpwlocks.rst
> +F:	include/linux/qpw.h
> +F:	kernel/qpw.c
> +
>  QT1010 MEDIA DRIVER
>  L:	linux-media@vger.kernel.org
>  S:	Orphan
>
> Index: linux/include/linux/qpw.h
> ===================================================================
> --- /dev/null
> +++ linux/include/linux/qpw.h
> @@ -0,0 +1,264 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _LINUX_QPW_H
> +#define _LINUX_QPW_H
> +
> +#include <linux/spinlock.h>
> +#include <linux/local_lock.h>
> +#include <linux/workqueue.h>
> +
> +#ifndef CONFIG_QPW
> +
> +typedef local_lock_t qpw_lock_t;
> +typedef local_trylock_t qpw_trylock_t;
> +
> +struct qpw_struct {
> +	struct work_struct work;
> +};
> +
> +#define qpw_lock_init(lock) \
> +	local_lock_init(lock)
> +
> +#define qpw_trylock_init(lock) \
> +	local_trylock_init(lock)
> +
> +#define qpw_lock(lock, cpu) \
> +	local_lock(lock)
> +
> +#define local_qpw_lock(lock) \
> +	local_lock(lock)
> +
> +#define qpw_lock_irqsave(lock, flags, cpu) \
> +	local_lock_irqsave(lock, flags)
> +
> +#define local_qpw_lock_irqsave(lock, flags) \
> +	local_lock_irqsave(lock, flags)
> +
> +#define qpw_trylock(lock, cpu) \
> +	local_trylock(lock)
> +
> +#define local_qpw_trylock(lock) \
> +	local_trylock(lock)
> +
> +#define qpw_trylock_irqsave(lock, flags, cpu) \
> +	local_trylock_irqsave(lock, flags)
> +
> +#define qpw_unlock(lock, cpu) \
> +	local_unlock(lock)
> +
> +#define local_qpw_unlock(lock) \
> +	local_unlock(lock)
> +
> +#define qpw_unlock_irqrestore(lock, flags, cpu) \
> +	local_unlock_irqrestore(lock, flags)
> +
> +#define local_qpw_unlock_irqrestore(lock, flags) \
> +	local_unlock_irqrestore(lock, flags)
> +
> +#define qpw_lockdep_assert_held(lock) \
> +	lockdep_assert_held(lock)
> +
> +#define queue_percpu_work_on(c, wq, qpw) \
> +	queue_work_on(c, wq, &(qpw)->work)
> +
> +#define flush_percpu_work(qpw) \
> +	flush_work(&(qpw)->work)
> +
> +#define qpw_get_cpu(qpw)	smp_processor_id()
> +
> +#define qpw_is_cpu_remote(cpu)	(false)
> +
> +#define INIT_QPW(qpw, func, c) \
> +	INIT_WORK(&(qpw)->work, (func))
> +
> +#else /* CONFIG_QPW */
> +
> +DECLARE_STATIC_KEY_MAYBE(CONFIG_QPW_DEFAULT, qpw_sl);
> +
> +typedef union {
> +	spinlock_t sl;
> +	local_lock_t ll;
> +} qpw_lock_t;
> +
> +typedef union {
> +	spinlock_t sl;
> +	local_trylock_t ll;
> +} qpw_trylock_t;
> +
> +struct qpw_struct {
> +	struct work_struct work;
> +	int cpu;
> +};
> +
> +#ifdef CONFIG_PREEMPT_RT
> +#define preempt_or_migrate_disable	migrate_disable
> +#define preempt_or_migrate_enable	migrate_enable
> +#else
> +#define preempt_or_migrate_disable	preempt_disable
> +#define preempt_or_migrate_enable	preempt_enable
> +#endif

Nice!
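For anyone else reviewing: on PREEMPT_RT a spinlock_t is a sleeping lock
(rtmutex), so it must not be taken with preemption disabled;
migrate_disable() is enough to keep this_cpu_ptr() stable across the
critical section. Maybe worth capturing that in a comment, something
like (suggestion only):

	#ifdef CONFIG_PREEMPT_RT
	/*
	 * On RT, spinlock_t is a sleeping lock, so it must not be taken
	 * with preemption disabled. migrate_disable() still guarantees
	 * we stay on this cpu, which keeps this_cpu_ptr() stable.
	 */
	#define preempt_or_migrate_disable	migrate_disable
	#define preempt_or_migrate_enable	migrate_enable
	#else
	#define preempt_or_migrate_disable	preempt_disable
	#define preempt_or_migrate_enable	preempt_enable
	#endif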
> +
> +#define qpw_lock_init(lock) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) \
> +			spin_lock_init(lock.sl); \
> +		else \
> +			local_lock_init(lock.ll); \
> +	} while (0)
> +
> +#define qpw_trylock_init(lock) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) \
> +			spin_lock_init(lock.sl); \
> +		else \
> +			local_trylock_init(lock.ll); \
> +	} while (0)
> +
> +#define qpw_lock(lock, cpu) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) \
> +			spin_lock(per_cpu_ptr(lock.sl, cpu)); \
> +		else \
> +			local_lock(lock.ll); \
> +	} while (0)
> +
> +#define local_qpw_lock(lock) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) { \
> +			preempt_or_migrate_disable(); \
> +			spin_lock(this_cpu_ptr(lock.sl)); \
> +		} else \
> +			local_lock(lock.ll); \
> +	} while (0)
> +
> +#define qpw_lock_irqsave(lock, flags, cpu) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) \
> +			spin_lock_irqsave(per_cpu_ptr(lock.sl, cpu), flags); \
> +		else \
> +			local_lock_irqsave(lock.ll, flags); \
> +	} while (0)
> +
> +#define local_qpw_lock_irqsave(lock, flags) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) { \
> +			preempt_or_migrate_disable(); \
> +			spin_lock_irqsave(this_cpu_ptr(lock.sl), flags); \
> +		} else \
> +			local_lock_irqsave(lock.ll, flags); \
> +	} while (0)
> +
> +#define qpw_trylock(lock, cpu) \
> +	({ \
> +		int t; \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) \
> +			t = spin_trylock(per_cpu_ptr(lock.sl, cpu)); \
> +		else \
> +			t = local_trylock(lock.ll); \
> +		t; \
> +	})
> +
> +#define local_qpw_trylock(lock) \
> +	({ \
> +		int t; \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) { \
> +			preempt_or_migrate_disable(); \
> +			t = spin_trylock(this_cpu_ptr(lock.sl)); \
> +			if (!t) \
> +				preempt_or_migrate_enable(); \
> +		} else \
> +			t = local_trylock(lock.ll); \
> +		t; \
> +	})
> +
> +#define qpw_trylock_irqsave(lock, flags, cpu) \
> +	({ \
> +		int t; \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) \
> +			t = spin_trylock_irqsave(per_cpu_ptr(lock.sl, cpu), flags); \
> +		else \
> +			t = local_trylock_irqsave(lock.ll, flags); \
> +		t; \
> +	})
> +
> +#define qpw_unlock(lock, cpu) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) { \
> +			spin_unlock(per_cpu_ptr(lock.sl, cpu)); \
> +		} else { \
> +			local_unlock(lock.ll); \
> +		} \
> +	} while (0)
> +
> +#define local_qpw_unlock(lock) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) { \
> +			spin_unlock(this_cpu_ptr(lock.sl)); \
> +			preempt_or_migrate_enable(); \
> +		} else { \
> +			local_unlock(lock.ll); \
> +		} \
> +	} while (0)
> +
> +#define qpw_unlock_irqrestore(lock, flags, cpu) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) \
> +			spin_unlock_irqrestore(per_cpu_ptr(lock.sl, cpu), flags); \
> +		else \
> +			local_unlock_irqrestore(lock.ll, flags); \
> +	} while (0)
> +
> +#define local_qpw_unlock_irqrestore(lock, flags) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) { \
> +			spin_unlock_irqrestore(this_cpu_ptr(lock.sl), flags); \
> +			preempt_or_migrate_enable(); \
> +		} else \
> +			local_unlock_irqrestore(lock.ll, flags); \
> +	} while (0)
> +
> +#define qpw_lockdep_assert_held(lock) \
> +	do { \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) \
> +			lockdep_assert_held(this_cpu_ptr(lock.sl)); \
> +		else \
> +			lockdep_assert_held(this_cpu_ptr(lock.ll)); \
> +	} while (0)
> +
> +#define queue_percpu_work_on(c, wq, qpw) \
> +	do { \
> +		int __c = (c); \
> +		struct qpw_struct *__qpw = (qpw); \
> +		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) { \
> +			WARN_ON(__c != __qpw->cpu); \
> +			__qpw->work.func(&__qpw->work); \
> +		} else { \
> +			queue_work_on(__c, wq, &__qpw->work); \
> +		} \
> +	} while (0)
> +
> +/*
> + * Does nothing if QPW is set to use spinlocks, as the work is already
> + * done by the time queue_percpu_work_on() returns.
> + */
> +#define flush_percpu_work(qpw) \
> +	do { \
> +		struct qpw_struct *__qpw = (qpw); \
> +		if (!static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) \
> +			flush_work(&__qpw->work); \
> +	} while (0)
> +
> +#define qpw_get_cpu(w)	(container_of((w), struct qpw_struct, work)->cpu)
> +
> +#define qpw_is_cpu_remote(cpu)	((cpu) != smp_processor_id())
> +
> +#define INIT_QPW(qpw, func, c) \
> +	do { \
> +		struct qpw_struct *__qpw = (qpw); \
> +		INIT_WORK(&__qpw->work, (func)); \
> +		__qpw->cpu = (c); \
> +	} while (0)
> +
> +#endif /* CONFIG_QPW */
> +#endif /* _LINUX_QPW_H */
> Index: linux/init/Kconfig
> ===================================================================
> --- linux.orig/init/Kconfig
> +++ linux/init/Kconfig
> @@ -762,6 +762,41 @@ config CPU_ISOLATION
>
>  	  Say Y if unsure.
>
> +config QPW
> +	bool "Queue per-CPU Work"
> +	depends on SMP || COMPILE_TEST
> +	default n
> +	help
> +	  Allow changing the behavior of per-CPU resource sharing from the
> +	  regular local_lock() + queue_work_on(remote_cpu) strategy to
> +	  per-CPU spinlocks for both local and remote operations.
> +
> +	  This is useful to give the user the option of reducing IPIs to
> +	  CPUs, and thus reduce interruptions and context switches. On the
> +	  other hand, it increases generated code size and will use atomic
> +	  operations if spinlocks are selected.
> +
> +	  If set, the default behavior set in QPW_DEFAULT is used, unless
> +	  the qpw boot parameter is passed with a different behavior.
> +
> +	  If unset, the local_lock() + queue_work_on() strategy is used,
> +	  regardless of the boot parameter or QPW_DEFAULT.
> +
> +	  Say N if unsure.
> +
> +config QPW_DEFAULT
> +	bool "Use per-CPU spinlocks by default"
> +	depends on QPW
> +	default n
> +	help
> +	  If set, per-CPU spinlocks are the default behavior for per-CPU
> +	  remote operations.
> +
> +	  If unset, local_lock() + queue_work_on(cpu) is the default
> +	  behavior for remote operations.
> +
> +	  Say N if unsure.
> +
>  source "kernel/rcu/Kconfig"
>
>  config IKCONFIG
> Index: linux/kernel/Makefile
> ===================================================================
> --- linux.orig/kernel/Makefile
> +++ linux/kernel/Makefile
> @@ -142,6 +142,8 @@ obj-$(CONFIG_WATCH_QUEUE) += watch_queue
>  obj-$(CONFIG_RESOURCE_KUNIT_TEST) += resource_kunit.o
>  obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o
>
> +obj-$(CONFIG_QPW) += qpw.o
> +
>  CFLAGS_kstack_erase.o += $(DISABLE_KSTACK_ERASE)
>  CFLAGS_kstack_erase.o += $(call cc-option,-mgeneral-regs-only)
>  obj-$(CONFIG_KSTACK_ERASE) += kstack_erase.o
> Index: linux/kernel/qpw.c
> ===================================================================
> --- /dev/null
> +++ linux/kernel/qpw.c
> @@ -0,0 +1,47 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/export.h>
> +#include <linux/init.h>
> +#include <linux/jump_label.h>
> +#include <linux/kernel.h>
> +#include <linux/sched/isolation.h>
> +
> +DEFINE_STATIC_KEY_MAYBE(CONFIG_QPW_DEFAULT, qpw_sl);
> +EXPORT_SYMBOL(qpw_sl);
> +
> +static bool qpw_param_specified;
> +
> +static int __init qpw_setup(char *str)
> +{
> +	int opt;
> +
> +	if (!get_option(&str, &opt)) {
> +		pr_warn("QPW: invalid qpw parameter: %s, ignoring.\n", str);
> +		return 0;
> +	}
> +
> +	if (opt)
> +		static_branch_enable(&qpw_sl);
> +	else
> +		static_branch_disable(&qpw_sl);
> +
> +	qpw_param_specified = true;
> +
> +	return 1;
> +}
> +__setup("qpw=", qpw_setup);
> +
> +/*
> + * Enable QPW by default if some CPUs are set up to avoid kernel noise.
> + */
> +static int __init qpw_init(void)
> +{
> +	if (qpw_param_specified)
> +		return 0;
> +
> +	if (housekeeping_enabled(HK_TYPE_KERNEL_NOISE))
> +		static_branch_enable(&qpw_sl);
> +
> +	return 0;
> +}
> +late_initcall(qpw_init);

Awesome! Clean and efficient!

> Index: linux/Documentation/locking/qpwlocks.rst
> ===================================================================
> --- /dev/null
> +++ linux/Documentation/locking/qpwlocks.rst
> @@ -0,0 +1,76 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=========
> +QPW locks
> +=========
> +
> +Some places in the kernel implement a parallel programming strategy
> +consisting of local_locks() for most of the work, while some rare
> +remote operations are scheduled on the target cpu. This keeps cache
> +bouncing low, since the cacheline tends to stay local, and avoids the
> +cost of locks in non-RT kernels, even though the few remote operations
> +will be expensive due to scheduling overhead.
> +
> +On the other hand, for RT workloads this can represent a problem:
> +scheduling work on remote cpus that are executing low latency tasks is
> +undesired and can introduce unexpected deadline misses.
> +
> +QPW locks help to convert sites that use local_locks (for cpu-local
> +operations) and queue_work_on (for queueing work remotely, to be
> +executed locally on the owner cpu of the lock).
> +
> +The lock is declared with the qpw_lock_t type.
> +The lock is initialized with qpw_lock_init.
> +The lock is locked with qpw_lock (takes a lock and a cpu as parameters).
> +The lock is unlocked with qpw_unlock (takes a lock and a cpu as parameters).
> +
> +The qpw_lock_irqsave variant disables interrupts and saves the current
> +interrupt state; it also takes a cpu as a parameter.
> +
> +For the trylock variant, there is the qpw_trylock_t type, initialized
> +with qpw_trylock_init, and the corresponding qpw_trylock and
> +qpw_trylock_irqsave.
> +
> +work_struct should be replaced by qpw_struct, which contains a cpu
> +field (the owner cpu of the lock) and is initialized by INIT_QPW.
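
One small suggestion for the doc: a short usage example would help new
users. Something like this, maybe (hypothetical structure and function
names, invented for illustration; argument forms as described above):

	struct obj_pcpu {
		qpw_lock_t lock;
		struct qpw_struct qpw;
		/* ... per-cpu data ... */
	};
	static DEFINE_PER_CPU(struct obj_pcpu, obj);

	/* work function: drains the data owned by qpw_get_cpu(work) */
	static void obj_drain_work(struct work_struct *work)
	{
		int cpu = qpw_get_cpu(work);

		qpw_lock(&obj.lock, cpu);
		/* ... operate on per_cpu(obj, cpu) ... */
		qpw_unlock(&obj.lock, cpu);
	}

	static void obj_drain_all(struct workqueue_struct *wq)
	{
		int cpu;

		/* queue (or, with qpw=1, directly run) one drain per cpu */
		for_each_online_cpu(cpu) {
			struct qpw_struct *qpw = &per_cpu(obj, cpu).qpw;

			INIT_QPW(qpw, obj_drain_work, cpu);
			queue_percpu_work_on(cpu, wq, qpw);
		}

		/* no-op when the work already ran synchronously */
		for_each_online_cpu(cpu)
			flush_percpu_work(&per_cpu(obj, cpu).qpw);
	}
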
> +
> +The queue-work-related functions (analogous to queue_work_on and
> +flush_work) are queue_percpu_work_on and flush_percpu_work.
> +
> +The behaviour of the QPW functions is as follows:
> +
> +* !CONFIG_QPW (or CONFIG_QPW and the qpw=0 kernel boot parameter):
> +
> +  - qpw_lock: local_lock
> +  - qpw_lock_irqsave: local_lock_irqsave
> +  - qpw_trylock: local_trylock
> +  - qpw_trylock_irqsave: local_trylock_irqsave
> +  - qpw_unlock: local_unlock
> +  - local_qpw_lock: local_lock
> +  - local_qpw_trylock: local_trylock
> +  - local_qpw_unlock: local_unlock
> +  - queue_percpu_work_on: queue_work_on
> +  - flush_percpu_work: flush_work
> +
> +* CONFIG_QPW (and CONFIG_QPW_DEFAULT=y or the qpw=1 kernel boot parameter):
> +
> +  - qpw_lock: spin_lock
> +  - qpw_lock_irqsave: spin_lock_irqsave
> +  - qpw_trylock: spin_trylock
> +  - qpw_trylock_irqsave: spin_trylock_irqsave
> +  - qpw_unlock: spin_unlock
> +  - local_qpw_lock: preempt_disable OR migrate_disable + spin_lock
> +  - local_qpw_trylock: preempt_disable OR migrate_disable + spin_trylock
> +  - local_qpw_unlock: spin_unlock + preempt_enable OR migrate_enable
> +  - queue_percpu_work_on: executes the work function on the caller cpu
> +  - flush_percpu_work: no-op
> +
> +qpw_get_cpu(work_struct), to be called from within the qpw work
> +function, returns the target cpu.
> +
> +In addition to the locking functions above, there are the local locking
> +functions (local_qpw_lock, local_qpw_trylock and local_qpw_unlock).
> +These must only be used to access per-CPU data owned by the local CPU,
> +never remotely. They disable preemption or migration and don't require
> +a cpu parameter.

Awesome! Thanks for this new version, Marcelo!

Leo