From: Leonardo Bras
To: Marcelo Tosatti
Cc: Leonardo Bras, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Thomas Gleixner, Waiman Long,
	Boqun Feng, Frederic Weisbecker, Peter Zijlstra
Subject: Re: [PATCH v3 0/4] Introduce QPW for per-cpu operations (v3)
Date: Wed, 15 Apr 2026 18:10:58 -0300
In-Reply-To: <20260323175544.807534301@redhat.com>
References: <20260323175544.807534301@redhat.com>

Cc: Peter Zijlstra

On Mon, Mar 23, 2026 at 02:55:44PM -0300, Marcelo Tosatti wrote:
> The problem:
> Some places in the kernel implement a parallel programming strategy
> consisting of local_locks() for most of the work, while some rare remote
> operations are scheduled on the target cpu. This keeps cache bouncing low,
> since the cacheline tends to stay mostly local, and avoids the cost of
> locks on non-RT kernels, even though the very few remote operations will
> be expensive due to scheduling overhead.
>
> On the other hand, for RT workloads this can represent a problem: getting
> an important workload scheduled out to deal with remote requests is
> sure to introduce unexpected deadline misses.
>
> The idea:
> Currently, with PREEMPT_RT=y, local_locks() become per-cpu spinlocks.
> In this case, instead of scheduling work on a remote cpu, it should
> be safe to grab that remote cpu's per-cpu spinlock and run the required
> work locally. The major cost, un/locking in every local function,
> already happens on PREEMPT_RT.
>
> Also, there is no need to worry about extra cache bouncing:
> the cacheline invalidation already happens due to schedule_work_on().
>
> This will avoid schedule_work_on(), and thus avoid scheduling out an
> RT workload.
>
> Proposed solution:
> A new interface called Queue PerCPU Work (QPW), which should replace
> the workqueue in the above mentioned use case.
>
> If CONFIG_QPW=n, this interface just wraps the current
> local_locks + workqueue behavior, so no change in runtime is expected.
>
> If CONFIG_QPW=y, and the qpw kernel boot option is 1,
> queue_percpu_work_on(cpu, ...) will lock that cpu's per-cpu structure
> and perform the work on it locally.
> This is possible because, in functions that can be used for performing
> work on remote per-cpu structures, the local_lock (which is already
> a this_cpu spinlock()) is replaced by a qpw_spinlock(), which
> is able to take the per-cpu spinlock() of the cpu passed as a parameter.
>
> v2->v3:
> - Use preempt_disable/preempt_enable on !CONFIG_PREEMPT_RT (Vlastimil Babka).
> - Improve documentation to include local_qpw_lock in the operations table
>   (Leonardo Bras).
> - Enable qpw=1 automatically if CPU isolation is enabled (Vlastimil Babka).
>
> v1->v2:
> - Introduce local_qpw_lock and unlock functions, and move preempt_disable/
>   preempt_enable into them (Leonardo Bras). This reduces the performance
>   overhead of the patch.
> - Documentation and changelog typo fixes (Leonardo Bras).
> - Fix places where preempt_disable/preempt_enable was not being
>   correctly performed.
> - Add performance measurements.
>
> RFC->v1:
> - Introduce CONFIG_QPW and the qpw= kernel boot option to enable
>   remote spinlocking and execution even on !CONFIG_PREEMPT_RT
>   kernels (Leonardo Bras).
> - Move buffer_head draining to a separate workqueue (Marcelo Tosatti).
> - Convert mlock per-CPU page lists to QPW (Marcelo Tosatti).
> - Drop the memcontrol conversion (as isolated CPUs are no longer targets
>   of queue_work_on).
> - Rebase SLUB against Vlastimil's slab/next.
> - Add a basic document for QPW (Waiman Long).
>
> The performance numbers, as measured by the test program below,
> are as follows:
>
> CONFIG_PREEMPT_DYNAMIC=y
> Unpatched kernel:                     60 cycles
> Patched kernel, CONFIG_QPW=n:         62 cycles
> Patched kernel, CONFIG_QPW=y, qpw=0:  62 cycles
> Patched kernel, CONFIG_QPW=y, qpw=1:  75 cycles
>
> CONFIG_PREEMPT_RT:
> Unpatched kernel:                     95 cycles
> Patched kernel, CONFIG_QPW=y, qpw=0:  99 cycles
> Patched kernel, CONFIG_QPW=y, qpw=1:  97 cycles
>
> kmalloc_bench.c:
> #include <linux/module.h>
> #include <linux/moduleparam.h>
> #include <linux/kernel.h>
> #include <linux/init.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> #include <linux/timex.h>
>
> MODULE_LICENSE("GPL");
> MODULE_AUTHOR("Gemini AI");
> MODULE_DESCRIPTION("A simple kmalloc performance benchmark");
>
> static int size = 64; // Default allocation size in bytes
> module_param(size, int, 0644);
>
> static int iterations = 9000000; // Default number of iterations
> module_param(iterations, int, 0644);
>
> static int __init kmalloc_bench_init(void) {
> 	void **ptrs;
> 	cycles_t start, end;
> 	uint64_t total_cycles;
> 	int i;
>
> 	pr_info("kmalloc_bench: Starting test (size=%d, iterations=%d)\n", size, iterations);
>
> 	// Allocate an array to store pointers to avoid immediate kfree-reuse optimization
> 	ptrs = vmalloc(sizeof(void *) * iterations);
> 	if (!ptrs) {
> 		pr_err("kmalloc_bench: Failed to allocate pointer array\n");
> 		return -ENOMEM;
> 	}
>
> 	preempt_disable();
> 	start = get_cycles();
>
> 	for (i = 0; i < iterations; i++) {
> 		ptrs[i] = kmalloc(size, GFP_ATOMIC);
> 	}
>
> 	end = get_cycles();
>
> 	total_cycles = end - start;
> 	preempt_enable();
>
> 	pr_info("kmalloc_bench: Total cycles for %d allocs: %llu\n", iterations, total_cycles);
> 	pr_info("kmalloc_bench: Avg cycles per kmalloc: %llu\n", total_cycles / iterations);
>
> 	// Cleanup
> 	for (i = 0; i < iterations; i++) {
> 		kfree(ptrs[i]);
> 	}
> 	vfree(ptrs);
>
> 	return 0;
> }
>
> static void __exit kmalloc_bench_exit(void) {
> 	pr_info("kmalloc_bench: Module unloaded\n");
> }
>
> module_init(kmalloc_bench_init);
> module_exit(kmalloc_bench_exit);
>
> The following testcase triggers lru_add_drain_all on an isolated CPU
> (which does a sys_write to a file before entering its realtime loop).
>
> /*
>  * Simulates a low latency loop program that is interrupted
>  * due to lru_add_drain_all. To trigger lru_add_drain_all, run:
>  *
>  * blockdev --flushbufs /dev/sdX
>  *
>  */
> #define _GNU_SOURCE
> #include <pthread.h>
> #include <sched.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <unistd.h>
> #include <fcntl.h>
> #include <errno.h>
> #include <sys/types.h>
> #include <sys/stat.h>
> #include <sys/syscall.h>
>
> int cpu;
>
> static void *run(void *arg)
> {
> 	pthread_t current_thread;
> 	cpu_set_t cpuset;
> 	int ret, nrloops = 0;
> 	struct sched_param sched_p;
> 	pid_t pid;
> 	int fd;
> 	char buf[] = "xxxxxxxxxxx";
>
> 	CPU_ZERO(&cpuset);
> 	CPU_SET(cpu, &cpuset);
>
> 	current_thread = pthread_self();
> 	ret = pthread_setaffinity_np(current_thread, sizeof(cpu_set_t), &cpuset);
> 	if (ret) {
> 		perror("pthread_setaffinity_np failed\n");
> 		exit(0);
> 	}
>
> 	memset(&sched_p, 0, sizeof(struct sched_param));
> 	sched_p.sched_priority = 1;
> 	pid = gettid();
> 	ret = sched_setscheduler(pid, SCHED_FIFO, &sched_p);
> 	if (ret) {
> 		perror("sched_setscheduler");
> 		exit(0);
> 	}
>
> 	fd = open("/tmp/tmpfile", O_RDWR|O_CREAT|O_TRUNC, 0644);
> 	if (fd == -1) {
> 		perror("open");
> 		exit(0);
> 	}
>
> 	ret = write(fd, buf, sizeof(buf));
> 	if (ret == -1) {
> 		perror("write");
> 		exit(0);
> 	}
>
> 	do {
> 		nrloops = nrloops+2;
> 		nrloops--;
> 	} while (1);
> }
>
> int main(int argc, char *argv[])
> {
> 	int fd, ret;
> 	pthread_t thread;
> 	long val;
> 	char *endptr, *str;
> 	struct sched_param sched_p;
> 	pid_t pid;
>
> 	if (argc != 2) {
> 		printf("usage: %s cpu-nr\n", argv[0]);
> 		printf("where CPU number is the CPU to pin thread to\n");
> 		exit(0);
> 	}
> 	str = argv[1];
> 	cpu = strtol(str, &endptr, 10);
> 	if (cpu < 0) {
> 		printf("strtol returns %d\n", cpu);
> 		exit(0);
> 	}
> 	printf("cpunr=%d\n", cpu);
>
> 	memset(&sched_p, 0, sizeof(struct sched_param));
> 	sched_p.sched_priority = 1;
> 	pid = getpid();
> 	ret = sched_setscheduler(pid, SCHED_FIFO, &sched_p);
> 	if (ret) {
> 		perror("sched_setscheduler");
> 		exit(0);
> 	}
>
> 	pthread_create(&thread, NULL, run, NULL);
>
> 	sleep(5000);
>
> 	pthread_join(thread, NULL);
> }
>
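To check my own understanding of the mechanism described above, here is a
rough before/after sketch. Everything named drain_* below, as well as the
qpw_spin_lock()/qpw_spin_unlock() helpers, are illustrative placeholders
inferred from the cover letter, not the actual API introduced by this series:

/* Current pattern: remote per-cpu work is queued on the target CPU via the
 * workqueue, so whatever is running there gets preempted (bad on isolated
 * RT CPUs). */
static DEFINE_PER_CPU(struct work_struct, drain_work);

static void drain_remote(int cpu)
{
	schedule_work_on(cpu, per_cpu_ptr(&drain_work, cpu));
}

/* QPW pattern (PREEMPT_RT, or CONFIG_QPW=y with qpw=1): the local_lock is
 * already a per-cpu spinlock, so the requesting CPU can take the remote
 * CPU's lock and run the work itself, leaving the remote CPU undisturbed.
 * Lock type and helper names below are hypothetical. */
static DEFINE_PER_CPU(spinlock_t, drain_lock);

static void drain_remote_qpw(int cpu)
{
	qpw_spin_lock(&drain_lock, cpu);	/* hypothetical: lock cpu's per-cpu lock */
	do_drain(cpu);				/* the work schedule_work_on() used to run remotely */
	qpw_spin_unlock(&drain_lock, cpu);
}
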