From: Leonardo Bras <leobras@redhat.com>
To: Waiman Long
Cc: Leonardo Bras, Johannes Weiner, Michal Hocko, Roman Gushchin,
 Shakeel Butt, Muchun Song, Andrew Morton, Christoph Lameter,
 Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, Thomas Gleixner, Marcelo Tosatti,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-mm@kvack.org
Subject: Re: [RFC PATCH v1 1/4] Introducing qpw_lock() and per-cpu queue & flush work
Date: Wed, 11 Sep 2024 04:17:05 -0300
References: <20240622035815.569665-1-leobras@redhat.com>
 <20240622035815.569665-2-leobras@redhat.com>

On Wed, Sep 04, 2024 at 05:39:01PM -0400, Waiman Long wrote:
> On 6/21/24 23:58, Leonardo Bras wrote:
> > Some places in the kernel implement a parallel programming strategy
> > consisting of local_lock()s for most of the work, with the few rare
> > remote operations scheduled on the target cpu. This keeps cache
> > bouncing low, since cachelines tend to stay local, and avoids the
> > cost of locks in non-RT kernels, even though the few remote
> > operations will be expensive due to scheduling overhead.
> >
> > On the other hand, for RT workloads this can represent a problem:
> > getting an important workload scheduled out to deal with some
> > unrelated task is sure to introduce unexpected deadline misses.
> >
> > It's interesting, though, that local_lock()s in RT kernels become
> > spinlock()s. We can make use of those to avoid scheduling work on a
> > remote cpu by directly updating another cpu's per_cpu structure,
> > while holding its spinlock().
> >
> > In order to do that, it's necessary to introduce a new set of
> > functions to make it possible to get another cpu's per-cpu "local"
> > lock (qpw_{un,}lock*), and also the corresponding
> > queue_percpu_work_on() and flush_percpu_work() helpers to run the
> > remote work.
> >
> > On non-RT kernels, no changes are expected, as every one of the
> > introduced helpers works exactly the same as the current
> > implementation:
> > qpw_{un,}lock*()       -> local_{un,}lock*() (ignores cpu parameter)
> > queue_percpu_work_on() -> queue_work_on()
> > flush_percpu_work()    -> flush_work()
> >
> > For RT kernels, though, qpw_{un,}lock*() will use the extra cpu
> > parameter to select the correct per-cpu structure to work on, and
> > acquire the spinlock for that cpu.
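
To make the expected conversion concrete, here is a minimal sketch of a
hypothetical caller. All names below (my_pcpu_cache, my_drain_work,
my_drain_all) are made up for illustration and are not part of this
series:

#include <linux/percpu.h>
#include <linux/workqueue.h>
#include <linux/qpw.h>

struct my_pcpu_cache {
	local_lock_t lock;	/* becomes a spinlock on PREEMPT_RT */
	int count;
	struct qpw_struct qpw;
};

static DEFINE_PER_CPU(struct my_pcpu_cache, my_cache) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

/* Runs on @cpu for !RT; runs inline on the requesting cpu for RT. */
static void my_drain_work(struct work_struct *work)
{
	int cpu = qpw_get_cpu(work);
	struct my_pcpu_cache *c = per_cpu_ptr(&my_cache, cpu);

	qpw_lock(&my_cache.lock, cpu);
	c->count = 0;
	qpw_unlock(&my_cache.lock, cpu);
}

/* Drain every cpu's cache, then wait for all drains to finish. */
static void my_drain_all(struct workqueue_struct *wq)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct my_pcpu_cache *c = per_cpu_ptr(&my_cache, cpu);

		INIT_QPW(&c->qpw, my_drain_work, cpu);
		queue_percpu_work_on(cpu, wq, &c->qpw);
	}

	for_each_online_cpu(cpu)
		flush_percpu_work(&per_cpu_ptr(&my_cache, cpu)->qpw);
}
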
> >
> > queue_percpu_work_on() will just call the requested function on the
> > current cpu, which will operate on another cpu's per-cpu object.
> > Since the local_lock()s become spinlock()s in PREEMPT_RT, we are
> > safe doing that.
> >
> > flush_percpu_work() then becomes a no-op, since no work is actually
> > scheduled on a remote cpu.
> >
> > Some minimal code rework is needed in order to make this mechanism
> > work: the calls to local_{un,}lock*() in the functions that are
> > currently scheduled on remote cpus need to be replaced by
> > qpw_{un,}lock*(), so in RT kernels they can reference a different
> > cpu. It's also necessary to use a qpw_struct instead of a
> > work_struct, but it just contains a work_struct and, in PREEMPT_RT,
> > the target cpu.
> >
> > This should have almost no impact on non-RT kernels: a few
> > this_cpu_ptr() calls will become per_cpu_ptr(, smp_processor_id()).
> >
> > On RT kernels, this should improve performance and reduce latency
> > by removing scheduling noise.
> >
> > Signed-off-by: Leonardo Bras
> > ---
> >  include/linux/qpw.h | 88 +++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 88 insertions(+)
> >  create mode 100644 include/linux/qpw.h
> >
> > diff --git a/include/linux/qpw.h b/include/linux/qpw.h
> > new file mode 100644
> > index 000000000000..ea2686a01e5e
> > --- /dev/null
> > +++ b/include/linux/qpw.h
> > @@ -0,0 +1,88 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef _LINUX_QPW_H
> > +#define _LINUX_QPW_H
> > +
> > +#include "linux/local_lock.h"
> > +#include "linux/workqueue.h"
> > +
> > +#ifndef CONFIG_PREEMPT_RT
> > +
> > +struct qpw_struct {
> > +	struct work_struct work;
> > +};
> > +
> > +#define qpw_lock(lock, cpu) \
> > +	local_lock(lock)
> > +
> > +#define qpw_unlock(lock, cpu) \
> > +	local_unlock(lock)
> > +
> > +#define qpw_lock_irqsave(lock, flags, cpu) \
> > +	local_lock_irqsave(lock, flags)
> > +
> > +#define qpw_unlock_irqrestore(lock, flags, cpu) \
> > +	local_unlock_irqrestore(lock, flags)
> > +
> > +#define queue_percpu_work_on(c, wq, qpw) \
> > +	queue_work_on(c, wq, &(qpw)->work)
> > +
> > +#define flush_percpu_work(qpw) \
> > +	flush_work(&(qpw)->work)
> > +
> > +#define qpw_get_cpu(qpw) \
> > +	smp_processor_id()
> > +
> > +#define INIT_QPW(qpw, func, c) \
> > +	INIT_WORK(&(qpw)->work, (func))
> > +
> > +#else /* !CONFIG_PREEMPT_RT */
> > +
> > +struct qpw_struct {
> > +	struct work_struct work;
> > +	int cpu;
> > +};
> > +
> > +#define qpw_lock(__lock, cpu) \
> > +	do { \
> > +		migrate_disable(); \
> > +		spin_lock(per_cpu_ptr((__lock), cpu)); \
> > +	} while (0)
> > +
> > +#define qpw_unlock(__lock, cpu) \
> > +	do { \
> > +		spin_unlock(per_cpu_ptr((__lock), cpu)); \
> > +		migrate_enable(); \
> > +	} while (0)
>
> Why is there a migrate_disable/enable() call in qpw_lock/unlock()? The
> rt_spin_lock/unlock() calls already include a migrate_disable/enable()
> pair.

This was copied from the PREEMPT_RT=y local_locks. In my tree, I see:

#define __local_unlock(__lock)				\
	do {						\
		spin_unlock(this_cpu_ptr((__lock)));	\
		migrate_enable();			\
	} while (0)

But you are right: for PREEMPT_RT=y, spin_{un,}lock() is defined in
spinlock_rt.h as rt_spin_{un,}lock(), which already runs
migrate_{en,dis}able(). The ordering differs, though: rt_spin_lock()
runs migrate_disable() just before returning, while local_lock() runs
it before calling spin_lock(), and thus before spin_acquire().
(local_unlock() looks like it has an unnecessary extra
migrate_enable(), though.)
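
For reference, the lock side in my tree (the PREEMPT_RT section of
local_lock_internal.h, as I read it) is symmetric, and is what
qpw_lock() mirrors:

#define __local_lock(__lock)				\
	do {						\
		migrate_disable();			\
		spin_lock(this_cpu_ptr((__lock)));	\
	} while (0)
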
I am not sure whether this extra migrate_disable() is actually
necessary in the local_lock() case; maybe Thomas could help us
understand this. But sure, if we can remove it from local_{un,}lock(),
I am sure we can also remove it from qpw.

>
> > +
> > +#define qpw_lock_irqsave(lock, flags, cpu) \
> > +	do { \
> > +		typecheck(unsigned long, flags); \
> > +		flags = 0; \
> > +		qpw_lock(lock, cpu); \
> > +	} while (0)
> > +
> > +#define qpw_unlock_irqrestore(lock, flags, cpu) \
> > +	qpw_unlock(lock, cpu)
> > +
> > +#define queue_percpu_work_on(c, wq, qpw) \
> > +	do { \
> > +		struct qpw_struct *__qpw = (qpw); \
> > +		WARN_ON((c) != __qpw->cpu); \
> > +		__qpw->work.func(&__qpw->work); \
> > +	} while (0)
> > +
> > +#define flush_percpu_work(qpw) \
> > +	do {} while (0)
> > +
> > +#define qpw_get_cpu(w) \
> > +	container_of((w), struct qpw_struct, work)->cpu
> > +
> > +#define INIT_QPW(qpw, func, c) \
> > +	do { \
> > +		struct qpw_struct *__qpw = (qpw); \
> > +		INIT_WORK(&__qpw->work, (func)); \
> > +		__qpw->cpu = (c); \
> > +	} while (0)
> > +
> > +#endif /* CONFIG_PREEMPT_RT */
> > +#endif /* LINUX_QPW_H */
>
> You may also consider adding a documentation file about the
> qpw_lock/unlock() calls.

Sure, will do when I send the non-RFC version. Thanks for pointing that
out!

>
> Cheers,
> Longman
>

Thanks!
Leo