From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <1bd6ee64a600daad58866ce684b591d39879c470.camel@linux.intel.com>
Subject: Re: [PATCH v3 3/7] padata: dispatch works on different nodes
From: Tim Chen <tim.c.chen@linux.intel.com>
To: Gang Li
Cc: linux-mm@kvack.org, Andrew Morton, Mike Kravetz, David Rientjes,
	linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com,
	David Hildenbrand, Muchun Song
Date: Fri, 12 Jan 2024 10:27:56 -0800
References: <20240102131249.76622-1-gang.li@linux.dev>
	 <20240102131249.76622-4-gang.li@linux.dev>
	 <1d9074955618ea0b4b155701f7c1b8b18a43fa8d.camel@linux.intel.com>

On Fri, 2024-01-12 at 15:09 +0800, Gang Li wrote:
> On 2024/1/12 01:50, Tim Chen wrote:
> > On Tue, 2024-01-02 at 21:12 +0800, Gang Li wrote:
> > > When a group of tasks that access different nodes are scheduled on the
> > > same node, they may encounter bandwidth bottlenecks and access latency.
> > >
> > > Thus, numa_aware flag is introduced here, allowing tasks to be
> > > distributed across different nodes to fully utilize the advantage of
> > > multi-node systems.
> > >
> > > Signed-off-by: Gang Li
> > > ---
> > >  include/linux/padata.h | 3 +++
> > >  kernel/padata.c        | 8 ++++++--
> > >  mm/mm_init.c           | 1 +
> > >  3 files changed, 10 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/include/linux/padata.h b/include/linux/padata.h
> > > index 495b16b6b4d72..f79ccd50e7f40 100644
> > > --- a/include/linux/padata.h
> > > +++ b/include/linux/padata.h
> > > @@ -137,6 +137,8 @@ struct padata_shell {
> > >   * appropriate for one worker thread to do at once.
> > >   * @max_threads: Max threads to use for the job, actual number may be less
> > >   *               depending on task size and minimum chunk size.
> > > + * @numa_aware: Dispatch jobs to different nodes. If a node only has memory but
> > > + *              no CPU, dispatch its jobs to a random CPU.
> > >   */
> > >  struct padata_mt_job {
> > >  	void (*thread_fn)(unsigned long start, unsigned long end, void *arg);
> > > @@ -146,6 +148,7 @@ struct padata_mt_job {
> > >  	unsigned long align;
> > >  	unsigned long min_chunk;
> > >  	int max_threads;
> > > +	bool numa_aware;
> > >  };
> > >
> > >  /**
> > > diff --git a/kernel/padata.c b/kernel/padata.c
> > > index 179fb1518070c..1c2b3a337479e 100644
> > > --- a/kernel/padata.c
> > > +++ b/kernel/padata.c
> > > @@ -485,7 +485,7 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
> > >  	struct padata_work my_work, *pw;
> > >  	struct padata_mt_job_state ps;
> > >  	LIST_HEAD(works);
> > > -	int nworks;
> > > +	int nworks, nid = 0;
> >
> > If we always start from 0, we may be biased towards the low numbered node,
> > and not use high numbered nodes at all. Suggest you do
> > static nid = 0;
> >
>
> When we use `static`, if there are multiple parallel calls to
> `padata_do_multithreaded`, it may result in an uneven distribution of
> tasks for each padata_do_multithreaded.
>
> We can make the following modifications to address this issue.
>
> ```
> diff --git a/kernel/padata.c b/kernel/padata.c
> index 1c2b3a337479e..925e48df6dd8d 100644
> --- a/kernel/padata.c
> +++ b/kernel/padata.c
> @@ -485,7 +485,8 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>  	struct padata_work my_work, *pw;
>  	struct padata_mt_job_state ps;
>  	LIST_HEAD(works);
> -	int nworks, nid = 0;
> +	int nworks, nid;
> +	static volatile int global_nid = 0;
>
>  	if (job->size == 0)
>  		return;
> @@ -516,12 +517,15 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>  	ps.chunk_size = max(ps.chunk_size, job->min_chunk);
>  	ps.chunk_size = roundup(ps.chunk_size, job->align);
>
> +	nid = global_nid;
>  	list_for_each_entry(pw, &works, pw_list)
> -		if (job->numa_aware)
> -			queue_work_node((++nid % num_node_state(N_MEMORY)),
> -					system_unbound_wq, &pw->pw_work);
> -		else
> +		if (job->numa_aware) {
> +			queue_work_node(nid, system_unbound_wq, &pw->pw_work);
> +			nid = next_node(nid, node_states[N_CPU]);
> +		} else
>  			queue_work(system_unbound_wq, &pw->pw_work);
> +	if (job->numa_aware)
> +		global_nid = nid;

Thinking more about it, there could still be multiple threads working at
the same time with a stale global_nid. We should probably do a
compare-exchange of global_nid with the new nid, updating it only if
global_nid is unchanged. Otherwise we should advance to the next node from
the updated global_nid before we queue the job.

Tim
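Roughly, an untested sketch of that idea, just to illustrate the ordering
(global_nid here is the static from the diff above, which would no longer
need to be volatile once cmpxchg is used; next_node_in() is used for the
wrap-around and is not part of the posted patch):

```
	int old_nid, nid;

	/*
	 * Atomically claim a starting node so that concurrent callers of
	 * padata_do_multithreaded() never start from a stale global_nid.
	 * If the cmpxchg fails, retry from the value another caller just
	 * published.
	 */
	do {
		old_nid = global_nid;
		nid = next_node_in(old_nid, node_states[N_CPU]);
	} while (cmpxchg(&global_nid, old_nid, nid) != old_nid);

	/* nid is now this call's starting node for queue_work_node() */
```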
>
> 	/* Use the current thread, which saves starting a workqueue worker. */
> 	padata_work_init(&my_work, padata_mt_helper, &ps, PADATA_WORK_ONSTACK);
> ```
>
>
> > >
> > >  	if (job->size == 0)
> > >  		return;
> > > @@ -517,7 +517,11 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
> > >  	ps.chunk_size = roundup(ps.chunk_size, job->align);
> > >
> > >  	list_for_each_entry(pw, &works, pw_list)
> > > -		queue_work(system_unbound_wq, &pw->pw_work);
> > > +		if (job->numa_aware)
> > > +			queue_work_node((++nid % num_node_state(N_MEMORY)),
> > > +					system_unbound_wq, &pw->pw_work);
> >
> > I think we should use nid = next_node(nid, node_states[N_CPU]) instead of
> > ++nid % num_node_state(N_MEMORY). You are picking the next node with CPU
> > to handle the job.
> >
> > Tim
> >
>
> I agree.
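For what it's worth, an untested sketch of that round robin over the
CPU-bearing nodes. Note that next_node() itself does not wrap around, so
the wrap back to the first node has to be handled explicitly (or
next_node_in() can be used instead):

```
	list_for_each_entry(pw, &works, pw_list)
		if (job->numa_aware) {
			queue_work_node(nid, system_unbound_wq, &pw->pw_work);
			/* advance to the next node that has a CPU ... */
			nid = next_node(nid, node_states[N_CPU]);
			/* ... wrapping around, since next_node() does not */
			if (nid >= MAX_NUMNODES)
				nid = first_node(node_states[N_CPU]);
		} else
			queue_work(system_unbound_wq, &pw->pw_work);
```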