From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alex Kogan
Subject: Re: [PATCH v9 0/5] Add NUMA-awareness to qspinlock
Date: Wed, 22 Jan 2020 14:29:36 -0500
Message-ID: <4F71A184-42C0-4865-9AAA-79A636743C25@oracle.com>
References: <20200115035920.54451-1-alex.kogan@oracle.com>
Mime-Version: 1.0 (Mac OS X Mail 12.4 \(3445.104.11\))
Content-Type: text/plain; charset="utf-8"
To: Lihao Liang
Cc: linux-arch@vger.kernel.org, guohanjun@huawei.com, arnd@arndb.de,
	Peter Zijlstra, dave.dice@oracle.com, jglauber@marvell.com,
	x86@kernel.org, will.deacon@arm.com, linux@armlinux.org.uk,
	steven.sistare@oracle.com, linux-kernel@vger.kernel.org,
	mingo@redhat.com, bp@alien8.de, hpa@zytor.com, longman@redhat.com,
	tglx@linutronix.de, daniel.m.jordan@oracle.com, Will Deacon,
	linux-arm-kernel@lists.infradead.org
List-Id: linux-arch.vger.kernel.org
Hi, Lihao.

> On Jan 22, 2020, at 6:45 AM, Lihao Liang wrote:
> 
> Hi Alex,
> 
> On Wed, Jan 22, 2020 at 10:28 AM Alex Kogan wrote:
>> 
>> Summary
>> -------
>> 
>> Lock throughput can be increased by handing a lock to a waiter on the
>> same NUMA node as the lock holder, provided care is taken to avoid
>> starvation of waiters on other NUMA nodes. This patch introduces CNA
>> (compact NUMA-aware lock) as the slow path for qspinlock. It is
>> enabled through a configuration option (NUMA_AWARE_SPINLOCKS).
>> 
> 
> Thanks for your patches. The experimental results look promising!
> 
> I understand that the new CNA qspinlock uses randomization to achieve
> long-term fairness, and provides the numa_spinlock_threshold parameter
> for users to tune.
This was the case in the first versions of the series, but it is no longer
true. That is, long-term fairness is now achieved deterministically (and you
are correct that it is done through the numa_spinlock_threshold parameter).

> As Linux runs extremely diverse workloads, it is not
> clear how randomization affects its fairness, and how users with
> different requirements are supposed to tune this parameter.
> 
> To this end, Will and I consider it beneficial to be able to answer the
> following question:
> 
> With different values of numa_spinlock_threshold and
> SHUFFLE_REDUCTION_PROB_ARG, how long do threads running on different
> sockets have to wait to acquire the lock?
The SHUFFLE_REDUCTION_PROB_ARG parameter is intended for performance
optimization only, and *does not* affect long-term fairness (or, at the
very least, does not make it any worse). As Longman correctly pointed out in
his response to this email, the shuffle-reduction optimization is relevant
only when the secondary queue is empty. In that case, CNA hands off the lock
exactly as MCS does, i.e., in FIFO order. Note that when the secondary queue
is not empty, we do not call probably().

> This is particularly relevant
> in high contention situations when new threads keep arriving on the same
> socket as the lock holder.
In this case, the lock will stay on the same NUMA node/socket for at most
2^numa_spinlock_threshold hand-offs, which is the worst-case scenario for
long-term fairness. And if we have multiple nodes, it will take up to
2^numa_spinlock_threshold * (nr_nodes - 1) + nr_cpus_per_node
lock transitions until any given thread acquires the lock
(assuming 2^numa_spinlock_threshold > nr_cpus_per_node).

Hopefully this addresses your concern. Let me know if you have any further
questions.

Best regards,
— Alex