Date: Wed, 21 Dec 2022 17:25:20 -0800
From: Boqun Feng <boqun.feng@gmail.com>
To: Peter Zijlstra
Cc: torvalds@linux-foundation.org, corbet@lwn.net, will@kernel.org,
	mark.rutland@arm.com, catalin.marinas@arm.com, dennis@kernel.org,
	tj@kernel.org, cl@linux.com, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com, svens@linux.ibm.com,
	Herbert Xu, davem@davemloft.net, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	joro@8bytes.org, suravee.suthikulpanit@amd.com, robin.murphy@arm.com,
	dwmw2@infradead.org, baolu.lu@linux.intel.com, Arnd Bergmann,
	penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com,
	Andrew Morton, vbabka@suse.cz, roman.gushchin@linux.dev,
	42.hyeyoo@gmail.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-s390@vger.kernel.org, linux-crypto@vger.kernel.org,
	iommu@lists.linux.dev, linux-arch@vger.kernel.org
Subject: Re: [RFC][PATCH 05/12] arch: Introduce arch_{,try_}_cmpxchg128{,_local}()
References: <20221219153525.632521981@infradead.org>
 <20221219154119.154045458@infradead.org>
In-Reply-To: <20221219154119.154045458@infradead.org>

On Mon, Dec 19, 2022 at 04:35:30PM +0100, Peter Zijlstra wrote:
> For all architectures that currently support cmpxchg_double()
> implement the cmpxchg128() family of functions that is basically the
> same but with a saner interface.
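
For context, the interface change boils down to roughly the following;
the struct and field names below are made up for illustration, they are
not code from the series:

        /* old: two adjacent u64s passed as four explicit halves, returns 0/1 */
        ok = cmpxchg_double(&s->lo, &s->hi, old_lo, old_hi, new_lo, new_hi);

        /* new: a single u128, and the previous value comes back like any
         * other cmpxchg */
        prev = arch_cmpxchg128(&s->full, old, new);
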
>
> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  arch/arm64/include/asm/atomic_ll_sc.h |   38 +++++++++++++++++++++++
>  arch/arm64/include/asm/atomic_lse.h   |   33 +++++++++++++++++++-
>  arch/arm64/include/asm/cmpxchg.h      |   26 ++++++++++++++++
>  arch/s390/include/asm/cmpxchg.h       |   33 ++++++++++++++++++++
>  arch/x86/include/asm/cmpxchg_32.h     |    3 +
>  arch/x86/include/asm/cmpxchg_64.h     |   55 +++++++++++++++++++++++++++++++++-
>  6 files changed, 185 insertions(+), 3 deletions(-)
>
> --- a/arch/arm64/include/asm/atomic_ll_sc.h
> +++ b/arch/arm64/include/asm/atomic_ll_sc.h
> @@ -326,6 +326,44 @@ __CMPXCHG_DBL( , , , )
>  __CMPXCHG_DBL(_mb, dmb ish, l, "memory")
>
>  #undef __CMPXCHG_DBL
> +
> +union __u128_halves {
> +        u128 full;
> +        struct {
> +                u64 low, high;
> +        };
> +};
> +
> +#define __CMPXCHG128(name, mb, rel, cl) \
> +static __always_inline u128 \
> +__ll_sc__cmpxchg128##name(volatile u128 *ptr, u128 old, u128 new) \
> +{ \
> +        union __u128_halves r, o = { .full = (old) }, \
> +                               n = { .full = (new) }; \
> + \
> +        asm volatile("// __cmpxchg128" #name "\n" \
> +        " prfm pstl1strm, %2\n" \
> +        "1: ldxp %0, %1, %2\n" \
> +        " eor %3, %0, %3\n" \
> +        " eor %4, %1, %4\n" \
> +        " orr %3, %4, %3\n" \
> +        " cbnz %3, 2f\n" \
> +        " st" #rel "xp %w3, %5, %6, %2\n" \
> +        " cbnz %w3, 1b\n" \
> +        " " #mb "\n" \
> +        "2:" \
> +        : "=&r" (r.low), "=&r" (r.high), "+Q" (*(unsigned long *)ptr) \
> +        : "r" (o.low), "r" (o.high), "r" (n.low), "r" (n.high) \
> +        : cl); \
> + \
> +        return r.full; \
> +}
> +
> +__CMPXCHG128( , , , )
> +__CMPXCHG128(_mb, dmb ish, l, "memory")
> +
> +#undef __CMPXCHG128
> +
>  #undef K
>
>  #endif /* __ASM_ATOMIC_LL_SC_H */
> --- a/arch/arm64/include/asm/atomic_lse.h
> +++ b/arch/arm64/include/asm/atomic_lse.h
> @@ -151,7 +151,7 @@ __lse_atomic64_fetch_##op##name(s64 i, a
>          " " #asm_op #mb " %[i], %[old], %[v]" \
>          : [v] "+Q" (v->counter), \
>            [old] "=r" (old) \
> -        : [i] "r" (i) \
> +        : [i] "r" (i) \
>          : cl); \
>          \
>          return old; \
> @@ -324,4 +324,35 @@ __CMPXCHG_DBL(_mb, al, "memory")
>
>  #undef __CMPXCHG_DBL
>
> +#define __CMPXCHG128(name, mb, cl...) \
> +static __always_inline u128 \
> +__lse__cmpxchg128##name(volatile u128 *ptr, u128 old, u128 new) \
> +{ \
> +        union __u128_halves r, o = { .full = (old) }, \
> +                               n = { .full = (new) }; \
> +        register unsigned long x0 asm ("x0") = o.low; \
> +        register unsigned long x1 asm ("x1") = o.high; \
> +        register unsigned long x2 asm ("x2") = n.low; \
> +        register unsigned long x3 asm ("x3") = n.high; \
> +        register unsigned long x4 asm ("x4") = (unsigned long)ptr; \
> + \
> +        asm volatile( \
> +        __LSE_PREAMBLE \
> +        " casp" #mb "\t%[old1], %[old2], %[new1], %[new2], %[v]\n"\
> +        : [old1] "+&r" (x0), [old2] "+&r" (x1), \
> +          [v] "+Q" (*(unsigned long *)ptr) \
> +        : [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4), \

Issue #1: the line below can be removed, otherwise..

> +          [oldval1] "r" (r.low), [oldval2] "r" (r.high) \

warning:

./arch/arm64/include/asm/atomic_lse.h: In function '__lse__cmpxchg128_mb':
./arch/arm64/include/asm/atomic_lse.h:309:27: warning: 'r.low' is used uninitialized [-Wuninitialized]
  309 |           [oldval1] "r" (r.low), [oldval2] "r" (r.high)
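
Something like the below (untested) should do it; note the trailing comma
on the [ptr] line has to go as well, since [oldval1]/[oldval2] were the
last input operands:

-        : [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4), \
-          [oldval1] "r" (r.low), [oldval2] "r" (r.high) \
+        : [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4) \
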
> +        : cl); \
> + \
> +        r.low = x0; r.high = x1; \
> + \
> +        return r.full; \
> +}
> +
> +__CMPXCHG128( , )
> +__CMPXCHG128(_mb, al, "memory")
> +
> +#undef __CMPXCHG128
> +
>  #endif /* __ASM_ATOMIC_LSE_H */
> --- a/arch/arm64/include/asm/cmpxchg.h
> +++ b/arch/arm64/include/asm/cmpxchg.h
> @@ -147,6 +147,19 @@ __CMPXCHG_DBL(_mb)
>
>  #undef __CMPXCHG_DBL
>
> +#define __CMPXCHG128(name) \
> +static inline long __cmpxchg128##name(volatile u128 *ptr, \

Issue #2: this should be

        static inline u128 __cmpxchg128##name(..)

because cmpxchg* needs to return the old value.

Regards,
Boqun

> +                                      u128 old, u128 new) \
> +{ \
> +        return __lse_ll_sc_body(_cmpxchg128##name, \
> +                                ptr, old, new); \
> +}
> +
> +__CMPXCHG128( )
> +__CMPXCHG128(_mb)
> +
> +#undef __CMPXCHG128
> +
>  #define __CMPXCHG_GEN(sfx) \
>  static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr, \
>                                            unsigned long old, \
> @@ -229,6 +242,19 @@ __CMPXCHG_GEN(_mb)
>          __ret; \
>  })
>
> +/* cmpxchg128 */
> +#define system_has_cmpxchg128() 1
> +
> +#define arch_cmpxchg128(ptr, o, n) \
> +({ \
> +        __cmpxchg128_mb((ptr), (o), (n)); \
> +})
> +
> +#define arch_cmpxchg128_local(ptr, o, n) \
> +({ \
> +        __cmpxchg128((ptr), (o), (n)); \
> +})
> +
>  #define __CMPWAIT_CASE(w, sfx, sz) \
>  static inline void __cmpwait_case_##sz(volatile void *ptr, \
>                                         unsigned long val) \
> --- a/arch/s390/include/asm/cmpxchg.h
> +++ b/arch/s390/include/asm/cmpxchg.h
> @@ -201,4 +201,37 @@ static __always_inline int __cmpxchg_dou
>                              (unsigned long)(n1), (unsigned long)(n2)); \
>  })
>
> +#define system_has_cmpxchg128() 1
> +
> +static __always_inline u128 arch_cmpxchg128(volatile u128 *ptr, u128 old, u128 new)
> +{
> +        asm volatile(
> +                " cdsg %[old],%[new],%[ptr]\n"
> +                : [old] "+&d" (old)
> +                : [new] "d" (new),
> +                  [ptr] "QS" (*(unsigned long *)ptr)
> +                : "memory", "cc");
> +        return old;
> +}
> +
> +static __always_inline bool arch_try_cmpxchg128(volatile u128 *ptr, u128 *oldp, u128 new)
> +{
> +        u128 old = *oldp;
> +        int cc;
> +
> +        asm volatile(
> +                " cdsg %[old],%[new],%[ptr]\n"
> +                " ipm %[cc]\n"
> +                " srl %[cc],28\n"
> +                : [cc] "=&d" (cc), [old] "+&d" (old)
> +                : [new] "d" (new),
> +                  [ptr] "QS" (*(unsigned long *)ptr)
> +                : "memory", "cc");
> +
> +        if (unlikely(!cc))
> +                *oldp = old;
> +
> +        return likely(cc);
> +}
> +
>  #endif /* __ASM_CMPXCHG_H */
> --- a/arch/x86/include/asm/cmpxchg_32.h
> +++ b/arch/x86/include/asm/cmpxchg_32.h
> @@ -103,6 +103,7 @@ static inline bool __try_cmpxchg64(volat
>
>  #endif
>
> -#define system_has_cmpxchg_double() boot_cpu_has(X86_FEATURE_CX8)
> +#define system_has_cmpxchg_double() boot_cpu_has(X86_FEATURE_CX8)
> +#define system_has_cmpxchg64() boot_cpu_has(X86_FEATURE_CX8)
>
>  #endif /* _ASM_X86_CMPXCHG_32_H */
> --- a/arch/x86/include/asm/cmpxchg_64.h
> +++ b/arch/x86/include/asm/cmpxchg_64.h
> @@ -20,6 +20,59 @@
>          arch_try_cmpxchg((ptr), (po), (n)); \
>  })
>
> -#define system_has_cmpxchg_double() boot_cpu_has(X86_FEATURE_CX16)
> +union __u128_halves {
> +        u128 full;
> +        struct {
> +                u64 low, high;
> +        };
> +};
> +
> +static __always_inline u128 arch_cmpxchg128(volatile u128 *ptr, u128 old, u128 new)
> +{
> +        union __u128_halves o = { .full = old, }, n = { .full = new, };
> +
> +        asm volatile(LOCK_PREFIX "cmpxchg16b %[ptr]"
[ptr] "+m" (*ptr), > + "+a" (o.low), "+d" (o.high) > + : "b" (n.low), "c" (n.high) > + : "memory"); > + > + return o.full; > +} > + > +static __always_inline u128 arch_cmpxchg128_local(volatile u128 *ptr, u128 old, u128 new) > +{ > + union __u128_halves o = { .full = old, }, n = { .full = new, }; > + > + asm volatile("cmpxchg16b %[ptr]" > + : [ptr] "+m" (*ptr), > + "+a" (o.low), "+d" (o.high) > + : "b" (n.low), "c" (n.high) > + : "memory"); > + > + return o.full; > +} > + > +static __always_inline bool arch_try_cmpxchg128(volatile u128 *ptr, u128 *old, u128 new) > +{ > + union __u128_halves o = { .full = *old, }, n = { .full = new, }; > + bool ret; > + > + asm volatile(LOCK_PREFIX "cmpxchg16b %[ptr]" > + CC_SET(e) > + : CC_OUT(e) (ret), > + [ptr] "+m" (*ptr), > + "+a" (o.low), "+d" (o.high) > + : "b" (n.low), "c" (n.high) > + : "memory"); > + > + if (unlikely(!ret)) > + *old = o.full; > + > + return likely(ret); > +} > + > +#define system_has_cmpxchg_double() boot_cpu_has(X86_FEATURE_CX16) > +#define system_has_cmpxchg128() boot_cpu_has(X86_FEATURE_CX16) > > #endif /* _ASM_X86_CMPXCHG_64_H */ > >