From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 21 May 2025 15:53:04 -0700
In-Reply-To: <20250521225307.743726-1-yuzhuo@google.com>
References: <20250521225307.743726-1-yuzhuo@google.com>
Message-ID: <20250521225307.743726-2-yuzhuo@google.com>
X-Mailer: git-send-email 2.49.0.1164.gab81da1b16-goog
Mime-Version: 1.0
Subject: [PATCH v1 1/4] perf utils: Add support functions for sha1 utils
From: Yuzhuo Jing
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers,
	Adrian Hunter, Liang Kan, Nathan Chancellor, Nick Desaulniers,
	Bill Wendling, Justin Stitt, "Steven Rostedt (Google)", James Clark,
	Tomas Glozar, Leo Yan, Guilherme Amadio, Yuzhuo Jing, Yang Jihong,
	"Masami Hiramatsu (Google)", Adhemerval Zanella, Wei Yang,
	Ard Biesheuvel, "Mike Rapoport (Microsoft)", Athira Rajeev,
	Kajol Jain, Aditya Gupta, Charlie Jenkins, "Steinar H. Gunderson",
	"Dr. David Alan Gilbert", Herbert Xu, Jeff Johnson, Al Viro,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
	llvm@lists.linux.dev
Content-Type: text/plain; charset="UTF-8"

Add the functions that the sha1 utilities need but that are missing
from the shrunk-down copies of the kernel headers under tools/include.
Signed-off-by: Yuzhuo Jing
---
 tools/include/linux/bitops.h   | 10 ++++++++++
 tools/include/linux/compiler.h | 17 +++++++++++++++++
 tools/include/linux/string.h   | 22 ++++++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/tools/include/linux/bitops.h b/tools/include/linux/bitops.h
index b4e4cd071f8c..6a027031225c 100644
--- a/tools/include/linux/bitops.h
+++ b/tools/include/linux/bitops.h
@@ -89,6 +89,16 @@ static inline __u32 rol32(__u32 word, unsigned int shift)
 	return (word << shift) | (word >> ((-shift) & 31));
 }
 
+/**
+ * ror32 - rotate a 32-bit value right
+ * @word: value to rotate
+ * @shift: bits to roll
+ */
+static inline __u32 ror32(__u32 word, unsigned int shift)
+{
+	return (word >> (shift & 31)) | (word << ((-shift) & 31));
+}
+
 /**
  * sign_extend64 - sign extend a 64-bit value using specified bit as sign-bit
  * @value: value to sign extend
diff --git a/tools/include/linux/compiler.h b/tools/include/linux/compiler.h
index 9c05a59f0184..72e92b202976 100644
--- a/tools/include/linux/compiler.h
+++ b/tools/include/linux/compiler.h
@@ -40,6 +40,23 @@
 /* The "volatile" is due to gcc bugs */
 #define barrier() __asm__ __volatile__("": : :"memory")
 
+#ifndef barrier_data
+/*
+ * This version is i.e. to prevent dead stores elimination on @ptr
+ * where gcc and llvm may behave differently when otherwise using
+ * normal barrier(): while gcc behavior gets along with a normal
+ * barrier(), llvm needs an explicit input variable to be assumed
+ * clobbered. The issue is as follows: while the inline asm might
+ * access any memory it wants, the compiler could have fit all of
+ * @ptr into memory registers instead, and since @ptr never escaped
+ * from that, it proved that the inline asm wasn't touching any of
+ * it. This version works well with both compilers, i.e. we're telling
+ * the compiler that the inline asm absolutely may see the contents
+ * of @ptr. See also: https://llvm.org/bugs/show_bug.cgi?id=15495
+ */
+# define barrier_data(ptr) __asm__ __volatile__("": :"r"(ptr) :"memory")
+#endif
+
 #ifndef __always_inline
 # define __always_inline inline __attribute__((always_inline))
 #endif
diff --git a/tools/include/linux/string.h b/tools/include/linux/string.h
index 8499f509f03e..df3c95792a51 100644
--- a/tools/include/linux/string.h
+++ b/tools/include/linux/string.h
@@ -3,6 +3,7 @@
 #define _TOOLS_LINUX_STRING_H_
 
 #include <linux/types.h>	/* for size_t */
+#include <linux/compiler.h>	/* for barrier_data */
 #include <string.h>
 
 void *memdup(const void *src, size_t len);
@@ -52,4 +53,25 @@ extern void remove_spaces(char *s);
 
 extern void *memchr_inv(const void *start, int c, size_t bytes);
 extern unsigned long long memparse(const char *ptr, char **retptr);
+
+/**
+ * memzero_explicit - Fill a region of memory (e.g. sensitive
+ *		      keying data) with 0s.
+ * @s: Pointer to the start of the area.
+ * @count: The size of the area.
+ *
+ * Note: usually using memset() is just fine (!), but in cases
+ * where clearing out _local_ data at the end of a scope is
+ * necessary, memzero_explicit() should be used instead in
+ * order to prevent the compiler from optimising away zeroing.
+ *
+ * memzero_explicit() doesn't need an arch-specific version as
+ * it just invokes the one of memset() implicitly.
+ */
+static inline void memzero_explicit(void *s, size_t count)
+{
+	memset(s, 0, count);
+	barrier_data(s);
+}
+
 #endif /* _TOOLS_LINUX_STRING_H_ */
-- 
2.49.0.1164.gab81da1b16-goog