Date: Tue, 17 Feb 2026 10:01:56 +0000
In-Reply-To: <20260217094515.GV1395266@noisy.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org
References: <2026021313-embody-deprive-9da5@gregkh>
 <873434u3yq.fsf@kernel.org>
 <20260213142608.GV2995752@noisy.programming.kicks-ass.net>
 <2026021311-shorten-veal-532c@gregkh>
 <2026021326-stark-coastline-c5bc@gregkh>
 <20260217091348.GT1395266@noisy.programming.kicks-ass.net>
 <20260217094515.GV1395266@noisy.programming.kicks-ass.net>
Subject: Re: [PATCH v2] rust: page: add byte-wise atomic memory copy methods
From: Alice Ryhl
To: Peter Zijlstra
Cc: Boqun Feng, Greg KH, Andreas Hindborg, Lorenzo Stoakes,
 "Liam R. Howlett", Miguel Ojeda, Gary Guo, Björn Roy Baron,
 Benno Lossin, Trevor Gross, Danilo Krummrich, Will Deacon,
 Mark Rutland, linux-mm@kvack.org, rust-for-linux@vger.kernel.org,
 linux-kernel@vger.kernel.org

On Tue, Feb 17, 2026 at 10:45:15AM +0100, Peter Zijlstra wrote:
> On Tue, Feb 17, 2026 at 09:33:40AM +0000, Alice Ryhl wrote:
> > On Tue, Feb 17, 2026 at 10:13:48AM +0100, Peter Zijlstra wrote:
> > > On Fri, Feb 13, 2026 at 08:19:17AM -0800, Boqun Feng wrote:
> > > > Well, in standard C, technically memcpy() has the same problem as Rust's
> > > > `core::ptr::copy()` and `core::ptr::copy_nonoverlapping()`, i.e. they
> > > > are vulnerable to data races. Our in-kernel memcpy() on the other hand
> > > > doesn't have this problem. Why? Because it's volatile byte-wise atomic
> > > > per the implementation.
> > >
> > > Look at arch/x86/lib/memcpy_64.S, plenty of movq variants there. Not
> > > byte-wise.
> >
> > movq is a valid implementation of 8 byte-wise copies.
> >
> > > Also, not a single atomic operation in sight.
> >
> > Relaxed atomics are just mov ops.
>
> They are not atomics at all.

Atomic loads and stores are just mov ops, right? Sure, RMW operations do
more complex stuff, but I'm pretty sure that relaxed atomic loads and
stores are generally compiled down to plain mov instructions.
> Somewhere along the line 'atomic' seems to have lost any and all meaning
> :-(
>
> It must be this C committee and their weasel speak for fear of reality
> that has infected everyone or somesuch.
>
> Anyway, all you really want is a normal memcpy and somehow Rust cannot
> provide? WTF?!

Forget about Rust for a moment. Consider this code:

	// Is this ok?
	unsigned long *a, b;
	b = *a;
	if (is_valid(b)) {
		// do stuff
	}

I can easily imagine that LLVM might optimize this into:

	// Uh oh!
	unsigned long *a, b;
	b = *a;
	if (is_valid(*a)) { // <- this was "optimized"
		// do stuff
	}

the argument being that you used an ordinary load of `*a`, so it can be
assumed that there are no concurrent writes, so both reads are
guaranteed to return the same value. So if `*a` might be concurrently
modified, then we are unhappy.

Of course, if `*a` is replaced with an atomic load such as
READ_ONCE(*a), the optimization can no longer occur:

	// OK!
	unsigned long *a, b;
	b = READ_ONCE(*a);
	if (is_valid(b)) {
		// do stuff
	}

Now consider the following code:

	// Is this ok?
	unsigned long *a, b;
	memcpy(&b, a, sizeof(unsigned long));
	if (is_valid(b)) {
		// do stuff
	}

If LLVM understands the memcpy in the same way as how it understands

	b = *a; // same as memcpy, right?

then by the above discussion, the memcpy is not enough either. And Rust
documents that it may treat copy_nonoverlapping() in exactly that way,
which is why we want a memcpy where reading the values more than once is
not a permitted optimization. In most discussions of that topic, that's
called a per-byte atomic memcpy.

Does this optimization happen in the real world? I have no clue. I'd
rather not find out.

Alice