From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
torvalds@linux-foundation.org
Subject: Re: [PATCH 1/9] kernel: Provide READ_ONCE and ASSIGN_ONCE
Date: Wed, 3 Dec 2014 16:07:49 -0800 [thread overview]
Message-ID: <20141204000749.GX25340@linux.vnet.ibm.com> (raw)
In-Reply-To: <1417645821-54731-2-git-send-email-borntraeger@de.ibm.com>
On Wed, Dec 03, 2014 at 11:30:13PM +0100, Christian Borntraeger wrote:
> ACCESS_ONCE does not work reliably on non-scalar types. For
> example gcc 4.6 and 4.7 might remove the volatile tag for such
> accesses during the SRA (scalar replacement of aggregates) step
> (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145)
>
> Let's provide READ_ONCE/ASSIGN_ONCE that will do all accesses via
> scalar types as suggested by Linus Torvalds. Accesses larger than
> the machine's word size cannot be guaranteed to be atomic. These
> macros will use memcpy and emit a build warning.
>
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
With or without some nits below addressed:
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> include/linux/compiler.h | 64 ++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 64 insertions(+)
>
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index d5ad7b1..947710e 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -186,6 +186,70 @@ void ftrace_likely_update(struct ftrace_branch_data *f, int val, int expect);
> # define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __LINE__)
> #endif
>
> +#include <linux/types.h>
> +
> +void data_access_exceeds_word_size(void)
> +__compiletime_warning("data access exceeds word size and won't be atomic");
> +
> +static __always_inline void __read_once_size(volatile void *p, void *res, int size)
> +{
> + switch (size) {
> + case 1: *(u8 *)res = *(volatile u8 *)p; break;
> + case 2: *(u16 *)res = *(volatile u16 *)p; break;
> + case 4: *(u32 *)res = *(volatile u32 *)p; break;
> +#ifdef CONFIG_64BIT
> + case 8: *(u64 *)res = *(volatile u64 *)p; break;
> +#endif
You could get rid of the #ifdef above by doing "case sizeof(long)"
and switching the casts from u64 to unsigned long.
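For illustration, the #ifdef-free variant might look like the sketch below. This is a userspace illustration of the idea, not the kernel code: on an ILP32 target sizeof(long) == 4 would collide with the u32 case, so the sketch assumes an LP64 target, and plain memcpy stands in for __builtin_memcpy.

```c
#include <string.h>

/* Sketch of the suggestion above: "case sizeof(long)" replaces the
 * CONFIG_64BIT-guarded u64 case, so no #ifdef is needed.  Assumes an
 * LP64 target where sizeof(long) == 8 (on ILP32 this case would
 * duplicate the 4-byte one). */
static inline void __read_once_size_sketch(const volatile void *p,
					   void *res, int size)
{
	switch (size) {
	case 1: *(unsigned char *)res  = *(const volatile unsigned char *)p;  break;
	case 2: *(unsigned short *)res = *(const volatile unsigned short *)p; break;
	case 4: *(unsigned int *)res   = *(const volatile unsigned int *)p;   break;
	case sizeof(long):	/* 8 on LP64, no #ifdef required */
		*(unsigned long *)res = *(const volatile unsigned long *)p;
		break;
	default:
		/* oversized access: non-atomic fallback copy */
		memcpy(res, (const void *)p, size);
		break;
	}
}
```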
> + default:
> + barrier();
> + __builtin_memcpy((void *)res, (const void *)p, size);
> + data_access_exceeds_word_size();
> + barrier();
> + }
> +}
> +
> +static __always_inline void __assign_once_size(volatile void *p, void *res, int size)
> +{
> + switch (size) {
> + case 1: *(volatile u8 *)p = *(u8 *)res; break;
> + case 2: *(volatile u16 *)p = *(u16 *)res; break;
> + case 4: *(volatile u32 *)p = *(u32 *)res; break;
> +#ifdef CONFIG_64BIT
> + case 8: *(volatile u64 *)p = *(u64 *)res; break;
> +#endif
Ditto.
> + default:
> + barrier();
> + __builtin_memcpy((void *)p, (const void *)res, size);
> + data_access_exceeds_word_size();
> + barrier();
> + }
> +}
> +
> +/*
> + * Prevent the compiler from merging or refetching reads or writes. The compiler
> + * is also forbidden from reordering successive instances of READ_ONCE,
> + * ASSIGN_ONCE and ACCESS_ONCE (see below), but only when the compiler is aware
> + * of some particular ordering. One way to make the compiler aware of ordering
> + * is to put the two invocations of READ_ONCE, ASSIGN_ONCE or ACCESS_ONCE() in
> + * different C statements.
> + *
> + * In contrast to ACCESS_ONCE these two macros will also work on aggregate data
> + * types like structs or unions. If the size of the accessed data type exceeds
> + * the word size of the machine (e.g. 32bit or 64bit), the access might happen
> + * in multiple chunks, though.
This last sentence might be more clear if it was something like the
following:
	If the size of the accessed data type exceeds the word size of
the machine (e.g., 32 bits or 64 bits), ACCESS_ONCE() might
carry out the access in multiple chunks, but READ_ONCE() and
ASSIGN_ONCE() will give a link-time error.
> + *
> + * These macros do absolutely -nothing- to prevent the CPU from reordering,
> + * merging, or refetching absolutely anything at any time. Their main intended
> + * use is to mediate communication between process-level code and irq/NMI
> + * handlers, all running on the same CPU.
This last sentence is now obsolete. How about something like this?
Their two major use cases are: (1) Mediating communication
between process-level code and irq/NMI handlers, all running
on the same CPU, and (2) Ensuring that the compiler does not
fold, spindle, or otherwise mutilate accesses that either do
not require ordering or that interact with an explicit memory
barrier or atomic instruction that provides the required ordering.
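Use case (1) can be modeled in userspace as below. READ_ONCE is modeled here with a plain volatile cast (the names are illustrative, not the kernel macro): without the volatile access the compiler may legally hoist the load of the flag out of the loop and spin forever.

```c
/* Polling loop reading a flag that an interrupt-like context sets.
 * The volatile cast forces a fresh load on every iteration. */
#define READ_ONCE_MODEL(x) (*(const volatile __typeof__(x) *)&(x))

static int done;	/* written from the "irq side" */

/* Spin for at most "budget" iterations; returns 1 if done was seen. */
static int wait_for_done(int budget)
{
	while (budget-- > 0) {
		if (READ_ONCE_MODEL(done))
			return 1;
	}
	return 0;
}
```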
> + */
> +
> +#define READ_ONCE(p) \
> + ({ typeof(p) __val; __read_once_size(&p, &__val, sizeof(__val)); __val; })
> +
> +#define ASSIGN_ONCE(val, p) \
> + ({ typeof(p) __val; __val = val; __assign_once_size(&p, &__val, sizeof(__val)); __val; })
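To see the aggregate case work end to end, here is a compilable userspace model of READ_ONCE as defined above; the u8..u64 typedefs and barrier() are stand-ins for the kernel definitions. A 4-byte struct is read via a single 32-bit volatile access, which ACCESS_ONCE() could not guarantee for aggregates.

```c
#include <stdint.h>

/* Stand-ins for kernel types and barrier(). */
typedef uint8_t  u8;
typedef uint16_t u16;
typedef uint32_t u32;
typedef uint64_t u64;
#define barrier() __asm__ __volatile__("" : : : "memory")

static inline void __read_once_size(volatile void *p, void *res, int size)
{
	switch (size) {
	case 1: *(u8 *)res  = *(volatile u8 *)p;  break;
	case 2: *(u16 *)res = *(volatile u16 *)p; break;
	case 4: *(u32 *)res = *(volatile u32 *)p; break;
	case 8: *(u64 *)res = *(volatile u64 *)p; break;
	default:
		barrier();
		__builtin_memcpy(res, (const void *)p, size);
		barrier();
	}
}

#define READ_ONCE(p) \
	({ __typeof__(p) __val; __read_once_size(&(p), &__val, sizeof(__val)); __val; })

struct pair { u16 a, b; };	/* sizeof == 4: read as one u32 */
```

With this, `struct pair v = READ_ONCE(src);` copies both fields with one 32-bit volatile load.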
> +
> #endif /* __KERNEL__ */
>
> #endif /* __ASSEMBLY__ */
> --
> 1.9.3
>
Thread overview: 29+ messages
2014-12-03 22:30 [PATCHv4 0/9] ACCESS_ONCE and non-scalar accesses Christian Borntraeger
2014-12-03 22:30 ` [PATCH 1/9] kernel: Provide READ_ONCE and ASSIGN_ONCE Christian Borntraeger
2014-12-04 0:07 ` Paul E. McKenney [this message]
2014-12-04 9:24 ` Christian Borntraeger
2014-12-04 14:41 ` Paul E. McKenney
2014-12-03 22:30 ` [PATCH 2/9] mm: replace ACCESS_ONCE with READ_ONCE or barriers Christian Borntraeger
2014-12-04 0:09 ` Paul E. McKenney
2014-12-03 22:30 ` [PATCH 3/9] x86/spinlock: Replace ACCESS_ONCE with READ_ONCE Christian Borntraeger
2014-12-04 0:10 ` Paul E. McKenney
2014-12-03 22:30 ` [PATCH 4/9] x86/gup: " Christian Borntraeger
2014-12-04 0:10 ` Paul E. McKenney
2014-12-03 22:30 ` [PATCH 5/9] mips/gup: " Christian Borntraeger
2014-12-04 0:11 ` Paul E. McKenney
2014-12-03 22:30 ` [PATCH 6/9] arm64/spinlock: Replace ACCESS_ONCE READ_ONCE Christian Borntraeger
2014-12-04 0:11 ` Paul E. McKenney
2014-12-03 22:30 ` [PATCH 7/9] arm/spinlock: Replace ACCESS_ONCE with READ_ONCE Christian Borntraeger
2014-12-04 0:12 ` Paul E. McKenney
2014-12-03 22:30 ` [PATCH 8/9] s390/kvm: REPLACE " Christian Borntraeger
2014-12-04 0:12 ` Paul E. McKenney
2014-12-03 22:30 ` [PATCH 9/9] kernel: tighten rules for ACCESS ONCE Christian Borntraeger
2014-12-04 0:16 ` Paul E. McKenney
2014-12-04 9:28 ` Christian Borntraeger
2014-12-04 14:41 ` Paul E. McKenney
2014-12-04 15:24 ` [PATCHv4 0/9] ACCESS_ONCE and non-scalar accesses Christian Borntraeger
2014-12-04 23:40 ` Linus Torvalds