From: Kees Cook <keescook@chromium.org>
To: George Spelvin <lkml@SDF.ORG>
Cc: Dan Williams <dan.j.williams@intel.com>,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Alexander Duyck <alexander.duyck@gmail.com>,
Randy Dunlap <rdunlap@infradead.org>
Subject: Re: [PATCH v4] mm/shuffle.c: Fix races in add_to_free_area_random()
Date: Fri, 20 Mar 2020 10:58:02 -0700 [thread overview]
Message-ID: <202003201057.C7A484A3@keescook> (raw)
In-Reply-To: <20200319120522.GA1484@SDF.ORG>
On Thu, Mar 19, 2020 at 12:05:22PM +0000, George Spelvin wrote:
> The separate "rand" and "rand_count" variables could get out of
> sync with bad results.
>
> In the worst case, two threads would see rand_count=1 and both
> decrement it, resulting in rand_count=255 and rand being filled
> with zeros for the next 255 calls.
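For reference, the pre-patch logic in add_to_free_area_random() had roughly the following shape. This is a simplified sketch built from the description above, not the exact kernel source, and pick_random_bit() is a placeholder name for the bit-picking part of the real function:

  #include <linux/random.h>   /* get_random_u64() */
  #include <linux/types.h>    /* u64, u8, bool */

  static u64 rand;            /* buffer of random bits */
  static u8 rand_count;       /* bits remaining in the buffer */

  static bool pick_random_bit(void)
  {
          bool ret;

          if (rand_count == 0) {          /* refill when empty */
                  rand_count = 64;
                  rand = get_random_u64();
          }

          ret = rand & 1;
          rand_count--;   /* two CPUs can both see rand_count == 1 here, */
          rand >>= 1;     /* both decrement, and wrap the u8 to 255      */
          return ret;
  }

Once rand_count has wrapped to 255 while rand has already been shifted down to zero, no refill happens and the next 255 "random" bits are all zero, which is the failure mode described above.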
>
> Instead, pack them both into a single, atomically updateable
> variable. This makes it a lot easier to reason about race
> conditions. They are still there - the code deliberately eschews
> locking - but they are basically harmless on the rare occasions
> that they happen.
>
> Second, use READ_ONCE and WRITE_ONCE. Because the random bit
> buffer is accessed by multiple threads concurrently without
> locking, omitting those puts us deep in the land of nasal demons.
> The compiler would be free to spill to the static variable in
> arbitrarily perverse ways and create hard-to-find bugs.
>
> (I'm torn between this and just declaring the buffer "volatile".
> Linux tends to prefer marking accesses rather than variables,
> but in this case, every access to the buffer is volatile.
> It makes no difference to the generated code.)
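Concretely, the packed-state approach being described can be sketched as below. This is only an illustration of the idea, not the patch itself; pick_tail_bit() is a placeholder name, and the sentinel bit planted at refill doubles as the counter:

  #include <linux/bits.h>       /* BIT(), BITS_PER_LONG */
  #include <linux/compiler.h>   /* READ_ONCE(), WRITE_ONCE(), unlikely() */
  #include <linux/random.h>     /* get_random_long() */
  #include <linux/types.h>      /* bool */

  static inline bool pick_tail_bit(void)
  {
          static unsigned long rand_bits; /* random bits + sentinel, one variable */
          unsigned long r = READ_ONCE(rand_bits), rshift = r << 1;

          if (unlikely(rshift == 0)) {    /* sentinel shifted out: refill */
                  r = get_random_long();
                  rshift = r << 1 | 1;    /* low bit is the new sentinel */
          }
          WRITE_ONCE(rand_bits, rshift);

          /* consume the most-significant bit of the value we read */
          return r & BIT(BITS_PER_LONG - 1);
  }

With this shape, a racing update can at worst make two CPUs hand out the same bit or drop a few buffered bits; it can no longer wedge a counter, because there is no separate counter to get out of sync.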
>
> Third, use long rather than u64. This not only keeps the state
> atomically updateable, it also speeds up the fast path on 32-bit
> machines. Saving at least three instructions on the fast path (one
> load, one add-with-carry, and one store) is worth a second call
> to get_random_u*() per 64 bits. The fast path of get_random_u*
> costs less than the 3*64 = 192 instructions saved, and its slow
> path runs only once per 64 bytes of output, so it isn't affected
> by the change.
>
> Fourth, make the function inline. It's small, and there's only
> one caller (in mm/page_alloc.c:__free_one_page()), so avoid the
> function call overhead.
>
> Fifth, use the msbits of the buffer first (left shift) rather
> than the lsbits (right shift). Testing the sign bit produces
> slightly smaller/faster code than testing the lsbit.
>
> I've tried shifting both ways, and also copying the desired bit
> to a boolean before shifting rather than keeping separate
> full-width r and rshift variables; all of the alternatives
> produce larger code:
>
>               x86-64 text size
>   Msbit         42236
>   Lsbit         42242  (+6)
>   Lsbit+bool    42258 (+22)
>   Msbit+bool    42284 (+52)
>
> (Since this is straight-line code, size is a good proxy for number
> of instructions and execution time. Using READ/WRITE_ONCE instead of
> volatile makes no difference.)
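In C, the two extraction orders differ only in which end of the buffer is consumed. Roughly, using the same unsigned long buffer and BIT()/BITS_PER_LONG helpers as in the sketch above (illustrative only):

  /* MSB-first: consume the top bit, then shift left (the bit that
   * a "shl" would move into the carry flag). */
  static inline bool take_msb(unsigned long *r)
  {
          bool bit = *r & BIT(BITS_PER_LONG - 1);

          *r <<= 1;
          return bit;
  }

  /* LSB-first: consume the bottom bit, then shift right. */
  static inline bool take_lsb(unsigned long *r)
  {
          bool bit = *r & 1;

          *r >>= 1;
          return bit;
  }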
>
> In a perfect world, on x86-64 the fast path would be:
>
>         shlq    rand(%rip)
>         jz      refill
> refill_complete:
>         jc      add_to_tail
>
> but I don't see how to get gcc to generate that, and this
> function isn't worth an arch-specific implementation.
>
> Signed-off-by: George Spelvin <lkml@sdf.org>
> Acked-by: Kees Cook <keescook@chromium.org>
> Acked-by: Dan Williams <dan.j.williams@intel.com>
> Cc: Alexander Duyck <alexander.duyck@gmail.com>
> Cc: Randy Dunlap <rdunlap@infradead.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> ---
> v2: Rewrote commit message to explain existing races better.
>     Made local variables unsigned to avoid (technically undefined)
>     signed overflow.
> v3: Typos fixed, Acked-by, expanded commit message.
> v4: Rebase against -next; function has changed from
>     add_to_free_area_random() to shuffle_pick_tail. Move to
>     inline function in shuffle.h.
>     Not sure if it's okay to keep Acked-by: after such a
>     significant change.
Good by me. Thanks!
-Kees
--
Kees Cook
Thread overview: 25+ messages
2020-03-17 13:50 [PATCH] mm/shuffle.c: optimize add_to_free_area_random() George Spelvin
2020-03-17 21:44 ` Kees Cook
2020-03-17 23:06 ` George Spelvin
2020-03-17 23:38 ` Kees Cook
2020-03-18 1:44 ` [PATCH v2] mm/shuffle.c: Fix races in add_to_free_area_random() George Spelvin
2020-03-18 1:49 ` Randy Dunlap
2020-03-18 3:53 ` Dan Williams
2020-03-18 8:20 ` George Spelvin
2020-03-18 17:36 ` Dan Williams
2020-03-18 19:29 ` George Spelvin
2020-03-18 19:40 ` Dan Williams
2020-03-18 21:02 ` George Spelvin
2020-03-18 3:58 ` Kees Cook
2020-03-18 15:26 ` Alexander Duyck
2020-03-18 18:35 ` George Spelvin
2020-03-18 19:17 ` Alexander Duyck
2020-03-18 20:06 ` George Spelvin
2020-03-18 20:39 ` [PATCH v3] " George Spelvin
2020-03-18 21:34 ` Alexander Duyck
2020-03-18 22:49 ` George Spelvin
2020-03-18 22:57 ` Dan Williams
2020-03-18 23:18 ` George Spelvin
2020-03-19 12:05 ` [PATCH v4] " George Spelvin
2020-03-19 17:49 ` Alexander Duyck
2020-03-20 17:58 ` Kees Cook [this message]