From: Alexander Duyck
Date: Wed, 18 Mar 2020 14:34:04 -0700
Subject: Re: [PATCH v3] mm/shuffle.c: Fix races in add_to_free_area_random()
To: George Spelvin
Cc: Kees Cook, Dan Williams, linux-mm, Andrew Morton, Randy Dunlap

On Wed, Mar 18, 2020 at 1:39 PM George Spelvin wrote:
>
> The old code had separate "rand" and "rand_bits" variables,
> which could get out of sync with bad results.
>
> In the worst case, two threads would see rand_bits = 1 and
> both decrement it, resulting in rand_bits = 255 and rand being
> filled with zeros for the next 255 calls. [A userspace sketch of
> this interleaving appears after the patch below.]
>
> Instead, pack them both into a single, atomically updatable,
> variable. This makes it a lot easier to reason about race
> conditions. They are still there - the code deliberately eschews
> locking - but basically harmless on the rare occasions that
> they happen.
>
> Second, use READ_ONCE and WRITE_ONCE. Without them, we are deep
> in the land of nasal demons. The compiler would be free to spill
> temporaries to the static variables in arbitrarily perverse ways
> and create hard-to-find bugs.
>
> (Alternatively, we could declare the static variable "volatile",
> one of the few places in the Linux kernel where that would be
> correct, but it would probably annoy Linus.)
>
> Third, use long rather than u64. This not only keeps the state
> atomically updatable, it also speeds up the fast path on 32-bit
> machines. Saving at least three instructions on the fast path (one
> load, one add-with-carry, and one store) is worth a second call
> to get_random_u*() per 64 bits. The fast path of get_random_u*
> costs less than the 3*64 = 192 instructions saved, and the slow
> path runs once per 64 bytes of batched entropy, so it isn't
> affected by the change.
>
> Fourth, use left-shift rather than right. Testing the sign bit
> produces slightly smaller/faster code than testing the lsbit.
>
> I've tried shifting both ways, and copying the desired bit to
> a boolean before shifting rather than keeping separate full-width
> r and rshift variables, but both produce larger code (function
> sizes in bytes):
>
>			x86_64	i386
> This code	  94	  95
> Explicit bool	 103	  99
> Lsbits		  99	 101
> Both		  96	 100
>
> In a perfect world, the x86-64 object code would be tiny, and
> dominated by the unavoidable unpredictable branch, but this
> function doesn't warrant arch-specific optimization.
>
> add_to_free_area_random:
>	shlq	rand(%rip)
>	jz	3f
> 1:
>	jnc	2f
>	jmp	add_to_free_area	# tail call
> 2:
>	jmp	add_to_free_area_tail
> 3:
>	pushq	%rdx
>	pushq	%rsi
>	pushq	%rdi
>	call	get_random_u64
>	popq	%rdi
>	popq	%rsi
>	popq	%rdx
>	stc
>	adcq	%rax,%rax	# not lea because we need carry out
>	movq	%rax, rand(%rip)
>	jmp	1b
>
> Signed-off-by: George Spelvin
> Acked-by: Kees Cook
> Acked-by: Dan Williams
> Cc: Alexander Duyck
> Cc: Randy Dunlap
> Cc: Andrew Morton
> Cc: linux-mm@kvack.org

What kernel is this based on? You might want to rebase on the latest
linux-next: this function was renamed to shuffle_pick_tail after I
incorporated parts of it into the logic for placing buddy pages and
reported pages at the tail of the list.

> ---
> v2: Rewrote commit message to explain existing races better.
>     Made local variables unsigned to avoid (technically undefined)
>     signed overflow.
> v3: Typos fixed, Acked-by, expanded commit message.
>
>  mm/shuffle.c | 26 ++++++++++++++++----------
>  1 file changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/mm/shuffle.c b/mm/shuffle.c
> index e0ed247f8d90..16c5fddc292f 100644
> --- a/mm/shuffle.c
> +++ b/mm/shuffle.c
> @@ -186,22 +186,28 @@ void __meminit __shuffle_free_memory(pg_data_t *pgdat)
>  void add_to_free_area_random(struct page *page, struct free_area *area,
>  		int migratetype)
>  {
> -	static u64 rand;
> -	static u8 rand_bits;
> +	static unsigned long rand;	/* buffered random bits */
> +	unsigned long r = READ_ONCE(rand), rshift = r << 1;
>
>  	/*
> -	 * The lack of locking is deliberate. If 2 threads race to
> -	 * update the rand state it just adds to the entropy.
> +	 * rand holds 0..BITS_PER_LONG-1 random msbits, followed by a
> +	 * 1 bit, then zero-padding in the lsbits. This allows us to
> +	 * maintain the pre-generated bits and the count of bits in a
> +	 * single, atomically updatable, variable.
> +	 *
> +	 * The lack of locking is deliberate. If two threads race to
> +	 * update the rand state it just adds to the entropy. The
> +	 * worst that can happen is a random bit is used twice, or
> +	 * get_random_long is called redundantly.
>  	 */
> -	if (rand_bits == 0) {
> -		rand_bits = 64;
> -		rand = get_random_u64();
> +	if (unlikely(rshift == 0)) {
> +		r = get_random_long();
> +		rshift = r << 1 | 1;
>  	}
> +	WRITE_ONCE(rand, rshift);
>
> -	if (rand & 1)
> +	if ((long)r < 0)
>  		add_to_free_area(page, area, migratetype);
>  	else
>  		add_to_free_area_tail(page, area, migratetype);
> -	rand_bits--;
> -	rand >>= 1;
>  }
> --
> 2.26.0.rc2
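
[For illustration, not part of the patch: the race described at the top
of the commit message is easy to reproduce in userspace. The sketch
below is hypothetical - get_random_u64() is stubbed with random(),
which is neither the kernel API nor a good RNG - but the
check/decrement/shift sequence mirrors the old code being removed.]

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Crude userspace stub for the kernel's get_random_u64();
	 * an assumption for illustration only. */
	static uint64_t get_random_u64(void)
	{
		return ((uint64_t)random() << 33) ^ (uint64_t)random();
	}

	static uint64_t rand_buf;	/* the old "rand" */
	static uint8_t rand_bits;	/* the old "rand_bits" */

	static int old_random_bit(void)
	{
		int bit;

		if (rand_bits == 0) {	/* refill every 64 calls */
			rand_bits = 64;
			rand_buf = get_random_u64();
		}
		bit = rand_buf & 1;
		rand_bits--;		/* unsynchronized u8 read-modify-write */
		rand_buf >>= 1;		/* unsynchronized shift */
		return bit;
	}

	int main(void)
	{
		int i, heads = 0;

		srandom(1);
		for (i = 0; i < 1000000; i++)
			heads += old_random_bit();
		printf("heads: %d / 1000000\n", heads);
		return 0;
	}

[The failure mode: two threads both load rand_bits == 1 and skip the
refill; the first decrement stores 0, the second wraps that 0 to 255,
so the next 255 calls shift bits out of an already-drained buffer.]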
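
[Also for illustration: the sentinel-bit scheme in the patch can be
experimented with outside the kernel. The analogue below is a
hypothetical single-threaded sketch - READ_ONCE/WRITE_ONCE are dropped,
get_random_long() is stubbed, and a 64-bit long is assumed. The
buffered bits sit at the top of rand_state, the planted 1 acts as a
fill-level marker, and the buffer is empty exactly when the left shift
pushes that marker out (rshift == 0), so each refill yields exactly 64
random bits.]

	#include <stdio.h>
	#include <stdlib.h>

	/* Crude stub for the kernel's get_random_long(); assumes a
	 * 64-bit long and is for illustration only. */
	static unsigned long get_random_long(void)
	{
		return ((unsigned long)random() << 42) ^
		       ((unsigned long)random() << 21) ^
		       (unsigned long)random();
	}

	static unsigned long rand_state;	/* msbits, sentinel, zeros */

	static int pick_random_bit(void)
	{
		unsigned long r = rand_state, rshift = r << 1;

		if (rshift == 0) {		/* sentinel shifted out: empty */
			r = get_random_long();
			rshift = r << 1 | 1;	/* plant a fresh sentinel */
		}
		rand_state = rshift;

		return (long)r < 0;		/* consume the msbit */
	}

	int main(void)
	{
		int i, heads = 0;

		srandom(1);
		for (i = 0; i < 1000000; i++)
			heads += pick_random_bit();
		printf("heads: %d / 1000000\n", heads);	/* roughly 500000 */
		return 0;
	}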