Date: Mon, 23 Oct 2023 13:49:32 -0700
From: Yury Norov
To: Alexander Potapenko
Cc: catalin.marinas@arm.com, will@kernel.org, pcc@google.com,
	andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com,
	aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk,
	alexandru.elisei@arm.com, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, eugenis@google.com,
	syednwaris@gmail.com, william.gray@linaro.org
Subject: Re: [PATCH v8 2/2] lib/test_bitmap: add tests for bitmap_{read,write}()
References: <20231023102327.3074212-1-glider@google.com>
	<20231023102327.3074212-2-glider@google.com>
In-Reply-To: <20231023102327.3074212-2-glider@google.com>

On Mon, Oct 23, 2023 at 12:23:27PM +0200, Alexander Potapenko wrote:
> Add basic tests ensuring that values can be added at arbitrary positions
> of the bitmap, including those spanning into the adjacent unsigned
> longs.
>
> Signed-off-by: Alexander Potapenko
> Reviewed-by: Andy Shevchenko
>
> ---
> This patch was previously part of the "Implement MTE tag compression for
> swapped pages" series
> (https://lore.kernel.org/linux-arm-kernel/20231011172836.2579017-4-glider@google.com/T/)
>
> This patch was previously called
> "lib/test_bitmap: add tests for bitmap_{set,get}_value()"
> (https://lore.kernel.org/lkml/20230720173956.3674987-3-glider@google.com/)
> and
> "lib/test_bitmap: add tests for bitmap_{set,get}_value_unaligned"
> (https://lore.kernel.org/lkml/20230713125706.2884502-3-glider@google.com/)
>
> v8:
>  - as requested by Andy Shevchenko, add tests for reading/writing
>    sizes > BITS_PER_LONG
>
> v7:
>  - as requested by Yury Norov, add performance tests for bitmap_read()
>    and bitmap_write()
>
> v6:
>  - use bitmap API to initialize test bitmaps
>  - as requested by Yury Norov, do not check the return value of
>    bitmap_read(..., 0)
>  - fix a compiler warning on 32-bit systems
>
> v5:
>  - update patch title
>  - address Yury Norov's comments:
>    - rename the test cases
>    - factor out test_bitmap_write_helper() to test writing over
>      different background patterns;
>    - add a test case copying a nontrivial value bit-by-bit;
>    - drop volatile
>
> v4:
>  - Address comments by Andy Shevchenko: added Reviewed-by: and a link to
>    the previous discussion
>  - Address comments by Yury Norov:
>    - expand the bitmap to catch more corner cases
>    - add code testing that bitmap_set_value() does not touch adjacent
>      bits
>    - add code testing the nbits==0 case
>    - rename bitmap_{get,set}_value() to bitmap_{read,write}()
>
> v3:
>  - switch to using bitmap_{set,get}_value()
>  - change the expected bit pattern in test_set_get_value(),
>    as the test was incorrectly assuming 0 is the LSB.
> ---
>  lib/test_bitmap.c | 174 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 174 insertions(+)
>
> diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
> index f2ea9f30c7c5d..ba567f53feff1 100644
> --- a/lib/test_bitmap.c
> +++ b/lib/test_bitmap.c
> @@ -71,6 +71,17 @@ __check_eq_uint(const char *srcfile, unsigned int line,
>  	return true;
>  }
>
> +static bool __init
> +__check_eq_ulong(const char *srcfile, unsigned int line,
> +		 const unsigned long exp_ulong, unsigned long x)
> +{
> +	if (exp_ulong != x) {
> +		pr_err("[%s:%u] expected %lu, got %lu\n",
> +		       srcfile, line, exp_ulong, x);
> +		return false;
> +	}
> +	return true;
> +}
>
>  static bool __init
>  __check_eq_bitmap(const char *srcfile, unsigned int line,
> @@ -186,6 +197,7 @@ __check_eq_str(const char *srcfile, unsigned int line,
>  })
>
>  #define expect_eq_uint(...)		__expect_eq(uint, ##__VA_ARGS__)
> +#define expect_eq_ulong(...)		__expect_eq(ulong, ##__VA_ARGS__)
>  #define expect_eq_bitmap(...)		__expect_eq(bitmap, ##__VA_ARGS__)
>  #define expect_eq_pbl(...)		__expect_eq(pbl, ##__VA_ARGS__)
>  #define expect_eq_u32_array(...)	__expect_eq(u32_array, ##__VA_ARGS__)
> @@ -1222,6 +1234,165 @@ static void __init test_bitmap_const_eval(void)
>  	BUILD_BUG_ON(~var != ~BIT(25));
>  }
>
> +/*
> + * Test bitmap should be big enough to include the cases when start is not in
> + * the first word, and start+nbits lands in the following word.
> + */
> +#define TEST_BIT_LEN (1000)
> +
> +/*
> + * Helper function to test bitmap_write() overwriting the chosen byte pattern.
> + */
> +static void __init test_bitmap_write_helper(const char *pattern)
> +{
> +	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
> +	DECLARE_BITMAP(exp_bitmap, TEST_BIT_LEN);
> +	DECLARE_BITMAP(pat_bitmap, TEST_BIT_LEN);
> +	unsigned long w, r, bit;
> +	int i, n, nbits;
> +
> +	/*
> +	 * Only parse the pattern once and store the result in the intermediate
> +	 * bitmap.
> +	 */
> +	bitmap_parselist(pattern, pat_bitmap, TEST_BIT_LEN);
> +
> +	/*
> +	 * Check that writing a single bit does not accidentally touch the
> +	 * adjacent bits.
> +	 */
> +	for (i = 0; i < TEST_BIT_LEN; i++) {
> +		bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
> +		bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
> +		for (bit = 0; bit <= 1; bit++) {
> +			bitmap_write(bitmap, bit, i, 1);
> +			__assign_bit(i, exp_bitmap, bit);
> +			expect_eq_bitmap(exp_bitmap, bitmap,
> +					 TEST_BIT_LEN);
> +		}
> +	}
> +
> +	/* Ensure writing 0 bits does not change anything. */
> +	bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
> +	bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
> +	for (i = 0; i < TEST_BIT_LEN; i++) {
> +		bitmap_write(bitmap, ~0UL, i, 0);
> +		expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN);
> +	}
> +
> +	for (nbits = BITS_PER_LONG; nbits >= 1; nbits--) {
> +		w = IS_ENABLED(CONFIG_64BIT) ? 0xdeadbeefdeadbeefUL
> +					     : 0xdeadbeefUL;
> +		w >>= (BITS_PER_LONG - nbits);
> +		for (i = 0; i <= TEST_BIT_LEN - nbits; i++) {
> +			bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
> +			bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
> +			for (n = 0; n < nbits; n++)
> +				__assign_bit(i + n, exp_bitmap, w & BIT(n));
> +			bitmap_write(bitmap, w, i, nbits);
> +			expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN);
> +			r = bitmap_read(bitmap, i, nbits);
> +			expect_eq_ulong(r, w);
> +		}
> +	}
> +}
> +
> +static void __init test_bitmap_read_write(void)
> +{
> +	unsigned char *pattern[3] = {"", "all:1/2", "all"};
> +	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
> +	unsigned long zero_bits = 0, bits_per_long = BITS_PER_LONG;
> +	unsigned long val;
> +	int i, pi;
> +
> +	/*
> +	 * Reading/writing zero bits should not crash the kernel.
> +	 * READ_ONCE() prevents constant folding.
> +	 */
> +	bitmap_write(NULL, 0, 0, READ_ONCE(zero_bits));
> +	/* Return value of bitmap_read() is undefined here. */
> +	bitmap_read(NULL, 0, READ_ONCE(zero_bits));
> +
> +	/*
> +	 * Reading/writing more than BITS_PER_LONG bits should not crash the
> +	 * kernel. READ_ONCE() prevents constant folding.
> +	 */
> +	bitmap_write(NULL, 0, 0, READ_ONCE(bits_per_long) + 1);
> +	/* Return value of bitmap_read() is undefined here. */
> +	bitmap_read(NULL, 0, READ_ONCE(bits_per_long) + 1);
> +
> +	/*
> +	 * Ensure that bitmap_read() reads the same value that was previously
> +	 * written, and two consequent values are correctly merged.
> +	 * The resulting bit pattern is asymmetric to rule out possible issues
> +	 * with bit numeration order.
> +	 */
> +	for (i = 0; i < TEST_BIT_LEN - 7; i++) {
> +		bitmap_zero(bitmap, TEST_BIT_LEN);
> +
> +		bitmap_write(bitmap, 0b10101UL, i, 5);
> +		val = bitmap_read(bitmap, i, 5);
> +		expect_eq_ulong(0b10101UL, val);
> +
> +		bitmap_write(bitmap, 0b101UL, i + 5, 3);
> +		val = bitmap_read(bitmap, i + 5, 3);
> +		expect_eq_ulong(0b101UL, val);
> +
> +		val = bitmap_read(bitmap, i, 8);
> +		expect_eq_ulong(0b10110101UL, val);
> +	}
> +
> +	for (pi = 0; pi < ARRAY_SIZE(pattern); pi++)
> +		test_bitmap_write_helper(pattern[pi]);
> +}
> +
> +static void __init test_bitmap_read_perf(void)
> +{
> +	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
> +	unsigned int cnt, nbits, i;
> +	unsigned long val;
> +	ktime_t time;
> +
> +	bitmap_fill(bitmap, TEST_BIT_LEN);
> +	time = ktime_get();
> +	for (cnt = 0; cnt < 5; cnt++) {
> +		for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
> +			for (i = 0; i < TEST_BIT_LEN; i++) {
> +				if (i + nbits > TEST_BIT_LEN)
> +					break;
> +				val = bitmap_read(bitmap, i, nbits);
> +				(void)val;
> +			}
> +		}
> +	}
> +	time = ktime_get() - time;
> +	pr_err("Time spent in %s:\t%llu\n", __func__, time);
> +}
> +
> +static void __init test_bitmap_write_perf(void)
> +{
> +	DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
> +	unsigned int cnt, nbits, i;
> +	unsigned long val = 0xfeedface;
> +	ktime_t time;
> +
> +	bitmap_zero(bitmap, TEST_BIT_LEN);
> +	time = ktime_get();
> +	for (cnt = 0; cnt < 5; cnt++) {
> +		for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
> +			for (i = 0; i < TEST_BIT_LEN; i++) {
> +				if (i + nbits > TEST_BIT_LEN)
> +					break;
> +				bitmap_write(bitmap, val, i, nbits);
> +			}
> +		}
> +	}
> +	time = ktime_get() - time;
> +	pr_err("Time spent in %s:\t%llu\n", __func__, time);

For the perf part, can you add an example of the output to the commit
message, and compare the numbers with a non-optimized for-loop?

> +}
> +
> +#undef TEST_BIT_LEN
> +
>  static void __init selftest(void)
>  {
>  	test_zero_clear();
> @@ -1237,6 +1408,9 @@ static void __init selftest(void)
>  	test_bitmap_cut();
>  	test_bitmap_print_buf();
>  	test_bitmap_const_eval();
> +	test_bitmap_read_write();
> +	test_bitmap_read_perf();
> +	test_bitmap_write_perf();
>
>  	test_find_nth_bit();
>  	test_for_each_set_bit();
> --
> 2.42.0.655.g421f12c284-goog