Date: Wed, 24 Apr 2024 16:13:34 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Wei Yang
Cc: akpm@linux-foundation.org, linux-mm@kvack.org
Subject: Re: [PATCH 2/6] memblock tests: add the 129th memory block at all possible position
References: <20240414004531.6601-1-richard.weiyang@gmail.com>
 <20240414004531.6601-2-richard.weiyang@gmail.com>
 <20240416125531.v7we4lo222pgyr2b@master>
 <20240419031520.fkxz56kgan4jjchk@master>
In-Reply-To: <20240419031520.fkxz56kgan4jjchk@master>

On Fri, Apr 19, 2024 at 03:15:20AM +0000, Wei Yang wrote:
> On Wed, Apr 17, 2024 at 08:51:14AM +0300, Mike Rapoport wrote:
> >On Tue, Apr 16, 2024 at 12:55:31PM +0000, Wei Yang wrote:
> >> On Mon, Apr 15, 2024 at 06:19:42PM +0300, Mike Rapoport wrote:
> >> >On Sun, Apr 14, 2024 at 12:45:27AM +0000, Wei Yang wrote:
> >> >> After the previous change, we may double the array based on the
> >> >> position of the new range.
> >> >>
> >> >> Let's make sure the 129th memory block doubles the array size
> >> >> correctly at all possible positions.
> >> >
> >> >Rather than rewrite an existing test, just add a new one.
> >>
> >> Ok, will add a new one for this.
> >>
> >> >Besides, it would be more interesting to test additions to
> >> >memblock.reserved and a mix of memblock_add() and memblock_reserve() that
> >> >will require resizing the memblock arrays.
> >>
> >> I don't get this very clearly. Would you mind giving more hints?
> >
> >There is memblock_reserve_many_check() that verifies that memblock.reserved
> >is properly resized. I think it's better to add a test that adds the 129th
> >block at multiple locations to memblock.reserved.
> >
>
> I came up with another version, which also covers the bug fixed by commit
> 48c3b583bbdd ("mm/memblock: fix overlapping allocation when doubling reserved
> array").
>
> With that fix commented out, the test fails because cnt mismatches after the
> array is doubled.
>
> I'm not sure whether you prefer to have both, or to keep only this version
> and replace the current memblock_reserve_many_check().

Let's have both of them.
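For reference, the core of the 48c3b583bbdd fix is that memblock_double_array(),
when it allocates room for the doubled memblock.reserved array out of memblock
itself, must stay clear of the range that is being added at that very moment.
Roughly (paraphrased from mm/memblock.c, variable names only approximate), the
conflict avoidance looks like this:

        /* only exclude the in-flight range when doubling reserved.regions */
        if (type != &memblock.reserved)
                new_area_start = new_area_size = 0;

        /* first try above the range being added ... */
        addr = memblock_find_in_range(new_area_start + new_area_size,
                                      memblock.current_limit,
                                      new_alloc_size, PAGE_SIZE);
        /* ... then below it, but never overlapping it */
        if (!addr && new_area_size)
                addr = memblock_find_in_range(0,
                                              min(new_area_start, memblock.current_limit),
                                              new_alloc_size, PAGE_SIZE);

so with that exclusion removed, the new array can land on top of the 129th
range, and the cnt/total_size checks in your test should indeed trip.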
> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
> index d2b8114921f9..fb76471108b2 100644
> --- a/tools/testing/memblock/tests/basic_api.c
> +++ b/tools/testing/memblock/tests/basic_api.c
> @@ -1006,6 +1006,119 @@ static int memblock_reserve_many_check(void)
>          return 0;
>  }
>  
> +/* Keep the gap so these memory regions will not be merged. */
> +#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
> +static int memblock_reserve_many_conflict_check(void)
> +{
> +        int i, skip;
> +        void *orig_region;
> +        struct region r = {
> +                .base = SZ_16K,
> +                .size = SZ_16K,
> +        };
> +        phys_addr_t new_reserved_regions_size;
> +
> +        /*
> +         *   0        1          129
> +         * +---+    +---+      +---+
> +         * |32K|    |32K|  ..  |32K|
> +         * +---+    +---+      +---+
> +         *
> +         * Pre-allocate ranges for the 129 memory blocks, plus one range at
> +         * idx 0 for the doubled memblock.reserved.regions.
> +         * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
> +         * when doubling reserved array")
> +         */
> +        phys_addr_t memory_base = (phys_addr_t)malloc(130 * (2 * SZ_32K));

Just increase MEM_SIZE to, say, SZ_1M and use dummy_physical_memory_init() etc

> +        phys_addr_t offset = PAGE_ALIGN(memory_base);
> +
> +        PREFIX_PUSH();
> +
> +        /* Reserve the 129th memory block at all possible positions. */
> +        for (skip = 1; skip <= INIT_MEMBLOCK_REGIONS + 1; skip++) {
> +                reset_memblock_regions();
> +                memblock_allow_resize();
> +
> +                reset_memblock_attributes();
> +                /* Add a valid memory region used by double_array(). */
> +                memblock_add(MEMORY_BASE_OFFSET(0, offset), MEM_SIZE);
> +                /*
> +                 * Add a memory region which will be reserved as the 129th
> +                 * memory region. This one is not expected to be used by
> +                 * double_array().
> +                 */
> +                memblock_add(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
> +
> +                for (i = 1; i <= INIT_MEMBLOCK_REGIONS + 1; i++) {
> +                        if (i == skip)
> +                                continue;
> +
> +                        /* Reserve fake memory regions to fill up memblock.reserved. */
> +                        memblock_reserve(MEMORY_BASE_OFFSET(i, offset), MEM_SIZE);
> +
> +                        if (i < skip) {
> +                                ASSERT_EQ(memblock.reserved.cnt, i);
> +                                ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
> +                        } else {
> +                                ASSERT_EQ(memblock.reserved.cnt, i - 1);
> +                                ASSERT_EQ(memblock.reserved.total_size, (i - 1) * MEM_SIZE);
> +                        }
> +                }
> +
> +                orig_region = memblock.reserved.regions;
> +
> +                /* This reserves the 129th memory region and doubles the array. */
> +                memblock_reserve(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
> +
> +                /*
> +                 * This is the size of the memory region used by the doubled
> +                 * reserved.regions array; it gets reserved because it is now
> +                 * in use. The size is needed to calculate the total_size
> +                 * that memblock.reserved holds now.
> +                 */
> +                new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
> +                                                sizeof(struct memblock_region));
> +                /*
> +                 * double_array() finds a free memory region to hold the new
> +                 * reserved.regions array, and the memory it occupies gets
> +                 * reserved, so one more region shows up in memblock.reserved.
> +                 * That extra region's size is new_reserved_regions_size.
> +                 */
> +                ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
> +                ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
> +                                                        new_reserved_regions_size);
> +                ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
> +
> +                /*
> +                 * Now that memblock_double_array() has worked, check that
> +                 * memblock_reserve() still behaves as expected afterwards.
> +                 */
> +                memblock_reserve(r.base, r.size);
> +                ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
> +                ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
> +
> +                ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
> +                ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
> +                                                        new_reserved_regions_size +
> +                                                        r.size);
> +                ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
> +
> +                /*
> +                 * The current reserved.regions array occupies a range of the
> +                 * memory allocated above. Once that memory is freed we must
> +                 * not use it, so restore the original regions array to make
> +                 * sure the following tests run normally and are not affected
> +                 * by the doubled array.
> +                 */
> +                memblock.reserved.regions = orig_region;
> +                memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
> +        }
> +
> +        free((void *)memory_base);
> +
> +        test_pass_pop();
> +
> +        return 0;
> +}
> +
>  static int memblock_reserve_checks(void)
>  {
>          prefix_reset();
> @@ -1021,6 +1134,7 @@ static int memblock_reserve_checks(void)
>          memblock_reserve_between_check();
>          memblock_reserve_near_max_check();
>          memblock_reserve_many_check();
> +        memblock_reserve_many_conflict_check();
>  
>          prefix_pop();
>  
> --
> 2.34.1

-- 
Sincerely yours,
Mike.