From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 22 Dec 2025 12:11:18 -0600
From: Andrew Jones
To: Fuad Tabba
Cc: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-arm-kernel@lists.infradead.org, maz@kernel.org, oliver.upton@linux.dev,
	joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com,
	will@kernel.org, pbonzini@redhat.com, shuah@kernel.org, anup@brainfault.org
Subject: Re: [PATCH v2 4/5] KVM: selftests: Move page_align() to shared header
Message-ID: <20251222-ed460d0e29ab66fb4987e365@orel>
References: <20251215165155.3451819-1-tabba@google.com>
 <20251215165155.3451819-5-tabba@google.com>
In-Reply-To: <20251215165155.3451819-5-tabba@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Mon, Dec 15, 2025 at 04:51:54PM +0000, Fuad Tabba wrote:
> To avoid code duplication, move page_align() to the shared `kvm_util.h`
> header file.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba
> ---
>  tools/testing/selftests/kvm/include/kvm_util.h    | 5 +++++
>  tools/testing/selftests/kvm/lib/arm64/processor.c | 5 -----
>  tools/testing/selftests/kvm/lib/riscv/processor.c | 5 -----
>  3 files changed, 5 insertions(+), 10 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 81f4355ff28a..dabbe4c3b93f 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -1258,6 +1258,11 @@ static inline int __vm_disable_nx_huge_pages(struct kvm_vm *vm)
>  	return __vm_enable_cap(vm, KVM_CAP_VM_DISABLE_NX_HUGE_PAGES, 0);
>  }
>  
> +static inline uint64_t page_align(struct kvm_vm *vm, uint64_t v)
> +{
> +	return (v + vm->page_size - 1) & ~(vm->page_size - 1);
> +}
> +
>  /*
>   * Arch hook that is invoked via a constructor, i.e. before exeucting main(),
>   * to allow for arch-specific setup that is common to all tests, e.g. computing
> diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
> index 607a4e462984..143632917766 100644
> --- a/tools/testing/selftests/kvm/lib/arm64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
> @@ -21,11 +21,6 @@
>  
>  static vm_vaddr_t exception_handlers;
>  
> -static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
> -{
> -	return (v + vm->page_size - 1) & ~(vm->page_size - 1);
> -}
> -
>  static uint64_t pgd_index(struct kvm_vm *vm, vm_vaddr_t gva)
>  {
>  	unsigned int shift = (vm->pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
> diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
> index d5e8747b5e69..f8ff4bf938d9 100644
> --- a/tools/testing/selftests/kvm/lib/riscv/processor.c
> +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
> @@ -26,11 +26,6 @@ bool __vcpu_has_ext(struct kvm_vcpu *vcpu, uint64_t ext)
>  	return !ret && !!value;
>  }
>  
> -static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
> -{
> -	return (v + vm->page_size - 1) & ~(vm->page_size - 1);
> -}
> -
>  static uint64_t pte_addr(struct kvm_vm *vm, uint64_t entry)
>  {
>  	return ((entry & PGTBL_PTE_ADDR_MASK) >> PGTBL_PTE_ADDR_SHIFT) <<
> -- 
> 2.52.0.239.gd5f0c6e74e-goog
> 

kvm_util.h is collecting a bit too much random stuff and it'd be nice
to split stuff out rather than add more, but, for now, this is an
overall improvement since we kill some duplication.

Reviewed-by: Andrew Jones

Thanks,
drew