Message-ID: <516247d2-7ba8-4b3e-8325-8c6dd89b929e@linux.intel.com>
Date: Wed, 21 Feb 2024 09:43:16 +0800
Subject: Re: [RFC PATCH v5 01/29] KVM: selftests: Add function to allow
 one-to-one GVA to GPA mappings
From: Binbin Wu <binbin.wu@linux.intel.com>
To: Sagi Shahar
Cc: linux-kselftest@vger.kernel.org, Ackerley Tng, Erdem Aktas,
 Isaku Yamahata, Ryan Afranji, Sean Christopherson, Paolo Bonzini,
 Shuah Khan, Peter Gonda, Haibo Xu, Chao Peng, Vishal Annapurve,
 Roger Wang, Vipin Sharma, jmattson@google.com, dmatlack@google.com,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org
In-Reply-To: <20231212204647.2170650-2-sagis@google.com>
References: <20231212204647.2170650-1-sagis@google.com>
 <20231212204647.2170650-2-sagis@google.com>

On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> From: Ackerley Tng
>
> One-to-one GVA to GPA mappings can be used in the guest to set up boot
> sequences during which paging is enabled, hence requiring a transition
> from using physical to virtual addresses in consecutive instructions.
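If I understand correctly, the intended use is something like the
hypothetical snippet below (the address and memslot id are made up for
illustration and are not from this series):

	/*
	 * Hypothetical usage: allocate a page whose GVA is meant to equal
	 * its GPA, so guest boot code can keep using the same address
	 * immediately before and after enabling paging.
	 */
	vm_vaddr_t boot_page = vm_vaddr_alloc_1to1(vm, vm->page_size,
						   0x100000, 0);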
>
> Signed-off-by: Ackerley Tng
> Signed-off-by: Ryan Afranji
> Signed-off-by: Sagi Shahar
> ---
>  .../selftests/kvm/include/kvm_util_base.h  |  2 +
>  tools/testing/selftests/kvm/lib/kvm_util.c | 63 ++++++++++++++++---
>  2 files changed, 55 insertions(+), 10 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index 1426e88ebdc7..c2e5c5f25dfc 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -564,6 +564,8 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
>  vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
>  			    enum kvm_mem_region_type type);
>  vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> +vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz,
> +			       vm_vaddr_t vaddr_min, uint32_t data_memslot);
>  vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
>  vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
>  				 enum kvm_mem_region_type type);
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index febc63d7a46b..4f1ae0f1eef0 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1388,17 +1388,37 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
>  	return pgidx_start * vm->page_size;
>  }
>
> +/*
> + * VM Virtual Address Allocate Shared/Encrypted
> + *
> + * Input Args:
> + *   vm - Virtual Machine
> + *   sz - Size in bytes
> + *   vaddr_min - Minimum starting virtual address
> + *   paddr_min - Minimum starting physical address
> + *   data_memslot - memslot number to allocate in
> + *   encrypt - Whether the region should be handled as encrypted
> + *
> + * Output Args: None
> + *
> + * Return:
> + *   Starting guest virtual address
> + *
> + * Allocates at least sz bytes within the virtual address space of the vm
> + * given by vm. The allocated bytes are mapped to a virtual address >=
> + * the address given by vaddr_min. Note that each allocation uses a
> + * unique set of pages, with the minimum real allocation being at least
> + * a page.
> + */
>  static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> -				     vm_vaddr_t vaddr_min,
> -				     enum kvm_mem_region_type type,
> -				     bool encrypt)
> +				     vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
> +				     uint32_t data_memslot, bool encrypt)
>  {
>  	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
>
>  	virt_pgd_alloc(vm);
> -	vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages,
> -					       KVM_UTIL_MIN_PFN * vm->page_size,
> -					       vm->memslots[type], encrypt);
> +	vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages, paddr_min,
> +					       data_memslot, encrypt);
>
>  	/*
>  	 * Find an unused range of virtual page addresses of at least
> @@ -1408,8 +1428,7 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
>
>  	/* Map the virtual pages. */
>  	for (vm_vaddr_t vaddr = vaddr_start; pages > 0;
> -	     pages--, vaddr += vm->page_size, paddr += vm->page_size) {
> -
> +	     pages--, vaddr += vm->page_size, paddr += vm->page_size) {
>  		virt_pg_map(vm, vaddr, paddr);
>
>  		sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
> @@ -1421,12 +1440,16 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
>  vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
>  			    enum kvm_mem_region_type type)
>  {
> -	return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, vm->protected);
> +	return ____vm_vaddr_alloc(vm, sz, vaddr_min,
> +				  KVM_UTIL_MIN_PFN * vm->page_size,
> +				  vm->memslots[type], vm->protected);
>  }
>
>  vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
>  {
> -	return ____vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA, false);
> +	return ____vm_vaddr_alloc(vm, sz, vaddr_min,
> +				  KVM_UTIL_MIN_PFN * vm->page_size,
> +				  vm->memslots[MEM_REGION_TEST_DATA], false);
>  }
>
>  /*
> @@ -1453,6 +1476,26 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
>  	return __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
>  }
>
> +/**
> + * Allocate memory in @vm of size @sz in the memslot with id @data_memslot,
> + * beginning at the desired address @vaddr_min.
> + *
> + * If there isn't enough memory at @vaddr_min, find the next possible address
> + * that can meet the requested size in the given memslot.
> + *
> + * Return the address where the memory is allocated.
> + */
> +vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz,
> +			       vm_vaddr_t vaddr_min, uint32_t data_memslot)
> +{
> +	vm_vaddr_t gva = ____vm_vaddr_alloc(vm, sz, vaddr_min,
> +					    (vm_paddr_t)vaddr_min, data_memslot,
> +					    vm->protected);
> +	TEST_ASSERT_EQ(gva, addr_gva2gpa(vm, gva));

How can this be guaranteed? ____vm_vaddr_alloc() allocates the physical
pages and searches for a free virtual range independently; generically,
nothing enforces that the virtual address it returns equals the physical
address. (See the sketch at the end of this mail.)

> +
> +	return gva;
> +}
> +
>  /*
>   * VM Virtual Address Allocate Pages
>   *
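To spell out the concern, here is a simplified sketch of the allocation
flow as posted (not a proposed change; the comments are my reading of
the code):

	/*
	 * The physical and virtual allocations are two independent
	 * searches; nothing ties them together:
	 */
	vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages, paddr_min,
					       data_memslot, encrypt);
	/* ^ lowest free GPA range >= paddr_min within the memslot */

	vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm, sz, vaddr_min);
	/* ^ an unused GVA range >= vaddr_min, chosen without looking
	 *   at paddr */

	/*
	 * If, e.g., the GVA at vaddr_min is still unused but the
	 * matching GPA has already been handed out, then
	 * vaddr_start != paddr, and the TEST_ASSERT_EQ() in
	 * vm_vaddr_alloc_1to1() will fire.
	 */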