From: Joerg Roedel <joro@8bytes.org>
To: x86@kernel.org
Cc: hpa@zytor.com, Andy Lutomirski, Dave Hansen, Peter Zijlstra,
	Thomas Hellstrom, Jiri Slaby, Dan Williams, Tom Lendacky,
	Juergen Gross, Kees Cook, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Joerg Roedel
Subject: [PATCH 17/70] x86/boot/compressed/64: Change add_identity_map() to take start and end
Date: Thu, 19 Mar 2020 10:13:14 +0100
Message-Id: <20200319091407.1481-18-joro@8bytes.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200319091407.1481-1-joro@8bytes.org>
References: <20200319091407.1481-1-joro@8bytes.org>

From: Joerg Roedel <jroedel@suse.de>

Change add_identity_map() to take 'start' and 'end' parameters instead
of 'start' and 'size'. This simplifies the callers, which no longer
need to calculate a size when they already have the start and end
addresses.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/ident_map_64.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index ab7a3d9705c0..ba5b88189220 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -91,10 +91,8 @@ static struct x86_mapping_info mapping_info;
 /*
  * Adds the specified range to the identity mappings.
  */
-static void add_identity_map(unsigned long start, unsigned long size)
+static void add_identity_map(unsigned long start, unsigned long end)
 {
-	unsigned long end = start + size;
-
 	/* Align boundary to 2M. */
 	start = round_down(start, PMD_SIZE);
 	end = round_up(end, PMD_SIZE);
@@ -109,8 +107,6 @@ static void add_identity_map(unsigned long start, unsigned long size)
 /* Locates and clears a region for a new top level page table. */
 void initialize_identity_maps(void)
 {
-	unsigned long start, size;
-
 	/* If running as an SEV guest, the encryption mask is required. */
 	set_sev_encryption_mask();
 
@@ -157,9 +153,7 @@ void initialize_identity_maps(void)
 	 * New page-table is set up - map the kernel image and load it
 	 * into cr3.
 	 */
-	start = (unsigned long)_head;
-	size  = _end - _head;
-	add_identity_map(start, size);
+	add_identity_map((unsigned long)_head, (unsigned long)_end);
 
 	write_cr3(top_level_pgt);
 }
@@ -180,7 +174,8 @@ static void pf_error(unsigned long error_code, unsigned long address,
 
 void do_boot_page_fault(struct pt_regs *regs)
 {
-	unsigned long address = native_read_cr2();
+	unsigned long address = native_read_cr2() & PMD_MASK;
+	unsigned long end = address + PMD_SIZE;
 	unsigned long error_code = regs->orig_ax;
 
 	/*
@@ -196,5 +191,5 @@ void do_boot_page_fault(struct pt_regs *regs)
 	 * Error code is sane - now identity map the 2M region around
 	 * the faulting address.
 	 */
-	add_identity_map(address & PMD_MASK, PMD_SIZE);
+	add_identity_map(address, end);
 }
-- 
2.17.1
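
For illustration, the calling-convention change boils down to the
sketch below. It is a minimal user-space approximation: PMD_SIZE,
PMD_MASK, round_down() and round_up() here are simplified stand-ins
for the kernel's definitions, and main() is a hypothetical caller
modeled on do_boot_page_fault(), not code from this patch.

#include <stdio.h>

/* Simplified stand-ins for the kernel's 2M-page constants/helpers. */
#define PMD_SIZE		(1UL << 21)
#define PMD_MASK		(~(PMD_SIZE - 1))
#define round_down(x, a)	((x) & ~((a) - 1))
#define round_up(x, a)		(((x) + (a) - 1) & ~((a) - 1))

/* New convention: the caller passes the [start, end) range directly. */
static void add_identity_map(unsigned long start, unsigned long end)
{
	/* Align boundary to 2M, as the real function does. */
	start = round_down(start, PMD_SIZE);
	end = round_up(end, PMD_SIZE);

	printf("mapping 0x%lx-0x%lx\n", start, end);
}

int main(void)
{
	unsigned long fault_address = 0x1234567;	/* hypothetical */

	/* Old: add_identity_map(fault_address & PMD_MASK, PMD_SIZE);
	 * New: compute the aligned start and end once, pass both. */
	unsigned long address = fault_address & PMD_MASK;

	add_identity_map(address, address + PMD_SIZE);
	return 0;
}

The point of the change is visible in main(): a caller that already
knows both boundaries passes them straight through, instead of deriving
a size only for add_identity_map() to turn it back into an end address.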