From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mike Rapoport <rppt@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Andrew Morton, Anshuman Khandual, Ard Biesheuvel, Catalin Marinas,
	David Hildenbrand, Marc Zyngier, Mark Rutland, Mike Rapoport,
	Will Deacon, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 3/4] arm64: decouple check whether pfn is in linear map from pfn_valid()
Date: Wed, 21 Apr 2021 09:51:07 +0300
Message-Id: <20210421065108.1987-4-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20210421065108.1987-1-rppt@kernel.org>
References: <20210421065108.1987-1-rppt@kernel.org>
From: Mike Rapoport

The intended semantics of pfn_valid() is to verify whether there is a
struct page for the pfn in question and nothing else.

Yet, on arm64 it is used to distinguish memory areas that are mapped in
the linear map from those that require ioremap() to access them.

Introduce a dedicated pfn_is_map_memory() wrapper for
memblock_is_map_memory() to perform such a check and use it where
appropriate.

Using a wrapper allows us to avoid cyclic include dependencies.

Signed-off-by: Mike Rapoport
---
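Note for reviewers, not part of the commit: below is a minimal standalone
sketch of the new helper's logic with memblock_is_map_memory() mocked out.
The linear-map range (0x40000000..0x80000000) and PAGE_SHIFT of 12 are
made-up values for illustration only; it builds with any C compiler.

#include <stdio.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;

#define PAGE_SHIFT	12
#define PFN_PHYS(pfn)	((phys_addr_t)(pfn) << PAGE_SHIFT)
#define PHYS_PFN(addr)	((unsigned long)((addr) >> PAGE_SHIFT))

/* Mock: pretend exactly one memblock region sits in the linear map. */
static int memblock_is_map_memory(phys_addr_t addr)
{
	return addr >= 0x40000000ULL && addr < 0x80000000ULL;
}

int pfn_is_map_memory(unsigned long pfn)
{
	phys_addr_t addr = PFN_PHYS(pfn);

	/* Reject pfns that do not survive the phys/pfn round-trip,
	 * i.e. whose physical address would overflow phys_addr_t. */
	if (PHYS_PFN(addr) != pfn)
		return 0;

	return memblock_is_map_memory(addr);
}

int main(void)
{
	printf("%d\n", pfn_is_map_memory(0x40000UL));	/* 1: inside mock range */
	printf("%d\n", pfn_is_map_memory(0x80000UL));	/* 0: outside it */
	printf("%d\n", pfn_is_map_memory(~0UL));	/* 0: fails round-trip */
	return 0;
}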
 arch/arm64/include/asm/memory.h |  2 +-
 arch/arm64/include/asm/page.h   |  1 +
 arch/arm64/kvm/mmu.c            |  2 +-
 arch/arm64/mm/init.c            | 11 +++++++++++
 arch/arm64/mm/ioremap.c         |  4 ++--
 arch/arm64/mm/mmu.c             |  2 +-
 6 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0aabc3be9a75..194f9f993d30 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -351,7 +351,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #define virt_addr_valid(addr)	({					\
 	__typeof__(addr) __addr = __tag_reset(addr);			\
-	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
+	__is_lm_address(__addr) && pfn_is_map_memory(virt_to_pfn(__addr));	\
 })
 
 void dump_mem_limit(void);
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..99a6da91f870 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -38,6 +38,7 @@ void copy_highpage(struct page *to, struct page *from);
 typedef struct page *pgtable_t;
 
 extern int pfn_valid(unsigned long);
+extern int pfn_is_map_memory(unsigned long);
 
 #include <asm/memory.h>
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8711894db8c2..23dd99e29b23 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
-	return !pfn_valid(pfn);
+	return !pfn_is_map_memory(pfn);
 }
 
 /*
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3685e12aba9b..dc03bdc12c0f 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -258,6 +258,17 @@ int pfn_valid(unsigned long pfn)
 }
 EXPORT_SYMBOL(pfn_valid);
 
+int pfn_is_map_memory(unsigned long pfn)
+{
+	phys_addr_t addr = PFN_PHYS(pfn);
+
+	if (PHYS_PFN(addr) != pfn)
+		return 0;
+
+	return memblock_is_map_memory(addr);
+}
+EXPORT_SYMBOL(pfn_is_map_memory);
+
 static phys_addr_t memory_limit = PHYS_ADDR_MAX;
 
 /*
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index b5e83c46b23e..b7c81dacabf0 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -43,7 +43,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
 	/*
 	 * Don't allow RAM to be mapped.
 	 */
-	if (WARN_ON(pfn_valid(__phys_to_pfn(phys_addr))))
+	if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
 		return NULL;
 
 	area = get_vm_area_caller(size, VM_IOREMAP, caller);
@@ -84,7 +84,7 @@ EXPORT_SYMBOL(iounmap);
 void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
 {
 	/* For normal memory we already have a cacheable mapping. */
-	if (pfn_valid(__phys_to_pfn(phys_addr)))
+	if (pfn_is_map_memory(__phys_to_pfn(phys_addr)))
 		return (void __iomem *)__phys_to_virt(phys_addr);
 
 	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5d9550fdb9cf..26045e9adbd7 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -81,7 +81,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
-	if (!pfn_valid(pfn))
+	if (!pfn_is_map_memory(pfn))
 		return pgprot_noncached(vma_prot);
 	else if (file->f_flags & O_SYNC)
 		return pgprot_writecombine(vma_prot);
-- 
2.28.0