* [PATCHv2 1/2] arm: Correct virt_addr_valid
@ 2013-12-16 19:01 Laura Abbott
2013-12-16 19:01 ` [PATCHv2 2/2] arm64: " Laura Abbott
2013-12-17 12:08 ` [PATCHv2 1/2] arm: " Will Deacon
0 siblings, 2 replies; 6+ messages in thread
From: Laura Abbott @ 2013-12-16 19:01 UTC
To: linux-arm-kernel
virt_addr_valid is defined to return true if and only if virt_to_page
returns a valid pointer. The current definition of virt_addr_valid only
checks against the virtual address range. There's no guarantee that just
because a virtual address falls between PAGE_OFFSET and high_memory the
associated physical memory has a valid backing struct page. Follow the
example of other architectures and use pfn_valid to verify that the
virtual address is actually valid. The check for an address between
PAGE_OFFSET and high_memory is still necessary, as vmalloc/highmem
addresses are not valid with virt_to_page.
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
arch/arm/include/asm/memory.h | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 9ecccc8..6211df0 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -350,7 +350,8 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
#define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
+#define virt_addr_valid(kaddr) (pfn_valid(__pa(kaddr) >> PAGE_SHIFT) && \
+ ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory))
#endif
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
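To make the failure mode described in the commit message concrete, a
hypothetical sketch (the hole address, helper names and pr_info output
are invented for the example; they are not part of the patch):

	#include <linux/mm.h>		/* pfn_valid(), high_memory, __pa(), __va() */
	#include <linux/printk.h>

	/* The pre-patch check: a pure virtual address range test. */
	static bool range_only_virt_addr_valid(const void *kaddr)
	{
		return (unsigned long)kaddr >= PAGE_OFFSET &&
		       (unsigned long)kaddr < (unsigned long)high_memory;
	}

	/*
	 * hole_phys is assumed to be a physical address inside a hole of a
	 * sparse memory map: its linear-map alias passes the range test,
	 * but there is no struct page behind it, so virt_to_page() on it
	 * would return a bogus pointer.
	 */
	static void show_problem(phys_addr_t hole_phys)
	{
		void *addr = __va(hole_phys);

		pr_info("range check: %d, pfn_valid: %d\n",
			range_only_virt_addr_valid(addr),	/* 1 */
			pfn_valid(__pa(addr) >> PAGE_SHIFT));	/* 0 */
	}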
* [PATCHv2 2/2] arm64: Correct virt_addr_valid
2013-12-16 19:01 [PATCHv2 1/2] arm: Correct virt_addr_valid Laura Abbott
@ 2013-12-16 19:01 ` Laura Abbott
2013-12-17 12:00 ` Catalin Marinas
2013-12-17 12:08 ` [PATCHv2 1/2] arm: " Will Deacon
1 sibling, 1 reply; 6+ messages in thread
From: Laura Abbott @ 2013-12-16 19:01 UTC
To: linux-arm-kernel
virt_addr_valid is defined to return true if and only if virt_to_page
returns a valid pointer. The current definition of virt_addr_valid only
checks against the virtual address range. There's no guarantee that just
because a virtual address falls between PAGE_OFFSET and high_memory the
associated physical memory has a valid backing struct page. Follow the
example of other architectures and use pfn_valid to verify that the
virtual address is actually valid.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
arch/arm64/include/asm/memory.h | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 3776217..b8ec776 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -146,8 +146,9 @@ static inline void *phys_to_virt(phys_addr_t x)
#define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr) (((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
- ((void *)(kaddr) < (void *)high_memory))
+#define virt_addr_valid(kaddr) (pfn_valid(__pa(kaddr) >> PAGE_SHIFT) && \
+ ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory))
+
#endif
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
* [PATCHv2 2/2] arm64: Correct virt_addr_valid
2013-12-16 19:01 ` [PATCHv2 2/2] arm64: " Laura Abbott
@ 2013-12-17 12:00 ` Catalin Marinas
2013-12-18 18:20 ` Laura Abbott
0 siblings, 1 reply; 6+ messages in thread
From: Catalin Marinas @ 2013-12-17 12:00 UTC
To: linux-arm-kernel
On Mon, Dec 16, 2013 at 07:01:45PM +0000, Laura Abbott wrote:
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 3776217..b8ec776 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -146,8 +146,9 @@ static inline void *phys_to_virt(phys_addr_t x)
> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
>
> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> -#define virt_addr_valid(kaddr) (((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
> - ((void *)(kaddr) < (void *)high_memory))
> +#define virt_addr_valid(kaddr) (pfn_valid(__pa(kaddr) >> PAGE_SHIFT) && \
> + ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory))
> +
I still think the original patch was fine for arm64, no need for
additional checks since we don't have highmem and __pa() is always
linear.
--
Catalin
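For reference, the earlier version Catalin refers to is presumably a
pfn_valid-only definition along these lines (a sketch of the v1
approach, not a quote of the hunk that was eventually merged):

	#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)

which is the point being made: without highmem and with a linear
__pa(), the pfn_valid() test alone already covers the valid range.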
* [PATCHv2 1/2] arm: Correct virt_addr_valid
2013-12-16 19:01 [PATCHv2 1/2] arm: Correct virt_addr_valid Laura Abbott
2013-12-16 19:01 ` [PATCHv2 2/2] arm64: " Laura Abbott
@ 2013-12-17 12:08 ` Will Deacon
1 sibling, 0 replies; 6+ messages in thread
From: Will Deacon @ 2013-12-17 12:08 UTC
To: linux-arm-kernel
On Mon, Dec 16, 2013 at 07:01:44PM +0000, Laura Abbott wrote:
> virt_addr_valid is defined to return true if and only if virt_to_page
> returns a valid pointer. The current definition of virt_addr_valid only
> checks against the virtual address range. There's no guarantee that just
> because a virtual address falls between PAGE_OFFSET and high_memory the
> associated physical memory has a valid backing struct page. Follow the
> example of other architectures and use pfn_valid to verify that the
> virtual address is actually valid. The check for an address between
> PAGE_OFFSET and high_memory is still necessary, as vmalloc/highmem
> addresses are not valid with virt_to_page.
>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Nicolas Pitre <nico@linaro.org>
> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
> ---
> arch/arm/include/asm/memory.h | 3 ++-
> 1 files changed, 2 insertions(+), 1 deletions(-)
>
> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> index 9ecccc8..6211df0 100644
> --- a/arch/arm/include/asm/memory.h
> +++ b/arch/arm/include/asm/memory.h
> @@ -350,7 +350,8 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
>
> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> -#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
> +#define virt_addr_valid(kaddr) (pfn_valid(__pa(kaddr) >> PAGE_SHIFT) && \
> + ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory))
Can you change the order of the check please, so that we do the quicker
lowmem range checks before the pfn_valid check? I know I backed down on the
latter being slow, but it's still slower than what we had before and
changing the order of the conjunction is easy to do.
Cheers,
Will
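The reordering Will asks for would look roughly like this (a sketch,
assuming the two sides of the conjunction are simply swapped; not
necessarily the exact hunk that was eventually applied):

	#define virt_addr_valid(kaddr)	(((unsigned long)(kaddr) >= PAGE_OFFSET && \
					  (unsigned long)(kaddr) < (unsigned long)high_memory) && \
					 pfn_valid(__pa(kaddr) >> PAGE_SHIFT))

With the cheap lowmem range comparisons first, pfn_valid() is only
evaluated for addresses that are at least plausible linear-map
addresses.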
* [PATCHv2 2/2] arm64: Correct virt_addr_valid
2013-12-17 12:00 ` Catalin Marinas
@ 2013-12-18 18:20 ` Laura Abbott
2013-12-18 18:30 ` Catalin Marinas
0 siblings, 1 reply; 6+ messages in thread
From: Laura Abbott @ 2013-12-18 18:20 UTC
To: linux-arm-kernel
On 12/17/2013 4:00 AM, Catalin Marinas wrote:
> On Mon, Dec 16, 2013 at 07:01:45PM +0000, Laura Abbott wrote:
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 3776217..b8ec776 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -146,8 +146,9 @@ static inline void *phys_to_virt(phys_addr_t x)
>> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
>>
>> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>> -#define virt_addr_valid(kaddr) (((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
>> - ((void *)(kaddr) < (void *)high_memory))
>> +#define virt_addr_valid(kaddr) (pfn_valid(__pa(kaddr) >> PAGE_SHIFT) && \
>> + ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory))
>> +
>
> I still think the original patch was fine for arm64, no need for
> additional checks since we don't have highmem and __pa() is always
> linear.
>
Okay, do you want to go ahead and just take the previous version then?
Laura
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
* [PATCHv2 2/2] arm64: Correct virt_addr_valid
2013-12-18 18:20 ` Laura Abbott
@ 2013-12-18 18:30 ` Catalin Marinas
0 siblings, 0 replies; 6+ messages in thread
From: Catalin Marinas @ 2013-12-18 18:30 UTC
To: linux-arm-kernel
On Wed, Dec 18, 2013 at 06:20:32PM +0000, Laura Abbott wrote:
> On 12/17/2013 4:00 AM, Catalin Marinas wrote:
> > On Mon, Dec 16, 2013 at 07:01:45PM +0000, Laura Abbott wrote:
> >> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> >> index 3776217..b8ec776 100644
> >> --- a/arch/arm64/include/asm/memory.h
> >> +++ b/arch/arm64/include/asm/memory.h
> >> @@ -146,8 +146,9 @@ static inline void *phys_to_virt(phys_addr_t x)
> >> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
> >>
> >> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> >> -#define virt_addr_valid(kaddr) (((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
> >> - ((void *)(kaddr) < (void *)high_memory))
> >> +#define virt_addr_valid(kaddr) (pfn_valid(__pa(kaddr) >> PAGE_SHIFT) && \
> >> + ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory))
> >> +
> >
> > I still think the original patch was fine for arm64, no need for
> > additional checks since we don't have highmem and __pa() is always
> > linear.
>
> Okay, do you want to go ahead and just take the previous version then?
I already did (while Will is away ;)). Thanks.
--
Catalin