Date: Thu, 8 Aug 2019 09:58:27 +0200
From: Christoph Hellwig
To: Mark Rutland
Cc: Rob Clark, Christoph Hellwig, Rob Clark, dri-devel, Catalin Marinas, Will Deacon, Maarten Lankhorst, Maxime Ripard, Sean Paul, David Airlie, Daniel Vetter, Allison Randal, Greg Kroah-Hartman, Thomas Gleixner, linux-arm-kernel@lists.infradead.org, LKML
Subject: Re: [PATCH 1/2] drm: add cache support for arm64
Message-ID: <20190808075827.GD30308@lst.de>
References: <20190805211451.20176-1-robdclark@gmail.com> <20190806084821.GA17129@lst.de> <20190806143457.GF475@lakrids.cambridge.arm.com> <20190807123807.GD54191@lakrids.cambridge.arm.com> <20190807164958.GA44765@lakrids.cambridge.arm.com>
In-Reply-To: <20190807164958.GA44765@lakrids.cambridge.arm.com>
On Wed, Aug 07, 2019 at 05:49:59PM +0100, Mark Rutland wrote:
> I'm fairly confident that the linear/direct map cacheable alias is not
> torn down when pages are allocated. The generic page allocation code
> doesn't do so, and I see nothing in the shmem code to do so.

It is not torn down anywhere.

> For arm64, we can tear down portions of the linear map, but that has to
> be done explicitly, and this is only possible when using rodata_full. If
> not using rodata_full, it is not possible to dynamically tear down the
> cacheable alias.

Interesting. For this or the next merge window I plan to add support to
the generic DMA code to remap pages as uncachable in place, based on the
openrisc code. As far as I can tell the requirement for that is
basically just that the kernel direct mapping doesn't use PMD or bigger
mappings, so that it supports changing protection bits on a per-PTE
basis. Is that the case with arm64 + rodata_full?

> > My understanding is that a cacheable alias is "ok", with some
> > caveats.. ie. that the cacheable alias is not accessed.
>
> Unfortunately, that is not true. You'll often get away with it in
> practice, but that's a matter of probability rather than a guarantee.
>
> You cannot prevent a CPU from accessing a VA arbitrarily (e.g. as the
> result of wild speculation). The ARM ARM (ARM DDI 0487E.a) points this
> out explicitly:

Well, if we want to fix this properly we'll have to remap in place for
dma_alloc_coherent and friends.
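[Editor's note: the openrisc-style "remap in place" approach referenced above can be sketched roughly as below. This is an illustrative, non-runnable sketch, not the actual kernel code: _PAGE_CI is openrisc's cache-inhibit PTE bit, the walk_page_range() signature has varied across kernel versions, and the helper names here (remap_uncached_in_place, set_nocache_walk_ops, flush_dcache_range) are assumptions for illustration. The real implementation at the time lived in arch/openrisc/kernel/dma.c.]

	/*
	 * Sketch: walk the PTEs covering an allocation in the kernel
	 * direct mapping and mark each one uncached in place.  This only
	 * works when the direct map uses PTE-level (page-sized) entries,
	 * not PMD or larger block mappings -- hence the question about
	 * arm64 + rodata_full above.
	 */
	static int page_set_nocache(pte_t *pte, unsigned long addr,
				    unsigned long next, struct mm_walk *walk)
	{
		/* Set the architecture's cache-inhibit bit on this PTE. */
		pte_val(*pte) |= _PAGE_CI;

		/* Write back and discard any stale cacheable lines. */
		flush_dcache_range(addr, next);
		return 0;
	}

	static void *remap_uncached_in_place(void *vaddr, size_t size)
	{
		unsigned long va = (unsigned long)vaddr;

		/* Visit every PTE in [va, va + size) in the kernel map. */
		if (walk_page_range(&init_mm, va, va + size,
				    &set_nocache_walk_ops, NULL))
			return NULL;
		return vaddr;
	}

Because the pages are made uncacheable in their existing direct-map address rather than at a new vmap alias, no cacheable alias of the buffer remains for the CPU to speculatively fetch through, which is the problem Mark describes above.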