Date: Wed, 23 Jun 2021 10:32:21 +0100
From: Catalin Marinas
To: Al Viro
Cc: Xiaoming Ni, Chen Huang, Andrew Morton, Stephen Rothwell,
	"Matthew Wilcox (Oracle)", Randy Dunlap, Will Deacon,
	Linux ARM, linux-mm, open list
Subject: Re: [BUG] arm64: an infinite loop in generic_perform_write()
Message-ID: <20210623093220.GA3718@arm.com>
References: <92fa298d-9d88-0ca4-40d9-13690dcd42f9@huawei.com>

On Wed, Jun 23, 2021 at 04:27:37AM +0000, Al Viro wrote:
> On Wed, Jun 23, 2021 at 11:24:54AM +0800, Xiaoming Ni wrote:
> > On 2021/6/23 10:50, Al Viro wrote:
> > > On Wed, Jun 23, 2021 at 10:39:31AM +0800, Chen Huang wrote:
> > > > Then when the kernel handles the alignment fault, it will not
> > > > panic. As the arm64 memory model spec says, when the address is
> > > > not a multiple of the element size, the access is unaligned.
> > > > Unaligned accesses are allowed to addresses marked as Normal,
> > > > but not to Device regions. An unaligned access to a Device
> > > > region will trigger an exception (alignment fault).
> > > >
> > > > do_alignment_fault
> > > >   do_bad_area
> > > >     __do_kernel_fault
> > > >       fixup_exception
> > > >
> > > > But that fixup can't handle the unaligned copy, so
> > > > copy_page_from_iter_atomic() returns 0 and we get stuck in a
> > > > loop.
> > >
> > > Looks like you need to fix your raw_copy_from_user(), then...
> >
> > Exit the loop when iov_iter_copy_from_user_atomic() returns 0.
> > This should solve the problem too, and it's easier.
>
> It might be easier, but it's not going to work correctly.
> If the page gets evicted by memory pressure, you are going
> to get a spurious short write.
>
> Besides, it's simply wrong - write(2) does *NOT* require an
> aligned source. It (and raw_copy_from_user()) should act the
> same way memcpy(3) does.

On arm64, neither memcpy() nor raw_copy_from_user() is expected to work
on Device mappings; we have memcpy_fromio() for that, but only for
ioremap(). There's no (easy) way to distinguish in the write() syscall
how the source buffer is mapped. generic_perform_write() does an
iov_iter_fault_in_readable() check, but that's not sufficient, and it
also breaks the cases where you can get intra-page faults (arm64 MTE or
SPARC ADI). I think in the general case it's racy anyway (another
thread doing an mprotect(PROT_NONE) after the readable check passed).
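For reference, the retry logic we're discussing looks roughly like this
(a simplified sketch from memory, with the ->write_begin()/->write_end()
calls and error handling elided, so don't read it as the exact
mainline/next code):

	do {
		unsigned long offset = pos & (PAGE_SIZE - 1);
		size_t bytes = min_t(unsigned long, PAGE_SIZE - offset,
				     iov_iter_count(i));
		size_t copied;
again:
		/*
		 * Pre-fault the source buffer. A readable Device mapping
		 * passes this check, so it doesn't help here.
		 */
		if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
			status = -EFAULT;
			break;
		}

		/* ->write_begin() ... */
		copied = copy_page_from_iter_atomic(page, offset, bytes, i);
		/* ->write_end() ... */

		if (unlikely(copied == 0)) {
			/*
			 * The copy faulted, fall back to a single segment
			 * and retry. If the fault is not transient (e.g.
			 * an unaligned access to a Device mapping), every
			 * retry fails in exactly the same way and we spin
			 * here forever.
			 */
			bytes = min(bytes, iov_iter_single_seg_count(i));
			goto again;
		}

		pos += copied;
		written += copied;
	} while (iov_iter_count(i));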
So I think generic_perform_write() returning -EFAULT if copied == 0
would make sense (well, unless it breaks other cases I'm not aware of).

--
Catalin