From: Al Viro
To: Xiaoming Ni
Cc: Chen Huang, Andrew Morton, Stephen Rothwell, "Matthew Wilcox (Oracle)",
 Randy Dunlap, Catalin Marinas, Will Deacon, Linux ARM, linux-mm, open list
Subject: Re: [BUG] arm64: an infinite loop in generic_perform_write()
Date: Wed, 23 Jun 2021 04:27:37 +0000
References: <92fa298d-9d88-0ca4-40d9-13690dcd42f9@huawei.com>
In-Reply-To: <92fa298d-9d88-0ca4-40d9-13690dcd42f9@huawei.com>
List-ID: linux-kernel@vger.kernel.org

On Wed, Jun 23, 2021 at
11:24:54AM +0800, Xiaoming Ni wrote:
> On 2021/6/23 10:50, Al Viro wrote:
> > On Wed, Jun 23, 2021 at 10:39:31AM +0800, Chen Huang wrote:
> > >
> > > Then when the kernel handles the alignment fault, it will not panic. As the
> > > arm64 memory model spec says, when the address is not a multiple of the
> > > element size, the access is unaligned. Unaligned accesses are allowed to
> > > addresses marked as Normal, but not to Device regions. An unaligned access
> > > to a Device region will trigger an exception (alignment fault).
> > >
> > > do_alignment_fault
> > >   do_bad_area
> > >     __do_kernel_fault
> > >       fixup_exception
> > >
> > > But that fixup can't handle the unaligned copy, so
> > > copy_page_from_iter_atomic() returns 0 and the kernel gets stuck in the loop.
> >
> > Looks like you need to fix your raw_copy_from_user(), then...
>
> Exit the loop when iov_iter_copy_from_user_atomic() returns 0.
> This should solve the problem, too, and it's easier.

It might be easier, but it's not going to work correctly.  If the page
gets evicted under memory pressure, you are going to get a spurious
short write.  Besides, it's simply wrong: write(2) does *NOT* require
an aligned source.  It (and raw_copy_from_user()) should act the same
way memcpy(3) does.