From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.5 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_PASS,USER_AGENT_MUTT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38DABC4360F for ; Tue, 26 Mar 2019 04:15:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1249420823 for ; Tue, 26 Mar 2019 04:15:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726342AbfCZEPR (ORCPT ); Tue, 26 Mar 2019 00:15:17 -0400 Received: from zeniv.linux.org.uk ([195.92.253.2]:60142 "EHLO ZenIV.linux.org.uk" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725298AbfCZEPR (ORCPT ); Tue, 26 Mar 2019 00:15:17 -0400 Received: from viro by ZenIV.linux.org.uk with local (Exim 4.92 #3 (Red Hat Linux)) id 1h8dUU-0007gH-6R; Tue, 26 Mar 2019 04:15:10 +0000 Date: Tue, 26 Mar 2019 04:15:10 +0000 From: Al Viro To: Mark Fasheh Cc: Dave Chinner , Linus Torvalds , syzbot , Alexei Starovoitov , Daniel Borkmann , linux-fsdevel , Linux List Kernel Mailing , syzkaller-bugs , Jan Kara , Jaegeuk Kim , Joel Becker Subject: Re: KASAN: use-after-free Read in path_lookupat Message-ID: <20190326041509.GZ2217@ZenIV.linux.org.uk> References: <0000000000006946d2057bbd0eef@google.com> <20190325045744.GK2217@ZenIV.linux.org.uk> <20190325194332.GO2217@ZenIV.linux.org.uk> <20190325224823.GF26298@dastard> <20190325230211.GR2217@ZenIV.linux.org.uk> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.10.1 (2018-07-13) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, Mar 25, 2019 at 08:18:25PM -0700, Mark Fasheh wrote: > Hey Al, > > It's been a while since I've looked at that bit of code but it looks like > Ocfs2 is syncing the inode to disk and disposing of it's memory > representation (which would include the cluster locks held) so that other > nodes get a chance to delete the potentially orphaned inode. In Ocfs2 we > won't delete an inode if it exists in another nodes cache. Wait a sec - what's the reason for forcing that write_inode_now(); why doesn't the normal mechanism work? I'm afraid I still don't get it - we do wait for writeback in evict_inode(), or the local filesystems wouldn't work.