From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
Subject: Re: mm: shm: hang in shmem_fallocate
Date: Sun, 09 Feb 2014 20:41:54 -0500
Message-ID: <52F82E62.2010709@oracle.com>
References: <52AE7B10.2080201@oracle.com> <52F6898A.50101@oracle.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Andrew Morton, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, LKML
To: Hugh Dickins, Dave Jones
In-Reply-To:
Sender: linux-fsdevel-owner@vger.kernel.org

On 02/08/2014 10:25 PM, Hugh Dickins wrote:
> Would trinity be likely to have a thread or process repeatedly faulting
> in pages from the hole while it is being punched?

I can see how trinity would do that, but just to be certain - Cc davej.

On 02/08/2014 10:25 PM, Hugh Dickins wrote:
> Does this happen with other holepunch filesystems? If it does not,
> I'd suppose it's because the tmpfs fault-in-newly-created-page path
> is lighter than a consistent disk-based filesystem's has to be.
> But we don't want to make the tmpfs path heavier to match them.

No, this is strictly limited to tmpfs. AFAIK trinity does test hole
punching on other filesystems as well, and I make sure to have a bunch
of them mounted before starting testing.

Thanks,
Sasha