From: Boaz Harrosh
Subject: Re: OOM problem caused by fs
Date: Mon, 07 Jun 2010 11:42:44 +0300
Message-ID: <4C0CB104.70407@panasas.com>
To: Seth Huang
Cc: linux-fsdevel@vger.kernel.org

On 06/07/2010 11:34 AM, Seth Huang wrote:
> Hello everyone,
>
> Our group is developing a new file system for Linux, and we are stuck
> on an out-of-memory problem.
>
> When creating large files in our fs, the system runs out of memory
> (the kernel starts dumping memory usage repeatedly and the OOM killer
> begins to kill processes) as soon as the amount of data written
> exceeds the amount of free memory, even though the kernel is flushing
> out dirty pages.
>
> If I'm right, when available memory is low, writes should block during
> page cache allocation until some dirty pages have been cleaned. I've
> checked pdflush; it works fine on our system, which means dirty pages
> are being flushed out and cleaned in time. Yet the system still goes
> OOM, and I have no idea how this can happen.
>
> Has anyone experienced the same thing? Any advice would be appreciated.
>
> Thanks,
> Seth

Code!? A git tree URL? ...?

Boaz
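
For reference, the throttling Seth describes normally happens in the
generic buffered-write path via balance_dirty_pages_ratelimited(). Below
is a minimal, hypothetical sketch of the loop generic_perform_write()
runs for that path; myfs_buffered_write() is an invented name and this is
not Seth's code, only an illustration of where a writer is supposed to
block against dirty memory (2.6.3x-era APIs assumed):

/*
 * Hypothetical sketch: a simplified version of the generic buffered
 * write loop, showing where the writer is throttled against dirty memory.
 * Assumes writes into a freshly created file (no read-modify-write).
 */
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>
#include <linux/uaccess.h>

static ssize_t myfs_buffered_write(struct address_space *mapping,
				   const char __user *buf, size_t count,
				   loff_t pos)
{
	ssize_t written = 0;

	while (count) {
		pgoff_t index = pos >> PAGE_CACHE_SHIFT;
		unsigned offset = pos & (PAGE_CACHE_SIZE - 1);
		unsigned bytes = min_t(unsigned, PAGE_CACHE_SIZE - offset,
				       count);
		struct page *page;
		char *kaddr;

		/* Pre-fault the user buffer so we do not take a fault
		 * while holding the page lock below. */
		if (fault_in_pages_readable(buf, bytes))
			return written ? written : -EFAULT;

		/* May block in page allocation when memory is tight. */
		page = grab_cache_page_write_begin(mapping, index, 0);
		if (!page)
			return written ? written : -ENOMEM;

		kaddr = kmap(page);
		if (copy_from_user(kaddr + offset, buf, bytes)) {
			kunmap(page);
			unlock_page(page);
			page_cache_release(page);
			return written ? written : -EFAULT;
		}
		kunmap(page);

		if (!PageUptodate(page)) {
			/* New-file case: zero whatever we did not copy. */
			zero_user_segments(page, 0, offset,
					   offset + bytes, PAGE_CACHE_SIZE);
			SetPageUptodate(page);
		}
		set_page_dirty(page);
		unlock_page(page);
		page_cache_release(page);

		/*
		 * The throttle: once the global dirty threshold is crossed,
		 * the writer sleeps here until writeback has cleaned enough
		 * pages.  A write path that never reaches this call can
		 * dirty memory faster than pdflush cleans it.
		 */
		balance_dirty_pages_ratelimited(mapping);

		buf += bytes;
		pos += bytes;
		count -= bytes;
		written += bytes;
	}

	return written;
}

One thing worth checking in a new fs: if its write path allocates and
dirties page cache pages without ever going through
generic_file_buffered_write() or calling
balance_dirty_pages_ratelimited() itself, writers are never throttled,
which would match the symptoms described above.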