From: Seth Huang
Subject: OOM problem caused by fs
Date: Mon, 7 Jun 2010 16:34:16 +0800
To: linux-fsdevel@vger.kernel.org

Hello everyone,

Our group is developing a new file system for Linux, and we are stuck on an
out-of-memory problem. When we create large files in our fs, the system runs
out of memory (the kernel dumps its memory usage repeatedly and the OOM
killer starts killing processes) as soon as the amount of data written
exceeds the amount of free memory, even though the kernel is flushing dirty
pages the whole time.

If I understand correctly, when available memory is low, a writer should
block in page cache allocation until some dirty pages have been cleaned.
I have checked pdflush, and it works fine on our system: dirty pages are
flushed out and cleaned in time. Nevertheless, the writes still drive the
system to OOM, and I have no idea how that can happen.

Has anyone seen the same thing? Any advice would be appreciated.

Thanks,
Seth
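
P.S. To make the point concrete, below is a sketch of one iteration of a
buffered write, modelled on generic_perform_write() in mm/filemap.c (the
myfs_* name and the exact error handling are made up, so please treat this
as an illustration rather than our real code). As far as I can tell, the
balance_dirty_pages_ratelimited() call at the end is what actually throttles
a writer once the dirty limits are reached; a private write path that copies
into the page cache without that call, or that dirties pages without going
through set_page_dirty() and its accounting, would never be throttled no
matter how well pdflush keeps up.

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>
#include <linux/uaccess.h>
#include <linux/highmem.h>

/*
 * One iteration of a buffered write, modelled on generic_perform_write()
 * in mm/filemap.c.  Illustrative only; myfs_* is a made-up name.
 */
static ssize_t myfs_write_one_chunk(struct file *file, struct iov_iter *i,
				    loff_t pos, unsigned long offset,
				    unsigned long bytes)
{
	struct address_space *mapping = file->f_mapping;
	const struct address_space_operations *a_ops = mapping->a_ops;
	struct page *page;
	void *fsdata;
	size_t copied;
	int status;

	status = a_ops->write_begin(file, mapping, pos, bytes, 0,
				    &page, &fsdata);
	if (status)
		return status;

	/* copy user data into the locked page cache page */
	pagefault_disable();
	copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
	pagefault_enable();
	flush_dcache_page(page);

	/*
	 * write_end() is expected to mark the page dirty through
	 * set_page_dirty(), which is what feeds the dirty-page
	 * accounting (NR_FILE_DIRTY) that the throttling checks.
	 */
	status = a_ops->write_end(file, mapping, pos, bytes, copied,
				  page, fsdata);
	if (status < 0)
		return status;

	/*
	 * The easy part to miss in a private write path: without this
	 * call a writer is never blocked against the dirty limits,
	 * however fast pdflush writes pages back.
	 */
	balance_dirty_pages_ratelimited(mapping);

	return copied;
}

Is my mental model right that the blocking happens via this call in the
write path, rather than in the page allocator itself? If so, I will go and
audit our write path for it.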