From: Alexander Shishkin
Subject: Re: [PATCH] [RESEND] RFC: List per-process file descriptor consumption when hitting file-max
Date: Sun, 11 Oct 2009 15:08:47 +0300
Message-ID: <71a0d6ff0910110508t11da5f62x34a7a8886c087a0b@mail.gmail.com>
In-Reply-To: <20090729171248.764570f2.akpm@linux-foundation.org>
To: Andrew Morton
Cc: linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, linux-kernel@vger.kernel.org

2009/7/30 Andrew Morton <akpm@linux-foundation.org>:
> If there's some reason why the problem is particularly severe and
> particularly hard to resolve by other means then sure, perhaps explicit
> kernel support is justified.  But is that the case with this specific
> userspace bug?

Well, this can be figured out in userspace by traversing procfs and
counting the entries under fd/ for each process (see the sketch below),
but that itself requires free file descriptors, and since we are at the
point where the limit has already been hit, it may well fail. There is,
of course, a good chance that the process that tried to open the
one-too-many descriptor will crash upon failing to do so (and thus free
a bunch of descriptors), but that only creates more confusion: most of
the time, the application that crashes when file-max is reached is not
the one that ate them all.

So, all in all, in certain cases there is no other way to figure out
who was leaking descriptors.

Regards,
--
Alex
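
A minimal sketch of the procfs walk described above (the program and
its output format are illustrative, not part of the patch): it counts
the entries under /proc/<pid>/fd for every numeric entry in /proc.
Note that opendir() itself consumes a descriptor, so the scan can fail
at exactly the moment file-max is hit, which is the point being made.

#include <stdio.h>
#include <ctype.h>
#include <dirent.h>

/* Count entries under /proc/<pid>/fd; -1 if the directory is unreadable. */
static int count_fds(const char *pid)
{
        char path[64];
        struct dirent *de;
        DIR *dir;
        int n = 0;

        snprintf(path, sizeof(path), "/proc/%s/fd", pid);
        dir = opendir(path);    /* needs a free descriptor itself */
        if (!dir)
                return -1;
        while ((de = readdir(dir)) != NULL)
                if (de->d_name[0] != '.')   /* skip "." and ".." */
                        n++;
        closedir(dir);
        return n;
}

int main(void)
{
        struct dirent *de;
        DIR *proc = opendir("/proc");

        if (!proc) {
                perror("opendir(/proc)");
                return 1;
        }
        while ((de = readdir(proc)) != NULL) {
                int n;

                /* only numeric entries in /proc are pids */
                if (!isdigit((unsigned char)de->d_name[0]))
                        continue;
                n = count_fds(de->d_name);
                if (n >= 0)
                        printf("%s\t%d\n", de->d_name, n);
        }
        closedir(proc);
        return 0;
}

Sorting the output numerically on the second column (e.g. with
sort -k2 -n) points at the likely leaker, provided the scan can run
at all under descriptor exhaustion.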