From: andrzej-kardas@o2.pl (Andrzej Kardas)
Date: Thu, 26 May 2011 18:05:20 +0200
Subject: How to limit the total size used by core files or automatically delete old corefiles.
In-Reply-To:
References:
Message-ID: <4DDE7A40.9020408@o2.pl>
To: kernelnewbies@lists.kernelnewbies.org
List-Id: kernelnewbies.lists.kernelnewbies.org

On 26.05.2011 14:31, SADA SIVA REDDY S wrote:
> My Questions:
>
> 1. Is there a provision in Linux to automatically clean up the old
>    corefiles when we reach a certain limit?
>
I think there is no such feature. A core dump is a regular file saved, by default, in the process's working directory, and the system does not keep track of these files: it simply writes the core dump and forgets about it. In other words, the kernel treats a core dump as an ordinary file and does not remember that it is a core dump.

> 2. Is there a provision in Linux to set an upper limit for the space
>    occupied by all core files (not individual core files)?
>
I think not. You can limit the size of each generated core dump per process or per user (ulimit -c), but there is no limit on the total. However, you can redirect all core dump files to one location by adding the line

kernel.core_pattern = /vol/allcoredumps/%u/%e

to /etc/sysctl.conf. After that, you can write a simple script that checks the amount of free space in that location and schedule it in crontab. When the free space falls below a certain limit, the script should remove the oldest or biggest files from there (see the sketch at the end of this message).

Below is the list of available patterns:

%p: pid
% at the end of the pattern: the '%' is dropped
%%: output one '%'
%u: uid
%g: gid
%s: signal number
%t: UNIX time of dump
%h: hostname
%e: executable filename
% followed by any other character: both are dropped

--
regards
Andrzej Kardas
http://www.linux.mynotes.pl
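
P.S. Below is a minimal sketch, in Python, of the cleanup script mentioned above. It assumes the /vol/allcoredumps location from the core_pattern example; the 1 GiB limit and the script name prune_cores.py are only illustrative choices, not anything the kernel mandates.

#!/usr/bin/env python3
"""Prune old core dumps when they use more than a given amount of space."""
import os

DUMP_DIR = "/vol/allcoredumps"      # must match the kernel.core_pattern location
MAX_TOTAL_BYTES = 1 * 1024 ** 3     # keep at most ~1 GiB of core files (example value)

def collect_cores(root):
    """Return (path, mtime, size) for every regular file under root."""
    cores = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue            # file vanished or is unreadable
            cores.append((path, st.st_mtime, st.st_size))
    return cores

def prune(root, limit):
    cores = collect_cores(root)
    total = sum(size for _path, _mtime, size in cores)
    # Delete the oldest dumps first until we are back under the limit.
    for path, _mtime, size in sorted(cores, key=lambda c: c[1]):
        if total <= limit:
            break
        try:
            os.remove(path)
            total -= size
        except OSError:
            pass                    # ignore files we cannot delete

if __name__ == "__main__":
    prune(DUMP_DIR, MAX_TOTAL_BYTES)

Scheduled from cron, for example with a line like "*/15 * * * * /usr/local/bin/prune_cores.py" (path and interval are again just examples), this keeps the dump directory under the chosen size without any help from the kernel.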