* inode cache, dentry cache, buffer heads usage
From: Badari Pulavarty @ 2005-03-09 18:55 UTC
To: ext2-devel, Linux Kernel Mailing List

Hi,

We have a 8-way P-III, 16GB RAM running 2.6.8-1. We use this as
our server to keep source code, cscopes and do the builds.
This machine seems to slow down over the time. One thing we
keep noticing is it keeps running out of lowmem. Most of
the lowmem is used for ext3 inode cache + dentry cache +
bufferheads + Buffers. So we did 2:2 split - but it improved
thing, but again run into same issues.

So, why is these slab cache are not getting purged/shrinked even
under memory pressure ? (I have seen lowmem as low as 6MB). What
can I do to keep the machine healthy ?

Thanks,
Badari

Meminfo:
========
$ cat /proc/meminfo
MemTotal:     16377076 kB
MemFree:       9400604 kB
Buffers:        577368 kB
Cached:        4002012 kB
SwapCached:          0 kB
Active:        2152196 kB
Inactive:      3578624 kB
HighTotal:    14548952 kB
HighFree:      9387328 kB
LowTotal:      1828124 kB
LowFree:         13276 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:               0 kB
Writeback:           0 kB
Mapped:         301432 kB
Slab:          1227268 kB
Committed_AS:   695920 kB
PageTables:       5684 kB
VmallocTotal:   114680 kB
VmallocUsed:       312 kB
VmallocChunk:   114368 kB
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     2048 kB

Slabinfo (top users):
=====================
ext3_inode_cache 1405201 1615312 480 8 1 : tunables 54 27 8 : slabdata 201914 201914 0
dentry_cache 1505485 1864917 144 27 1 : tunables 120 60 8 : slabdata 69071 69071 0
buffer_head 1099832 1755375 52 75 1 : tunables 120 60 8 : slabdata 23405 23405 0
radix_tree_node 99919 102522 276 14 1 : tunables 54 27 8 : slabdata 7323 7323 0
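[As a sanity check on where the lowmem went, each cache's footprint can be derived from the slabinfo figures above: num_slabs (second-to-last field) times pagesperslab (sixth field) times 4 KB per page on i386. A sketch that simply replays the quoted lines; the field positions assume the 2.6-era slabinfo 2.x layout:]

```shell
# Per-cache lowmem usage = num_slabs x pagesperslab x 4 KB (i386 pages).
# The heredoc holds the slabinfo lines quoted in the message above.
slab_kb=$(awk '{ kb = $(NF-1) * $6 * 4; total += kb
                 printf "%-18s %8d KB\n", $1, kb }
               END { printf "%-18s %8d KB\n", "total", total }' <<'EOF'
ext3_inode_cache 1405201 1615312 480 8 1 : tunables 54 27 8 : slabdata 201914 201914 0
dentry_cache 1505485 1864917 144 27 1 : tunables 120 60 8 : slabdata 69071 69071 0
buffer_head 1099832 1755375 52 75 1 : tunables 120 60 8 : slabdata 23405 23405 0
radix_tree_node 99919 102522 276 14 1 : tunables 54 27 8 : slabdata 7323 7323 0
EOF
)
echo "$slab_kb"
```

[These four caches alone come to roughly 1.2 GB, consistent with the Slab: 1227268 kB line in meminfo - i.e. about two thirds of the 1.8 GB of lowmem is slab.]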
* Re: inode cache, dentry cache, buffer heads usage
From: Dipankar Sarma @ 2005-03-09 21:27 UTC
To: Badari Pulavarty; +Cc: ext2-devel, Linux Kernel Mailing List

On Wed, Mar 09, 2005 at 10:55:58AM -0800, Badari Pulavarty wrote:
> Hi,
>
> We have a 8-way P-III, 16GB RAM running 2.6.8-1. We use this as
> our server to keep source code, cscopes and do the builds.
> This machine seems to slow down over the time. One thing we
> keep noticing is it keeps running out of lowmem. Most of
> the lowmem is used for ext3 inode cache + dentry cache +
> bufferheads + Buffers. So we did 2:2 split - but it improved
> thing, but again run into same issues.
>
> So, why is these slab cache are not getting purged/shrinked even
> under memory pressure ? (I have seen lowmem as low as 6MB). What
> can I do to keep the machine healthy ?

How does /proc/sys/fs/dentry-state look when you run low on lowmem ?

Thanks
Dipankar
* Re: inode cache, dentry cache, buffer heads usage
From: Badari Pulavarty @ 2005-03-09 21:29 UTC
To: Dipankar Sarma; +Cc: ext2-devel, Linux Kernel Mailing List

On Wed, 2005-03-09 at 13:27, Dipankar Sarma wrote:
> On Wed, Mar 09, 2005 at 10:55:58AM -0800, Badari Pulavarty wrote:
> > Hi,
> >
> > We have a 8-way P-III, 16GB RAM running 2.6.8-1. We use this as
> > our server to keep source code, cscopes and do the builds.
> > This machine seems to slow down over the time. One thing we
> > keep noticing is it keeps running out of lowmem. Most of
> > the lowmem is used for ext3 inode cache + dentry cache +
> > bufferheads + Buffers. So we did 2:2 split - but it improved
> > thing, but again run into same issues.
> >
> > So, why is these slab cache are not getting purged/shrinked even
> > under memory pressure ? (I have seen lowmem as low as 6MB). What
> > can I do to keep the machine healthy ?
>
> How does /proc/sys/fs/dentry-state look when you run low on lowmem ?

badari@kernel:~$ cat /proc/sys/fs/dentry-state
1434093 1348947 45 0 0 0
badari@kernel:~$ grep dentry /proc/slabinfo
dentry_cache 1434094 1857519 144 27 1 : tunables 120 60 8 : slabdata 68797 68797 0
badari@kernel:~$ cat /proc/meminfo
MemTotal:     16377076 kB
MemFree:       8343724 kB
Buffers:        579232 kB
Cached:        5051848 kB
SwapCached:          0 kB
Active:        2911084 kB
Inactive:      3878044 kB
HighTotal:    14548952 kB
HighFree:      8330944 kB
LowTotal:      1828124 kB
LowFree:         12780 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:             216 kB
Writeback:           0 kB
Mapped:         301940 kB
Slab:          1225772 kB
Committed_AS:   771340 kB
PageTables:       5768 kB
VmallocTotal:   114680 kB
VmallocUsed:       312 kB
VmallocChunk:   114368 kB
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     2048 kB
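[For readers decoding the six numbers above: they map onto struct dentry_stat_t, i.e. nr_dentry, nr_unused, age_limit, want_pages, plus two unused fields. A small sketch that replays the quoted values:]

```shell
# Decode the dentry-state line quoted above (field names from
# struct dentry_stat_t in include/linux/dcache.h).
set -- 1434093 1348947 45 0 0 0
printf 'nr_dentry=%s nr_unused=%s age_limit=%s want_pages=%s\n' "$1" "$2" "$3" "$4"
# What fraction of all dentries is sitting on the unused (LRU) list?
pct=$(( $2 * 100 / $1 ))
echo "unused dentries: ${pct}% of the cache"
```

[Roughly 94% of the 1.4 million dentries are unused and reclaimable in principle, which is what makes the failure to shrink the cache the interesting part.]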
* Re: inode cache, dentry cache, buffer heads usage
From: Dipankar Sarma @ 2005-03-09 21:53 UTC
To: Badari Pulavarty; +Cc: ext2-devel, Linux Kernel Mailing List

On Wed, Mar 09, 2005 at 01:29:23PM -0800, Badari Pulavarty wrote:
> On Wed, 2005-03-09 at 13:27, Dipankar Sarma wrote:
> > On Wed, Mar 09, 2005 at 10:55:58AM -0800, Badari Pulavarty wrote:
> > > Hi,
> > >
> > > We have a 8-way P-III, 16GB RAM running 2.6.8-1. We use this as
> > > our server to keep source code, cscopes and do the builds.
> > > This machine seems to slow down over the time. One thing we
> > > keep noticing is it keeps running out of lowmem. Most of
> > > the lowmem is used for ext3 inode cache + dentry cache +
> > > bufferheads + Buffers. So we did 2:2 split - but it improved
> > > thing, but again run into same issues.
> > >
> > > So, why is these slab cache are not getting purged/shrinked even
> > > under memory pressure ? (I have seen lowmem as low as 6MB). What
> > > can I do to keep the machine healthy ?
> >
> > How does /proc/sys/fs/dentry-state look when you run low on lowmem ?
>
> badari@kernel:~$ cat /proc/sys/fs/dentry-state
> 1434093 1348947 45 0 0 0
> badari@kernel:~$ grep dentry /proc/slabinfo
> dentry_cache 1434094 1857519 144 27 1 : tunables 120
> 60 8 : slabdata 68797 68797 0

Hmm.. so we are not shrinking dcache despite a large number of
unused dentries. That is where we need to look. Will dig a bit
tomorrow.

Thanks
Dipankar
* Re: [Ext2-devel] Re: inode cache, dentry cache, buffer heads usage
From: Sonny Rao @ 2005-03-09 21:57 UTC
To: Dipankar Sarma; +Cc: Badari Pulavarty, ext2-devel, Linux Kernel Mailing List

[-- Attachment #1: Type: text/plain, Size: 1607 bytes --]

On Thu, Mar 10, 2005 at 03:23:49AM +0530, Dipankar Sarma wrote:
> On Wed, Mar 09, 2005 at 01:29:23PM -0800, Badari Pulavarty wrote:
> > On Wed, 2005-03-09 at 13:27, Dipankar Sarma wrote:
> > > On Wed, Mar 09, 2005 at 10:55:58AM -0800, Badari Pulavarty wrote:
> > > > Hi,
> > > >
> > > > We have a 8-way P-III, 16GB RAM running 2.6.8-1. We use this as
> > > > our server to keep source code, cscopes and do the builds.
> > > > This machine seems to slow down over the time. One thing we
> > > > keep noticing is it keeps running out of lowmem. Most of
> > > > the lowmem is used for ext3 inode cache + dentry cache +
> > > > bufferheads + Buffers. So we did 2:2 split - but it improved
> > > > thing, but again run into same issues.
> > > >
> > > > So, why is these slab cache are not getting purged/shrinked even
> > > > under memory pressure ? (I have seen lowmem as low as 6MB). What
> > > > can I do to keep the machine healthy ?
> > >
> > > How does /proc/sys/fs/dentry-state look when you run low on lowmem ?
> >
> > badari@kernel:~$ cat /proc/sys/fs/dentry-state
> > 1434093 1348947 45 0 0 0
> > badari@kernel:~$ grep dentry /proc/slabinfo
> > dentry_cache 1434094 1857519 144 27 1 : tunables 120
> > 60 8 : slabdata 68797 68797 0
>
> Hmm.. so we are not shrinking dcache despite a large number of
> unsed dentries. That is where we need to look. Will dig a bit
> tomorrow.

Here's my really old patch where I saw some improvement for this
scenario...
I haven't tried this in a really long time, so I have no idea if it'll work :-) Sonny [-- Attachment #2: linux-2.6.8-rc1-dcache-reclaim-rb.patch --] [-- Type: text/plain, Size: 6553 bytes --] --- fs/dcache.c.original 2004-08-02 15:43:42.629539312 -0500 +++ fs/dcache.c 2004-08-03 18:16:45.007809144 -0500 @@ -31,6 +31,7 @@ #include <linux/seqlock.h> #include <linux/swap.h> #include <linux/bootmem.h> +#include <linux/rbtree.h> /* #define DCACHE_DEBUG 1 */ @@ -60,12 +61,61 @@ static unsigned int d_hash_mask; static unsigned int d_hash_shift; static struct hlist_head *dentry_hashtable; static LIST_HEAD(dentry_unused); +static struct rb_root dentry_tree = RB_ROOT; + +#define RB_NONE (2) +#define ON_RB(node) ((node)->rb_color != RB_NONE) +#define RB_CLEAR(node) ((node)->rb_color = RB_NONE ) /* Statistics gathering. */ struct dentry_stat_t dentry_stat = { .age_limit = 45, }; + +/* take a dentry safely off the rbtree */ +static void drb_delete(struct dentry* dentry) +{ + // printk("drb_delete: 0x%p (%s) proc %d\n",dentry,dentry->d_iname,smp_processor_id()); + if (ON_RB(&dentry->d_rb)) { + rb_erase(&dentry->d_rb, &dentry_tree); + RB_CLEAR(&dentry->d_rb); + } else { + /* All allocated dentry objs should be in the tree */ + BUG_ON(1); + } +} + +static +struct dentry * drb_insert(struct dentry * dentry) +{ + struct rb_node ** p = &dentry_tree.rb_node; + struct rb_node * parent = NULL; + struct rb_node * node = &dentry->d_rb; + struct dentry * cur = NULL; + + // printk("drb_insert: 0x%p (%s)\n",dentry,dentry->d_iname); + + while (*p) + { + parent = *p; + cur = rb_entry(parent, struct dentry, d_rb); + + if (dentry < cur) + p = &(*p)->rb_left; + else if (dentry > cur) + p = &(*p)->rb_right; + else { + return cur; + } + } + + rb_link_node(node, parent, p); + rb_insert_color(node,&dentry_tree); + return NULL; +} + + static void d_callback(struct rcu_head *head) { struct dentry * dentry = container_of(head, struct dentry, d_rcu); @@ -189,6 +239,7 @@ kill_it: { 
list_del(&dentry->d_child); dentry_stat.nr_dentry--; /* For d_free, below */ /*drops the locks, at that point nobody can reach this dentry */ + drb_delete(dentry); dentry_iput(dentry); parent = dentry->d_parent; d_free(dentry); @@ -351,6 +402,7 @@ static inline void prune_one_dentry(stru __d_drop(dentry); list_del(&dentry->d_child); dentry_stat.nr_dentry--; /* For d_free, below */ + drb_delete(dentry); dentry_iput(dentry); parent = dentry->d_parent; d_free(dentry); @@ -360,7 +412,7 @@ static inline void prune_one_dentry(stru } /** - * prune_dcache - shrink the dcache + * prune_lru - shrink the lru list * @count: number of entries to try and free * * Shrink the dcache. This is done when we need @@ -372,7 +424,7 @@ static inline void prune_one_dentry(stru * all the dentries are in use. */ -static void prune_dcache(int count) +static void prune_lru(int count) { spin_lock(&dcache_lock); for (; count ; count--) { @@ -410,6 +462,93 @@ static void prune_dcache(int count) spin_unlock(&dcache_lock); } +/** + * prune_dcache - try and "intelligently" shrink the dcache + * @requested - num of dentrys to try and free + * + * The basic strategy here is to scan through our tree of dentrys + * in-order and put them at the end of the lru - free list + * Why in-order? Because, we want the chances of actually freeing + * all 15-27 (depending on arch) dentrys on a given page, instead + * of just in random lru order, which tends to lower dcache utilization + * and not free many pages. 
+ */ +static void prune_dcache(unsigned requested) +{ + /* ------ debug --------- */ + //static int mod = 0; + //int flag = 0, removed = 0; + /* ------ debug --------- */ + + unsigned found = 0; + unsigned count; + struct rb_node * next; + struct dentry *dentry; +#define NUM_LRU_PTRS 8 + struct rb_node *lru_ptrs[NUM_LRU_PTRS]; + struct list_head *cur; + int i; + + spin_lock(&dcache_lock); + + cur = dentry_unused.prev; + + /* grab NUM_LRU_PTRS entrys off the end of lru list */ + /* we'll use these as pseudo-random starting points in the tree */ + for (i = 0 ; i < NUM_LRU_PTRS ; i++ ){ + if ( cur == &dentry_unused ) { + /* if there aren't NUM_LRU_PTRS entrys, we probably + can't even free a page now, give up */ + spin_unlock(&dcache_lock); + return; + } + lru_ptrs[i] = &(list_entry(cur,struct dentry, d_lru)->d_rb); + cur = cur->prev; + } + + i = 0; + + do { + count = 4 * PAGE_SIZE / sizeof(struct dentry) ; /* abitrary heuristic */ + next = lru_ptrs[i]; + for (; count ; count--) { + if( ! next ) { + //flag = 1; /* ------ debug --------- */ + break; + } + dentry = list_entry(next, struct dentry, d_rb); + next = rb_next(next); + prefetch(next); + if( ! list_empty( &dentry->d_lru) ) { + list_del_init(&dentry->d_lru); + dentry_stat.nr_unused--; + } + if (atomic_read(&dentry->d_count)) { + //removed++; /* ------ debug --------- */ + continue; + } else { + list_add_tail(&dentry->d_lru, &dentry_unused); + dentry_stat.nr_unused++; + found++; + } + } + i++; + } while ( (found < requested / 2) && (i < NUM_LRU_PTRS ) ); +#undef NUM_LRU_PTRS + + spin_unlock(&dcache_lock); + + /* ------ debug --------- */ + //mod++; + //if ( ! (mod & 64) ) { + // mod = 0; + // printk("prune_dcache: i %d flag %d, found %d removed %d\n",i,flag,found,removed); + //} + /* ------ debug --------- */ + + prune_lru(found); +} + /* * Shrink the dcache for the specified super block. 
* This allows us to unmount a device without disturbing @@ -604,7 +743,7 @@ void shrink_dcache_parent(struct dentry int found; while ((found = select_parent(parent)) != 0) - prune_dcache(found); + prune_lru(found); } /** @@ -642,7 +781,7 @@ void shrink_dcache_anon(struct hlist_hea } } spin_unlock(&dcache_lock); - prune_dcache(found); + prune_lru(found); } while(found); } @@ -730,6 +869,7 @@ struct dentry *d_alloc(struct dentry * p if (parent) list_add(&dentry->d_child, &parent->d_subdirs); dentry_stat.nr_dentry++; + drb_insert(dentry); spin_unlock(&dcache_lock); return dentry; --- include/linux/dcache.h.original 2004-08-03 18:20:40.800963136 -0500 +++ include/linux/dcache.h 2004-08-02 15:36:19.886846432 -0500 @@ -9,6 +9,7 @@ #include <linux/cache.h> #include <linux/rcupdate.h> #include <asm/bug.h> +#include <linux/rbtree.h> struct nameidata; struct vfsmount; @@ -94,6 +95,7 @@ struct dentry { struct hlist_head *d_bucket; /* lookup hash bucket */ struct qstr d_name; + struct rb_node d_rb; struct list_head d_lru; /* LRU list */ struct list_head d_child; /* child of parent list */ struct list_head d_subdirs; /* our children */ ^ permalink raw reply [flat|nested] 11+ messages in thread
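[An aside on why the patch above walks dentries in address order (via the rbtree) instead of plain LRU order: slab pages hold about 27 of the 144-byte dentries each, and a page can only be handed back to the allocator when every object on it is free. A back-of-the-envelope sketch, assuming freeable dentries are scattered uniformly at random:]

```shell
# If a fraction f of all dentries is freeable and they are scattered
# randomly across slab pages, the chance that all 27 dentries on a
# given page are freeable is f^27 - negligible until f is very high.
# Address-ordered reclaim sidesteps this by freeing neighbours together.
probs=$(awk 'BEGIN { for (f = 0.5; f < 1.0; f += 0.2)
                       printf "f=%.1f  P(whole page freeable)=%.2g\n", f, f^27 }')
echo "$probs"
```

[Even with 90% of dentries freeable, fewer than 6% of pages empty out under random reclaim, which matches the patch's stated goal of raising dcache page utilization.]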
* Re: inode cache, dentry cache, buffer heads usage
From: Andrew Morton @ 2005-03-11 1:47 UTC
To: Badari Pulavarty; +Cc: ext2-devel, linux-kernel

Badari Pulavarty <pbadari@us.ibm.com> wrote:
>
> So, why is these slab cache are not getting purged/shrinked even
> under memory pressure ? (I have seen lowmem as low as 6MB). What
> can I do to keep the machine healthy ?

Tried increasing /proc/sys/vm/vfs_cache_pressure? (That might not be in
2.6.8 though).
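[For context on Andrew's suggestion: vfs_cache_pressure defaults to 100, and values above that make the VM reclaim dentry/inode slabs more aggressively relative to pagecache. A sketch; the value 10000 is purely illustrative, not a tuning recommendation from this thread:]

```shell
# Read the current setting (the sysctl appeared around 2.6.7, so it may
# be absent on some 2.6.8 vendor kernels, as Andrew notes).
pressure=$(cat /proc/sys/vm/vfs_cache_pressure)
echo "vfs_cache_pressure=$pressure"
# Bias reclaim toward the dcache/icache (needs root, hence commented out):
#   echo 10000 > /proc/sys/vm/vfs_cache_pressure
```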
* Re: inode cache, dentry cache, buffer heads usage
From: Badari Pulavarty @ 2005-03-14 21:28 UTC
To: Andrew Morton; +Cc: ext2-devel, Linux Kernel Mailing List

On Thu, 2005-03-10 at 17:47, Andrew Morton wrote:
> Badari Pulavarty <pbadari@us.ibm.com> wrote:
> >
> > So, why is these slab cache are not getting purged/shrinked even
> > under memory pressure ? (I have seen lowmem as low as 6MB). What
> > can I do to keep the machine healthy ?
>
> Tried increasing /proc/sys/vm/vfs_cache_pressure? (That might not be in
> 2.6.8 though).

Yep. This helped shrink the slabs, but we end up eating up lots of
the lowmem in Buffers. Is there a way to shrink buffers ?

$ cat /proc/meminfo
MemTotal:     16377076 kB
MemFree:       7495824 kB
Buffers:       1081708 kB
Cached:        4162492 kB
SwapCached:          0 kB
Active:        3660756 kB
Inactive:      4473476 kB
HighTotal:    14548952 kB
HighFree:      7489600 kB
LowTotal:      1828124 kB
LowFree:          6224 kB
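[A pointer for readers on later kernels: 2.6.8 has no direct knob for this, but 2.6.16 added /proc/sys/vm/drop_caches, which forces exactly the kind of reclaim being asked about here. Not applicable to the kernel in this thread; shown only as a config fragment:]

```shell
# Kernels >= 2.6.16 only; needs root, and dirty data should be flushed
# first since drop_caches only discards clean, unreferenced pages.
sync
echo 1 > /proc/sys/vm/drop_caches   # free pagecache (including Buffers)
echo 2 > /proc/sys/vm/drop_caches   # free reclaimable slab (dentries, inodes)
```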
* Re: inode cache, dentry cache, buffer heads usage
From: Andrew Morton @ 2005-03-14 22:11 UTC
To: Badari Pulavarty; +Cc: ext2-devel, linux-kernel

Badari Pulavarty <pbadari@us.ibm.com> wrote:
>
> On Thu, 2005-03-10 at 17:47, Andrew Morton wrote:
> > Badari Pulavarty <pbadari@us.ibm.com> wrote:
> > >
> > > So, why is these slab cache are not getting purged/shrinked even
> > > under memory pressure ? (I have seen lowmem as low as 6MB). What
> > > can I do to keep the machine healthy ?
> >
> > Tried increasing /proc/sys/vm/vfs_cache_pressure? (That might not be in
> > 2.6.8 though).
>
> Yep. This helped shrink the slabs, but we end up eating up lots of
> the lowmem in Buffers. Is there a way to shrink buffers ?

It would require some patchwork. Why is it a problem? That memory is
reclaimable.

> $ cat /proc/meminfo
> MemTotal: 16377076 kB
> MemFree: 7495824 kB
> Buffers: 1081708 kB
> Cached: 4162492 kB
> SwapCached: 0 kB
> Active: 3660756 kB
> Inactive: 4473476 kB
> HighTotal: 14548952 kB
> HighFree: 7489600 kB
> LowTotal: 1828124 kB
> LowFree: 6224 kB

How'd you get 1.8gig of lowmem?
* Re: [Ext2-devel] Re: inode cache, dentry cache, buffer heads usage
From: Badari Pulavarty @ 2005-03-14 22:13 UTC
To: Andrew Morton; +Cc: ext2-devel, Linux Kernel Mailing List

On Mon, 2005-03-14 at 14:11, Andrew Morton wrote:
> Badari Pulavarty <pbadari@us.ibm.com> wrote:
> >
> > On Thu, 2005-03-10 at 17:47, Andrew Morton wrote:
> > > Badari Pulavarty <pbadari@us.ibm.com> wrote:
> > > >
> > > > So, why is these slab cache are not getting purged/shrinked even
> > > > under memory pressure ? (I have seen lowmem as low as 6MB). What
> > > > can I do to keep the machine healthy ?
> > >
> > > Tried increasing /proc/sys/vm/vfs_cache_pressure? (That might not be in
> > > 2.6.8 though).
> >
> > Yep. This helped shrink the slabs, but we end up eating up lots of
> > the lowmem in Buffers. Is there a way to shrink buffers ?
>
> It would require some patchwork. Why is it a problem? That memory is
> reclaimable.

Well, machine pauses for 5-30 seconds for each vi,cscope, write() etc.
There is 7.5 GB of highmem free, but only 6MB of lowmem. Just trying
to free "lowmem" as much as possible.

> > $ cat /proc/meminfo
> > MemTotal: 16377076 kB
> > MemFree: 7495824 kB
> > Buffers: 1081708 kB
> > Cached: 4162492 kB
> > SwapCached: 0 kB
> > Active: 3660756 kB
> > Inactive: 4473476 kB
> > HighTotal: 14548952 kB
> > HighFree: 7489600 kB
> > LowTotal: 1828124 kB
> > LowFree: 6224 kB
>
> How'd you get 1.8gig of lowmem?

2:2 split

- Badari
* Re: [Ext2-devel] Re: inode cache, dentry cache, buffer heads usage
From: Andrew Morton @ 2005-03-14 22:41 UTC
To: Badari Pulavarty; +Cc: ext2-devel, linux-kernel

Badari Pulavarty <pbadari@us.ibm.com> wrote:
>
> On Mon, 2005-03-14 at 14:11, Andrew Morton wrote:
> > Badari Pulavarty <pbadari@us.ibm.com> wrote:
> > >
> > > On Thu, 2005-03-10 at 17:47, Andrew Morton wrote:
> > > > Badari Pulavarty <pbadari@us.ibm.com> wrote:
> > > > >
> > > > > So, why is these slab cache are not getting purged/shrinked even
> > > > > under memory pressure ? (I have seen lowmem as low as 6MB). What
> > > > > can I do to keep the machine healthy ?
> > > >
> > > > Tried increasing /proc/sys/vm/vfs_cache_pressure? (That might not be in
> > > > 2.6.8 though).
> > >
> > > Yep. This helped shrink the slabs, but we end up eating up lots of
> > > the lowmem in Buffers. Is there a way to shrink buffers ?
> >
> > It would require some patchwork. Why is it a problem? That memory is
> > reclaimable.
>
> Well, machine pauses for 5-30 seconds for each vi,cscope, write() etc.

Why?

> > How'd you get 1.8gig of lowmem?
>
> 2:2 split

Does a normal kernel exhibit the pauses?
* Re: [Ext2-devel] Re: inode cache, dentry cache, buffer heads usage
From: Badari Pulavarty @ 2005-03-15 16:17 UTC
To: Andrew Morton; +Cc: ext2-devel, Linux Kernel Mailing List

On Mon, 2005-03-14 at 14:41, Andrew Morton wrote:
> Badari Pulavarty <pbadari@us.ibm.com> wrote:
> >
> > On Mon, 2005-03-14 at 14:11, Andrew Morton wrote:
> > > Badari Pulavarty <pbadari@us.ibm.com> wrote:
> > > >
> > > > On Thu, 2005-03-10 at 17:47, Andrew Morton wrote:
> > > > > Badari Pulavarty <pbadari@us.ibm.com> wrote:
> > > > > >
> > > > > > So, why is these slab cache are not getting purged/shrinked even
> > > > > > under memory pressure ? (I have seen lowmem as low as 6MB). What
> > > > > > can I do to keep the machine healthy ?
> > > > >
> > > > > Tried increasing /proc/sys/vm/vfs_cache_pressure? (That might not be in
> > > > > 2.6.8 though).
> > > >
> > > > Yep. This helped shrink the slabs, but we end up eating up lots of
> > > > the lowmem in Buffers. Is there a way to shrink buffers ?
> > >
> > > It would require some patchwork. Why is it a problem? That memory is
> > > reclaimable.
> >
> > Well, machine pauses for 5-30 seconds for each vi,cscope, write() etc.
>
> Why?

Dunno. Trying to figure out what's happening here. Lowmem pressure
was the top on our list - but nothing to prove it - yet.

> > > How'd you get 1.8gig of lowmem?
> >
> > 2:2 split
>
> Does a normal kernel exhibit the pauses?

We haven't tried 3:1 split on this machine for a while. This machine
starts to slow down over the time. (It is up for last 70 days). We are
trying to collect all the info and also try everything possible to
understand issues - before we reboot.

Thanks,
Badari