public inbox for linux-mm@kvack.org
From: Zlatko Calusic <Zlatko.Calusic@CARNet.hr>
To: "Stephen C. Tweedie" <sct@redhat.com>
Cc: "Eric W. Biederman" <ebiederm+eric@npwt.net>, linux-mm@kvack.org
Subject: Re: More info: 2.1.108 page cache performance on low memory
Date: 23 Jul 1998 12:06:05 +0200	[thread overview]
Message-ID: <87iukovq42.fsf@atlas.CARNet.hr> (raw)
In-Reply-To: "Stephen C. Tweedie"'s message of "Wed, 22 Jul 1998 11:40:48 +0100"

"Stephen C. Tweedie" <sct@redhat.com> writes:

> Hi,
> 
> On 20 Jul 1998 11:15:12 +0200, Zlatko Calusic <Zlatko.Calusic@CARNet.hr>
> said:
> 
> > I don't know if its easy, but we probably should get rid of buffer
> > cache completely, at one point in time. It's hard to balance things
> > between two caches, not to mention other memory objects in kernel.
> 
> No, we need the buffer cache for all sorts of things.  You'd have to
> reinvent it if you got rid of it, since it is the main mechanism by
> which we can reliably label IO for the block device driver layer, and we
> also cache non-page-aligned filesystem metadata there.

Yes, I'm aware of the many problems that would need to be resolved in
order to get rid of the buffer cache (probably just to reinvent it, as
you said :)). But then again, if I understand you correctly, will we
always have the buffer cache as it is implemented now?!

Non-page-aligned filesystem metadata really does look like a hard
problem to solve without the buffer cache mechanism, that's out of the
question. But is there any possibility that we will introduce some
logic, a somewhat improved page cache with buffer-head functionality
(or similar), that would allow us to use the page cache in a similar
way to how we use the buffer cache now?

Even though I haven't investigated it that much, I still see Eric's
work on adding dirty-page functionality as a step in this direction.

Disclaimer: I really don't see myself as any kind of expert in this
area. But that's one more motivation for me to try to understand the
things I don't currently have under control. :)

I've been browsing the Linux source actively for the last 12 months,
as time permitted. The MM area is by far of the biggest interest to
me. But I'm still learning.

> 
> > Then again, I have made some changes that make my system very stable
> > wrt memory fragmentation:
> 
> > #define SLAB_MIN_OBJS_PER_SLAB  1
> > #define SLAB_BREAK_GFP_ORDER    1
> 
> The SLAB_BREAK_GFP_ORDER one is the important one on low memory
> configurations.  I need to use this setting to get 2.1.110 to work at
> all with NFS in low memory.
> 
> > I discussed this privately with slab maintainer Mark Hemment, where
> > he pointed out that with this setting slab is probably not as
> > efficient as it could be. Also, slack is bigger, obviously.
> 
> Correct, but then the main user of these larger packets is networking,
> where the memory is typically short lived anyway.

Two days ago, I rebooted an unpatched 2.1.110 with mem=32m, only to
find it dead today:

I left at around 22:00 on Jul 21.

Jul 21 22:16:43 atlas kernel: eth0: media is 100Mb/s full duplex. 
Jul 21 22:34:31 atlas kernel: eth0: Insufficient memory; nuking packet. 
Jul 21 22:34:44 atlas last message repeated 174 times
Jul 22 16:03:40 atlas kernel: eth0: media is TP full duplex. 
Jul 22 16:03:43 atlas kernel: eth0: media is unconnected, link down or incompatible connection. 
...

Being used to patching every kernel I download, I had forgotten how
unstable official kernels are. And that's not good. :(

Machine's only task, when I'm not logged in, is to transfer mail
(fetchmail + sendmail).

> 
> > But system is much more stable, and it is now very *very* hard to get
> > that annoying "Couldn't get a free page..." message than before (with
> > default setup), when it was as easy as clicking a button in the
> > Netscape.
> 
> I can still reproduce it if I let the inode cache grow too large: it
> behaves really badly and seems to lock up rather a lot of memory.  Still
> chasing this one; it's a killer right now.
> 

My observations on low-memory machines have led me to the conclusion
that inode memory grows monotonically until it takes up circa 1.5 MB
of unswappable memory. That is around half of the usable memory on a
5 MB machine. You seconded that in a private mail you sent me in
January.

Is there any possibility that we could use the slab allocator for
inode allocation/deallocation?

Regards,
-- 
Posted by Zlatko Calusic           E-mail: <Zlatko.Calusic@CARNet.hr>
---------------------------------------------------------------------
		  So much time, and so little to do.
--
This is a majordomo managed list.  To unsubscribe, send a message with
the body 'unsubscribe linux-mm me@address' to: majordomo@kvack.org


Thread overview: 46+ messages
1998-07-13 16:53 More info: 2.1.108 page cache performance on low memory Stephen C. Tweedie
1998-07-13 18:08 ` Eric W. Biederman
1998-07-13 18:29   ` Zlatko Calusic
1998-07-14 17:32     ` Stephen C. Tweedie
1998-07-16 12:31       ` Zlatko Calusic
1998-07-14 17:30   ` Stephen C. Tweedie
1998-07-18  1:10     ` Eric W. Biederman
1998-07-18 13:28       ` Zlatko Calusic
1998-07-18 16:40         ` Eric W. Biederman
1998-07-20  9:15           ` Zlatko Calusic
1998-07-22 10:40             ` Stephen C. Tweedie
1998-07-23 10:06               ` Zlatko Calusic [this message]
1998-07-23 12:22                 ` Stephen C. Tweedie
1998-07-23 14:07                   ` Zlatko Calusic
1998-07-23 17:18                     ` Stephen C. Tweedie
1998-07-23 19:33                       ` Zlatko Calusic
1998-07-27 10:57                         ` Stephen C. Tweedie
1998-07-26 14:49                 ` Eric W Biederman
1998-07-27 11:02                   ` Stephen C. Tweedie
1998-08-02  5:19                     ` Eric W Biederman
1998-08-17 13:57                       ` Stephen C. Tweedie
1998-08-17 15:35                       ` Stephen C. Tweedie
1998-08-20 12:40                         ` Eric W. Biederman
1998-07-20 15:58           ` Stephen C. Tweedie
1998-07-22 10:36           ` Stephen C. Tweedie
1998-07-22 18:01             ` Rik van Riel
1998-07-23 10:59               ` Stephen C. Tweedie
1998-07-22 10:33         ` Stephen C. Tweedie
1998-07-23 10:59           ` Zlatko Calusic
1998-07-23 12:23             ` Stephen C. Tweedie
1998-07-23 15:06               ` Zlatko Calusic
1998-07-23 15:17                 ` Benjamin C.R. LaHaise
1998-07-23 15:25                   ` Zlatko Calusic
1998-07-23 17:27                     ` Benjamin C.R. LaHaise
1998-07-23 19:17                       ` Dr. Werner Fink
1998-07-23 17:12             ` Stephen C. Tweedie
1998-07-23 17:42               ` Zlatko Calusic
1998-07-23 19:12             ` Dr. Werner Fink
1998-07-27 10:40               ` Stephen C. Tweedie
1998-07-23 19:51             ` Rik van Riel
1998-07-24 11:21               ` Zlatko Calusic
1998-07-24 14:25                 ` Rik van Riel
1998-07-24 17:01                   ` Zlatko Calusic
1998-07-24 21:55                     ` Rik van Riel
1998-07-25 13:05                       ` Zlatko Calusic
1998-07-27 10:54                       ` Stephen C. Tweedie
