public inbox for linux-kernel@vger.kernel.org
From: Stephan von Krawczynski <skraw@ithnet.com>
To: Linus Torvalds <torvalds@transmeta.com>
Cc: Lorenzo Allegrucci <lenstra@tiscalinet.it>,
	<linux-kernel@vger.kernel.org>, Andrea Arcangeli <andrea@suse.de>
Subject: Re: new OOM heuristic failure  (was: Re: VM: qsbench)
Date: Fri, 02 Nov 2001 03:17:17 +0100	[thread overview]
Message-ID: <200111020217.DAA30459@webserver.ithnet.com> (raw)
In-Reply-To: <Pine.LNX.4.33.0111011634340.12377-100000@penguin.transmeta.com>

> On Fri, 2 Nov 2001, Stephan von Krawczynski wrote:
> >
> > To clarify this one a bit:
> > shrink_cache is thought to do what it says, it is given a number of
> > pages it should somehow manage to free by shrinking the cache. What
> > my patch does is go after the _whole_ list to fulfill that.
>
> I would suggest a slight modification: make "max_mapped" grow as the
> priority goes up.
>
> Right now max_mapped is fixed at "nr_pages*10".
>
> You could have something like
>
> 	max_mapped = nr_pages * 60 / priority;
>
> instead, which might also alleviate the problem with not even
> bothering to scan much of the inactive list simply because 99% of all
> pages are mapped.
>
> That way you don't waste time on looking at the rest of the inactive
> list until you _need_ to.

Wait a minute: there is something illogical in this approach.
By making max_mapped bigger you are basically saying that the "early
exit" from shrink_cache shouldn't be that early. But if you _know_ that
nearly all pages are mapped, then why not go to swap_out right away
without even walking the list? In the end you will reach swap_out
anyway (simply because of the high percentage of mapped pages), which
makes the scanning superfluous. Making it priority-dependent sounds
like you want to swap_out earlier the _lower_ the memory pressure is.
In the end it sounds like a hack to hold up the early exit against all
logic (but not against some benchmark, of course).
It doesn't sound like the right thing.
Is the inactive list somehow sorted currently? If not, could it be
implicitly sorted to match this criterion (not mapped versus mapped),
so that shrink_cache finds the not-mapped pages first (with a chance to
fulfill the nr_pages request)? If the request isn't fulfilled by the
time it hits the first mapped page, it can go to swap_out right away,
because more scanning doesn't make sense and can only end in swap_out
anyway.
                                                                      
I am no fan of complete list scanning, but if you are looking for     
something you have to scan until you find it.                         
                                                                      
Regards,                                                              
Stephan                                                               
                                                                      
PS: I am still no pro in this area, so I try to go after the global   
picture and find the right direction...                               
                                                                      
                                                                      
                                                                      

Thread overview: 20+ messages
     [not found] <200111012108.WAA28044@webserver.ithnet.com>
     [not found] ` <3.0.6.32.20011101214957.01feaa70@pop.tiscalinet.it>
2001-11-01 21:59   ` new OOM heuristic failure (was: Re: VM: qsbench) Lorenzo Allegrucci
2001-11-01 23:35     ` Stephan von Krawczynski
2001-11-02  0:37       ` Linus Torvalds
2001-11-02  2:17         ` Stephan von Krawczynski [this message]
2001-11-02  2:21           ` Linus Torvalds
2001-11-02  2:30         ` Stephan von Krawczynski
2001-11-02  2:55           ` Stephan von Krawczynski
2001-11-02  2:37 Ed Tomlinson
2001-11-02  3:01 ` Stephan von Krawczynski
     [not found] <Pine.LNX.3.96.1011031133645.448B-100000@gollum.norang.ca>
2001-10-31 19:46 ` Linus Torvalds
  -- strict thread matches above, loose matches on Subject: below --
2001-10-31 12:12 VM: qsbench Lorenzo Allegrucci
2001-10-31 15:00 ` new OOM heuristic failure (was: Re: VM: qsbench) Rik van Riel
2001-10-31 15:52   ` Linus Torvalds
2001-10-31 16:04     ` Rik van Riel
2001-10-31 17:42       ` Stephan von Krawczynski
2001-10-31 18:22         ` Linus Torvalds
2001-10-31 17:55   ` Lorenzo Allegrucci
2001-10-31 18:06     ` Linus Torvalds
2001-10-31 21:31     ` Lorenzo Allegrucci
2001-11-02 13:00     ` Stephan von Krawczynski
2001-11-02 17:36     ` Lorenzo Allegrucci
