netdev.vger.kernel.org archive mirror
From: Robert Olsson <Robert.Olsson@data.slu.se>
To: David Miller <davem@davemloft.net>
Cc: netdev@vger.kernel.org, dada1@cosmosbay.com,
	robert.olsson@its.uu.se, npiggin@suse.de
Subject: [RFC PATCH]: Dynamically sized routing cache hash table.
Date: Tue, 6 Mar 2007 14:26:04 +0100	[thread overview]
Message-ID: <17901.27628.548105.353342@robur.slu.se> (raw)
In-Reply-To: <20070305.202632.74752497.davem@davemloft.net>


David Miller writes:
 
 Interesting.
 
 > Actually, more accurately, the conflict exists in how this GC
 > logic is implemented.  The core issue is that hash table size
 > guides the GC processing, and hash table growth therefore
 > modifies those GC goals.  So with the patch below we'll just
 > keep growing the hash table instead of giving GC some time to
 > try to keep the working set in equilibrium before doing the
 > hash grow.
 
 AFAIK the equilibrium is a resizing function as well, but working against a 
 fixed hash table. So can we do without equilibrium resizing if the tables 
 are dynamic?  I think so....

 With the hash data structure we could monitor the average chain 
 length, or simply the total number of entries, and resize the hash 
 based on that.
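 Something along these lines, as a rough illustration only (the struct and
 threshold names are made up, this is not the actual route.c code):

#include <stdbool.h>

/* Hypothetical bookkeeping for the cache: entry count and table size. */
struct rt_hash_stats {
	unsigned long	entries;	/* current number of cached routes */
	unsigned int	buckets;	/* current hash table size */
};

#define GROW_AT		2	/* grow when avg chain length exceeds this */

/* Grow when the average chain length (entries / buckets) passes GROW_AT. */
static bool should_grow(const struct rt_hash_stats *st)
{
	return st->entries > (unsigned long)GROW_AT * st->buckets;
}

/* Shrink only when the table is clearly oversized, to avoid flapping
 * around a single threshold. */
static bool should_shrink(const struct rt_hash_stats *st)
{
	return st->buckets > 16 && st->entries < st->buckets / 8;
}

 The asymmetry between the grow and shrink checks is only there to keep
 the table from oscillating when the load hovers near the threshold.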

 > One idea is to put the hash grow check in the garbage collector,
 > and put the hash shrink check in rt_del().
 > 
 > In fact, it would be a good time to perhaps hack up some entirely
 > new passive GC logic for the routing cache.

 Could be. Remember the GC in the hash chain as well, which was added 
 later; although it doesn't decrease the number of entries, it does give 
 an upper limit. Also, the gc-goal must be picked so it does not force 
 unwanted resizing.
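 A rough sketch of what I mean (again hypothetical names and thresholds,
 not the actual code): derive the gc goal from the current table size so
 a GC pass neither leaves more entries than the grow threshold tolerates
 nor trims so hard that it triggers a shrink.

#define GC_GROW_AT	2	/* grow when entries > GC_GROW_AT * buckets      */
#define GC_SHRINK_DIV	8	/* shrink when entries < buckets / GC_SHRINK_DIV */

/* Target number of entries to keep after a GC run, as a function of the
 * current table size.  Aim somewhat below the grow threshold but stay
 * above the shrink threshold, so GC itself never forces a resize. */
static unsigned long gc_goal(unsigned long buckets)
{
	unsigned long upper = GC_GROW_AT * buckets;		/* grow threshold   */
	unsigned long lower = buckets / GC_SHRINK_DIV + 1;	/* shrink threshold */
	unsigned long goal  = upper - upper / 4;		/* ~75% of upper    */

	return goal > lower ? goal : lower;
}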

 > BTW, another thing that plays into this is that Robert's TRASH work
 > could make this patch not necessary :-)

 It has "built-in" resize and chain control, and the gc-goal is chosen 
 so as not to unnecessarily resize the root node.

 Cheers.
					--ro


Thread overview: 21+ messages
2007-03-06  4:26 [RFC PATCH]: Dynamically sized routing cache hash table David Miller
2007-03-06  7:14 ` Eric Dumazet
2007-03-06  7:23   ` David Miller
2007-03-06  7:58     ` Eric Dumazet
2007-03-06  9:05       ` David Miller
2007-03-06 10:33         ` [PATCH] NET : Optimizes inet_getpeer() Eric Dumazet
2007-03-07  4:23           ` David Miller
2007-03-06 13:42   ` [RFC PATCH]: Dynamically sized routing cache hash table Robert Olsson
2007-03-06 14:18     ` Eric Dumazet
2007-03-06 17:05       ` Robert Olsson
2007-03-06 17:20         ` Eric Dumazet
2007-03-06 18:55           ` Robert Olsson
2007-03-06  9:11 ` Nick Piggin
2007-03-06  9:17   ` David Miller
2007-03-06  9:22     ` Nick Piggin
2007-03-06  9:23   ` Eric Dumazet
2007-03-06  9:41     ` Nick Piggin
2007-03-06 13:26 ` Robert Olsson [this message]
2007-03-06 22:20   ` David Miller
2007-03-08  6:26     ` Nick Piggin
2007-03-08 13:35     ` Robert Olsson
