From: "David S. Miller" <davem@davemloft.net>
To: Christoph Lameter <christoph@lameter.com>
Cc: netdev@oss.sgi.com
Subject: Re: atomic_dec_and_test for child dst needed in dst_destroy?
Date: Tue, 5 Apr 2005 13:12:59 -0700	[thread overview]
Message-ID: <20050405131259.4b00b8de.davem@davemloft.net> (raw)
In-Reply-To: <Pine.LNX.4.58.0504051252280.14264@server.graphe.net>

On Tue, 5 Apr 2005 12:58:15 -0700 (PDT)
Christoph Lameter <christoph@lameter.com> wrote:

> On Tue, 5 Apr 2005, David S. Miller wrote:
> 
> > > I fail to see the point of having a single instance of
> > > atomic_dec_and_test for __refcnt, particularly since the upper
> > > layers guarantee that dst_destroy is not called multiple times for
> > > the same dst entry.
> >
> > If this is true, what performance improvement could you possibly be
> > seeing from this change?
> 
> We could make refcnt into an array of pointers to node-specific memory.
> That avoids cache-line bouncing. However, you cannot atomically
> dec_and_test an array, and this is the only location where a
> dec_and_test is used on dst->__refcnt.
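
A minimal sketch of the layout being proposed, assuming purely
illustrative names (this is not the actual dst_entry code): one counter
per NUMA node keeps increments and decrements on node-local cache
lines, but deciding whether the last reference is gone now means
summing every node's counter, which cannot be done as a single
atomic_dec_and_test().

#include <asm/atomic.h>
#include <linux/cache.h>
#include <linux/numa.h>
#include <linux/nodemask.h>
#include <linux/topology.h>

struct node_ref {
        atomic_t count ____cacheline_aligned_in_smp;  /* one counter per node */
};

struct example_obj {
        /* array of pointers into node-local memory, as described above */
        struct node_ref *ref[MAX_NUMNODES];
};

/* Dropping a reference only dirties the local node's cache line. */
static inline void example_put(struct example_obj *obj)
{
        atomic_dec(&obj->ref[numa_node_id()]->count);
}

/*
 * But "is the object now unreferenced?" has to add up all the per-node
 * counters; unlike atomic_dec_and_test() on a single atomic_t, the
 * decrement and the test no longer happen as one atomic step.
 */
static inline int example_refcnt(struct example_obj *obj)
{
        int node, sum = 0;

        for_each_node(node)
                sum += atomic_read(&obj->ref[node]->count);
        return sum;
}

Note that the pointer array itself adds MAX_NUMNODES pointers to every
object, which is where the size concern below comes in.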

The dst object is already too large.  You have to show a serious
performance improvement to justify bloating it up further.

> We can put an explicit barrier in if that is the only reason for the
> atomic_dec_and_test.

Then we lose the memory-barrier optimizations that platforms fold into
their atomic_op assembly.
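
To make the contrast concrete, a rough fragment (illustrative only, not
the actual dst_destroy() code; example_free() and example_refcnt_is_zero()
are hypothetical helpers): the combined primitive lets each architecture
fold the required ordering into its atomic instruction sequence, while
the open-coded variant pays for a separate full barrier.

#include <net/dst.h>            /* struct dst_entry, __refcnt */
#include <asm/atomic.h>
#include <asm/system.h>         /* smp_mb() */

void example_free(struct dst_entry *dst);               /* hypothetical cleanup */
int example_refcnt_is_zero(struct dst_entry *dst);      /* hypothetical check */

static void drop_ref_combined(struct dst_entry *dst)
{
        /* Decrement, zero test and the implied ordering all come from
         * one atomic_dec_and_test(); the barrier is part of the atomic
         * op itself. */
        if (atomic_dec_and_test(&dst->__refcnt))
                example_free(dst);
}

static void drop_ref_split(struct dst_entry *dst)
{
        /* An open-coded decrement plus an explicit smp_mb() emits a
         * separate full barrier even where the atomic op alone would
         * already have provided the needed ordering. */
        atomic_dec(&dst->__refcnt);
        smp_mb();
        if (example_refcnt_is_zero(dst))
                example_free(dst);
}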

Let me first check whether your assertions about dst_destroy() usage are
correct; hold on for a bit.

Thread overview: 27+ messages
2005-04-05 18:55 atomic_dec_and_test for child dst needed in dst_destroy? Christoph Lameter
2005-04-05 19:34 ` David S. Miller
2005-04-05 19:47   ` Christoph Lameter
2005-04-05 19:50     ` David S. Miller
2005-04-05 19:58       ` Christoph Lameter
2005-04-05 20:12         ` David S. Miller [this message]
2005-04-05 21:45 ` Herbert Xu
2005-04-05 21:48   ` David S. Miller
2005-04-05 22:14   ` Christoph Lameter
2005-04-06  2:19     ` Herbert Xu
2005-04-06  3:19       ` Christoph Lameter
2005-04-06  8:32         ` Herbert Xu
2005-04-06 18:17           ` David S. Miller
2005-04-06 18:48             ` Christoph Lameter
2005-04-07 11:07             ` Herbert Xu
2005-04-07 16:00               ` Christoph Lameter
2005-04-07 21:25                 ` Herbert Xu
2005-04-07 22:30                   ` Christoph Lameter
2005-04-07 23:07                     ` Herbert Xu
2005-04-08  5:45                   ` Christoph Lameter
2005-04-08  5:48                     ` Herbert Xu
2005-04-08 15:05                       ` Christoph Lameter
2005-04-08 21:45                         ` Herbert Xu
2005-04-09 15:28                           ` Christoph Lameter
     [not found]   ` <b82a8917050406002339f732ca@mail.gmail.com>
2005-04-06  8:53     ` Fwd: " pravin b shelar
2005-04-07 11:23       ` Herbert Xu
2005-04-07 12:30         ` pravin b shelar
