From: Dipankar Sarma <dipankar@sequent.com>
To: Mark Hahn <hahn@coffee.psychology.mcmaster.ca>
Cc: lkml <linux-kernel@vger.kernel.org>
Subject: Re: [RFC][PATCH] Scalable FD Management using Read-Copy-Update
Date: Tue, 17 Apr 2001 14:58:09 +0530
Message-ID: <20010417145809.A21310@in.ibm.com>
In-Reply-To: <Pine.LNX.4.10.10104161140190.7022-100000@coffee.psychology.mcmaster.ca>
Hi Mark,
[I am not sure whether my earlier mail from lycos went out;
if it did, I apologize.]
On Mon, Apr 16, 2001 at 12:16:25PM -0400, Mark Hahn wrote:
> > The improvement in performance while running the "chat" benchmark
> > (from http://lbs.sourceforge.net/) is about 30% in average throughput.
>
> isn't this a solution in search of a problem?
> does it make sense to redesign parts of the kernel for the sole
> purpose of making a completely unrealistic benchmark run faster?
Irrespective of the usefulness of the "chat" benchmark, there does
seem to be a scalability problem as long as CLONE_FILES is
supported. John Hawkes (SGI) posted some nasty numbers from a
32-CPU MIPS machine on the lse-tech list some time ago.
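To make the sharing concrete, here is a hypothetical user-space sketch
(my own illustration, not part of the patch): pthreads are created by the
kernel with CLONE_FILES, so all threads in a process share one fd table.
An fd opened by one thread is immediately valid in another, which is why
every open()/close() from any thread serializes on the same files_struct
lock.

```c
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

static int shared_fd = -1;

/* Child thread: allocate a slot in the (shared) fd table. */
static void *opener(void *arg)
{
    (void)arg;
    shared_fd = open("/dev/null", O_WRONLY);
    return NULL;
}

/* Returns 0 if the fd opened by the child thread is usable in the
 * caller, i.e. both threads really do share one fd table. */
static int demo_shared_fd(void)
{
    pthread_t t;

    if (pthread_create(&t, NULL, opener, NULL) != 0)
        return -1;
    pthread_join(t, NULL);

    if (shared_fd < 0)
        return -1;
    /* Use the other thread's fd directly -- no dup, no passing. */
    if (write(shared_fd, "x", 1) != 1)
        return -1;
    close(shared_fd);
    return 0;
}
```

(Compile with -pthread. With many CPUs doing this concurrently, the
write traffic on the fd-table lock's cache line is what shows up in the
numbers above.)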
>
> (the chat "benchmark" is a simple pingpong load-generator; it is
> not in the same category as, say, specweb, since it does not do *any*
> realistic (nonlocal) IO. the numbers "chat" returns are interesting,
> but not indicative of any problem; perhaps even less than lmbench
> components.)
"chat" results for large numbers of CPUs is indicative of a problem -
if a large number of threads share the file_struct through
CLONE_FILES, the performance of the application will deteriorate
beyond 8 CPUs (going by John's numbers). It also indicates how
sensitive can performance be to write access of shared-memory
locations like spin-waiting locks.
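For readers unfamiliar with the technique in the subject line, a minimal
illustrative sketch of the read-copy-update idea (my own simplification,
not the patch's kernel code): readers load a published pointer with no
lock at all, while a writer copies the structure, updates the copy, and
publishes it with one atomic pointer store. Real RCU defers freeing the
old copy until a grace period guarantees no reader still references it;
this single-threaded demo frees it immediately for brevity.

```c
#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the kernel's fd-table metadata (illustrative only). */
struct fdtable { int max_fds; };

static _Atomic(struct fdtable *) current_table;

static void init_table(int max)
{
    struct fdtable *t = malloc(sizeof(*t));
    t->max_fds = max;
    atomic_store(&current_table, t);
}

/* Reader side: no lock, no shared-memory write -- just an atomic
 * load of the published pointer. This is what removes the cache-line
 * ping-pong that spin-waiting locks cause. */
static int read_max_fds(void)
{
    struct fdtable *t =
        atomic_load_explicit(&current_table, memory_order_acquire);
    return t->max_fds;
}

/* Writer side: copy, update the copy, publish atomically, then
 * reclaim the old version. */
static void grow_table(int new_max)
{
    struct fdtable *old = atomic_load(&current_table);
    struct fdtable *new = malloc(sizeof(*new));

    memcpy(new, old, sizeof(*new));
    new->max_fds = new_max;
    atomic_store_explicit(&current_table, new, memory_order_release);
    free(old); /* real RCU would wait for a grace period first */
}
```

The point of the design is that the common path (readers) never writes a
shared location, so it scales with CPU count; only the rare resize pays
the copy cost.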
Thanks
Dipankar
--
Dipankar Sarma (dipankar@sequent.com)
IBM Linux Technology Center
IBM Software Lab, Bangalore, India.
Project Page: http://lse.sourceforge.net
Thread overview: 10+ messages
2001-04-09 14:43 [RFC][PATCH] Scalable FD Management using Read-Copy-Update Maneesh Soni
2001-04-12 1:29 ` Anton Blanchard
2001-04-12 15:43 ` [Lse-tech] " Maneesh Soni
2001-04-12 15:51 ` Anton Blanchard
2001-04-17 10:49 ` Scalable FD Management Maneesh Soni
2001-04-17 14:36 ` Andi Kleen
2001-04-12 18:23 ` [Lse-tech] Re: [RFC][PATCH] Scalable FD Management using Read-Copy-Update John Hawkes
2001-04-16 16:16 ` Mark Hahn
2001-04-17 9:28 ` Dipankar Sarma [this message]
2001-04-17 16:59 ` Mark Hahn