From: Vara Prasad <prasadav@us.ibm.com>
To: "Tomasz Kłoczko" <kloczek@rudy.mif.pg.gda.pl>
Cc: linux-kernel@vger.kernel.org, Linus Torvalds <torvalds@osdl.org>,
akpm@osdl.org
Subject: Re: Merging relayfs?
Date: Wed, 13 Jul 2005 08:56:45 -0700
Message-ID: <42D539BD.9060109@us.ibm.com>
In-Reply-To: <Pine.BSO.4.62.0507131440480.6919@rudy.mif.pg.gda.pl>
Tomasz Kłoczko wrote:
> On Tue, 12 Jul 2005, Vara Prasad wrote:
> [..]
>
[..]
> If I may suggest an order for preparing these features:
>
> 1) prepare the base infrastructure for counters,
>
> this "tool" will take a very small amount of data and can be implemented
> in a very small amount of code. Even this alone would allow some *very*
> interesting experiments on the existing kernel code.
> And after the above:
>
> 2) prepare the base infrastructure for association tables of counters
> (for collecting data, for example, about I/O operations or other
> operations with two or more arguments),
> 3) prepare a user-space tool with some kind of language which will allow
> attaching probes to the above two (simple counters and association
> tables of counters),
> 4) base functions for measuring time (with and without the KProbes
> overhead) and storing it in counters and association tables.
>
> All of the above base "tools" will take a small or medium amount of
> data and can be implemented in small or medium amounts of code. And
> after the above:
>
> 5) prepare infrastructure for probes which store data in different
> containers depending on the initiating process and/or thread (and maybe
> at a later stage it would also be good to have something more general
> which depends on the stack path),
> 6) prepare base functions for tracing stack paths (counting them and
> storing them in association tables),
> 7) do some kind of study of where it would be worth computing something
> more complicated, like basic "speculative probes" (looking at how
> DTrace works, the answer here is probably "yes").
>
It looks like you have not looked at the systemtap project, although Tom
pointed you to it in his previous postings. The URL for systemtap is
http://sourceware.org/systemtap/; I strongly suggest you take a look at
that project. We are implementing most of what you are suggesting above
in the systemtap project. I don't agree that implementing the above
features is trivial and takes only a small amount of code; can you
submit patches showing the simple implementation you are talking about?
> None of this, up to this point, will require relayfs because the amount
> of transferred data will be _very low_.
I think you are forgetting that relayfs has two distinct parts: one is
the buffering scheme, the other is the data transfer mechanism. Some of
the features you are talking about need a buffering scheme.
> The details of the above will probably differ (I have only general
> knowledge of DTrace's implementation details and average experience
> with the dtrace tool), but I want to point out *only* the features
> which will not require using relayfs.
I beg to differ. As I mentioned in my earlier postings, DTrace has a
similar per-CPU buffering scheme according to their USENIX paper
(http://www.sun.com/bigadmin/content/dtrace/dtrace_usenix.pdf, section
3.3). If such buffering is unnecessary, can you explain why DTrace has it?
[...]
>
> But if you build all the infrastructure, even for simple counters, on a
> relayfs foundation, it will be (IMO) badly/incorrectly designed, and
> using even simple counters will introduce too high an overhead for the
> system.
Do you have any performance data to justify your claim of high overhead?
[...]
>
>
> regards
>
> kloczek
bye,
Vara Prasad