From: Martin Peschke <mp3@de.ibm.com>
To: "Frank Ch. Eigler" <fche@redhat.com>
Cc: Phillip Susi <psusi@cfl.rr.com>,
Jens Axboe <jens.axboe@oracle.com>, Andrew Morton <akpm@osdl.org>,
linux-kernel@vger.kernel.org
Subject: Re: [Patch 0/5] I/O statistics through request queues
Date: Thu, 26 Oct 2006 15:37:54 +0200 [thread overview]
Message-ID: <4540BA32.3020708@de.ibm.com> (raw)
In-Reply-To: <20061026121348.GB4978@redhat.com>
Frank Ch. Eigler wrote:
> Hi -
>
> On Thu, Oct 26, 2006 at 01:07:53PM +0200, Martin Peschke wrote:
>> [...]
>> I suppose the marker approach will be adopted if jumping from a
>> marker to code hooked up there can be made fast and secure enough
>> for prominent architectures.
>
> Agree, and I think we're not far. By "secure" you mean "robust"
> right?
Yes.
>> [...]
>> Dynamic instrumentation based on markers allows one to grow code,
>> but it doesn't allow one to grow data structures, AFAICS.
>>
>> Statistics might require temporary results to be stored per
>> entity.
>
> The data can be kept in data structures private to the instrumentation
> module. Instead of growing the base structure, you have a lookup
> table indexed by a key of the base structure. In the lookup table,
> you store whatever you would need: timestamps, whatnot.
lookup_table[key] = value, or
lookup_table[key]++
How does this scale?
It must be something other than a plain array, because key boundaries
aren't known when the lookup table is created, right?
And actual keys might be few and far between.
So you have got some sort of list, hash, or tree and do some
searching, don't you?
What if the heap of intermediate results grows to thousands of
entries or more?
>> The workaround would be to pass any intermediate result in the form
>> of a trace event up to user space and try to sort it out later -
>> which takes us back to the blktrace approach.
>
> In systemtap, it is routine to store such intermediate data in kernel
> space, and process it into aggregate statistics on demand, still in
> kernel space. User space need only see finished results. This part
> is not complicated.
Yes. I tried that out earlier this year.