From: Shailabh Nagar <nagar@watson.ibm.com>
To: Jay Lan <jlan@sgi.com>
Cc: Jay Lan <jlan@engr.sgi.com>, Andrew Morton <akpm@osdl.org>,
balbir@in.ibm.com, csturtiv@sgi.com,
linux-kernel@vger.kernel.org
Subject: Re: [Patch][RFC] Disabling per-tgid stats on task exit in taskstats
Date: Mon, 12 Jun 2006 17:57:51 -0400
Message-ID: <448DE35F.1060103@watson.ibm.com>
In-Reply-To: <448DB309.70508@sgi.com>
Jay Lan wrote:
> Shailabh Nagar wrote:
>
>> Jay Lan wrote:
>>
>>> Andrew Morton wrote:
>>>
>>>
>>>
>>
>>>> But the overhead at present is awfully low. If we don't need this
>>>> ability
>>>> at present (and I don't think we do) then a paper design would be
>>>> sufficient at this time. As long as we know we can do this in the
>>>> future
>>>> without breaking existing APIs then OK.
>>>>
>>>>
>>>
>>>
>>> I can see that if an exiting process is the only process in the thread
>>> group, the (not is_thread_group) condition would be true. So that
>>> leaves multi-threaded applications that are not interested in tgid data
>>> still receiving 2x taskstats data.
>>>
>>>
>> Jay,
>>
>> Why is the 2x taskstats data for the multithreaded app a real problem ?
>> When different clients agree to use a common taskstats structure, they
>> also incur the potential overhead of receiving extra data they don't
>> really care about (in CSA's case, that would be all the delay accounting
>> fields of struct taskstats). Isn't that, in some sense, the "price" of
>> sharing a structure or delivery mechanism ?
>
>
> You are mixing the two types of overhead: 1) overhead due to tgid,
> 2) overhead due to extra fields of struct taskstats they don't care
> about.
You're right, I am mixing the two, but only to make the point that clients
have to deal with extra data they don't care about anyway. As long as the
performance overhead of that isn't significant, it's not an issue.
Also, unlike the shared taskstats structure, discarding the excess per-tgid
data is even easier because it comes in its own netlink attribute.
>
> The type 2 overhead for CSA is very small, but is bigger for you. In our
> discussion earlier, I told you (and you accepted) that I would insert
> 128 bytes of data into the taskstats struct. I have not finalized the CSA
> work yet, but it could be 168 additional bytes or close to that number:
>
> /* Common Accounting Fields start */
> u32  ac_uid;                  /* User ID */
> u32  ac_gid;                  /* Group ID */
> u32  ac_pid;                  /* Process ID */
> u32  ac_ppid;                 /* Parent process ID */
> struct timespec start_time;   /* Start time */
> struct timespec exit_time;    /* Exit time */
> u64  ac_utime;                /* User CPU time [usec] */
> u64  ac_stime;                /* System CPU time [usec] */
> /* Common Accounting Fields end */
>
> /* CSA accounting fields start */
> u64  ac_sbu;                  /* System billing units */
> u16  csa_revision;            /* CSA revision */
> u8   csa_type;                /* Record type */
> u8   csa_flag;                /* Record flags */
> u8   ac_stat;                 /* Exit status */
> u8   ac_nice;                 /* Nice value */
> u8   ac_sched;                /* Scheduling discipline */
> u8   pad0;                    /* Unused */
> u64  acct_rss_mem1;           /* Accumulated RSS usage */
> u64  acct_vm_mem1;            /* Accumulated virtual memory usage */
> u64  hiwater_rss;             /* High-watermark of RSS usage */
> u64  hiwater_vm;              /* High-watermark of virtual memory usage */
> u64  ac_minflt;               /* Minor page faults */
> u64  ac_majflt;               /* Major page faults */
> u64  ac_chr;                  /* Bytes read */
> u64  ac_chw;                  /* Bytes written */
> u64  ac_scr;                  /* Read syscalls */
> u64  ac_scw;                  /* Write syscalls */
> u64  ac_jid;                  /* Job ID */
> /* CSA accounting fields end */
>
> This is type 2 overhead. The bigger the type 2 overhead, the bigger the
> impact of sending the tgid data.
Fair enough. So let's see what the excess 168 bytes does in terms of
performance and make a determination based on that?
>>
>> Of course, if this overhead becomes too much, we need to find
>> alternatives. But, as already shown, even in the extreme case where the
>> app does nothing but fork/exit, there is very little performance impact.
>> So I don't see how, in the common case of multithreaded apps, where
>> exits are going to happen at a far lower rate, the extra per-tgid data
>> is a real issue.
>
>
> Yes, the application handles "real" work between fork and exit. But
> each task within a thread group still triggers do_exit on termination,
> right?
Yes, but I don't see the point. If exits happen at a very slow rate, then
the performance impact will drop compared to when they happen at the
insane rate of the toy program. So the rate of exit is a factor... or did
I not get your point?
>
>>
>> So, are we trying to solve a real problem ?
>
>
> I do not know, but I am concerned. I will run some testing with the
> taskstats struct above and get some data.
Sounds good. Please share asap so that 2.6.18 acceptance isn't held up.
Regards,
Shailabh