From: Ankita Garg <ankita@in.ibm.com>
To: Madhava K R <madhavakr@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
bharathravi1@gmail.com, mingo@elte.hu,
linux-kernel@vger.kernel.org, dhaval@linux.vnet.ibm.com,
vatsa@in.ibm.com, balbir@in.ibm.com
Subject: Re: [PATCH] Delay accounting, fix incorrect delay time when constantly waiting on runqueue
Date: Mon, 16 Jun 2008 18:12:22 +0530 [thread overview]
Message-ID: <20080616124222.GC7434@in.ibm.com> (raw)
In-Reply-To: <f8c1374c0806160500p722dee55vf6f2084ecadf670c@mail.gmail.com>
Hi,
On Mon, Jun 16, 2008 at 05:30:26PM +0530, Madhava K R wrote:
> Hello,
>
> On Mon, Jun 16, 2008 at 3:25 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Mon, 2008-06-16 at 15:11 +0530, bharathravi1@gmail.com wrote:
> >> From: Bharath Ravi <bharathravi1@gmail.com>
> >>
> >> This patch corrects the incorrect value of per-process run-queue wait time
> >> reported by delay statistics. The anomaly was due to the following reason.
> >> When a process leaves the CPU and immediately starts waiting for the CPU on
> >> the runqueue (which means it remains in the TASK_RUNNING state), the time
> >> of re-entry into the run-queue is never recorded. Because of this, the waiting
> >> time on the runqueue from this point of re-entry up to the next time it gets
> >> the CPU is not accounted for. This is solved by recording the time of re-entry
> >> of a process leaving the CPU in the sched_info_depart() function IF the
> >> process will go back to waiting on the run-queue. This IF condition is
> >> verified by checking whether the process is still in the TASK_RUNNING state.
> >>
> >> The patch was tested on 2.6.26-rc6 using two simple CPU hog programs. The
> >> values noted prior to the fix did not account for the time spent on the
> >> runqueue waiting. After the fix, the correct values were reported back
> >> to user space.
> >
> >
> > Have you considered: http://lkml.org/lkml/2008/6/5/10
> >
> > I'm sad to say it is still pending in my todo list :-( - sorry Ankita.
> >
>
> It seems that Ankita's patch addresses the scenario where a process is
> already on the run-queue and is migrated between CPUs. This patch
> addresses the scenario where a process is preempted and returns to
> the run-queue, where the last_queued value is not recorded.
>
Right, so it looks like the two patches are addressing different scenarios.
The patch I had sent earlier was addressing the case where the accounting
of the task had already begun and then it was migrated to another
runqueue, leading to skews. This patch addresses the issue that the task
delay accounting could potentially miss some delay stats, as explained by
Madhava.
> I tried our test case with Ankita's patch, and the problem remains.
> Our test case involves running two tight loops on an idle CPU.
> Ideally, both should experience a run time of 50% and a delay time of
> 50%. But the results show negligible delay time for both processes.
>
> The problems appear mutually exclusive...
>
> >> Signed-off-by: Bharath Ravi <bharathravi1@gmail.com>
> >> Signed-off-by: Madhava K R <madhavakr@gmail.com>
> >> ---
> >> kernel/sched_stats.h | 6 ++++++
> >> 1 files changed, 6 insertions(+), 0 deletions(-)
> >>
> >> diff --git a/kernel/sched_stats.h b/kernel/sched_stats.h
> >> index a38878e..80179ef 100644
> >> --- a/kernel/sched_stats.h
> >> +++ b/kernel/sched_stats.h
> >> @@ -198,6 +198,9 @@ static inline void sched_info_queued(struct task_struct *t)
> >> /*
> >> * Called when a process ceases being the active-running process, either
> >> * voluntarily or involuntarily. Now we can calculate how long we ran.
> >> + * Also, if the process is still in the TASK_RUNNING state, call
> >> + * sched_info_queued() to mark that it has now again started waiting on
> >> + * the runqueue.
> >> */
> >> static inline void sched_info_depart(struct task_struct *t)
> >> {
> >> @@ -206,6 +209,9 @@ static inline void sched_info_depart(struct task_struct *t)
> >>
> >> t->sched_info.cpu_time += delta;
> >> rq_sched_info_depart(task_rq(t), delta);
> >> +
> >> + if (t->state == TASK_RUNNING)
> >> + sched_info_queued(t);
> >> }
> >>
> >> /*
> >
> >
--
Regards,
Ankita Garg (ankita@in.ibm.com)
Linux Technology Center
IBM India Systems & Technology Labs,
Bangalore, India