public inbox for linux-kernel@vger.kernel.org
* fair clock use in CFS
@ 2007-05-14  8:33 Srivatsa Vaddagiri
  2007-05-14 10:29 ` William Lee Irwin III
  2007-05-14 11:10 ` Ingo Molnar
  0 siblings, 2 replies; 21+ messages in thread
From: Srivatsa Vaddagiri @ 2007-05-14  8:33 UTC (permalink / raw)
  To: Ingo Molnar, efault; +Cc: tingy, wli, linux-kernel

Hi,
	I have been brooding over how the fair clock is computed and used
in CFS, and thought I would ask the experts rather than make wrong guesses!

As I understand it, fair_clock is a monotonically increasing clock which
advances at a pace inversely proportional to the load on the runqueue.
If load = 1 (task), it advances at the same pace as the wall clock; as
load increases, it advances more slowly than the wall clock.

In addition, the following calculations depend on the fair clock: a task's
wait time on the runqueue and its sleep time outside the runqueue (both
reflected in p->wait_run_time).
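
The relationship described above can be sketched as follows. This is an
illustrative simplification, not the kernel's actual implementation; the
function name and the simple integer division are assumptions made for
the example:

```c
#include <assert.h>

/* Illustrative sketch (not kernel code): the fair clock advances at a
 * rate inversely proportional to runqueue load.  With load == 1 it
 * tracks the wall clock exactly; with load == N it runs N times
 * slower. */
unsigned long fair_clock_delta(unsigned long wall_delta,
			       unsigned long load)
{
	/* load counted in unit-weight runnable tasks */
	return wall_delta / load;
}
```

So a 1000-unit wall-clock interval advances the fair clock by 1000 units
when one task is runnable, but only by 250 units when four are.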

A few questions that come up:

1. Why can't the fair clock be the same as the wall clock at all times? i.e.
   the fair clock progresses at the same pace as the wall clock, independent
   of the load on the runqueue.

   It would still give the ability to measure time spent waiting on the
   runqueue or sleeping, and to use that measured time to give
   latency/bandwidth credit.

   In the case of EEVDF, the use of a virtual clock seems more
   understandable, considering that each client gets 'wi' real
   time units in 1 virtual time unit. That doesn't seem to be the case in
   CFS, as Ting Yang explained with +/- lags here:
   http://lkml.org/lkml/2007/5/2/612


2. Preemption granularity - sysctl_sched_granularity

	This seems to be measured on the fair-clock scale rather than
	the wall-clock scale. As a consequence, the time taken for a
	task to relinquish the CPU to its competition depends on the
	number N of tasks? For example: if there are a million
	cpu-hungry tasks, then the time taken to switch between two
	tasks is longer than in the case where just two cpu-hungry
	tasks are running. Is there any advantage to using the
	fair-clock scale to detect preemption points?
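
The concern raised above can be made concrete with a sketch: if the
granularity is expressed on the fair-clock scale, and the fair clock runs
N times slower than the wall clock under load N, then the wall-clock time
between preemptions grows linearly with N. The helper below is
hypothetical, written only to illustrate that scaling:

```c
#include <assert.h>

/* Illustrative sketch (not kernel code): wall-clock time between
 * preemptions when the granularity is measured on the fair-clock
 * scale and the fair clock runs nr_running times slower than the
 * wall clock. */
unsigned long wall_preempt_interval(unsigned long fair_granularity,
				    unsigned long nr_running)
{
	return fair_granularity * nr_running;
}
```

With a granularity of 2000 units, two runnable tasks preempt each other
every 4000 wall-clock units, while a million tasks would stretch that to
2000000000 units, which is the behavior the question is asking about.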


-- 
Regards,
vatsa

* Re: fair clock use in CFS
@ 2007-05-14 15:02 Al Boldi
  0 siblings, 0 replies; 21+ messages in thread
From: Al Boldi @ 2007-05-14 15:02 UTC (permalink / raw)
  To: linux-kernel

Ingo Molnar wrote:
> the current task is recalculated at scheduler tick time and put into the
> tree at its new position. At a million tasks the fair-clock will advance
> little (or not at all - which at these load levels is our smallest
> problem anyway) so during a scheduling tick in kernel/sched_fair.c
> update_curr() we will have a 'delta_mine' and 'delta_fair' of near zero
> and a 'delta_exec' of ~1 million, so curr->wait_runtime will be
> decreased at 'full speed': delta_exec-delta_mine, by almost a full tick.
> So preemption will occur every sched_granularity (rounded up to the next
> tick) points in time, in wall-clock time.
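
The accounting Ingo describes can be sketched as follows. This is an
illustrative simplification, not the actual sched_fair.c code: the struct,
function names, and the weight-based split of delta_exec into delta_mine
are assumptions made for the example:

```c
#include <assert.h>

/* Sketch of the accounting described above (simplified, not
 * sched_fair.c): at each tick the current task's wait_runtime is
 * charged the wall time it ran (delta_exec) minus its fair share of
 * that time (delta_mine).  When the runqueue weight dwarfs the task's
 * own weight, delta_mine rounds to ~0 and wait_runtime is decreased
 * at 'full speed'. */
struct task_sketch {
	long wait_runtime;
	unsigned long load_weight;	/* this task's weight */
};

void update_curr_sketch(struct task_sketch *curr,
			unsigned long delta_exec,
			unsigned long rq_weight)
{
	unsigned long delta_mine =
		delta_exec * curr->load_weight / rq_weight;

	curr->wait_runtime -= (long)(delta_exec - delta_mine);
}
```

With a million unit-weight tasks, delta_mine is effectively zero and the
running task loses a full tick's worth of wait_runtime each tick; with the
runqueue to itself, delta_mine equals delta_exec and wait_runtime is
unchanged.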

The only problem I have with this fairness is the server workload that 
services requests by fork/thread creation.  In such a case, this fairness is 
completely counter-productive, as running tasks unfairly inhibit the 
creation of peers.

Giving fork/thread creation special priority may alleviate this problem.


Thanks!

--
Al



end of thread, other threads:[~2007-05-15  3:00 UTC | newest]

Thread overview: 21+ messages
2007-05-14  8:33 fair clock use in CFS Srivatsa Vaddagiri
2007-05-14 10:29 ` William Lee Irwin III
2007-05-14 10:31   ` Ingo Molnar
2007-05-14 11:05     ` William Lee Irwin III
2007-05-14 11:22       ` Srivatsa Vaddagiri
2007-05-14 11:20         ` William Lee Irwin III
2007-05-14 12:04           ` Ingo Molnar
2007-05-14 23:57             ` William Lee Irwin III
2007-05-14 20:20           ` Ting Yang
2007-05-14 11:50       ` Ingo Molnar
2007-05-14 14:31         ` Daniel Hazelton
2007-05-14 15:02           ` Srivatsa Vaddagiri
2007-05-14 15:08           ` Ingo Molnar
2007-05-15  2:59           ` David Schwartz
2007-05-14 21:24         ` Ting Yang
2007-05-15  0:57           ` Ting Yang
2007-05-14 23:23         ` William Lee Irwin III
2007-05-14 11:10 ` Ingo Molnar
2007-05-14 13:04   ` Srivatsa Vaddagiri
2007-05-14 13:15     ` Ingo Molnar
  -- strict thread matches above, loose matches on Subject: below --
2007-05-14 15:02 Al Boldi
