From: Frederic Weisbecker
Subject: Re: [PATCH] proc: Add workaround for idle/iowait decreasing problem.
Date: Wed, 7 Aug 2013 02:58:39 +0200
Message-ID: <20130807005838.GB3011@somewhere>
References: <201301152014.AAD52192.FOOHQVtSFMFOJL@I-love.SAKURA.ne.jp> <201301180857.r0I8vK7c052791@www262.sakura.ne.jp> <1363660703.4993.3.camel@nexus> <201304012205.DFC60784.HVOMJSFFLFtOOQ@I-love.SAKURA.ne.jp> <201304232145.AHE52181.HJVOOQSFLMFOtF@I-love.SAKURA.ne.jp> <20130428004940.GA10354@somewhere> <51D24F54.1000703@lab.ntt.co.jp> <51D2ADCC.1090807@lab.ntt.co.jp>
To: Fernando Luis Vazquez Cao
Cc: Tetsuo Handa, tglx@linutronix.de, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Ingo Molnar, Peter Zijlstra, Andrew Morton, Arjan van de Ven
In-Reply-To: <51D2ADCC.1090807@lab.ntt.co.jp>

On Tue, Jul 02, 2013 at 07:39:08PM +0900, Fernando Luis Vazquez Cao wrote:
> On 2013-07-02 12:56, Fernando Luis Vazquez Cao wrote:
> >Hi Frederic,
> >
> >I'm sorry it's taken me so long to respond; I got sidetracked for
> >a while. Comments follow below.
> >
> >On 2013/04/28 09:49, Frederic Weisbecker wrote:
> >>On Tue, Apr 23, 2013 at 09:45:23PM +0900, Tetsuo Handa wrote:
> >>>CONFIG_NO_HZ=y can cause idle/iowait values to decrease.
> >[...]
> >>It's not clear in the changelog why you see non-monotonic
> >>idle/iowait values.
> >>
> >>Looking at the previous patch from Fernando, it seems that's
> >>because we can race with concurrent updates from the CPU target
> >>when it wakes up from idle?
> >>(could be updated by drivers/cpufreq/cpufreq_governor.c as well).
> >>
> >>If so the bug has another symptom: we may also report a wrong
> >>iowait/idle time by accounting the last idle time twice.
> >>
> >>In this case we should fix the bug from the source; for example
> >>we can force the following ordering:
> >>
> >>= Write side =                          = Read side =
> >>
> >>// tick_nohz_start_idle()
> >>write_seqcount_begin(ts->seq)
> >>ts->idle_entrytime = now
> >>ts->idle_active = 1
> >>write_seqcount_end(ts->seq)
> >>
> >>// tick_nohz_stop_idle()
> >>write_seqcount_begin(ts->seq)
> >>ts->iowait_sleeptime += now - ts->idle_entrytime
> >>ts->idle_active = 0
> >>write_seqcount_end(ts->seq)
> >>
> >>                                        // get_cpu_iowait_time_us()
> >>                                        do {
> >>                                            seq = read_seqcount_begin(ts->seq)
> >>                                            if (ts->idle_active) {
> >>                                                time = now - ts->idle_entrytime
> >>                                                time += ts->iowait_sleeptime
> >>                                            } else {
> >>                                                time = ts->iowait_sleeptime
> >>                                            }
> >>                                        } while (read_seqcount_retry(ts->seq, seq));
> >>
> >>Right? seqcount should be enough to make sure we are getting a
> >>consistent result. I doubt we need harder locking.
> >
> >I tried that and it doesn't suffice. The problem that causes the most
> >serious skews is related to the CPU scheduler: the per-run-queue
> >counter nr_iowait can be updated not only from the CPU it belongs
> >to but also from any other CPU if tasks are migrated out while
> >waiting on I/O.
> >
> >The race looks like this:
> >
> >CPU0                                    CPU1
> >                                        [ CPU1_rq->nr_iowait == 0 ]
> >                                        Task foo: io_schedule()
> >                                          schedule()
> >                                        [ CPU1_rq->nr_iowait == 1 ]
> >                                        Task foo migrated to CPU0
> >                                        Goes to sleep
> >
> >// get_cpu_iowait_time_us(1, NULL)
> >[ CPU1_ts->idle_active == 1, CPU1_rq->nr_iowait == 1 ]
> >[ CPU1_ts->iowait_sleeptime == 4, CPU1_ts->idle_entrytime == 3 ]
> >now = 5
> >delta = 5 - 3 = 2
> >iowait = 4 + 2 = 6
> >
> >Task foo wakes up
> >[ CPU1_rq->nr_iowait == 0 ]
> >
> >                                        CPU1 comes out of sleep state
> >                                        tick_nohz_stop_idle()
> >                                          update_ts_time_stats()
> >                                          [ CPU1_ts->idle_active == 1,
> >                                            CPU1_rq->nr_iowait == 0 ]
> >                                          [ CPU1_ts->iowait_sleeptime == 4,
> >                                            CPU1_ts->idle_entrytime == 3 ]
> >                                          now = 6
> >                                          delta = 6 - 3 = 3
> >                                          (CPU1_ts->iowait_sleeptime is
> >                                          not updated)
> >                                          CPU1_ts->idle_entrytime = now = 6
> >                                          CPU1_ts->idle_active = 0
> >
> >// get_cpu_iowait_time_us(1, NULL)
> >[ CPU1_ts->idle_active == 0, CPU1_rq->nr_iowait == 0 ]
> >[ CPU1_ts->iowait_sleeptime == 4, CPU1_ts->idle_entrytime == 6 ]
> >iowait = CPU1_ts->iowait_sleeptime = 4
> >(iowait decreased from 6 to 4)

> A possible solution to the races above would be to add
> a per-CPU variable such as ->iowait_sleeptime_user, which
> shadows ->iowait_sleeptime but is maintained in
> get_cpu_iowait_time_us() and kept monotonic, the former
> being the one we would export to user space.

> Another approach would be updating ->nr_iowait
> of the source and destination CPUs during task
> migration, but this may be overkill.

> What do you think?

I have the feeling we can fix that with:

* only update ts->idle_sleeptime / ts->iowait_sleeptime locally from
  tick_nohz_start_idle() and tick_nohz_stop_idle()
* readers can add the pending delta to these values anytime they fetch them
* use seqcount to ensure that the ts->idle_entrytime and
  ts->iowait/idle_sleeptime update sequences are well synchronized.
I just wrote the patches that do that. Let me just test them and write the
changelogs, then I'll post that tomorrow.

Thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel"
in the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html