linux-kernel.vger.kernel.org archive mirror
* [BUG] local_softirq_pending storm
@ 2007-05-09 14:12 Anant Nitya
  2007-05-09 16:31 ` Thomas Gleixner
  0 siblings, 1 reply; 27+ messages in thread
From: Anant Nitya @ 2007-05-09 14:12 UTC (permalink / raw)
  To: linux-kernel

Hi,
Ever since I upgraded to 2.6.21/.1, the system log has been filled with the 
following messages whenever I enable CONFIG_NO_HZ=y. Going through the 
archives, it seems Ingo posted a patch for this some time back and it is now 
upstream, but it is not helping here. If I disable NOHZ with the kernel 
command line nohz=off, the problem disappears. This system is a P4/2.40GHz/HT 
with SMP/SMT enabled in the kernel config. One more thing I noticed: the 
problem only arises while using X or the network; a plain command line with 
no network access does not trigger it with nohz=on.
Please ask if more information is required regarding this setup.
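
For reference, the trailing hex value in each message is the raw softirq
pending mask. Below is a minimal userspace sketch that decodes such a mask;
the bit-to-name mapping is an assumption taken from include/linux/interrupt.h
of kernels around 2.6.21 (HI=0, TIMER=1, NET_TX=2, NET_RX=3, BLOCK=4,
TASKLET=5; higher bits depend on the kernel configuration), so verify it
against the running kernel before trusting the names.

/* decode_softirq.c - decode a "local_softirq_pending" hex mask.
 * The bit names below are an assumption based on
 * include/linux/interrupt.h from kernels around 2.6.21; bits above 5
 * vary with the kernel configuration, so check the running kernel.
 */
#include <stdio.h>
#include <stdlib.h>

static const char *names[] = {
	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "TASKLET",
};

int main(int argc, char **argv)
{
	unsigned long mask;
	int bit;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <hex-mask>\n", argv[0]);
		return 1;
	}
	mask = strtoul(argv[1], NULL, 16);
	printf("pending %02lx:", mask);
	for (bit = 0; mask; bit++, mask >>= 1) {
		if (!(mask & 1))
			continue;
		if (bit < (int)(sizeof(names) / sizeof(names[0])))
			printf(" %s", names[bit]);
		else
			printf(" bit%d(config-dependent)", bit);
	}
	printf("\n");
	return 0;
}

Under that assumed mapping, 02 would be just the timer softirq and 22 timer
plus tasklet, i.e. ordinary softirq work that was still pending when the
tick code tried to stop the tick.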

dumdum@hahakaar [~]$ >>  grep NOHZ /var/log/messages
May  8 03:38:14 rudra kernel: [  419.271195] NOHZ: local_softirq_pending 02
May  8 03:38:14 rudra kernel: [  419.271663] NOHZ: local_softirq_pending 02
May  8 03:38:14 rudra kernel: [  419.343948] NOHZ: local_softirq_pending 22
May  8 03:38:14 rudra kernel: [  419.344236] NOHZ: local_softirq_pending 22
May  8 03:38:14 rudra kernel: [  419.344397] NOHZ: local_softirq_pending 22
May  8 03:38:14 rudra kernel: [  419.344545] NOHZ: local_softirq_pending 22
May  8 03:38:14 rudra kernel: [  419.344691] NOHZ: local_softirq_pending 22
May  8 03:38:14 rudra kernel: [  419.344842] NOHZ: local_softirq_pending 22
May  8 03:38:14 rudra kernel: [  419.344991] NOHZ: local_softirq_pending 22
May  8 03:38:14 rudra kernel: [  419.345137] NOHZ: local_softirq_pending 22
May  8 03:38:18 rudra kernel: [  423.065780] NOHZ: local_softirq_pending 22
May  8 03:38:18 rudra kernel: [  423.066206] NOHZ: local_softirq_pending 22
May  8 03:38:18 rudra kernel: [  423.066509] NOHZ: local_softirq_pending 22
May  8 03:38:19 rudra kernel: [  424.006549] NOHZ: local_softirq_pending 22
May  8 03:38:19 rudra kernel: [  424.006983] NOHZ: local_softirq_pending 22
May  8 03:38:19 rudra kernel: [  424.007239] NOHZ: local_softirq_pending 22
May  8 03:38:19 rudra kernel: [  424.007473] NOHZ: local_softirq_pending 22
May  8 03:38:19 rudra kernel: [  424.007706] NOHZ: local_softirq_pending 22
May  8 03:38:19 rudra kernel: [  424.007941] NOHZ: local_softirq_pending 22
May  8 03:38:22 rudra kernel: [  426.862456] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.331619] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.331991] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.332192] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.332378] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.332553] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.332740] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.332914] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.333097] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.333271] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.333443] NOHZ: local_softirq_pending 22
May  8 03:38:23 rudra kernel: [  428.333619] NOHZ: local_softirq_pending 22
May  8 03:38:24 rudra kernel: [  429.261574] NOHZ: local_softirq_pending 22
May  8 03:38:24 rudra kernel: [  429.262024] NOHZ: local_softirq_pending 22
May  8 03:38:24 rudra kernel: [  429.262339] NOHZ: local_softirq_pending 22
May  8 03:38:24 rudra kernel: [  429.262610] NOHZ: local_softirq_pending 22
May  8 03:38:24 rudra kernel: [  429.262847] NOHZ: local_softirq_pending 22
May  8 03:38:24 rudra kernel: [  429.263081] NOHZ: local_softirq_pending 22
May  8 03:38:35 rudra kernel: [  440.182998] NOHZ: local_softirq_pending 22
May  8 03:38:35 rudra kernel: [  440.183408] NOHZ: local_softirq_pending 22
May  8 03:38:35 rudra kernel: [  440.183661] NOHZ: local_softirq_pending 22
May  8 03:38:43 rudra kernel: [  448.272087] NOHZ: local_softirq_pending 22
May  8 03:38:43 rudra kernel: [  448.272529] NOHZ: local_softirq_pending 22
May  8 03:38:44 rudra kernel: [  449.223360] NOHZ: local_softirq_pending 22
May  8 03:38:44 rudra kernel: [  449.223887] NOHZ: local_softirq_pending 22
May  8 03:38:44 rudra kernel: [  449.224570] NOHZ: local_softirq_pending 22
May  8 03:38:44 rudra kernel: [  449.225066] NOHZ: local_softirq_pending 22
May  8 03:38:44 rudra kernel: [  449.232989] NOHZ: local_softirq_pending 22
May  8 03:38:47 rudra kernel: [  452.178583] NOHZ: local_softirq_pending a2
May  8 03:38:47 rudra kernel: [  452.179017] NOHZ: local_softirq_pending a2
May  8 03:38:47 rudra kernel: [  452.179257] NOHZ: local_softirq_pending a2
May  8 03:38:51 rudra kernel: [  455.957968] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.958462] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.958741] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.958984] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.959292] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.959540] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.959774] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.960005] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.972368] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.972681] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.972884] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.973069] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.973251] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.973433] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.973631] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.973832] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.974018] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.974234] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.974422] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.974610] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.974791] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.974981] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.975194] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.975390] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.975570] NOHZ: local_softirq_pending 10
May  8 03:38:51 rudra kernel: [  455.976091] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.976376] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.976622] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.976862] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.977099] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.977337] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.977577] NOHZ: local_softirq_pending 22
May  8 03:38:51 rudra kernel: [  455.977815] NOHZ: local_softirq_pending 22
May  8 03:38:52 rudra kernel: [  456.848267] NOHZ: local_softirq_pending 22
May  8 03:38:52 rudra kernel: [  456.848726] NOHZ: local_softirq_pending 22
May  8 03:38:52 rudra kernel: [  456.848982] NOHZ: local_softirq_pending 22
May  8 03:38:52 rudra kernel: [  456.849223] NOHZ: local_softirq_pending 22
May  8 03:38:52 rudra kernel: [  456.849459] NOHZ: local_softirq_pending 22
May  8 03:38:54 rudra kernel: [  459.279942] NOHZ: local_softirq_pending 02
May  8 03:38:54 rudra kernel: [  459.280267] NOHZ: local_softirq_pending 02
May  8 03:38:54 rudra kernel: [  459.280477] NOHZ: local_softirq_pending 02
May  8 03:38:54 rudra kernel: [  459.280667] NOHZ: local_softirq_pending 02
May  8 03:38:54 rudra kernel: [  459.280865] NOHZ: local_softirq_pending 02
May  8 03:38:59 rudra kernel: [  464.046556] NOHZ: local_softirq_pending 22
May  8 03:38:59 rudra kernel: [  464.046999] NOHZ: local_softirq_pending 22
May  8 03:38:59 rudra kernel: [  464.047315] NOHZ: local_softirq_pending 22
May  8 03:38:59 rudra kernel: [  464.047554] NOHZ: local_softirq_pending 22
May  8 03:38:59 rudra kernel: [  464.047790] NOHZ: local_softirq_pending 22
May  8 03:39:08 rudra kernel: [  473.019484] NOHZ: local_softirq_pending a2
May  8 03:39:08 rudra kernel: [  473.019954] NOHZ: local_softirq_pending a2
May  8 03:39:08 rudra kernel: [  473.020215] NOHZ: local_softirq_pending a2
May  8 03:39:08 rudra kernel: [  473.020455] NOHZ: local_softirq_pending a2
May  8 03:39:08 rudra kernel: [  473.020692] NOHZ: local_softirq_pending a2
May  8 03:39:08 rudra kernel: [  473.020925] NOHZ: local_softirq_pending a2
May  8 03:39:11 rudra kernel: [  475.876271] NOHZ: local_softirq_pending 22
May  8 03:39:11 rudra kernel: [  475.876697] NOHZ: local_softirq_pending 22
May  8 03:39:11 rudra kernel: [  475.876947] NOHZ: local_softirq_pending 22
May  8 03:39:11 rudra kernel: [  475.877183] NOHZ: local_softirq_pending 22
May  8 03:39:11 rudra kernel: [  475.877490] NOHZ: local_softirq_pending 22
May  8 03:39:11 rudra kernel: [  475.877736] NOHZ: local_softirq_pending 22
May  8 03:39:11 rudra kernel: [  475.877964] NOHZ: local_softirq_pending 22
May  8 03:39:31 rudra kernel: [  495.878914] NOHZ: local_softirq_pending 22
May  8 03:39:31 rudra kernel: [  495.879262] NOHZ: local_softirq_pending 22
May  8 03:39:31 rudra kernel: [  495.879473] NOHZ: local_softirq_pending 22
May  8 03:39:31 rudra kernel: [  495.879674] NOHZ: local_softirq_pending 22
May  8 03:39:31 rudra kernel: [  495.879849] NOHZ: local_softirq_pending 22
May  8 03:39:31 rudra kernel: [  495.880031] NOHZ: local_softirq_pending 22
May  8 03:39:31 rudra kernel: [  495.880206] NOHZ: local_softirq_pending 22
May  8 03:39:53 rudra kernel: [  517.755212] NOHZ: local_softirq_pending 22
May  8 03:39:53 rudra kernel: [  517.755863] NOHZ: local_softirq_pending 22
May  8 03:39:53 rudra kernel: [  517.756181] NOHZ: local_softirq_pending 22
May  8 03:39:53 rudra kernel: [  517.756434] NOHZ: local_softirq_pending 22
May  8 03:39:53 rudra kernel: [  517.756667] NOHZ: local_softirq_pending 22
May  8 03:39:53 rudra kernel: [  517.756900] NOHZ: local_softirq_pending 22
May  8 03:39:54 rudra kernel: [  518.718220] NOHZ: local_softirq_pending 22
May  8 03:39:54 rudra kernel: [  518.718540] NOHZ: local_softirq_pending 22
May  8 03:39:54 rudra kernel: [  518.718756] NOHZ: local_softirq_pending 22
May  8 03:39:54 rudra kernel: [  518.718944] NOHZ: local_softirq_pending 22
May  8 03:39:54 rudra kernel: [  518.719121] NOHZ: local_softirq_pending 22
May  8 03:39:54 rudra kernel: [  518.719298] NOHZ: local_softirq_pending 22
May  8 03:39:54 rudra kernel: [  518.719484] NOHZ: local_softirq_pending 22
May  8 03:39:54 rudra kernel: [  518.719672] NOHZ: local_softirq_pending 22
May  8 03:40:02 rudra kernel: [  526.814840] NOHZ: local_softirq_pending 22
May  8 03:40:02 rudra kernel: [  526.815354] NOHZ: local_softirq_pending 22
May  8 03:40:02 rudra kernel: [  526.815703] NOHZ: local_softirq_pending 22
May  8 03:40:02 rudra kernel: [  526.816015] NOHZ: local_softirq_pending 22
May  8 03:40:02 rudra kernel: [  526.816255] NOHZ: local_softirq_pending 22
May  8 03:40:11 rudra kernel: [  535.847890] NOHZ: local_softirq_pending a2
May  8 03:40:11 rudra kernel: [  535.848360] NOHZ: local_softirq_pending a2
May  8 03:40:11 rudra kernel: [  535.848618] NOHZ: local_softirq_pending a2
May  8 03:40:11 rudra kernel: [  535.848857] NOHZ: local_softirq_pending a2
May  8 03:40:11 rudra kernel: [  535.849095] NOHZ: local_softirq_pending a2
May  8 03:40:12 rudra kernel: [  536.788182] NOHZ: local_softirq_pending 22
May  8 03:40:12 rudra kernel: [  536.788630] NOHZ: local_softirq_pending 22
May  8 03:40:12 rudra kernel: [  536.788939] NOHZ: local_softirq_pending 22
May  8 03:40:12 rudra kernel: [  536.789240] NOHZ: local_softirq_pending 22
May  8 03:40:12 rudra kernel: [  536.789484] NOHZ: local_softirq_pending 22
May  8 03:40:12 rudra kernel: [  536.789718] NOHZ: local_softirq_pending 22
May  8 03:40:16 rudra kernel: [  541.078474] NOHZ: local_softirq_pending a2
May  8 03:40:16 rudra kernel: [  541.078933] NOHZ: local_softirq_pending a2
May  8 03:40:16 rudra kernel: [  541.079186] NOHZ: local_softirq_pending a2
May  8 03:40:16 rudra kernel: [  541.079425] NOHZ: local_softirq_pending a2
May  8 03:40:16 rudra kernel: [  541.079657] NOHZ: local_softirq_pending a2
May  8 03:40:16 rudra kernel: [  541.079893] NOHZ: local_softirq_pending a2
May  8 03:40:18 rudra kernel: [  542.974969] NOHZ: local_softirq_pending 22
May  8 03:40:18 rudra kernel: [  542.975419] NOHZ: local_softirq_pending 22
May  8 03:40:18 rudra kernel: [  542.975754] NOHZ: local_softirq_pending 22
May  8 03:40:18 rudra kernel: [  542.976018] NOHZ: local_softirq_pending 22
May  8 03:40:18 rudra kernel: [  542.976253] NOHZ: local_softirq_pending 22
May  8 03:40:18 rudra kernel: [  542.976484] NOHZ: local_softirq_pending 22
May  8 03:40:24 rudra kernel: [  548.687200] NOHZ: local_softirq_pending a2
May  8 03:40:24 rudra kernel: [  548.687691] NOHZ: local_softirq_pending a2
May  8 03:40:28 rudra kernel: [  552.977399] NOHZ: local_softirq_pending a2
May  8 03:40:28 rudra kernel: [  552.977730] NOHZ: local_softirq_pending a2
May  8 03:40:28 rudra kernel: [  552.977946] NOHZ: local_softirq_pending a2
May  8 03:40:28 rudra kernel: [  552.978135] NOHZ: local_softirq_pending a2
May  8 03:40:28 rudra kernel: [  552.978318] NOHZ: local_softirq_pending a2
May  8 03:40:28 rudra kernel: [  552.978499] NOHZ: local_softirq_pending a2
May  8 03:40:28 rudra kernel: [  552.978708] NOHZ: local_softirq_pending a2
May  8 03:40:33 rudra kernel: [  557.728762] NOHZ: local_softirq_pending 22
May  8 03:40:33 rudra kernel: [  557.729230] NOHZ: local_softirq_pending 22
May  8 03:40:33 rudra kernel: [  557.729539] NOHZ: local_softirq_pending 22
May  8 03:40:33 rudra kernel: [  557.729821] NOHZ: local_softirq_pending 22
May  8 03:40:33 rudra kernel: [  557.730060] NOHZ: local_softirq_pending 22
May  8 03:40:33 rudra kernel: [  557.730297] NOHZ: local_softirq_pending 22
May  8 03:40:36 rudra kernel: [  561.065192] NOHZ: local_softirq_pending e2
May  8 03:40:36 rudra kernel: [  561.065539] NOHZ: local_softirq_pending e2
May  8 03:40:36 rudra kernel: [  561.065750] NOHZ: local_softirq_pending e2
May  8 03:40:36 rudra kernel: [  561.065942] NOHZ: local_softirq_pending e2
May  8 03:40:36 rudra kernel: [  561.066139] NOHZ: local_softirq_pending e2
May  8 03:40:36 rudra kernel: [  561.066339] NOHZ: local_softirq_pending e2
May  8 03:40:36 rudra kernel: [  561.066515] NOHZ: local_softirq_pending e2
May  8 03:40:42 rudra kernel: [  566.772651] NOHZ: local_softirq_pending 22
May  8 03:40:42 rudra kernel: [  566.773108] NOHZ: local_softirq_pending 22
May  8 03:40:42 rudra kernel: [  566.773314] NOHZ: local_softirq_pending 22
May  8 03:40:42 rudra kernel: [  566.773359] NOHZ: local_softirq_pending 22
May  8 03:40:42 rudra kernel: [  566.787249] NOHZ: local_softirq_pending 62
May  8 03:40:42 rudra kernel: [  566.787536] NOHZ: local_softirq_pending 62
May  8 03:40:51 rudra kernel: [  575.816602] NOHZ: local_softirq_pending 22
May  8 03:40:51 rudra kernel: [  575.817065] NOHZ: local_softirq_pending 22
May  8 03:41:03 rudra kernel: [  587.715470] NOHZ: local_softirq_pending 22
May  8 03:41:03 rudra kernel: [  587.715920] NOHZ: local_softirq_pending 22
May  8 03:41:03 rudra kernel: [  587.716259] NOHZ: local_softirq_pending 22
May  8 03:41:03 rudra kernel: [  587.716528] NOHZ: local_softirq_pending 22
May  8 03:41:03 rudra kernel: [  587.716764] NOHZ: local_softirq_pending 22
May  8 03:41:03 rudra kernel: [  587.717001] NOHZ: local_softirq_pending 22
May  8 03:41:12 rudra kernel: [  596.805445] NOHZ: local_softirq_pending 22
May  8 03:41:12 rudra kernel: [  596.805945] NOHZ: local_softirq_pending 22
May  8 03:41:12 rudra kernel: [  596.806196] NOHZ: local_softirq_pending 22
May  8 03:41:12 rudra kernel: [  596.806430] NOHZ: local_softirq_pending 22
May  8 03:41:12 rudra kernel: [  596.806659] NOHZ: local_softirq_pending 22
May  8 03:41:12 rudra kernel: [  596.806888] NOHZ: local_softirq_pending 22
May  8 03:41:12 rudra kernel: [  596.807116] NOHZ: local_softirq_pending 22
May  8 03:41:12 rudra kernel: [  596.807341] NOHZ: local_softirq_pending 22
May  8 03:41:12 rudra kernel: [  596.807570] NOHZ: local_softirq_pending 22
May  8 03:41:20 rudra kernel: [  604.847182] NOHZ: local_softirq_pending 22
May  8 03:41:20 rudra kernel: [  604.847637] NOHZ: local_softirq_pending 22
May  8 03:41:20 rudra kernel: [  604.847941] NOHZ: local_softirq_pending 22
May  8 03:41:20 rudra kernel: [  604.848225] NOHZ: local_softirq_pending 22
May  8 03:41:20 rudra kernel: [  604.848463] NOHZ: local_softirq_pending 22
May  8 03:41:20 rudra kernel: [  604.848700] NOHZ: local_softirq_pending 22
May  8 03:41:23 rudra kernel: [  607.702270] NOHZ: local_softirq_pending 22
May  8 03:41:23 rudra kernel: [  607.702604] NOHZ: local_softirq_pending 22
May  8 03:41:23 rudra kernel: [  607.702804] NOHZ: local_softirq_pending 22
May  8 03:41:23 rudra kernel: [  607.702982] NOHZ: local_softirq_pending 22
May  8 03:41:23 rudra kernel: [  607.703181] NOHZ: local_softirq_pending 22
May  8 03:41:23 rudra kernel: [  607.703431] NOHZ: local_softirq_pending 22
May  8 03:41:23 rudra kernel: [  607.703621] NOHZ: local_softirq_pending 22
May  8 03:41:29 rudra kernel: [  613.888941] NOHZ: local_softirq_pending 22
May  8 03:41:29 rudra kernel: [  613.889415] NOHZ: local_softirq_pending 22
May  8 03:41:29 rudra kernel: [  613.889712] NOHZ: local_softirq_pending 22
May  8 03:41:29 rudra kernel: [  613.890009] NOHZ: local_softirq_pending 22
May  8 03:41:29 rudra kernel: [  613.890251] NOHZ: local_softirq_pending 22
May  8 03:41:29 rudra kernel: [  613.890487] NOHZ: local_softirq_pending 22
May  8 03:41:32 rudra kernel: [  616.743862] NOHZ: local_softirq_pending 22
May  8 03:41:32 rudra kernel: [  616.744331] NOHZ: local_softirq_pending 22
May  8 03:41:32 rudra kernel: [  616.744640] NOHZ: local_softirq_pending 22
May  8 03:41:32 rudra kernel: [  616.744929] NOHZ: local_softirq_pending 22
May  8 03:41:32 rudra kernel: [  616.745175] NOHZ: local_softirq_pending 22
May  8 03:41:32 rudra kernel: [  616.745409] NOHZ: local_softirq_pending 22
May  8 03:41:34 rudra kernel: [  618.660611] NOHZ: local_softirq_pending a2
May  8 03:41:34 rudra kernel: [  618.660958] NOHZ: local_softirq_pending a2
May  8 03:41:34 rudra kernel: [  618.661166] NOHZ: local_softirq_pending a2
May  8 03:41:34 rudra kernel: [  618.661349] NOHZ: local_softirq_pending a2
May  8 03:41:34 rudra kernel: [  618.661541] NOHZ: local_softirq_pending a2
May  8 03:41:34 rudra kernel: [  618.661729] NOHZ: local_softirq_pending a2
May  8 03:41:34 rudra kernel: [  618.661911] NOHZ: local_softirq_pending a2
May  8 03:41:34 rudra kernel: [  618.662089] NOHZ: local_softirq_pending a2
May  8 03:41:37 rudra kernel: [  621.996786] NOHZ: local_softirq_pending 22
May  8 03:41:50 rudra kernel: [  634.831692] NOHZ: local_softirq_pending 22
May  8 03:41:50 rudra kernel: [  634.832143] NOHZ: local_softirq_pending 22
May  8 03:41:50 rudra kernel: [  634.832453] NOHZ: local_softirq_pending 22
May  8 03:41:50 rudra kernel: [  634.832765] NOHZ: local_softirq_pending 22
May  8 03:41:50 rudra kernel: [  634.833011] NOHZ: local_softirq_pending 22
May  8 03:41:50 rudra kernel: [  634.833241] NOHZ: local_softirq_pending 22
May  8 03:41:51 rudra kernel: [  635.790271] NOHZ: local_softirq_pending a2
May  8 03:41:51 rudra kernel: [  635.790608] NOHZ: local_softirq_pending a2
May  8 03:41:51 rudra kernel: [  635.790823] NOHZ: local_softirq_pending a2
May  8 03:41:51 rudra kernel: [  635.791001] NOHZ: local_softirq_pending a2
May  8 03:41:51 rudra kernel: [  635.791179] NOHZ: local_softirq_pending a2
May  8 03:41:51 rudra kernel: [  635.791354] NOHZ: local_softirq_pending a2
May  8 03:41:51 rudra kernel: [  635.791537] NOHZ: local_softirq_pending a2
May  8 03:41:53 rudra kernel: [  637.686505] NOHZ: local_softirq_pending 22
May  8 03:41:53 rudra kernel: [  637.686981] NOHZ: local_softirq_pending 22
May  8 03:41:53 rudra kernel: [  637.687428] NOHZ: local_softirq_pending 22
May  8 03:42:08 rudra kernel: [  652.941445] NOHZ: local_softirq_pending 22
May  8 03:42:08 rudra kernel: [  652.941976] NOHZ: local_softirq_pending 22
May  8 03:42:08 rudra kernel: [  652.942243] NOHZ: local_softirq_pending 22
May  8 03:42:08 rudra kernel: [  652.942485] NOHZ: local_softirq_pending 22
May  8 03:42:08 rudra kernel: [  652.942717] NOHZ: local_softirq_pending 22
May  8 03:42:08 rudra kernel: [  652.942953] NOHZ: local_softirq_pending 22
May  8 03:42:08 rudra kernel: [  652.943188] NOHZ: local_softirq_pending 22
May  8 03:42:09 rudra kernel: [  653.862586] NOHZ: local_softirq_pending 22
May  8 03:42:09 rudra kernel: [  653.862918] NOHZ: local_softirq_pending 22
May  8 03:42:09 rudra kernel: [  653.863124] NOHZ: local_softirq_pending 22
May  8 03:42:09 rudra kernel: [  653.863316] NOHZ: local_softirq_pending 22
May  8 03:42:09 rudra kernel: [  653.863501] NOHZ: local_softirq_pending 22
May  8 03:42:09 rudra kernel: [  653.863687] NOHZ: local_softirq_pending 22
May  8 03:42:09 rudra kernel: [  653.863863] NOHZ: local_softirq_pending 22
May  8 03:42:12 rudra kernel: [  656.714675] NOHZ: local_softirq_pending 22
May  8 03:42:12 rudra kernel: [  656.715163] NOHZ: local_softirq_pending 22
May  8 03:42:12 rudra kernel: [  656.715423] NOHZ: local_softirq_pending 22
May  8 03:42:12 rudra kernel: [  656.715665] NOHZ: local_softirq_pending 22
May  8 03:42:12 rudra kernel: [  656.715904] NOHZ: local_softirq_pending 22
May  8 03:42:12 rudra kernel: [  656.716140] NOHZ: local_softirq_pending 22
May  8 03:42:12 rudra kernel: [  656.716378] NOHZ: local_softirq_pending 22
May  8 03:42:12 rudra kernel: [  656.716611] NOHZ: local_softirq_pending 22
May  8 03:42:14 rudra kernel: [  658.622576] NOHZ: local_softirq_pending 22
May  8 03:42:15 rudra kernel: [  659.569910] NOHZ: local_softirq_pending 22
May  8 03:42:15 rudra kernel: [  659.570340] NOHZ: local_softirq_pending 22
May  8 03:42:15 rudra kernel: [  659.570650] NOHZ: local_softirq_pending 22
May  8 03:42:15 rudra kernel: [  659.570924] NOHZ: local_softirq_pending 22
May  8 03:42:15 rudra kernel: [  659.571159] NOHZ: local_softirq_pending 22
May  8 03:42:15 rudra kernel: [  659.571393] NOHZ: local_softirq_pending 22
May  9 01:10:43 rudra kernel: [ 4472.241952] NOHZ: local_softirq_pending 22
May  9 01:10:43 rudra kernel: [ 4472.243018] NOHZ: local_softirq_pending 22
May  9 01:10:43 rudra kernel: [ 4472.243383] NOHZ: local_softirq_pending 22
May  9 01:11:15 rudra kernel: [ 4504.194191] NOHZ: local_softirq_pending 22
May  9 01:11:15 rudra kernel: [ 4504.194815] NOHZ: local_softirq_pending 22
May  9 01:11:15 rudra kernel: [ 4504.195140] NOHZ: local_softirq_pending 22
May  9 01:11:35 rudra kernel: [ 4524.158524] NOHZ: local_softirq_pending 22
May  9 01:11:35 rudra kernel: [ 4524.159118] NOHZ: local_softirq_pending 22
May  9 01:11:35 rudra kernel: [ 4524.159439] NOHZ: local_softirq_pending 22
May  9 01:11:45 rudra kernel: [ 4534.140638] NOHZ: local_softirq_pending 22
May  9 01:11:45 rudra kernel: [ 4534.141046] NOHZ: local_softirq_pending 22
May  9 01:11:45 rudra kernel: [ 4534.141456] NOHZ: local_softirq_pending 22
May  9 01:11:45 rudra kernel: [ 4534.141738] NOHZ: local_softirq_pending 22
May  9 01:12:32 rudra kernel: [ 4580.718873] NOHZ: local_softirq_pending 22
May  9 01:12:32 rudra kernel: [ 4580.719297] NOHZ: local_softirq_pending 22
May  9 01:12:32 rudra kernel: [ 4580.719557] NOHZ: local_softirq_pending 22
May  9 01:12:32 rudra kernel: [ 4580.719808] NOHZ: local_softirq_pending 22
May  9 01:12:32 rudra kernel: [ 4580.720047] NOHZ: local_softirq_pending 22
May  9 01:12:32 rudra kernel: [ 4580.720295] NOHZ: local_softirq_pending 22
May  9 01:12:32 rudra kernel: [ 4580.720534] NOHZ: local_softirq_pending 22
May  9 01:12:36 rudra kernel: [ 4585.053403] NOHZ: local_softirq_pending 22
May  9 01:12:36 rudra kernel: [ 4585.054308] NOHZ: local_softirq_pending 22
May  9 01:12:36 rudra kernel: [ 4585.054588] NOHZ: local_softirq_pending 22
May  9 01:12:36 rudra kernel: [ 4585.054796] NOHZ: local_softirq_pending 22
May  9 01:12:36 rudra kernel: [ 4585.054981] NOHZ: local_softirq_pending 22
May  9 01:12:36 rudra kernel: [ 4585.055163] NOHZ: local_softirq_pending 22
May  9 01:12:38 rudra kernel: [ 4586.927035] NOHZ: local_softirq_pending 22
May  9 01:12:51 rudra kernel: [ 4599.749155] NOHZ: local_softirq_pending 22
May  9 01:12:51 rudra kernel: [ 4599.750614] NOHZ: local_softirq_pending 22
May  9 01:12:53 rudra kernel: [ 4601.648012] NOHZ: local_softirq_pending 22
May  9 01:12:53 rudra kernel: [ 4601.648457] NOHZ: local_softirq_pending 22
May  9 01:12:53 rudra kernel: [ 4601.648722] NOHZ: local_softirq_pending 22
May  9 01:12:53 rudra kernel: [ 4601.648971] NOHZ: local_softirq_pending 22
May  9 01:12:53 rudra kernel: [ 4601.649217] NOHZ: local_softirq_pending 22
May  9 01:12:53 rudra kernel: [ 4601.649463] NOHZ: local_softirq_pending 22
May  9 01:12:53 rudra kernel: [ 4601.649705] NOHZ: local_softirq_pending 22
May  9 01:12:53 rudra kernel: [ 4601.649951] NOHZ: local_softirq_pending 22
May  9 01:19:16 rudra kernel: [  325.029390] NOHZ: local_softirq_pending 22

TIA


* Re: [BUG] local_softirq_pending storm
  2007-05-09 14:12 [BUG] local_softirq_pending storm Anant Nitya
@ 2007-05-09 16:31 ` Thomas Gleixner
  2007-05-10 11:53   ` Anant Nitya
  0 siblings, 1 reply; 27+ messages in thread
From: Thomas Gleixner @ 2007-05-09 16:31 UTC (permalink / raw)
  To: Anant Nitya; +Cc: linux-kernel

On Wed, 2007-05-09 at 19:42 +0530, Anant Nitya wrote:
> Hi,
> Ever since I upgraded to 2.6.21/.1, the system log has been filled with the 
> following messages whenever I enable CONFIG_NO_HZ=y. Going through the 
> archives, it seems Ingo posted a patch for this some time back and it is now 
> upstream, but it is not helping here. If I disable NOHZ with the kernel 
> command line nohz=off, the problem disappears. This system is a P4/2.40GHz/HT 
> with SMP/SMT enabled in the kernel config. One more thing I noticed: the 
> problem only arises while using X or the network; a plain command line with 
> no network access does not trigger it with nohz=on.

Is this independent of the load on the system? I.e.: what happens if
you only use the console and run a kernel compile with -j4?

> dumdum@hahakaar [~]$ >>  grep NOHZ /var/log/messages
> May  8 03:38:14 rudra kernel: [  419.271195] NOHZ: local_softirq_pending 02
> May  8 03:38:14 rudra kernel: [  419.271663] NOHZ: local_softirq_pending 02

The patch below ratelimits the printk output, so your syslog is not
flooded anymore.

	tglx

Index: linux-2.6.21/kernel/time/tick-sched.c
===================================================================
--- linux-2.6.21.orig/kernel/time/tick-sched.c
+++ linux-2.6.21/kernel/time/tick-sched.c
@@ -167,9 +167,15 @@ void tick_nohz_stop_sched_tick(void)
 		goto end;
 
 	cpu = smp_processor_id();
-	if (unlikely(local_softirq_pending()))
-		printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
-		       local_softirq_pending());
+	if (unlikely(local_softirq_pending())) {
+		static int ratelimit;
+
+		if (ratelimit < 10) {
+			printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
+			       local_softirq_pending());
+			ratelimit++;
+		}
+	}
 
 	now = ktime_get();
 	/*
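
The patch uses a one-shot static counter, so the warning goes quiet for good
after ten messages. A hedged alternative sketch (not part of the posted
patch) would be to lean on the kernel's existing printk_ratelimit() helper,
which keeps occasional messages trickling through instead of silencing them
permanently:

	/* Sketch only, for the same spot in tick_nohz_stop_sched_tick():
	 * printk_ratelimit() drops messages that arrive too fast but
	 * still lets later ones through, unlike the permanent cap above.
	 */
	if (unlikely(local_softirq_pending()) && printk_ratelimit())
		printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
		       local_softirq_pending());

Either way, the point is only to stop the syslog flood while the underlying
cause is tracked down.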




* Re: [BUG] local_softirq_pending storm
  2007-05-09 16:31 ` Thomas Gleixner
@ 2007-05-10 11:53   ` Anant Nitya
  2007-05-10 21:58     ` Thomas Gleixner
  0 siblings, 1 reply; 27+ messages in thread
From: Anant Nitya @ 2007-05-10 11:53 UTC (permalink / raw)
  To: tglx; +Cc: linux-kernel

On 5/9/07, Thomas Gleixner <tglx@linutronix.de> wrote:
> On Wed, 2007-05-09 at 19:42 +0530, Anant Nitya wrote:
> > Hi,
> > Ever since I upgraded to 2.6.21/.1, the system log has been filled with the
> > following messages whenever I enable CONFIG_NO_HZ=y. Going through the
> > archives, it seems Ingo posted a patch for this some time back and it is now
> > upstream, but it is not helping here. If I disable NOHZ with the kernel
> > command line nohz=off, the problem disappears. This system is a P4/2.40GHz/HT
> > with SMP/SMT enabled in the kernel config. One more thing I noticed: the
> > problem only arises while using X or the network; a plain command line with
> > no network access does not trigger it with nohz=on.
>
> Is this independent of the load on the system? I.e.: what happens if
> you only use the console and run a kernel compile with -j4?
Yep, it seems independent of the load on the system. To test, I compiled a
kernel with make -j8 and, to throw some more load on at the same time, used
amavisd to clean around 2000 spam/virus-infected mails while listening to a
few radio streams over the net, all in a console-only login. Even then there
was not a single NOHZ: local_softirq_pending message in the log. Once the
kernel compile and amavisd finished and the load dropped back to around
0.5/2.0, I started X, and within a few seconds the log started filling with
local_softirq_pending messages. (Sorry, I didn't apply the ratelimit patch,
since I wanted to test whether it is the high load on the system that causes
this or something else.) So it seems even network operation is not causing
this; either X is doing something terribly wrong or the kernel is getting
hosed by X.
If more information is needed, please feel free to ask.

dumdum@hahakaar [~]$ >> grep load top.log
top - 14:04:12 up 15 min,  4 users,  load average: 9.42, 5.12, 2.20
top - 14:04:17 up 15 min,  4 users,  load average: 9.47, 5.20, 2.24
top - 14:04:22 up 16 min,  4 users,  load average: 9.35, 5.25, 2.27
top - 14:04:27 up 16 min,  4 users,  load average: 9.32, 5.31, 2.31
top - 14:04:32 up 16 min,  4 users,  load average: 9.22, 5.36, 2.34
top - 14:04:37 up 16 min,  4 users,  load average: 9.36, 5.45, 2.38
top - 14:04:42 up 16 min,  4 users,  load average: 9.33, 5.51, 2.42
top - 14:04:47 up 16 min,  4 users,  load average: 9.38, 5.58, 2.46
top - 14:04:52 up 16 min,  4 users,  load average: 9.35, 5.64, 2.49
top - 14:04:57 up 16 min,  4 users,  load average: 9.24, 5.68, 2.52
top - 14:05:02 up 16 min,  4 users,  load average: 9.14, 5.72, 2.55
top - 14:05:07 up 16 min,  4 users,  load average: 9.05, 5.75, 2.58
top - 14:05:12 up 16 min,  4 users,  load average: 9.13, 5.83, 2.62
top - 14:05:17 up 16 min,  4 users,  load average: 9.28, 5.91, 2.66
top - 14:05:22 up 17 min,  4 users,  load average: 9.34, 5.98, 2.70
top - 14:05:27 up 17 min,  4 users,  load average: 9.47, 6.06, 2.75
top - 14:05:32 up 17 min,  4 users,  load average: 9.43, 6.11, 2.78
top - 14:05:37 up 17 min,  4 users,  load average: 9.32, 6.14, 2.81
top - 14:05:42 up 17 min,  4 users,  load average: 9.37, 6.20, 2.85
top - 14:05:47 up 17 min,  4 users,  load average: 9.26, 6.23, 2.87
top - 14:05:52 up 17 min,  4 users,  load average: 9.16, 6.26, 2.90
top - 14:05:58 up 17 min,  4 users,  load average: 9.07, 6.29, 2.93
top - 14:06:03 up 17 min,  4 users,  load average: 9.14, 6.35, 2.97
top - 14:06:08 up 17 min,  4 users,  load average: 9.13, 6.40, 3.00
top - 14:06:13 up 17 min,  4 users,  load average: 9.28, 6.47, 3.04
top - 14:06:18 up 17 min,  4 users,  load average: 9.34, 6.53, 3.08
top - 14:06:23 up 18 min,  4 users,  load average: 9.31, 6.57, 3.11
top - 14:06:28 up 18 min,  4 users,  load average: 9.36, 6.63, 3.15
top - 14:06:33 up 18 min,  4 users,  load average: 9.25, 6.65, 3.17
top - 14:06:38 up 18 min,  4 users,  load average: 9.23, 6.69, 3.20
top - 14:06:43 up 18 min,  4 users,  load average: 9.38, 6.76, 3.25
top - 14:06:48 up 18 min,  4 users,  load average: 9.27, 6.78, 3.27
top - 14:06:53 up 18 min,  4 users,  load average: 9.32, 6.84, 3.31
top - 14:06:58 up 18 min,  4 users,  load average: 9.70, 6.95, 3.36
top - 14:07:03 up 18 min,  4 users,  load average: 9.72, 7.00, 3.40
top - 14:07:08 up 18 min,  4 users,  load average: 9.82, 7.07, 3.44
top - 14:07:13 up 18 min,  4 users,  load average: 9.84, 7.12, 3.48
top - 14:07:18 up 18 min,  4 users,  load average: 9.85, 7.17, 3.51
top - 14:07:23 up 19 min,  4 users,  load average: 9.78, 7.20, 3.54
top - 14:07:28 up 19 min,  4 users,  load average: 9.80, 7.24, 3.57
top - 14:07:33 up 19 min,  4 users,  load average: 10.14, 7.40, 3.66
top - 14:07:38 up 19 min,  4 users,  load average: 10.45, 7.51, 3.72
top - 14:07:43 up 19 min,  4 users,  load average: 10.33, 7.53, 3.75
top - 14:07:48 up 19 min,  4 users,  load average: 9.58, 7.42, 3.73
top - 14:07:53 up 19 min,  4 users,  load average: 8.97, 7.33, 3.72
top - 14:07:58 up 19 min,  4 users,  load average: 8.34, 7.23, 3.71
top - 14:08:03 up 19 min,  4 users,  load average: 8.31, 7.24, 3.73
top - 14:08:08 up 19 min,  4 users,  load average: 8.36, 7.27, 3.76
top - 14:08:13 up 19 min,  4 users,  load average: 8.89, 7.40, 3.82
top - 14:08:18 up 20 min,  4 users,  load average: 8.34, 7.31, 3.81
top - 14:08:23 up 20 min,  4 users,  load average: 8.47, 7.35, 3.84
top - 14:08:28 up 20 min,  4 users,  load average: 8.52, 7.38, 3.87
top - 14:08:33 up 20 min,  4 users,  load average: 8.56, 7.41, 3.90
top - 14:08:39 up 20 min,  4 users,  load average: 8.51, 7.42, 3.92
top - 14:08:44 up 20 min,  4 users,  load average: 8.63, 7.46, 3.95
top - 14:08:49 up 20 min,  4 users,  load average: 8.66, 7.48, 3.98
top - 14:08:54 up 20 min,  4 users,  load average: 8.85, 7.54, 4.02
top - 14:08:59 up 20 min,  4 users,  load average: 8.94, 7.58, 4.05
top - 14:09:04 up 20 min,  4 users,  load average: 9.02, 7.62, 4.08
top - 14:09:09 up 20 min,  4 users,  load average: 9.02, 7.64, 4.10
top - 14:09:14 up 20 min,  4 users,  load average: 8.94, 7.65, 4.13
top - 14:09:19 up 21 min,  4 users,  load average: 9.02, 7.69, 4.16
top - 14:09:24 up 21 min,  4 users,  load average: 8.94, 7.69, 4.18
top - 14:09:29 up 21 min,  4 users,  load average: 8.87, 7.70, 4.20
top - 14:09:34 up 21 min,  4 users,  load average: 8.88, 7.72, 4.22
top - 14:09:39 up 21 min,  4 users,  load average: 8.81, 7.72, 4.24
top - 14:09:44 up 21 min,  4 users,  load average: 8.82, 7.75, 4.27
top - 14:09:49 up 21 min,  4 users,  load average: 8.76, 7.75, 4.29
top - 14:09:54 up 21 min,  4 users,  load average: 8.85, 7.79, 4.32
top - 14:09:59 up 21 min,  4 users,  load average: 8.79, 7.79, 4.34
top - 14:10:04 up 21 min,  4 users,  load average: 8.88, 7.83, 4.37
top - 14:10:09 up 21 min,  4 users,  load average: 8.81, 7.83, 4.39
top - 14:10:14 up 21 min,  5 users,  load average: 8.75, 7.83, 4.41
top - 14:10:19 up 22 min,  5 users,  load average: 8.77, 7.85, 4.43
top - 14:10:24 up 22 min,  5 users,  load average: 8.79, 7.87, 4.46
top - 14:10:29 up 22 min,  5 users,  load average: 8.72, 7.87, 4.47
top - 14:10:34 up 22 min,  5 users,  load average: 8.74, 7.89, 4.50
top - 14:10:39 up 22 min,  5 users,  load average: 8.92, 7.94, 4.53
top - 14:10:44 up 22 min,  5 users,  load average: 9.17, 8.01, 4.57
top - 14:10:49 up 22 min,  5 users,  load average: 9.24, 8.04, 4.60
top - 14:10:54 up 22 min,  5 users,  load average: 9.14, 8.04, 4.62
top - 14:10:59 up 22 min,  5 users,  load average: 9.21, 8.07, 4.65
top - 14:11:04 up 22 min,  5 users,  load average: 9.27, 8.10, 4.68
top - 14:11:09 up 22 min,  5 users,  load average: 9.25, 8.12, 4.70
top - 14:11:14 up 22 min,  5 users,  load average: 9.15, 8.12, 4.72
top - 14:11:19 up 23 min,  5 users,  load average: 9.06, 8.11, 4.73
top - 14:11:24 up 23 min,  5 users,  load average: 8.97, 8.11, 4.75
top - 14:11:29 up 23 min,  5 users,  load average: 9.13, 8.16, 4.79
top - 14:11:35 up 23 min,  5 users,  load average: 9.04, 8.16, 4.80
top - 14:11:40 up 23 min,  5 users,  load average: 9.12, 8.19, 4.83
top - 14:11:45 up 23 min,  5 users,  load average: 9.27, 8.23, 4.86
top - 14:11:50 up 23 min,  5 users,  load average: 9.33, 8.26, 4.89
top - 14:11:55 up 23 min,  5 users,  load average: 9.22, 8.26, 4.91
top - 14:12:00 up 23 min,  5 users,  load average: 9.28, 8.29, 4.93
top - 14:12:05 up 23 min,  5 users,  load average: 9.34, 8.32, 4.96
top - 14:12:10 up 23 min,  5 users,  load average: 9.47, 8.36, 4.99
top - 14:12:15 up 23 min,  5 users,  load average: 9.51, 8.39, 5.02
top - 14:12:20 up 24 min,  5 users,  load average: 9.47, 8.40, 5.04
top - 14:12:25 up 24 min,  5 users,  load average: 9.52, 8.42, 5.07
top - 14:12:30 up 24 min,  5 users,  load average: 9.47, 8.43, 5.09
top - 14:12:35 up 24 min,  5 users,  load average: 9.36, 8.43, 5.10
top - 14:12:40 up 24 min,  5 users,  load average: 9.25, 8.42, 5.12
top - 14:12:45 up 24 min,  5 users,  load average: 9.39, 8.46, 5.15
top - 14:12:50 up 24 min,  5 users,  load average: 9.36, 8.47, 5.17
top - 14:12:55 up 24 min,  5 users,  load average: 9.41, 8.50, 5.20
top - 14:13:00 up 24 min,  5 users,  load average: 9.45, 8.52, 5.22
top - 14:13:05 up 24 min,  5 users,  load average: 9.34, 8.51, 5.24
top - 14:13:10 up 24 min,  5 users,  load average: 9.23, 8.50, 5.25
top - 14:13:15 up 24 min,  5 users,  load average: 9.29, 8.53, 5.28
top - 14:13:20 up 25 min,  5 users,  load average: 9.35, 8.55, 5.30
top - 14:13:25 up 25 min,  5 users,  load average: 9.32, 8.56, 5.32
top - 14:13:30 up 25 min,  5 users,  load average: 9.21, 8.55, 5.34
top - 14:13:35 up 25 min,  5 users,  load average: 9.20, 8.56, 5.35
top - 14:13:40 up 25 min,  5 users,  load average: 9.10, 8.55, 5.37
top - 14:13:45 up 25 min,  5 users,  load average: 9.17, 8.57, 5.39
top - 14:13:50 up 25 min,  5 users,  load average: 9.08, 8.56, 5.41
top - 14:13:55 up 25 min,  5 users,  load average: 9.07, 8.57, 5.43
top - 14:14:00 up 25 min,  5 users,  load average: 8.99, 8.56, 5.44
top - 14:14:05 up 25 min,  5 users,  load average: 8.91, 8.55, 5.45
top - 14:14:10 up 25 min,  5 users,  load average: 8.91, 8.56, 5.47
top - 14:14:16 up 25 min,  5 users,  load average: 8.92, 8.56, 5.49
top - 14:14:21 up 26 min,  5 users,  load average: 8.85, 8.55, 5.50
top - 14:14:26 up 26 min,  5 users,  load average: 8.86, 8.56, 5.52
top - 14:14:31 up 26 min,  5 users,  load average: 9.03, 8.60, 5.55
top - 14:14:36 up 26 min,  5 users,  load average: 9.19, 8.64, 5.58
top - 14:14:41 up 26 min,  5 users,  load average: 9.17, 8.65, 5.60
top - 14:14:46 up 26 min,  5 users,  load average: 9.24, 8.67, 5.62
top - 14:14:51 up 26 min,  5 users,  load average: 9.22, 8.67, 5.64
top - 14:14:56 up 26 min,  5 users,  load average: 9.28, 8.70, 5.66
top - 14:15:01 up 26 min,  5 users,  load average: 9.26, 8.70, 5.68
top - 14:15:06 up 26 min,  5 users,  load average: 9.16, 8.69, 5.69
top - 14:15:11 up 26 min,  5 users,  load average: 9.14, 8.69, 5.71
top - 14:15:16 up 26 min,  5 users,  load average: 9.05, 8.68, 5.72
top - 14:15:21 up 27 min,  5 users,  load average: 9.13, 8.70, 5.75
top - 14:15:26 up 27 min,  5 users,  load average: 9.04, 8.69, 5.76
top - 14:15:31 up 27 min,  5 users,  load average: 9.11, 8.71, 5.78
top - 14:15:36 up 27 min,  5 users,  load average: 9.19, 8.74, 5.80
top - 14:15:41 up 27 min,  5 users,  load average: 9.17, 8.74, 5.82
top - 14:15:46 up 27 min,  5 users,  load average: 9.08, 8.73, 5.83
top - 14:15:51 up 27 min,  5 users,  load average: 8.99, 8.71, 5.84
top - 14:15:56 up 27 min,  5 users,  load average: 8.99, 8.72, 5.86
top - 14:16:01 up 27 min,  5 users,  load average: 8.99, 8.72, 5.88
top - 14:16:06 up 27 min,  5 users,  load average: 8.91, 8.71, 5.89
top - 14:16:11 up 27 min,  5 users,  load average: 8.92, 8.72, 5.90
top - 14:16:16 up 27 min,  5 users,  load average: 9.00, 8.74, 5.93
top - 14:16:21 up 28 min,  5 users,  load average: 9.00, 8.74, 5.94
top - 14:16:26 up 28 min,  5 users,  load average: 9.00, 8.74, 5.96
top - 14:16:31 up 28 min,  5 users,  load average: 9.00, 8.75, 5.97
top - 14:16:37 up 28 min,  5 users,  load average: 9.08, 8.77, 6.00
top - 14:16:42 up 28 min,  5 users,  load average: 9.16, 8.79, 6.02
top - 14:16:47 up 28 min,  5 users,  load average: 9.06, 8.78, 6.03
top - 14:16:52 up 28 min,  5 users,  load average: 8.98, 8.76, 6.04
top - 14:16:57 up 28 min,  5 users,  load average: 8.90, 8.75, 6.05
top - 14:17:02 up 28 min,  5 users,  load average: 9.15, 8.80, 6.08
top - 14:17:07 up 28 min,  5 users,  load average: 9.14, 8.81, 6.09
top - 14:17:12 up 28 min,  5 users,  load average: 9.12, 8.81, 6.11
top - 14:17:17 up 28 min,  5 users,  load average: 9.11, 8.81, 6.12
top - 14:17:22 up 29 min,  5 users,  load average: 9.02, 8.80, 6.13
top - 14:17:27 up 29 min,  5 users,  load average: 9.02, 8.80, 6.15
top - 14:17:32 up 29 min,  5 users,  load average: 9.02, 8.80, 6.16
top - 14:17:37 up 29 min,  5 users,  load average: 9.18, 8.84, 6.19
top - 14:17:42 up 29 min,  5 users,  load average: 9.24, 8.86, 6.21
top - 14:17:47 up 29 min,  5 users,  load average: 9.22, 8.86, 6.23
top - 14:17:52 up 29 min,  5 users,  load average: 9.21, 8.86, 6.24
top - 14:17:57 up 29 min,  5 users,  load average: 9.19, 8.87, 6.25
top - 14:18:02 up 29 min,  5 users,  load average: 9.33, 8.90, 6.28
top - 14:18:07 up 29 min,  5 users,  load average: 9.47, 8.93, 6.31
top - 14:18:12 up 29 min,  5 users,  load average: 9.43, 8.94, 6.32
top - 14:18:17 up 29 min,  5 users,  load average: 9.47, 8.95, 6.34
top - 14:18:22 up 30 min,  5 users,  load average: 9.52, 8.97, 6.36
top - 14:18:27 up 30 min,  5 users,  load average: 9.64, 9.00, 6.38
top - 14:18:32 up 30 min,  5 users,  load average: 9.58, 9.00, 6.40
top - 14:18:37 up 30 min,  5 users,  load average: 9.62, 9.02, 6.42
top - 14:18:42 up 30 min,  5 users,  load average: 9.65, 9.04, 6.43
top - 14:18:47 up 30 min,  5 users,  load average: 9.68, 9.05, 6.45
top - 14:18:52 up 30 min,  5 users,  load average: 9.86, 9.10, 6.48
top - 14:18:57 up 30 min,  5 users,  load average: 10.03, 9.15, 6.51
top - 14:19:02 up 30 min,  5 users,  load average: 10.11, 9.18, 6.54
top - 14:19:07 up 30 min,  5 users,  load average: 10.18, 9.21, 6.56
top - 14:19:12 up 30 min,  6 users,  load average: 10.25, 9.24, 6.58
top - 14:19:17 up 30 min,  6 users,  load average: 10.47, 9.30, 6.62
top - 14:19:22 up 31 min,  6 users,  load average: 10.51, 9.33, 6.64
top - 14:19:27 up 31 min,  6 users,  load average: 10.47, 9.34, 6.66
top - 14:19:32 up 31 min,  6 users,  load average: 10.59, 9.38, 6.69
top - 14:19:38 up 31 min,  6 users,  load average: 10.46, 9.38, 6.70
top - 14:19:43 up 31 min,  6 users,  load average: 10.59, 9.42, 6.73
top - 14:19:48 up 31 min,  6 users,  load average: 10.46, 9.41, 6.74
top - 14:19:53 up 31 min,  6 users,  load average: 10.34, 9.40, 6.75
top - 14:19:58 up 31 min,  6 users,  load average: 10.23, 9.40, 6.76
top - 14:20:03 up 31 min,  6 users,  load average: 10.29, 9.42, 6.79
top - 14:20:08 up 31 min,  6 users,  load average: 10.19, 9.42, 6.80
top - 14:20:13 up 31 min,  6 users,  load average: 10.26, 9.44, 6.82
top - 14:20:18 up 31 min,  6 users,  load average: 10.23, 9.45, 6.84
top - 14:20:23 up 32 min,  6 users,  load average: 10.14, 9.44, 6.85
top - 14:20:28 up 32 min,  6 users,  load average: 10.04, 9.44, 6.86
top - 14:20:33 up 32 min,  6 users,  load average: 9.96, 9.43, 6.87
top - 14:20:38 up 32 min,  6 users,  load average: 10.12, 9.47, 6.90
top - 14:20:43 up 32 min,  6 users,  load average: 10.11, 9.48, 6.91
top - 14:20:48 up 32 min,  6 users,  load average: 10.10, 9.49, 6.93
top - 14:20:53 up 32 min,  6 users,  load average: 10.18, 9.51, 6.95
top - 14:20:58 up 32 min,  6 users,  load average: 10.08, 9.50, 6.96
top - 14:21:03 up 32 min,  6 users,  load average: 9.99, 9.50, 6.97
top - 14:21:08 up 32 min,  6 users,  load average: 10.07, 9.52, 7.00
top - 14:21:13 up 32 min,  6 users,  load average: 9.99, 9.51, 7.01
top - 14:21:18 up 32 min,  6 users,  load average: 10.23, 9.57, 7.04
top - 14:21:23 up 33 min,  6 users,  load average: 10.13, 9.56, 7.05
top - 14:21:28 up 33 min,  6 users,  load average: 10.19, 9.59, 7.08
top - 14:21:33 up 33 min,  6 users,  load average: 10.17, 9.60, 7.10
top - 14:21:38 up 33 min,  6 users,  load average: 10.40, 9.65, 7.13
top - 14:21:43 up 33 min,  6 users,  load average: 10.45, 9.67, 7.15
top - 14:21:49 up 33 min,  6 users,  load average: 10.65, 9.73, 7.18
top - 14:21:54 up 33 min,  6 users,  load average: 10.52, 9.72, 7.19
top - 14:21:59 up 33 min,  6 users,  load average: 10.64, 9.75, 7.22
top - 14:22:04 up 33 min,  6 users,  load average: 10.59, 9.76, 7.23
top - 14:22:09 up 33 min,  6 users,  load average: 10.54, 9.76, 7.25
top - 14:22:14 up 33 min,  6 users,  load average: 11.14, 9.90, 7.30
top - 14:22:19 up 34 min,  6 users,  load average: 11.21, 9.93, 7.33
top - 14:22:24 up 34 min,  6 users,  load average: 11.03, 9.92, 7.34
top - 14:22:29 up 34 min,  6 users,  load average: 11.11, 9.95, 7.36
top - 14:22:34 up 34 min,  6 users,  load average: 10.86, 9.92, 7.37
top - 14:22:39 up 34 min,  6 users,  load average: 10.79, 9.92, 7.38
top - 14:22:44 up 34 min,  6 users,  load average: 10.81, 9.94, 7.40
top - 14:22:49 up 34 min,  6 users,  load average: 10.82, 9.96, 7.42
top - 14:22:54 up 34 min,  6 users,  load average: 10.59, 9.92, 7.42
top - 14:22:59 up 34 min,  6 users,  load average: 10.63, 9.94, 7.44
top - 14:23:04 up 34 min,  6 users,  load average: 10.58, 9.94, 7.45
top - 14:23:10 up 34 min,  6 users,  load average: 10.37, 9.91, 7.46
top - 14:23:15 up 34 min,  6 users,  load average: 10.34, 9.91, 7.47
top - 14:23:20 up 35 min,  6 users,  load average: 10.39, 9.93, 7.49
top - 14:23:25 up 35 min,  6 users,  load average: 10.52, 9.96, 7.51
top - 14:23:30 up 35 min,  6 users,  load average: 10.56, 9.98, 7.53
top - 14:23:35 up 35 min,  6 users,  load average: 10.59, 10.00, 7.55
top - 14:23:40 up 35 min,  6 users,  load average: 10.71, 10.03, 7.57
top - 14:23:45 up 35 min,  6 users,  load average: 10.73, 10.05, 7.59
top - 14:23:50 up 35 min,  6 users,  load average: 10.83, 10.08, 7.61
top - 14:23:55 up 35 min,  6 users,  load average: 11.08, 10.14, 7.65
top - 14:24:00 up 35 min,  6 users,  load average: 11.00, 10.14, 7.66
top - 14:24:05 up 35 min,  6 users,  load average: 10.92, 10.14, 7.67
top - 14:24:10 up 35 min,  6 users,  load average: 10.76, 10.12, 7.68
top - 14:24:15 up 35 min,  6 users,  load average: 10.78, 10.13, 7.70
top - 14:24:20 up 36 min,  6 users,  load average: 12.88, 10.58, 7.85
top - 14:24:25 up 36 min,  6 users,  load average: 13.05, 10.65, 7.89
top - 14:24:30 up 36 min,  6 users,  load average: 12.89, 10.66, 7.91
top - 14:24:35 up 36 min,  6 users,  load average: 12.82, 10.68, 7.93
top - 14:24:40 up 36 min,  6 users,  load average: 12.59, 10.67, 7.94
top - 14:24:46 up 36 min,  6 users,  load average: 12.46, 10.67, 7.96
top - 14:24:51 up 36 min,  6 users,  load average: 12.51, 10.71, 7.98
top - 14:24:56 up 36 min,  6 users,  load average: 12.30, 10.70, 8.00
top - 14:25:01 up 36 min,  6 users,  load average: 12.12, 10.69, 8.01
top - 14:25:06 up 36 min,  6 users,  load average: 12.03, 10.69, 8.02
top - 14:25:11 up 36 min,  6 users,  load average: 12.19, 10.75, 8.05
top - 14:25:16 up 36 min,  6 users,  load average: 11.93, 10.72, 8.06
top - 14:25:21 up 37 min,  6 users,  load average: 11.94, 10.74, 8.08
top - 14:25:26 up 37 min,  6 users,  load average: 11.62, 10.69, 8.08
top - 14:25:31 up 37 min,  6 users,  load average: 11.57, 10.70, 8.09
top - 14:25:36 up 37 min,  6 users,  load average: 11.45, 10.69, 8.10
top - 14:25:41 up 37 min,  6 users,  load average: 11.49, 10.71, 8.12
top - 14:25:46 up 37 min,  6 users,  load average: 11.69, 10.76, 8.16
top - 14:25:51 up 37 min,  6 users,  load average: 11.64, 10.77, 8.17
top - 14:25:56 up 37 min,  6 users,  load average: 11.58, 10.77, 8.19
top - 14:26:01 up 37 min,  6 users,  load average: 11.46, 10.76, 8.20
top - 14:26:06 up 37 min,  6 users,  load average: 11.58, 10.79, 8.22
top - 14:26:11 up 37 min,  6 users,  load average: 11.53, 10.80, 8.24
top - 14:26:16 up 37 min,  6 users,  load average: 11.41, 10.78, 8.25
top - 14:26:21 up 38 min,  6 users,  load average: 11.22, 10.75, 8.25
top - 14:26:26 up 38 min,  6 users,  load average: 11.04, 10.72, 8.25
top - 14:26:31 up 38 min,  6 users,  load average: 10.88, 10.69, 8.26
top - 14:26:36 up 38 min,  6 users,  load average: 10.89, 10.70, 8.27
top - 14:26:41 up 38 min,  6 users,  load average: 10.73, 10.67, 8.28
top - 14:26:46 up 38 min,  6 users,  load average: 10.68, 10.66, 8.28
top - 14:26:51 up 38 min,  6 users,  load average: 10.70, 10.67, 8.30
top - 14:26:57 up 38 min,  6 users,  load average: 10.81, 10.69, 8.32
top - 14:27:02 up 38 min,  6 users,  load average: 10.66, 10.66, 8.32
top - 14:27:07 up 38 min,  6 users,  load average: 10.85, 10.70, 8.35
top - 14:27:12 up 38 min,  6 users,  load average: 10.70, 10.67, 8.35
top - 14:27:17 up 38 min,  6 users,  load average: 10.72, 10.67, 8.36
top - 14:27:22 up 39 min,  6 users,  load average: 10.91, 10.71, 8.39
top - 14:27:27 up 39 min,  6 users,  load average: 10.99, 10.73, 8.41
top - 14:27:32 up 39 min,  6 users,  load average: 10.99, 10.74, 8.42
top - 14:27:37 up 39 min,  6 users,  load average: 10.99, 10.74, 8.43
top - 14:27:42 up 39 min,  6 users,  load average: 10.99, 10.75, 8.45
top - 14:27:47 up 39 min,  6 users,  load average: 10.99, 10.75, 8.46
top - 14:27:52 up 39 min,  6 users,  load average: 10.99, 10.75, 8.47
top - 14:27:57 up 39 min,  6 users,  load average: 10.99, 10.76, 8.49
top - 14:28:02 up 39 min,  6 users,  load average: 11.07, 10.78, 8.51
top - 14:28:07 up 39 min,  6 users,  load average: 10.99, 10.77, 8.51
top - 14:28:12 up 39 min,  6 users,  load average: 10.99, 10.77, 8.53
top - 14:28:17 up 39 min,  6 users,  load average: 10.91, 10.76, 8.54
top - 14:28:22 up 40 min,  6 users,  load average: 10.76, 10.73, 8.54
top - 14:28:27 up 40 min,  6 users,  load average: 10.70, 10.71, 8.55
top - 14:28:32 up 40 min,  6 users,  load average: 10.72, 10.72, 8.56
top - 14:28:37 up 40 min,  6 users,  load average: 10.58, 10.69, 8.56
top - 14:28:42 up 40 min,  6 users,  load average: 10.61, 10.69, 8.57
top - 14:28:47 up 40 min,  6 users,  load average: 10.49, 10.67, 8.58
top - 14:28:53 up 40 min,  6 users,  load average: 10.45, 10.66, 8.58
top - 14:28:58 up 40 min,  6 users,  load average: 10.33, 10.63, 8.58
top - 14:29:03 up 40 min,  6 users,  load average: 10.46, 10.65, 8.60
top - 14:29:08 up 40 min,  6 users,  load average: 10.43, 10.64, 8.61
top - 14:29:13 up 40 min,  6 users,  load average: 10.39, 10.63, 8.62
top - 14:29:18 up 40 min,  6 users,  load average: 10.68, 10.68, 8.65
top - 14:29:23 up 41 min,  6 users,  load average: 10.79, 10.70, 8.66
top - 14:29:28 up 41 min,  6 users,  load average: 10.64, 10.68, 8.66
top - 14:29:33 up 41 min,  6 users,  load average: 10.75, 10.70, 8.68
top - 14:29:38 up 41 min,  6 users,  load average: 10.85, 10.72, 8.70
top - 14:29:43 up 41 min,  6 users,  load average: 10.78, 10.71, 8.71
top - 14:29:48 up 41 min,  6 users,  load average: 10.96, 10.75, 8.73
top - 14:29:53 up 41 min,  6 users,  load average: 11.12, 10.78, 8.75
top - 14:29:58 up 41 min,  6 users,  load average: 11.19, 10.80, 8.77
top - 14:30:03 up 41 min,  6 users,  load average: 11.55, 10.89, 8.82
top - 14:30:08 up 41 min,  6 users,  load average: 11.43, 10.88, 8.83
top - 14:30:13 up 41 min,  6 users,  load average: 11.23, 10.84, 8.83
top - 14:30:18 up 42 min,  6 users,  load average: 11.13, 10.83, 8.83
top - 14:30:23 up 42 min,  6 users,  load average: 11.20, 10.85, 8.85
top - 14:30:28 up 42 min,  6 users,  load average: 11.03, 10.82, 8.85
top - 14:30:34 up 42 min,  6 users,  load average: 10.94, 10.80, 8.85
top - 14:30:39 up 42 min,  6 users,  load average: 10.87, 10.79, 8.86
top - 14:30:44 up 42 min,  6 users,  load average: 10.64, 10.74, 8.86
top - 14:30:49 up 42 min,  6 users,  load average: 10.59, 10.73, 8.86
top - 14:30:54 up 42 min,  6 users,  load average: 10.62, 10.74, 8.87
top - 14:30:59 up 42 min,  6 users,  load average: 10.65, 10.74, 8.88
top - 14:31:04 up 42 min,  6 users,  load average: 10.68, 10.74, 8.90
top - 14:31:09 up 42 min,  6 users,  load average: 10.78, 10.76, 8.91
top - 14:31:14 up 42 min,  6 users,  load average: 10.64, 10.74, 8.91
top - 14:31:19 up 43 min,  6 users,  load average: 10.67, 10.74, 8.92
top - 14:31:24 up 43 min,  6 users,  load average: 10.69, 10.74, 8.93
top - 14:31:29 up 43 min,  6 users,  load average: 10.72, 10.75, 8.94
top - 14:31:34 up 43 min,  6 users,  load average: 10.66, 10.73, 8.95
top - 14:31:39 up 43 min,  6 users,  load average: 10.77, 10.76, 8.97
top - 14:31:44 up 43 min,  6 users,  load average: 10.79, 10.76, 8.98
top - 14:31:49 up 43 min,  6 users,  load average: 10.64, 10.73, 8.98
top - 14:31:54 up 43 min,  6 users,  load average: 10.99, 10.80, 9.01
top - 14:31:59 up 43 min,  6 users,  load average: 10.83, 10.77, 9.01
top - 14:32:05 up 43 min,  6 users,  load average: 10.68, 10.74, 9.01
top - 14:32:10 up 43 min,  6 users,  load average: 10.63, 10.73, 9.01
top - 14:32:15 up 43 min,  6 users,  load average: 10.74, 10.75, 9.03
top - 14:32:20 up 44 min,  6 users,  load average: 10.76, 10.75, 9.04
top - 14:32:25 up 44 min,  6 users,  load average: 10.94, 10.79, 9.06
top - 14:32:30 up 44 min,  6 users,  load average: 10.86, 10.78, 9.06
top - 14:32:35 up 44 min,  6 users,  load average: 10.87, 10.78, 9.07
top - 14:32:40 up 44 min,  6 users,  load average: 10.88, 10.78, 9.08
top - 14:32:45 up 44 min,  6 users,  load average: 10.89, 10.79, 9.10
top - 14:32:50 up 44 min,  6 users,  load average: 10.90, 10.79, 9.10
top - 14:32:55 up 44 min,  6 users,  load average: 10.99, 10.81, 9.12
top - 14:33:00 up 44 min,  6 users,  load average: 11.07, 10.83, 9.14
top - 14:33:05 up 44 min,  6 users,  load average: 10.90, 10.80, 9.13
top - 14:33:10 up 44 min,  6 users,  load average: 10.75, 10.77, 9.13
top - 14:33:15 up 44 min,  6 users,  load average: 11.01, 10.82, 9.16
top - 14:33:20 up 45 min,  6 users,  load average: 11.17, 10.86, 9.18
top - 14:33:25 up 45 min,  6 users,  load average: 11.16, 10.86, 9.19
top - 14:33:30 up 45 min,  6 users,  load average: 11.22, 10.88, 9.20
top - 14:33:35 up 45 min,  6 users,  load average: 11.13, 10.86, 9.21
top - 14:33:41 up 45 min,  6 users,  load average: 11.12, 10.87, 9.22
top - 14:33:46 up 45 min,  6 users,  load average: 11.19, 10.88, 9.23
top - 14:33:51 up 45 min,  6 users,  load average: 11.17, 10.89, 9.24
top - 14:33:56 up 45 min,  6 users,  load average: 11.08, 10.87, 9.25
top - 14:34:01 up 45 min,  6 users,  load average: 10.99, 10.86, 9.25
top - 14:34:06 up 45 min,  6 users,  load average: 10.91, 10.84, 9.25
top - 14:34:11 up 45 min,  6 users,  load average: 10.76, 10.81, 9.25
top - 14:34:16 up 45 min,  6 users,  load average: 11.02, 10.86, 9.28
top - 14:34:21 up 46 min,  6 users,  load average: 11.10, 10.88, 9.29
top - 14:34:26 up 46 min,  6 users,  load average: 11.17, 10.90, 9.31
top - 14:34:31 up 46 min,  6 users,  load average: 11.23, 10.92, 9.32
top - 14:34:36 up 46 min,  6 users,  load average: 11.13, 10.90, 9.32
top - 14:34:41 up 46 min,  6 users,  load average: 11.20, 10.92, 9.34
top - 14:34:46 up 46 min,  6 users,  load average: 11.03, 10.89, 9.33
top - 14:34:51 up 46 min,  6 users,  load average: 11.10, 10.91, 9.35
top - 14:34:56 up 46 min,  6 users,  load average: 10.86, 10.86, 9.34
top - 14:35:01 up 46 min,  6 users,  load average: 10.95, 10.88, 9.36
top - 14:35:06 up 46 min,  6 users,  load average: 10.79, 10.85, 9.35
top - 14:35:11 up 46 min,  6 users,  load average: 11.21, 10.93, 9.39
top - 14:35:16 up 46 min,  6 users,  load average: 11.19, 10.93, 9.40
top - 14:35:21 up 47 min,  6 users,  load average: 11.09, 10.92, 9.40
top - 14:35:27 up 47 min,  6 users,  load average: 11.01, 10.90, 9.40
top - 14:35:32 up 47 min,  6 users,  load average: 11.01, 10.90, 9.41
top - 14:35:37 up 47 min,  6 users,  load average: 11.09, 10.92, 9.43
top - 14:35:42 up 47 min,  6 users,  load average: 11.00, 10.91, 9.43
top - 14:35:47 up 47 min,  6 users,  load average: 10.92, 10.89, 9.43
top - 14:35:52 up 47 min,  6 users,  load average: 11.00, 10.91, 9.45
top - 14:35:57 up 47 min,  6 users,  load average: 11.08, 10.93, 9.46
top - 14:36:02 up 47 min,  6 users,  load average: 11.24, 10.96, 9.48
top - 14:36:07 up 47 min,  6 users,  load average: 11.06, 10.93, 9.47
top - 14:36:12 up 47 min,  6 users,  load average: 11.45, 11.01, 9.51
top - 14:36:17 up 47 min,  6 users,  load average: 11.42, 11.01, 9.52
top - 14:36:22 up 48 min,  6 users,  load average: 11.46, 11.03, 9.53
top - 14:36:27 up 48 min,  6 users,  load average: 11.75, 11.09, 9.56
top - 14:36:32 up 48 min,  6 users,  load average: 11.53, 11.06, 9.56
top - 14:36:37 up 48 min,  6 users,  load average: 11.64, 11.09, 9.57
top - 14:36:42 up 48 min,  6 users,  load average: 11.67, 11.10, 9.59
top - 14:36:47 up 48 min,  6 users,  load average: 11.62, 11.10, 9.59
top - 14:36:52 up 48 min,  6 users,  load average: 11.65, 11.12, 9.61
top - 14:36:57 up 48 min,  6 users,  load average: 11.68, 11.13, 9.62
top - 14:37:02 up 48 min,  6 users,  load average: 11.70, 11.15, 9.63
top - 14:37:07 up 48 min,  6 users,  load average: 12.05, 11.23, 9.67
top - 14:37:13 up 48 min,  6 users,  load average: 12.20, 11.27, 9.69
top - 14:37:18 up 48 min,  6 users,  load average: 12.03, 11.25, 9.69
top - 14:37:23 up 49 min,  6 users,  load average: 12.10, 11.28, 9.71
top - 14:37:28 up 49 min,  6 users,  load average: 12.74, 11.42, 9.76
top - 14:37:33 up 49 min,  6 users,  load average: 12.68, 11.43, 9.78
top - 14:37:38 up 49 min,  6 users,  load average: 12.30, 11.38, 9.77
top - 14:37:43 up 49 min,  6 users,  load average: 12.52, 11.44, 9.79
top - 14:37:48 up 49 min,  6 users,  load average: 12.24, 11.40, 9.79
top - 14:37:53 up 49 min,  6 users,  load average: 12.22, 11.41, 9.80
top - 14:37:58 up 49 min,  6 users,  load average: 12.28, 11.43, 9.82
top - 14:38:03 up 49 min,  6 users,  load average: 12.10, 11.41, 9.82
top - 14:38:08 up 49 min,  6 users,  load average: 11.93, 11.38, 9.82
top - 14:38:13 up 49 min,  6 users,  load average: 11.77, 11.36, 9.82
top - 14:38:18 up 49 min,  6 users,  load average: 11.87, 11.39, 9.84
top - 14:38:23 up 50 min,  6 users,  load average: 12.12, 11.45, 9.86
top - 14:38:28 up 50 min,  6 users,  load average: 12.19, 11.47, 9.88
top - 14:38:33 up 50 min,  6 users,  load average: 12.10, 11.46, 9.89
top - 14:38:38 up 50 min,  6 users,  load average: 12.95, 11.68, 9.97
top - 14:38:43 up 50 min,  6 users,  load average: 12.80, 11.67, 9.98
top - 14:38:48 up 50 min,  6 users,  load average: 12.57, 11.64, 9.98
top - 14:38:54 up 50 min,  6 users,  load average: 12.69, 11.68, 10.00
top - 14:38:59 up 50 min,  6 users,  load average: 12.63, 11.68, 10.01
top - 14:39:04 up 50 min,  6 users,  load average: 12.42, 11.65, 10.01
top - 14:39:09 up 50 min,  6 users,  load average: 12.31, 11.64, 10.01
top - 14:39:14 up 50 min,  6 users,  load average: 12.12, 11.61, 10.01
top - 14:39:19 up 51 min,  6 users,  load average: 12.03, 11.60, 10.02
top - 14:39:24 up 51 min,  6 users,  load average: 11.71, 11.54, 10.01
top - 14:39:29 up 51 min,  6 users,  load average: 12.21, 11.65, 10.05
top - 14:39:34 up 51 min,  6 users,  load average: 12.03, 11.62, 10.05
top - 14:39:39 up 51 min,  6 users,  load average: 11.87, 11.60, 10.05
top - 14:39:44 up 51 min,  6 users,  load average: 11.64, 11.55, 10.04
top - 14:39:50 up 51 min,  6 users,  load average: 11.59, 11.54, 10.05
top - 14:39:55 up 51 min,  6 users,  load average: 11.38, 11.50, 10.04
top - 14:40:00 up 51 min,  6 users,  load average: 11.27, 11.48, 10.04
top - 14:40:05 up 51 min,  6 users,  load average: 11.01, 11.42, 10.03
top - 14:40:10 up 51 min,  6 users,  load average: 10.93, 11.39, 10.03
top - 14:40:15 up 51 min,  6 users,  load average: 10.85, 11.37, 10.03
top - 14:40:20 up 52 min,  6 users,  load average: 10.94, 11.38, 10.04
top - 14:40:25 up 52 min,  6 users,  load average: 10.87, 11.36, 10.04
top - 14:40:30 up 52 min,  6 users,  load average: 10.80, 11.33, 10.04
top - 14:40:35 up 52 min,  6 users,  load average: 10.81, 11.33, 10.04
top - 14:40:40 up 52 min,  6 users,  load average: 10.67, 11.29, 10.04
top - 14:40:45 up 52 min,  6 users,  load average: 10.86, 11.32, 10.05
top - 14:40:50 up 52 min,  6 users,  load average: 10.87, 11.31, 10.06
top - 14:40:55 up 52 min,  6 users,  load average: 10.96, 11.32, 10.07
top - 14:41:00 up 52 min,  6 users,  load average: 10.96, 11.32, 10.07
top - 14:41:05 up 52 min,  6 users,  load average: 11.20, 11.36, 10.09
top - 14:41:10 up 52 min,  6 users,  load average: 11.43, 11.41, 10.12
top - 14:41:16 up 52 min,  6 users,  load average: 11.47, 11.42, 10.12
top - 14:41:21 up 53 min,  6 users,  load average: 11.35, 11.39, 10.12
top - 14:41:26 up 53 min,  6 users,  load average: 11.41, 11.40, 10.13
top - 14:41:31 up 53 min,  6 users,  load average: 11.21, 11.36, 10.13
top - 14:41:36 up 53 min,  6 users,  load average: 11.28, 11.37, 10.14
top - 14:41:41 up 53 min,  6 users,  load average: 11.25, 11.37, 10.14
top - 14:41:46 up 53 min,  6 users,  load average: 11.15, 11.34, 10.14
top - 14:41:51 up 53 min,  6 users,  load average: 11.30, 11.37, 10.16
top - 14:41:56 up 53 min,  6 users,  load average: 11.28, 11.36, 10.16
top - 14:42:01 up 53 min,  6 users,  load average: 11.17, 11.34, 10.16
top - 14:42:06 up 53 min,  6 users,  load average: 11.16, 11.33, 10.16
top - 14:42:11 up 53 min,  6 users,  load average: 10.99, 11.30, 10.16
top - 14:42:16 up 53 min,  6 users,  load average: 10.83, 11.26, 10.15
top - 14:42:21 up 54 min,  6 users,  load average: 11.08, 11.30, 10.17
top - 14:42:26 up 54 min,  6 users,  load average: 10.99, 11.28, 10.17
top - 14:42:31 up 54 min,  6 users,  load average: 11.07, 11.29, 10.18
top - 14:42:37 up 54 min,  6 users,  load average: 10.99, 11.27, 10.18
top - 14:42:42 up 54 min,  6 users,  load average: 10.83, 11.23, 10.17
top - 14:42:47 up 54 min,  6 users,  load average: 10.68, 11.20, 10.17
top - 14:42:52 up 54 min,  6 users,  load average: 10.79, 11.21, 10.18
top - 14:42:57 up 54 min,  6 users,  load average: 10.88, 11.22, 10.19
top - 14:43:02 up 54 min,  6 users,  load average: 11.05, 11.25, 10.20
top - 14:43:07 up 54 min,  6 users,  load average: 10.89, 11.21, 10.19
top - 14:43:12 up 54 min,  6 users,  load average: 10.82, 11.19, 10.19
top - 14:43:17 up 54 min,  6 users,  load average: 10.83, 11.19, 10.20
top - 14:43:22 up 55 min,  6 users,  load average: 11.00, 11.22, 10.21
top - 14:43:27 up 55 min,  6 users,  load average: 11.00, 11.21, 10.21
top - 14:43:32 up 55 min,  6 users,  load average: 10.76, 11.16, 10.20
top - 14:43:37 up 55 min,  6 users,  load average: 10.06, 11.01, 10.16
top - 14:43:42 up 55 min,  6 users,  load average: 9.42, 10.86, 10.11
top - 14:43:47 up 55 min,  6 users,  load average: 8.90, 10.73, 10.08
top - 14:43:52 up 55 min,  6 users,  load average: 8.27, 10.57, 10.03
top - 14:43:57 up 55 min,  6 users,  load average: 8.01, 10.47, 10.00
top - 14:44:02 up 55 min,  6 users,  load average: 7.44, 10.32, 9.95
top - 14:44:07 up 55 min,  6 users,  load average: 6.93, 10.16, 9.90
top - 14:44:12 up 55 min,  6 users,  load average: 6.69, 10.06, 9.87
top - 14:44:17 up 55 min,  6 users,  load average: 6.64, 9.99, 9.85
top - 14:44:22 up 56 min,  6 users,  load average: 6.27, 9.86, 9.81
top - 14:44:27 up 56 min,  6 users,  load average: 5.84, 9.71, 9.76
top - 14:44:32 up 56 min,  6 users,  load average: 5.78, 9.63, 9.73
top - 14:44:37 up 56 min,  6 users,  load average: 5.63, 9.54, 9.70
top - 14:44:43 up 56 min,  6 users,  load average: 5.90, 9.53, 9.70
top - 14:44:48 up 56 min,  6 users,  load average: 6.95, 9.69, 9.75
top - 14:44:53 up 56 min,  6 users,  load average: 7.28, 9.71, 9.76
top - 14:44:58 up 56 min,  6 users,  load average: 7.49, 9.71, 9.76
top - 14:45:03 up 56 min,  6 users,  load average: 8.09, 9.80, 9.78
top - 14:45:08 up 56 min,  6 users,  load average: 8.25, 9.80, 9.79
top - 14:45:13 up 56 min,  6 users,  load average: 8.71, 9.87, 9.81
top - 14:45:18 up 56 min,  6 users,  load average: 8.65, 9.84, 9.80
top - 14:45:23 up 57 min,  6 users,  load average: 8.76, 9.84, 9.80
top - 14:45:28 up 57 min,  6 users,  load average: 8.86, 9.85, 9.80
top - 14:45:33 up 57 min,  6 users,  load average: 9.19, 9.90, 9.82
top - 14:45:38 up 57 min,  6 users,  load average: 9.49, 9.95, 9.83
top - 14:45:43 up 57 min,  6 users,  load average: 9.61, 9.97, 9.84
top - 14:45:48 up 57 min,  6 users,  load average: 10.05, 10.05, 9.87
top - 14:45:53 up 57 min,  6 users,  load average: 10.28, 10.10, 9.88
top - 14:45:58 up 57 min,  6 users,  load average: 10.34, 10.11, 9.89
top - 14:46:03 up 57 min,  6 users,  load average: 10.55, 10.16, 9.91
top - 14:46:08 up 57 min,  6 users,  load average: 10.67, 10.19, 9.92
top - 14:46:13 up 57 min,  6 users,  load average: 11.09, 10.30, 9.95
top - 14:46:18 up 58 min,  6 users,  load average: 10.92, 10.28, 9.95
top - 14:46:23 up 58 min,  6 users,  load average: 11.33, 10.37, 9.98
top - 14:46:28 up 58 min,  6 users,  load average: 11.38, 10.40, 9.99
top - 14:46:34 up 58 min,  6 users,  load average: 11.19, 10.38, 9.99
top - 14:46:39 up 58 min,  6 users,  load average: 11.02, 10.35, 9.98
top - 14:46:44 up 58 min,  6 users,  load average: 10.94, 10.35, 9.98
top - 14:46:49 up 58 min,  6 users,  load average: 10.94, 10.36, 9.99
top - 14:46:54 up 58 min,  6 users,  load average: 11.02, 10.38, 10.00
top - 14:46:59 up 58 min,  6 users,  load average: 10.86, 10.36, 9.99
top - 14:47:04 up 58 min,  6 users,  load average: 10.87, 10.37, 10.00
top - 14:47:09 up 58 min,  6 users,  load average: 10.80, 10.36, 10.00
top - 14:47:14 up 58 min,  6 users,  load average: 10.74, 10.36, 10.00
top - 14:47:19 up 59 min,  6 users,  load average: 10.84, 10.39, 10.01
top - 14:47:24 up 59 min,  6 users,  load average: 10.85, 10.39, 10.01
top - 14:47:29 up 59 min,  6 users,  load average: 11.02, 10.44, 10.03
top - 14:47:34 up 59 min,  6 users,  load average: 10.94, 10.43, 10.03
top - 14:47:39 up 59 min,  6 users,  load average: 10.87, 10.42, 10.03
top - 14:47:44 up 59 min,  6 users,  load average: 10.72, 10.40, 10.02
top - 14:47:49 up 59 min,  6 users,  load average: 10.66, 10.39, 10.02
top - 14:47:54 up 59 min,  6 users,  load average: 10.69, 10.40, 10.03
top - 14:47:59 up 59 min,  6 users,  load average: 10.71, 10.41, 10.03
top - 14:48:04 up 59 min,  6 users,  load average: 10.73, 10.42, 10.04
top - 14:48:09 up 59 min,  6 users,  load average: 10.75, 10.43, 10.04
top - 14:48:14 up 59 min,  6 users,  load average: 10.77, 10.44, 10.05
top - 14:48:20 up  1:00,  6 users,  load average: 11.03, 10.50, 10.07
top - 14:48:25 up  1:00,  6 users,  load average: 11.27, 10.56, 10.09
top - 14:48:30 up  1:00,  6 users,  load average: 11.33, 10.58, 10.10
top - 14:48:35 up  1:00,  6 users,  load average: 11.14, 10.55, 10.09
top - 14:48:40 up  1:00,  6 users,  load average: 11.29, 10.60, 10.11
top - 14:48:45 up  1:00,  6 users,  load average: 11.43, 10.63, 10.12
top - 14:48:50 up  1:00,  6 users,  load average: 11.47, 10.66, 10.13
top - 14:48:55 up  1:00,  6 users,  load average: 11.51, 10.68, 10.14
top - 14:49:00 up  1:00,  6 users,  load average: 11.39, 10.67, 10.14
top - 14:49:05 up  1:00,  6 users,  load average: 11.52, 10.71, 10.16
top - 14:49:10 up  1:00,  6 users,  load average: 11.40, 10.69, 10.16
top - 14:49:15 up  1:00,  6 users,  load average: 11.45, 10.72, 10.16
top - 14:49:20 up  1:01,  6 users,  load average: 11.41, 10.72, 10.17
top - 14:49:25 up  1:01,  6 users,  load average: 11.30, 10.71, 10.17
top - 14:49:30 up  1:01,  6 users,  load average: 11.75, 10.81, 10.20
top - 14:49:35 up  1:01,  6 users,  load average: 11.53, 10.78, 10.20
top - 14:49:40 up  1:01,  6 users,  load average: 11.41, 10.77, 10.20
top - 14:49:45 up  1:01,  6 users,  load average: 11.46, 10.79, 10.21
top - 14:49:50 up  1:01,  6 users,  load average: 11.34, 10.77, 10.20
top - 14:49:55 up  1:01,  6 users,  load average: 11.15, 10.75, 10.20
top - 14:50:00 up  1:01,  6 users,  load average: 11.38, 10.80, 10.22
top - 14:50:05 up  1:01,  6 users,  load average: 11.03, 10.74, 10.20
top - 14:50:10 up  1:01,  6 users,  load average: 11.11, 10.76, 10.21
top - 14:50:15 up  1:01,  6 users,  load average: 10.86, 10.71, 10.20
top - 14:50:20 up  1:02,  6 users,  load average: 11.11, 10.76, 10.22
top - 14:50:25 up  1:02,  6 users,  load average: 11.42, 10.83, 10.24
top - 14:50:31 up  1:02,  6 users,  load average: 11.47, 10.85, 10.25
top - 14:50:36 up  1:02,  6 users,  load average: 11.59, 10.89, 10.27
top - 14:50:41 up  1:02,  6 users,  load average: 11.94, 10.97, 10.30
top - 14:50:46 up  1:02,  6 users,  load average: 12.19, 11.04, 10.32
top - 14:50:51 up  1:02,  6 users,  load average: 12.65, 11.15, 10.36
top - 14:50:56 up  1:02,  6 users,  load average: 12.92, 11.23, 10.39
top - 14:51:01 up  1:02,  6 users,  load average: 13.17, 11.31, 10.42
top - 14:51:06 up  1:02,  6 users,  load average: 13.47, 11.41, 10.46
top - 14:51:11 up  1:02,  6 users,  load average: 13.76, 11.50, 10.49
top - 14:51:16 up  1:02,  6 users,  load average: 14.02, 11.59, 10.53
top - 14:51:21 up  1:03,  6 users,  load average: 13.37, 11.50, 10.50
top - 14:51:26 up  1:03,  6 users,  load average: 12.86, 11.42, 10.48
top - 14:51:31 up  1:03,  6 users,  load average: 12.23, 11.32, 10.45
top - 14:51:36 up  1:03,  6 users,  load average: 11.65, 11.21, 10.42
top - 14:51:41 up  1:03,  6 users,  load average: 11.20, 11.12, 10.40
top - 14:51:46 up  1:03,  6 users,  load average: 10.62, 11.01, 10.37
top - 14:51:51 up  1:03,  6 users,  load average: 9.93, 10.86, 10.32
top - 14:51:56 up  1:03,  6 users,  load average: 9.38, 10.73, 10.28
top - 14:52:01 up  1:03,  6 users,  load average: 8.95, 10.61, 10.25
top - 14:52:06 up  1:03,  6 users,  load average: 8.47, 10.49, 10.21
top - 14:52:12 up  1:03,  6 users,  load average: 7.95, 10.35, 10.16
top - 14:52:17 up  1:03,  6 users,  load average: 7.55, 10.22, 10.12
top - 14:52:22 up  1:04,  6 users,  load average: 7.27, 10.12, 10.09
top - 14:52:27 up  1:04,  6 users,  load average: 7.01, 10.02, 10.06
top - 14:52:32 up  1:04,  6 users,  load average: 6.61, 9.89, 10.02
top - 14:52:37 up  1:04,  6 users,  load average: 6.40, 9.79, 9.98
top - 14:52:42 up  1:04,  6 users,  load average: 6.12, 9.67, 9.94
top - 14:52:47 up  1:04,  6 users,  load average: 6.03, 9.60, 9.92
top - 14:52:52 up  1:04,  6 users,  load average: 5.95, 9.52, 9.89
top - 14:52:57 up  1:04,  6 users,  load average: 5.79, 9.43, 9.86
top - 14:53:02 up  1:04,  6 users,  load average: 5.65, 9.34, 9.83
top - 14:53:07 up  1:04,  6 users,  load average: 5.44, 9.23, 9.79
top - 14:53:12 up  1:04,  6 users,  load average: 5.24, 9.13, 9.75
top - 14:53:17 up  1:04,  6 users,  load average: 5.22, 9.06, 9.73
top - 14:53:22 up  1:05,  6 users,  load average: 4.97, 8.94, 9.69
top - 14:53:27 up  1:05,  6 users,  load average: 5.05, 8.89, 9.67
top - 14:53:32 up  1:05,  6 users,  load average: 5.36, 8.89, 9.66
top - 14:53:37 up  1:05,  6 users,  load average: 5.09, 8.78, 9.62
top - 14:53:42 up  1:05,  6 users,  load average: 4.85, 8.67, 9.58
top - 14:53:47 up  1:05,  6 users,  load average: 4.62, 8.56, 9.54
top - 14:53:52 up  1:05,  6 users,  load average: 4.33, 8.43, 9.49
top - 14:53:57 up  1:05,  6 users,  load average: 4.30, 8.36, 9.46
top - 14:54:02 up  1:05,  6 users,  load average: 4.04, 8.23, 9.42
top - 14:54:07 up  1:05,  6 users,  load average: 3.95, 8.15, 9.38
top - 14:54:12 up  1:05,  6 users,  load average: 4.04, 8.09, 9.36
top - 14:54:17 up  1:05,  6 users,  load average: 4.03, 8.03, 9.33
top - 14:54:22 up  1:06,  6 users,  load average: 5.07, 8.18, 9.37
top - 14:54:27 up  1:06,  6 users,  load average: 4.91, 8.09, 9.34
top - 14:54:32 up  1:06,  6 users,  load average: 4.83, 8.02, 9.31
top - 14:54:37 up  1:06,  6 users,  load average: 4.53, 7.90, 9.26
top - 14:54:42 up  1:06,  6 users,  load average: 4.64, 7.87, 9.24
top - 14:54:47 up  1:06,  6 users,  load average: 4.83, 7.86, 9.23
top - 14:54:52 up  1:06,  6 users,  load average: 4.61, 7.76, 9.19
top - 14:54:57 up  1:06,  6 users,  load average: 4.80, 7.75, 9.18
top - 14:55:02 up  1:06,  6 users,  load average: 4.49, 7.64, 9.14
top - 14:55:07 up  1:06,  6 users,  load average: 4.29, 7.54, 9.10
top - 14:55:12 up  1:06,  6 users,  load average: 4.19, 7.47, 9.07
top - 14:55:17 up  1:06,  6 users,  load average: 4.17, 7.41, 9.04
top - 14:55:23 up  1:07,  6 users,  load average: 4.40, 7.40, 9.03
top - 14:55:28 up  1:07,  6 users,  load average: 4.21, 7.31, 8.99
top - 14:55:33 up  1:07,  6 users,  load average: 4.43, 7.31, 8.98
top - 14:55:38 up  1:07,  6 users,  load average: 4.32, 7.23, 8.95
top - 14:55:43 up  1:07,  6 users,  load average: 4.29, 7.18, 8.92
top - 14:55:48 up  1:07,  6 users,  load average: 4.27, 7.13, 8.89
top - 14:55:53 up  1:07,  6 users,  load average: 4.09, 7.04, 8.85
top - 14:55:58 up  1:07,  6 users,  load average: 3.84, 6.94, 8.81
top - 14:56:03 up  1:07,  6 users,  load average: 3.69, 6.86, 8.78
top - 14:56:08 up  1:07,  6 users,  load average: 3.55, 6.78, 8.74
top - 14:56:13 up  1:07,  6 users,  load average: 3.43, 6.70, 8.70
top - 14:56:18 up  1:07,  6 users,  load average: 3.24, 6.60, 8.66
top - 14:56:23 up  1:08,  6 users,  load average: 3.22, 6.54, 8.63
top - 14:56:28 up  1:08,  6 users,  load average: 3.20, 6.48, 8.60
top - 14:56:33 up  1:08,  6 users,  load average: 3.34, 6.46, 8.58
top - 14:56:38 up  1:08,  6 users,  load average: 3.15, 6.37, 8.54
top - 14:56:43 up  1:08,  6 users,  load average: 3.14, 6.31, 8.51
top - 14:56:48 up  1:08,  6 users,  load average: 2.97, 6.22, 8.47
top - 14:56:53 up  1:08,  6 users,  load average: 2.81, 6.14, 8.43
top - 14:56:58 up  1:08,  6 users,  load average: 2.75, 6.07, 8.39
top - 14:57:03 up  1:08,  6 users,  load average: 2.77, 6.02, 8.36
top - 14:57:08 up  1:08,  6 users,  load average: 2.70, 5.95, 8.33
top - 14:57:13 up  1:08,  6 users,  load average: 3.22, 5.95, 8.30
top - 14:57:18 up  1:09,  6 users,  load average: 3.20, 5.90, 8.28
top - 14:57:23 up  1:09,  6 users,  load average: 3.10, 5.84, 8.24
top - 14:57:28 up  1:09,  6 users,  load average: 2.93, 5.76, 8.20
top - 14:57:33 up  1:09,  6 users,  load average: 2.78, 5.68, 8.16
top - 14:57:38 up  1:09,  6 users,  load average: 2.64, 5.60, 8.12
top - 14:57:43 up  1:09,  6 users,  load average: 2.51, 5.52, 8.09
top - 14:57:48 up  1:09,  6 users,  load average: 2.55, 5.48, 8.06
top - 14:57:53 up  1:09,  6 users,  load average: 2.66, 5.46, 8.04
top - 14:57:58 up  1:09,  6 users,  load average: 2.69, 5.41, 8.01
top - 14:58:04 up  1:09,  6 users,  load average: 2.71, 5.37, 7.98
top - 14:58:09 up  1:09,  6 users,  load average: 2.82, 5.35, 7.96
top - 14:58:14 up  1:09,  6 users,  load average: 2.91, 5.33, 7.94
top - 14:58:19 up  1:10,  6 users,  load average: 2.92, 5.29, 7.91
top - 14:58:24 up  1:10,  6 users,  load average: 2.76, 5.22, 7.87
top - 14:58:29 up  1:10,  6 users,  load average: 2.86, 5.20, 7.85
top - 14:58:34 up  1:10,  6 users,  load average: 2.79, 5.14, 7.82
top - 14:58:39 up  1:10,  6 users,  load average: 2.65, 5.08, 7.79
top - 14:58:44 up  1:10,  6 users,  load average: 2.76, 5.06, 7.76
top - 14:58:49 up  1:10,  6 users,  load average: 2.62, 4.99, 7.73
top - 14:58:54 up  1:10,  6 users,  load average: 2.65, 4.96, 7.70
top - 14:58:59 up  1:10,  6 users,  load average: 2.51, 4.89, 7.67
top - 14:59:04 up  1:10,  6 users,  load average: 2.63, 4.88, 7.65
top - 14:59:09 up  1:10,  6 users,  load average: 2.82, 4.88, 7.63
top - 14:59:14 up  1:10,  6 users,  load average: 2.68, 4.81, 7.60
top - 14:59:19 up  1:11,  6 users,  load average: 2.62, 4.77, 7.57
top - 14:59:24 up  1:11,  6 users,  load average: 2.57, 4.72, 7.54
top - 14:59:29 up  1:11,  6 users,  load average: 2.69, 4.71, 7.52
top - 14:59:34 up  1:11,  6 users,  load average: 2.63, 4.66, 7.49
top - 14:59:39 up  1:11,  6 users,  load average: 2.50, 4.60, 7.45
top - 14:59:44 up  1:11,  6 users,  load average: 2.54, 4.58, 7.43
top - 14:59:49 up  1:11,  6 users,  load average: 2.42, 4.52, 7.39
top - 14:59:54 up  1:11,  6 users,  load average: 2.38, 4.47, 7.36
top - 14:59:59 up  1:11,  6 users,  load average: 2.59, 4.48, 7.35
top - 15:00:04 up  1:11,  6 users,  load average: 2.70, 4.47, 7.33
top - 15:00:09 up  1:11,  6 users,  load average: 2.57, 4.42, 7.30
top - 15:00:14 up  1:11,  6 users,  load average: 2.44, 4.36, 7.26
top - 15:00:19 up  1:12,  6 users,  load average: 2.49, 4.34, 7.24
top - 15:00:24 up  1:12,  6 users,  load average: 2.53, 4.31, 7.22
top - 15:00:29 up  1:12,  6 users,  load average: 2.40, 4.26, 7.18
top - 15:00:34 up  1:12,  6 users,  load average: 2.37, 4.22, 7.16
top - 15:00:39 up  1:12,  6 users,  load average: 2.34, 4.18, 7.13
top - 15:00:44 up  1:12,  6 users,  load average: 2.23, 4.13, 7.10
top - 15:00:49 up  1:12,  6 users,  load average: 2.29, 4.11, 7.07
top - 15:00:55 up  1:12,  6 users,  load average: 2.19, 4.06, 7.04
top - 15:01:00 up  1:12,  6 users,  load average: 2.18, 4.02, 7.01
top - 15:01:05 up  1:12,  6 users,  load average: 2.08, 3.97, 6.98
top - 15:01:10 up  1:12,  6 users,  load average: 1.99, 3.92, 6.95
top - 15:01:15 up  1:12,  6 users,  load average: 1.99, 3.89, 6.92
top - 15:01:20 up  1:13,  6 users,  load average: 1.91, 3.84, 6.89
top - 15:01:25 up  1:13,  6 users,  load average: 1.84, 3.80, 6.86
top - 15:01:30 up  1:13,  6 users,  load average: 1.93, 3.78, 6.84
top - 15:01:35 up  1:13,  6 users,  load average: 1.86, 3.74, 6.80
top - 15:01:40 up  1:13,  6 users,  load average: 1.95, 3.72, 6.78
top - 15:01:45 up  1:13,  6 users,  load average: 2.03, 3.71, 6.76
top - 15:01:50 up  1:13,  6 users,  load average: 2.27, 3.73, 6.75
top - 15:01:55 up  1:13,  6 users,  load average: 2.41, 3.74, 6.74
top - 15:02:00 up  1:13,  6 users,  load average: 2.46, 3.72, 6.72
top - 15:02:05 up  1:13,  6 users,  load average: 2.50, 3.71, 6.70
top - 15:02:10 up  1:13,  6 users,  load average: 2.54, 3.70, 6.68
top - 15:02:15 up  1:13,  6 users,  load average: 2.74, 3.72, 6.67
top - 15:02:20 up  1:14,  6 users,  load average: 2.92, 3.74, 6.66
top - 15:02:25 up  1:14,  6 users,  load average: 3.08, 3.76, 6.65
top - 15:02:30 up  1:14,  6 users,  load average: 3.32, 3.80, 6.65
top - 15:02:35 up  1:14,  6 users,  load average: 3.13, 3.75, 6.62
top - 15:02:40 up  1:14,  6 users,  load average: 3.04, 3.72, 6.59
top - 15:02:45 up  1:14,  6 users,  load average: 2.96, 3.69, 6.57
top - 15:02:50 up  1:14,  6 users,  load average: 2.88, 3.67, 6.54
top - 15:02:55 up  1:14,  6 users,  load average: 2.97, 3.67, 6.53
top - 15:03:00 up  1:14,  6 users,  load average: 2.97, 3.66, 6.51
top - 15:03:05 up  1:14,  6 users,  load average: 2.89, 3.63, 6.48
top - 15:03:10 up  1:14,  6 users,  load average: 2.82, 3.61, 6.46
top - 15:03:15 up  1:14,  6 users,  load average: 2.68, 3.56, 6.43
top - 15:03:20 up  1:15,  6 users,  load average: 2.54, 3.52, 6.40
top - 15:03:25 up  1:15,  6 users,  load average: 2.42, 3.48, 6.37
top - 15:03:30 up  1:15,  6 users,  load average: 2.62, 3.50, 6.36
top - 15:03:35 up  1:15,  6 users,  load average: 2.73, 3.51, 6.35
top - 15:03:40 up  1:15,  6 users,  load average: 2.76, 3.50, 6.33
top - 15:03:45 up  1:15,  6 users,  load average: 3.10, 3.56, 6.34
top - 15:03:51 up  1:15,  6 users,  load average: 2.93, 3.52, 6.31
top - 15:03:56 up  1:15,  6 users,  load average: 3.01, 3.52, 6.29
top - 15:04:01 up  1:15,  6 users,  load average: 2.93, 3.50, 6.27
top - 15:04:06 up  1:15,  6 users,  load average: 2.86, 3.47, 6.25
top - 15:04:11 up  1:15,  6 users,  load average: 2.79, 3.45, 6.22
top - 15:04:16 up  1:15,  6 users,  load average: 3.04, 3.49, 6.22
top - 15:04:21 up  1:16,  6 users,  load average: 3.20, 3.52, 6.22
top - 15:04:26 up  1:16,  6 users,  load average: 3.18, 3.51, 6.20
top - 15:04:31 up  1:16,  6 users,  load average: 3.25, 3.51, 6.19
top - 15:04:36 up  1:16,  6 users,  load average: 3.07, 3.47, 6.16
top - 15:04:41 up  1:16,  6 users,  load average: 3.14, 3.48, 6.15
top - 15:04:46 up  1:16,  6 users,  load average: 2.89, 3.42, 6.11
top - 15:04:51 up  1:16,  6 users,  load average: 2.66, 3.37, 6.08
top - 15:04:56 up  1:16,  6 users,  load average: 2.61, 3.34, 6.06
top - 15:05:01 up  1:16,  6 users,  load average: 2.56, 3.32, 6.04
top - 15:05:06 up  1:16,  6 users,  load average: 2.35, 3.27, 6.00
top - 15:05:11 up  1:16,  6 users,  load average: 2.16, 3.21, 5.97
top - 15:05:16 up  1:16,  6 users,  load average: 2.23, 3.21, 5.95
top - 15:05:21 up  1:17,  6 users,  load average: 2.13, 3.17, 5.93
top - 15:05:26 up  1:17,  6 users,  load average: 2.20, 3.17, 5.91
top - 15:05:31 up  1:17,  6 users,  load average: 2.35, 3.18, 5.90
top - 15:05:36 up  1:17,  6 users,  load average: 2.16, 3.13, 5.87
top - 15:05:41 up  1:17,  6 users,  load average: 1.98, 3.07, 5.84
top - 15:05:46 up  1:17,  6 users,  load average: 1.83, 3.02, 5.81
top - 15:05:51 up  1:17,  6 users,  load average: 1.92, 3.02, 5.79
top - 15:05:56 up  1:17,  6 users,  load average: 2.01, 3.02, 5.78
top - 15:06:01 up  1:17,  6 users,  load average: 1.84, 2.97, 5.74
top - 15:06:06 up  1:17,  6 users,  load average: 1.70, 2.92, 5.71
top - 15:06:11 up  1:17,  6 users,  load average: 1.72, 2.91, 5.69
top - 15:06:16 up  1:17,  6 users,  load average: 1.58, 2.86, 5.66
top - 15:06:21 up  1:18,  6 users,  load average: 1.46, 2.81, 5.63
top - 15:06:26 up  1:18,  6 users,  load average: 1.34, 2.76, 5.60
top - 15:06:31 up  1:18,  6 users,  load average: 1.31, 2.73, 5.58
top - 15:06:36 up  1:18,  6 users,  load average: 1.45, 2.74, 5.56
top - 15:06:41 up  1:18,  6 users,  load average: 1.49, 2.73, 5.54
top - 15:06:46 up  1:18,  6 users,  load average: 1.37, 2.68, 5.51
top - 15:06:51 up  1:18,  6 users,  load average: 1.50, 2.69, 5.50
top - 15:06:56 up  1:18,  6 users,  load average: 1.46, 2.66, 5.47
top - 15:07:01 up  1:18,  6 users,  load average: 1.50, 2.65, 5.46
top - 15:07:06 up  1:18,  6 users,  load average: 1.46, 2.62, 5.43
top - 15:07:11 up  1:18,  6 users,  load average: 1.43, 2.59, 5.41
top - 15:07:16 up  1:18,  6 users,  load average: 1.31, 2.55, 5.38
top - 15:07:21 up  1:19,  6 users,  load average: 1.29, 2.52, 5.35
top - 15:07:26 up  1:19,  6 users,  load average: 1.18, 2.48, 5.33
top - 15:07:31 up  1:19,  6 users,  load average: 1.17, 2.45, 5.30
top - 15:07:36 up  1:19,  6 users,  load average: 1.31, 2.46, 5.29
top - 15:07:41 up  1:19,  6 users,  load average: 1.29, 2.44, 5.27
top - 15:07:46 up  1:19,  6 users,  load average: 1.19, 2.40, 5.24
top - 15:07:52 up  1:19,  6 users,  load average: 1.09, 2.36, 5.21
top - 15:07:57 up  1:19,  6 users,  load average: 1.16, 2.35, 5.19
top - 15:08:02 up  1:19,  6 users,  load average: 1.07, 2.31, 5.16
top - 15:08:07 up  1:19,  6 users,  load average: 0.98, 2.27, 5.14
top - 15:08:12 up  1:19,  6 users,  load average: 1.39, 2.34, 5.14
top - 15:08:17 up  1:19,  6 users,  load average: 1.43, 2.33, 5.12
top - 15:08:22 up  1:20,  6 users,  load average: 1.56, 2.34, 5.11
top - 15:08:27 up  1:20,  6 users,  load average: 1.59, 2.34, 5.09
top - 15:08:32 up  1:20,  6 users,  load average: 1.55, 2.31, 5.07
top - 15:08:37 up  1:20,  6 users,  load average: 1.74, 2.34, 5.07
top - 15:08:42 up  1:20,  6 users,  load average: 1.60, 2.30, 5.04
top - 15:08:47 up  1:20,  6 users,  load average: 1.47, 2.26, 5.01
top - 15:08:52 up  1:20,  6 users,  load average: 1.60, 2.28, 5.00
top - 15:08:57 up  1:20,  6 users,  load average: 1.47, 2.24, 4.97
top - 15:09:02 up  1:20,  6 users,  load average: 1.35, 2.20, 4.95
top - 15:09:07 up  1:20,  6 users,  load average: 1.24, 2.16, 4.92
top - 15:09:12 up  1:20,  6 users,  load average: 1.46, 2.19, 4.91
top - 15:09:17 up  1:20,  6 users,  load average: 1.35, 2.16, 4.89
top - 15:09:22 up  1:21,  6 users,  load average: 1.32, 2.14, 4.87
top - 15:09:27 up  1:21,  6 users,  load average: 1.21, 2.10, 4.84
top - 15:09:32 up  1:21,  6 users,  load average: 1.20, 2.08, 4.82
top - 15:09:37 up  1:21,  6 users,  load average: 1.10, 2.05, 4.79
top - 15:09:42 up  1:21,  6 users,  load average: 1.01, 2.01, 4.77
top - 15:09:47 up  1:21,  6 users,  load average: 1.09, 2.01, 4.75
top - 15:09:52 up  1:21,  6 users,  load average: 1.16, 2.01, 4.74
top - 15:09:57 up  1:21,  6 users,  load average: 1.15, 2.00, 4.72
top - 15:10:02 up  1:21,  6 users,  load average: 1.14, 1.98, 4.70
top - 15:10:07 up  1:21,  6 users,  load average: 1.37, 2.01, 4.69
top - 15:10:12 up  1:21,  6 users,  load average: 1.26, 1.98, 4.67
top - 15:10:17 up  1:21,  6 users,  load average: 1.16, 1.95, 4.64
top - 15:10:22 up  1:22,  6 users,  load average: 1.14, 1.93, 4.62
top - 15:10:27 up  1:22,  6 users,  load average: 1.05, 1.90, 4.60
top - 15:10:32 up  1:22,  6 users,  load average: 1.05, 1.88, 4.58
top - 15:10:37 up  1:22,  6 users,  load average: 1.20, 1.90, 4.57
top - 15:10:42 up  1:22,  6 users,  load average: 1.11, 1.87, 4.54
top - 15:10:47 up  1:22,  6 users,  load average: 1.02, 1.84, 4.52
top - 15:10:52 up  1:22,  6 users,  load average: 1.02, 1.82, 4.50
top - 15:10:57 up  1:22,  6 users,  load average: 0.93, 1.79, 4.47
top - 15:11:02 up  1:22,  6 users,  load average: 0.86, 1.76, 4.45
top - 15:11:07 up  1:22,  6 users,  load average: 1.03, 1.78, 4.44
top - 15:11:12 up  1:22,  6 users,  load average: 0.95, 1.75, 4.42
top - 15:11:17 up  1:22,  6 users,  load average: 1.19, 1.79, 4.42
top - 15:11:22 up  1:23,  6 users,  load average: 1.10, 1.76, 4.39
top - 15:11:27 up  1:23,  6 users,  load average: 1.09, 1.75, 4.37
top - 15:11:32 up  1:23,  6 users,  load average: 1.00, 1.72, 4.35
top - 15:11:37 up  1:23,  6 users,  load average: 0.92, 1.69, 4.33
top - 15:11:42 up  1:23,  6 users,  load average: 0.85, 1.66, 4.30
top - 15:11:47 up  1:23,  6 users,  load average: 0.78, 1.63, 4.28
top - 15:11:52 up  1:23,  6 users,  load average: 0.80, 1.62, 4.26
top - 15:11:57 up  1:23,  6 users,  load average: 0.81, 1.61, 4.24
top - 15:12:02 up  1:23,  6 users,  load average: 0.91, 1.62, 4.23
top - 15:12:08 up  1:23,  6 users,  load average: 0.83, 1.59, 4.21
top - 15:12:13 up  1:23,  6 users,  load average: 0.85, 1.58, 4.19
top - 15:12:18 up  1:23,  6 users,  load average: 0.78, 1.56, 4.17
top - 15:12:23 up  1:24,  6 users,  load average: 0.88, 1.56, 4.16
top - 15:12:28 up  1:24,  6 users,  load average: 1.21, 1.62, 4.16
top - 15:12:33 up  1:24,  6 users,  load average: 1.43, 1.66, 4.16
top - 15:12:38 up  1:24,  6 users,  load average: 1.40, 1.65, 4.14
top - 15:12:43 up  1:24,  6 users,  load average: 1.28, 1.62, 4.12
top - 15:12:48 up  1:24,  6 users,  load average: 1.42, 1.64, 4.11
top - 15:12:53 up  1:24,  6 users,  load average: 1.39, 1.63, 4.10
top - 15:12:58 up  1:24,  6 users,  load average: 1.36, 1.62, 4.08
top - 15:13:03 up  1:24,  6 users,  load average: 1.33, 1.61, 4.06
top - 15:13:08 up  1:24,  6 users,  load average: 1.46, 1.63, 4.06
top - 15:13:13 up  1:24,  6 users,  load average: 1.34, 1.61, 4.04
top - 15:13:18 up  1:24,  6 users,  load average: 1.24, 1.58, 4.01
top - 15:13:23 up  1:25,  6 users,  load average: 1.22, 1.57, 4.00
top - 15:13:28 up  1:25,  6 users,  load average: 1.12, 1.54, 3.97
top - 15:13:33 up  1:25,  6 users,  load average: 1.03, 1.52, 3.95
top - 15:13:38 up  1:25,  6 users,  load average: 1.11, 1.52, 3.94
top - 15:13:43 up  1:25,  6 users,  load average: 1.02, 1.50, 3.92
top - 15:13:48 up  1:25,  6 users,  load average: 1.10, 1.51, 3.91
top - 15:13:53 up  1:25,  6 users,  load average: 1.09, 1.50, 3.89
top - 15:13:58 up  1:25,  6 users,  load average: 1.32, 1.54, 3.89
top - 15:14:03 up  1:25,  6 users,  load average: 1.30, 1.53, 3.88
top - 15:14:08 up  1:25,  6 users,  load average: 1.35, 1.54, 3.87
top - 15:14:13 up  1:25,  6 users,  load average: 1.48, 1.56, 3.86
top - 15:14:18 up  1:25,  6 users,  load average: 1.85, 1.64, 3.88
top - 15:14:23 up  1:26,  6 users,  load average: 3.54, 1.99, 3.98
top - 15:14:28 up  1:26,  6 users,  load average: 4.99, 2.35, 4.07
top - 15:14:33 up  1:26,  6 users,  load average: 5.71, 2.55, 4.13
top - 15:14:39 up  1:26,  6 users,  load average: 5.90, 2.64, 4.15
top - 15:14:44 up  1:26,  6 users,  load average: 6.31, 2.77, 4.18
top - 15:14:49 up  1:26,  6 users,  load average: 6.20, 2.81, 4.19
top - 15:14:54 up  1:26,  6 users,  load average: 6.26, 2.88, 4.20
top - 15:14:59 up  1:26,  6 users,  load average: 6.00, 2.88, 4.20
top - 15:15:04 up  1:26,  6 users,  load average: 6.16, 2.97, 4.22
top - 15:15:09 up  1:26,  6 users,  load average: 5.91, 2.97, 4.21
top - 15:15:14 up  1:26,  6 users,  load average: 5.84, 3.00, 4.21
top - 15:15:19 up  1:27,  6 users,  load average: 5.93, 3.07, 4.23
top - 15:15:24 up  1:27,  6 users,  load average: 5.69, 3.07, 4.22
top - 15:15:29 up  1:27,  6 users,  load average: 6.12, 3.20, 4.26
top - 15:15:34 up  1:27,  6 users,  load average: 6.35, 3.29, 4.28
top - 15:15:39 up  1:27,  6 users,  load average: 6.72, 3.42, 4.32
top - 15:15:44 up  1:27,  6 users,  load average: 7.62, 3.66, 4.39
top - 15:15:49 up  1:27,  6 users,  load average: 8.29, 3.87, 4.45
top - 15:15:55 up  1:27,  6 users,  load average: 10.19, 4.33, 4.60
top - 15:16:00 up  1:27,  6 users,  load average: 10.10, 4.41, 4.62
top - 15:16:05 up  1:27,  6 users,  load average: 10.25, 4.54, 4.66
top - 15:16:10 up  1:27,  6 users,  load average: 10.07, 4.59, 4.68
top - 15:16:15 up  1:27,  6 users,  load average: 9.82, 4.63, 4.69
top - 15:16:20 up  1:28,  6 users,  load average: 9.60, 4.67, 4.71
top - 15:16:25 up  1:28,  6 users,  load average: 9.63, 4.76, 4.73
top - 15:16:30 up  1:28,  6 users,  load average: 9.66, 4.85, 4.76
top - 15:16:35 up  1:28,  6 users,  load average: 9.52, 4.90, 4.78
top - 15:16:40 up  1:28,  6 users,  load average: 9.16, 4.90, 4.78
top - 15:16:45 up  1:28,  6 users,  load average: 9.39, 5.02, 4.82
top - 15:16:50 up  1:28,  6 users,  load average: 9.12, 5.04, 4.83
top - 15:16:56 up  1:28,  6 users,  load average: 8.71, 5.02, 4.82
top - 15:17:01 up  1:28,  6 users,  load average: 8.25, 4.98, 4.81
top - 15:17:06 up  1:28,  6 users,  load average: 7.99, 4.98, 4.81
top - 15:17:11 up  1:28,  6 users,  load average: 7.75, 4.98, 4.81
top - 15:17:16 up  1:28,  6 users,  load average: 7.77, 5.03, 4.83
top - 15:17:21 up  1:29,  6 users,  load average: 7.79, 5.08, 4.85
top - 15:17:26 up  1:29,  6 users,  load average: 7.88, 5.15, 4.87
top - 15:17:31 up  1:29,  6 users,  load average: 8.05, 5.23, 4.89
top - 15:17:36 up  1:29,  6 users,  load average: 8.29, 5.32, 4.93
top - 15:17:41 up  1:29,  6 users,  load average: 8.51, 5.42, 4.96
top - 15:17:48 up  1:29,  6 users,  load average: 9.26, 5.68, 5.05
top - 15:17:53 up  1:29,  6 users,  load average: 9.48, 5.79, 5.09
top - 15:17:59 up  1:29,  6 users,  load average: 9.12, 5.77, 5.09
top - 15:18:04 up  1:29,  6 users,  load average: 8.63, 5.73, 5.08
top - 15:18:09 up  1:29,  6 users,  load average: 8.18, 5.68, 5.06
top - 15:18:14 up  1:29,  6 users,  load average: 8.01, 5.69, 5.07
top - 15:18:19 up  1:30,  6 users,  load average: 7.77, 5.68, 5.07
top - 15:18:24 up  1:30,  6 users,  load average: 7.62, 5.68, 5.07
top - 15:18:29 up  1:30,  6 users,  load average: 7.17, 5.62, 5.06
top - 15:18:34 up  1:30,  6 users,  load average: 6.84, 5.58, 5.05
top - 15:18:39 up  1:30,  6 users,  load average: 6.45, 5.52, 5.03
top - 15:18:44 up  1:30,  6 users,  load average: 6.17, 5.47, 5.02
top - 15:18:49 up  1:30,  6 users,  load average: 6.00, 5.45, 5.01
top - 15:18:54 up  1:30,  6 users,  load average: 6.24, 5.51, 5.03
top - 15:18:59 up  1:30,  6 users,  load average: 5.82, 5.43, 5.01
top - 15:19:04 up  1:30,  6 users,  load average: 5.67, 5.41, 5.01
top - 15:19:09 up  1:30,  6 users,  load average: 5.38, 5.35, 4.99
top - 15:19:14 up  1:30,  6 users,  load average: 5.11, 5.30, 4.97
top - 15:19:19 up  1:31,  6 users,  load average: 4.70, 5.21, 4.95
top - 15:19:24 up  1:31,  6 users,  load average: 4.48, 5.15, 4.93
top - 15:19:29 up  1:31,  6 users,  load average: 4.20, 5.08, 4.91
top - 15:19:34 up  1:31,  6 users,  load average: 3.95, 5.02, 4.89
top - 15:19:39 up  1:31,  6 users,  load average: 3.87, 4.98, 4.88
top - 15:19:44 up  1:31,  6 users,  load average: 3.64, 4.92, 4.86
top - 15:19:49 up  1:31,  6 users,  load average: 3.35, 4.83, 4.83
top - 15:19:54 up  1:31,  6 users,  load average: 3.16, 4.77, 4.81
top - 15:19:59 up  1:31,  6 users,  load average: 3.31, 4.77, 4.81
top - 15:20:04 up  1:31,  6 users,  load average: 3.28, 4.74, 4.80
top - 15:20:09 up  1:31,  6 users,  load average: 3.26, 4.72, 4.79
top - 15:20:15 up  1:31,  6 users,  load average: 3.00, 4.64, 4.76
top - 15:20:20 up  1:32,  6 users,  load average: 2.76, 4.56, 4.74
top - 15:20:25 up  1:32,  6 users,  load average: 2.70, 4.52, 4.72
top - 15:20:30 up  1:32,  6 users,  load average: 2.56, 4.46, 4.70
top - 15:20:35 up  1:32,  6 users,  load average: 2.36, 4.38, 4.68
top - 15:20:40 up  1:32,  6 users,  load average: 2.25, 4.33, 4.66
top - 15:20:45 up  1:32,  6 users,  load average: 2.07, 4.25, 4.63
top - 15:20:50 up  1:32,  6 users,  load average: 1.90, 4.18, 4.61
top - 15:20:55 up  1:32,  6 users,  load average: 1.91, 4.15, 4.59
top - 15:21:00 up  1:32,  6 users,  load average: 1.76, 4.08, 4.57
top - 15:21:05 up  1:32,  6 users,  load average: 1.69, 4.03, 4.55
top - 15:21:10 up  1:32,  6 users,  load average: 1.56, 3.96, 4.52
top - 15:21:15 up  1:32,  6 users,  load average: 1.43, 3.89, 4.50
top - 15:21:20 up  1:33,  6 users,  load average: 1.48, 3.86, 4.49
top - 15:21:25 up  1:33,  6 users,  load average: 2.24, 3.98, 4.52
top - 15:21:30 up  1:33,  6 users,  load average: 2.14, 3.93, 4.50
top - 15:21:35 up  1:33,  6 users,  load average: 2.85, 4.05, 4.54
top - 15:21:40 up  1:33,  6 users,  load average: 2.62, 3.98, 4.51
top - 15:21:45 up  1:33,  6 users,  load average: 2.41, 3.91, 4.49
top - 15:21:50 up  1:33,  6 users,  load average: 2.22, 3.85, 4.46
top - 15:21:55 up  1:33,  6 users,  load average: 2.04, 3.78, 4.44
top - 15:22:00 up  1:33,  6 users,  load average: 1.88, 3.72, 4.41
top - 15:22:05 up  1:33,  6 users,  load average: 1.73, 3.66, 4.39
top - 15:22:10 up  1:33,  6 users,  load average: 1.59, 3.60, 4.37
top - 15:22:15 up  1:33,  6 users,  load average: 1.46, 3.54, 4.34
top - 15:22:20 up  1:34,  6 users,  load average: 1.34, 3.48, 4.32
top - 15:22:25 up  1:34,  6 users,  load average: 1.24, 3.42, 4.30
top - 15:22:30 up  1:34,  6 users,  load average: 1.14, 3.36, 4.27
top - 15:22:35 up  1:34,  6 users,  load average: 1.04, 3.31, 4.25
top - 15:22:40 up  1:34,  6 users,  load average: 0.96, 3.25, 4.23
top - 15:22:45 up  1:34,  6 users,  load average: 0.88, 3.20, 4.20
top - 15:22:50 up  1:34,  6 users,  load average: 0.81, 3.15, 4.18
top - 15:22:55 up  1:34,  6 users,  load average: 0.75, 3.09, 4.16
top - 15:23:00 up  1:34,  6 users,  load average: 0.69, 3.04, 4.14
top - 15:23:05 up  1:34,  6 users,  load average: 0.63, 2.99, 4.11
top - 15:23:10 up  1:34,  6 users,  load average: 0.58, 2.94, 4.09
top - 15:23:15 up  1:34,  6 users,  load average: 0.53, 2.89, 4.07
top - 15:23:20 up  1:35,  6 users,  load average: 0.49, 2.84, 4.05
top - 15:23:25 up  1:35,  6 users,  load average: 0.45, 2.80, 4.02
top - 15:23:30 up  1:35,  6 users,  load average: 0.42, 2.75, 4.00
top - 15:23:35 up  1:35,  6 users,  load average: 0.38, 2.70, 3.98
top - 15:23:40 up  1:35,  6 users,  load average: 0.35, 2.66, 3.96
top - 15:23:45 up  1:35,  6 users,  load average: 0.32, 2.61, 3.94
top - 15:23:50 up  1:35,  6 users,  load average: 0.30, 2.57, 3.92
top - 15:23:55 up  1:35,  6 users,  load average: 0.27, 2.53, 3.89
top - 15:24:00 up  1:35,  6 users,  load average: 0.25, 2.49, 3.87
top - 15:24:05 up  1:35,  6 users,  load average: 0.23, 2.44, 3.85
top - 15:24:10 up  1:35,  6 users,  load average: 0.21, 2.40, 3.83
top - 15:24:15 up  1:35,  6 users,  load average: 0.19, 2.36, 3.81
top - 15:24:20 up  1:36,  6 users,  load average: 0.18, 2.32, 3.79
top - 15:24:25 up  1:36,  6 users,  load average: 0.16, 2.28, 3.77
top - 15:24:30 up  1:36,  6 users,  load average: 0.15, 2.25, 3.75
top - 15:24:35 up  1:36,  6 users,  load average: 0.14, 2.21, 3.73
top - 15:24:40 up  1:36,  6 users,  load average: 0.13, 2.17, 3.71
top - 15:24:45 up  1:36,  6 users,  load average: 0.12, 2.14, 3.69
top - 15:24:50 up  1:36,  6 users,  load average: 0.11, 2.10, 3.67
top - 15:24:55 up  1:36,  6 users,  load average: 0.10, 2.06, 3.65
top - 15:25:00 up  1:36,  6 users,  load average: 0.09, 2.03, 3.63
top - 15:25:05 up  1:36,  6 users,  load average: 0.08, 2.00, 3.61
top - 15:25:11 up  1:36,  6 users,  load average: 0.08, 1.96, 3.59
top - 15:25:16 up  1:36,  6 users,  load average: 0.07, 1.93, 3.57
top - 15:25:21 up  1:37,  6 users,  load average: 0.06, 1.90, 3.55
top - 15:25:26 up  1:37,  6 users,  load average: 0.06, 1.87, 3.53
top - 15:25:31 up  1:37,  6 users,  load average: 0.05, 1.83, 3.51
top - 15:25:36 up  1:37,  6 users,  load average: 0.05, 1.80, 3.49
top - 15:25:41 up  1:37,  6 users,  load average: 0.05, 1.77, 3.47
top - 15:25:46 up  1:37,  6 users,  load average: 0.04, 1.74, 3.45
top - 15:25:51 up  1:37,  6 users,  load average: 0.04, 1.71, 3.44
top - 15:25:56 up  1:37,  6 users,  load average: 0.03, 1.69, 3.42


* Re: [BUG] local_softirq_pending storm
  2007-05-10 11:53   ` Anant Nitya
@ 2007-05-10 21:58     ` Thomas Gleixner
  2007-05-17  6:41       ` Anant Nitya
  0 siblings, 1 reply; 27+ messages in thread
From: Thomas Gleixner @ 2007-05-10 21:58 UTC (permalink / raw)
  To: Anant Nitya; +Cc: linux-kernel

On Thu, 2007-05-10 at 17:23 +0530, Anant Nitya wrote:
> On 5/9/07, Thomas Gleixner <tglx@linutronix.de> wrote:
> > On Wed, 2007-05-09 at 19:42 +0530, Anant Nitya wrote:
> > > Hi,
> > > Ever since I upgrade to 2.6.21/.1, system log is filled with following
> > > messages if I enable CONFIG_NO_HZ=y, going through archives it seems ingo
> > > sometime back posted some patch and now it is upstream, but its not helping
> > > here. If I disable NOHZ by kernel command line nohz=off this problem
> > > disappears. This system is P4/2.40GHz/HT with SMP/SMT on in kernel config.
> > > One more thing that I noticed is this problem only arises while using X or
> > > network otherwise plain command line with no network access don't trigger
> > > this with nohz=on.
> >
> > Is this independent of the load on the system ? i.e. : What happens if
> > you only use the console and run a kernel compile with -j4 ?
> Yep, it seems independent of the load on the system. To test, I compiled a
> kernel with make -j8 and, to add some more load at the same time, used
> amavisd to clean around 2000 spam/virus-infected mails while listening to a
> few radio streams over the net, all in console-only login mode. Even then
> there was not a single NOHZ: local_softirq_pending message in the log. Once
> the kernel was compiled, amavisd had finished its job and the load dropped
> back to around 0.5/2.0, I started X, and within a few seconds the log
> started filling with local_softirq_pending messages. (Sorry, I didn't apply
> the ratelimit patch, since I wanted to test whether it is high load on the
> system that causes this or something else.) It seems even network operation
> is not causing this. It seems either X is doing something terribly wrong or
> the kernel is getting hosed by X.
> If some more information is needed, please feel free to ask.

Ok, that's consistent with earlier reports. The problem surfaces when
one of the SMT-"cpus" goes idle. The problem goes away when you disable
hyperthreading.
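
A quick way to test that correlation without a trip into the BIOS -- assuming
CONFIG_HOTPLUG_CPU is enabled and, hypothetically, that cpu1 is the HT sibling
of cpu0 (check /sys/devices/system/cpu/ for the real topology) -- is to take
the sibling offline through sysfs. A minimal sketch, meant only as a rough
stand-in for disabling HT, not as anything the thread itself tested:

/*
 * offline_sibling.c - take a logical CPU offline via the CPU-hotplug
 * sysfs interface, as a rough stand-in for disabling HT in the BIOS.
 *
 * Assumptions: CONFIG_HOTPLUG_CPU is enabled and cpu1 is the HT sibling
 * on this box; must be run as root.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/devices/system/cpu/cpu1/online";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	/* "0" offlines the CPU; the write is pushed out on fclose(). */
	fputs("0\n", f);
	if (fclose(f) == EOF) {
		perror(path);
		return 1;
	}
	return 0;
}

Writing "1" back to the same file brings the sibling online again.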

When you apply the ratelimit patch, does the softlockup problem
persist ?

	tglx




* Re: [BUG] local_softirq_pending storm
  2007-05-10 21:58     ` Thomas Gleixner
@ 2007-05-17  6:41       ` Anant Nitya
  2007-05-18 13:01         ` Thomas Gleixner
  0 siblings, 1 reply; 27+ messages in thread
From: Anant Nitya @ 2007-05-17  6:41 UTC (permalink / raw)
  To: tglx; +Cc: linux-kernel

On Friday 11 May 2007 03:28:46 Thomas Gleixner wrote:

> Ok, that's consistent with earlier reports. The problem surfaces when
> one of the SMT-"cpus" goes idle. The problem goes away when you disable
> hyperthreading.
Yes, with HT disabled in the BIOS there are no local_softirq_pending messages.
BTW, why does this problem show up only with X?

> When you apply the ratelimit patch, does the softlockup problem
> persist ?
>
Yes, though the softlockup is rare and is mostly hit when the system is under
high load. Apart from that, I am also getting the following messages
consistently across multiple boot cycles with NOHZ=y and the ratelimit patch
applied.

May 15 11:51:22 rudra kernel: [ 2594.341068] Clocksource tsc unstable (delta = 
28111260302 ns)
May 15 11:51:22 rudra kernel: [ 2594.343194] Time: acpi_pm clocksource has 
been installed.


* Re: [BUG] local_softirq_pending storm
  2007-05-17  6:41       ` Anant Nitya
@ 2007-05-18 13:01         ` Thomas Gleixner
  2007-05-19  4:30           ` Anant Nitya
  2007-05-19  9:55           ` Anant Nitya
  0 siblings, 2 replies; 27+ messages in thread
From: Thomas Gleixner @ 2007-05-18 13:01 UTC (permalink / raw)
  To: Anant Nitya; +Cc: linux-kernel

On Thu, 2007-05-17 at 12:11 +0530, Anant Nitya wrote:
> On Friday 11 May 2007 03:28:46 Thomas Gleixner wrote:
> 
> > Ok, that's consistent with earlier reports. The problem surfaces when
> > one of the SMT-"cpus" goes idle. The problem goes away when you disable
> > hyperthreading.
> Yes with HT disabled in BIOS there is no local_softirq_pending messages. BTW 
> why does this problem persist only with X ? 

No idea. I uploaded a debug patch against 2.6.22-rc1 to

http://www.tglx.de/private/tglx/2.6.22-rc1-hrt-debug.patch

Can you give it a try and report the output ?

> > When you apply the ratelimit patch, does the softlockup problem
> > persist ?
> >
> Yes, though softlockup is rare and mostly hit when system is under high load. 
> Apart of that I am also getting following messages consistently across 
> multiple boot cycles with NOHZ=y and ratelimit patch applied.
> 
> May 15 11:51:22 rudra kernel: [ 2594.341068] Clocksource tsc unstable (delta = 
> 28111260302 ns)
> May 15 11:51:22 rudra kernel: [ 2594.343194] Time: acpi_pm clocksource has 
> been installed.

That's informational. The TSC has been detected to be unstable and is replaced
by the pm timer. Nothing to worry about. It happens with NOHZ=n as well,
right ?
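
To double-check which clocksource ended up in use after such a switch, the
sysfs files under /sys/devices/system/clocksource/clocksource0/ can be read
directly. A small sketch -- the paths are an assumption based on the generic
timekeeping code of that era, and the clocksource= boot parameter should allow
pinning one explicitly if the automatic choice is suspect:

/*
 * show_clocksource.c - print the current and available clocksources.
 *
 * The sysfs paths below are an assumption based on the generic
 * timekeeping code of 2.6.18+ kernels; adjust if your tree differs.
 */
#include <stdio.h>

static void dump(const char *label, const char *path)
{
	char buf[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%-10s %s", label, buf);	/* sysfs line already ends in '\n' */
	fclose(f);
}

int main(void)
{
	dump("current:",
	     "/sys/devices/system/clocksource/clocksource0/current_clocksource");
	dump("available:",
	     "/sys/devices/system/clocksource/clocksource0/available_clocksource");
	return 0;
}

On the setup described here, "current:" should read acpi_pm once the unstable
TSC has been replaced.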

	tglx




* Re: [BUG] local_softirq_pending storm
  2007-05-18 13:01         ` Thomas Gleixner
@ 2007-05-19  4:30           ` Anant Nitya
  2007-05-19  9:55           ` Anant Nitya
  1 sibling, 0 replies; 27+ messages in thread
From: Anant Nitya @ 2007-05-19  4:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner

On Friday 18 May 2007 18:31:17 Thomas Gleixner wrote:
> On Thu, 2007-05-17 at 12:11 +0530, Anant Nitya wrote:
> > On Friday 11 May 2007 03:28:46 Thomas Gleixner wrote:
> > > Ok, that's consistent with earlier reports. The problem surfaces when
> > > one of the SMT-"cpus" goes idle. The problem goes away when you disable
> > > hyperthreading.
> >
> > Yes with HT disabled in BIOS there is no local_softirq_pending messages.
> > BTW why does this problem persist only with X ?
>
> No idea. I uploaded a debug patch against 2.6.22-rc1 to
>
> http://www.tglx.de/private/tglx/2.6.22-rc1-hrt-debug.patch
>
> Can you give it a try and report the output ?

I am compiling a kernel with the above patch applied and will post the results.

> > > When you apply the ratelimit patch, does the softlockup problem
> > > persist ?
> >
> > Yes, though softlockup is rare and mostly hit when system is under high
> > load. Apart of that I am also getting following messages consistently
> > across multiple boot cycles with NOHZ=y and ratelimit patch applied.
> >
> > May 15 11:51:22 rudra kernel: [ 2594.341068] Clocksource tsc unstable
> > (delta = 28111260302 ns)
> > May 15 11:51:22 rudra kernel: [ 2594.343194] Time: acpi_pm clocksource
> > has been installed.
>
> That's informal. The TSC is detected to be unstable and replaced by the
> pm timer. Nothing to worry about. It happens with NOHZ=n as well,
> right ?
Nope, so far it hasn't appeared with NOHZ disabled on the command line across
many boot cycles; it always appears when NOHZ is enabled.

Regards
Ananitya




* Re: [BUG] local_softirq_pending storm
  2007-05-18 13:01         ` Thomas Gleixner
  2007-05-19  4:30           ` Anant Nitya
@ 2007-05-19  9:55           ` Anant Nitya
  2007-05-19 19:11             ` Thomas Gleixner
  1 sibling, 1 reply; 27+ messages in thread
From: Anant Nitya @ 2007-05-19  9:55 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner

On Friday 18 May 2007 18:31:17 Thomas Gleixner wrote:
> On Thu, 2007-05-17 at 12:11 +0530, Anant Nitya wrote:
> > On Friday 11 May 2007 03:28:46 Thomas Gleixner wrote:
> > > Ok, that's consistent with earlier reports. The problem surfaces when
> > > one of the SMT-"cpus" goes idle. The problem goes away when you disable
> > > hyperthreading.
> >
> > Yes with HT disabled in BIOS there is no local_softirq_pending messages.
> > BTW why does this problem persist only with X ?
>
> No idea. I uploaded a debug patch against 2.6.22-rc1 to
>
> http://www.tglx.de/private/tglx/2.6.22-rc1-hrt-debug.patch
>
> Can you give it a try and report the output ?
Hi
Here it goes 
[  159.646196] NOHZ softirq pending 22 on CPU 0
[  159.646207] .... task state: 1 00000000
[  159.646217] .... last caller: __tasklet_schedule
[  159.646997] NOHZ softirq pending 22 on CPU 0
[  159.647006] .... task state: 1 00000000
[  159.647013] .... last caller: __tasklet_schedule
[  159.647398] NOHZ softirq pending 22 on CPU 0
[  159.647405] .... task state: 1 00000000
[  159.647412] .... last caller: __tasklet_schedule
[  159.647768] NOHZ softirq pending 22 on CPU 0
[  159.647775] .... task state: 1 00000000
[  159.647781] .... last caller: __tasklet_schedule
[  166.285664] NOHZ softirq pending 22 on CPU 0
[  166.285675] .... task state: 1 00000000
[  166.285687] .... last caller: raise_softirq
[  166.286321] NOHZ softirq pending 22 on CPU 0
[  166.286329] .... task state: 1 00000000
[  166.286337] .... last caller: raise_softirq
[  166.286715] NOHZ softirq pending 22 on CPU 0
[  166.286722] .... task state: 1 00000000
[  166.286729] .... last caller: raise_softirq
[  166.287085] NOHZ softirq pending 22 on CPU 0
[  166.287092] .... task state: 1 00000000
[  166.287098] .... last caller: raise_softirq
[  171.512134] NOHZ softirq pending 22 on CPU 0
[  171.512144] .... task state: 1 00000000
[  171.512154] .... last caller: __tasklet_schedule
[  171.512712] NOHZ softirq pending 22 on CPU 0
[  171.512720] .... task state: 1 00000000
[  171.512727] .... last caller: __tasklet_schedule

Regards
Ananitya



* Re: [BUG] local_softirq_pending storm
  2007-05-19  9:55           ` Anant Nitya
@ 2007-05-19 19:11             ` Thomas Gleixner
  2007-05-19 21:23               ` Anant Nitya
  2007-05-20 10:18               ` Heiko Carstens
  0 siblings, 2 replies; 27+ messages in thread
From: Thomas Gleixner @ 2007-05-19 19:11 UTC (permalink / raw)
  To: Anant Nitya; +Cc: linux-kernel, Ingo Molnar

On Sat, 2007-05-19 at 15:25 +0530, Anant Nitya wrote:
> > No idea. I uploaded a debug patch against 2.6.22-rc1 to
> >
> > http://www.tglx.de/private/tglx/2.6.22-rc1-hrt-debug.patch
> >
> > Can you give it a try and report the output ?
> Hi
> Here it goes 
> [  159.646196] NOHZ softirq pending 22 on CPU 0
> [  159.646207] .... task state: 1 00000000

1 == TASK_INTERRUPTIBLE, so we know that ksoftirqd was not woken up. At
least it is not a scheduler problem.
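
For reference, both numbers in the debug output are simple encodings: the task
state is ksoftirqd's scheduler state (0 = TASK_RUNNING, 1 = TASK_INTERRUPTIBLE,
2 = TASK_UNINTERRUPTIBLE), and the pending value is a hex bitmask over the
softirq numbers. A minimal user-space decoder, assuming the 2.6.21-era ordering
from include/linux/interrupt.h (HI=0, TIMER=1, NET_TX=2, NET_RX=3, BLOCK=4,
TASKLET=5, SCHED=6; later kernels append more entries):

/*
 * decode_softirq.c - decode the hex mask from
 * "NOHZ: local_softirq_pending XX" / "NOHZ softirq pending XX".
 *
 * The bit names are an assumption based on the 2.6.21-era enum in
 * include/linux/interrupt.h; adjust for other kernel versions.
 *
 * Build and run:  cc -o decode_softirq decode_softirq.c && ./decode_softirq 22
 */
#include <stdio.h>
#include <stdlib.h>

static const char * const names[] = {
	"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "TASKLET", "SCHED",
};

int main(int argc, char **argv)
{
	unsigned long mask;
	unsigned int bit;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <hex pending mask>\n", argv[0]);
		return 1;
	}

	mask = strtoul(argv[1], NULL, 16);
	printf("pending %02lx =", mask);

	/* Walk the mask bit by bit and print the softirq name for each set bit. */
	for (bit = 0; mask; bit++, mask >>= 1) {
		if (!(mask & 1))
			continue;
		if (bit < sizeof(names) / sizeof(names[0]))
			printf(" %s", names[bit]);
		else
			printf(" bit%u", bit);
	}
	printf("\n");
	return 0;
}

With that assumption, 22 decodes to TIMER and TASKLET, which matches the
__tasklet_schedule/raise_softirq callers in the debug output above, and the
"local_softirq_pending 08" reported later in this thread decodes to NET_RX.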

I'll work out a more complex debug patch and pester you to test it once I'm
done.

	tglx





* Re: [BUG] local_softirq_pending storm
  2007-05-19 19:11             ` Thomas Gleixner
@ 2007-05-19 21:23               ` Anant Nitya
  2007-05-20 21:43                 ` Thomas Gleixner
  2007-05-20 10:18               ` Heiko Carstens
  1 sibling, 1 reply; 27+ messages in thread
From: Anant Nitya @ 2007-05-19 21:23 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar

On Sunday 20 May 2007 00:41:08 Thomas Gleixner wrote:
> On Sat, 2007-05-19 at 15:25 +0530, Anant Nitya wrote:
> > > No idea. I uploaded a debug patch against 2.6.22-rc1 to
> > >
> > > http://www.tglx.de/private/tglx/2.6.22-rc1-hrt-debug.patch
> > >
> > > Can you give it a try and report the output ?
> >
> > Hi
> > Here it goes
> > [  159.646196] NOHZ softirq pending 22 on CPU 0
> > [  159.646207] .... task state: 1 00000000
>
> 1 == TASK_INTERRUPTIBLE, so we know that ksoftirqd was not woken up. At
> least it is not a scheduler problem.
>
> I work out a more complex debug patch and pester you to test once I'm
> done.
No problem :)

Regards
Ananitya

>
> 	tglx



-- 
Out of many thousands, one may endeavor for perfection, and of
those who have achieved perfection, hardly one knows Me in truth.
				-- Gita Sutra Of Mysticism


* Re: [BUG] local_softirq_pending storm
  2007-05-19 19:11             ` Thomas Gleixner
  2007-05-19 21:23               ` Anant Nitya
@ 2007-05-20 10:18               ` Heiko Carstens
  2007-05-20 13:52                 ` Thomas Gleixner
  1 sibling, 1 reply; 27+ messages in thread
From: Heiko Carstens @ 2007-05-20 10:18 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Anant Nitya, linux-kernel, Ingo Molnar

On Sat, May 19, 2007 at 09:11:08PM +0200, Thomas Gleixner wrote:
> On Sat, 2007-05-19 at 15:25 +0530, Anant Nitya wrote:
> > > No idea. I uploaded a debug patch against 2.6.22-rc1 to
> > >
> > > http://www.tglx.de/private/tglx/2.6.22-rc1-hrt-debug.patch
> > >
> > > Can you give it a try and report the output ?
> > Hi
> > Here it goes 
> > [  159.646196] NOHZ softirq pending 22 on CPU 0
> > [  159.646207] .... task state: 1 00000000
> 
> 1 == TASK_INTERRUPTIBLE, so we know that ksoftirqd was not woken up. At
> least it is not a scheduler problem.
> 
> I work out a more complex debug patch and pester you to test once I'm
> done.

I also get tons of 'NOHZ: local_softirq_pending 08' messages that disappear
with nohz=off. But I'm still running 2.6.21. Are there any patches that should
fix this?
Machine is a Lenovo T60p:

i686 Intel(R) Core(TM)2 CPU T7600  @ 2.33GHz GenuineIntel GNU/Linux

Besides that, I get a lot of clock skew warnings on 'make headers_check', but
these are unrelated to nohz:

  CHECK   include/asm/dasd.h
make[2]: warning:  Clock skew detected.  Your build may be incomplete.
make[2]: Warning: File `/dev/null' has modification time 5.1e+03 s in the future


* Re: [BUG] local_softirq_pending storm
  2007-05-20 10:18               ` Heiko Carstens
@ 2007-05-20 13:52                 ` Thomas Gleixner
  2007-05-20 18:36                   ` Heiko Carstens
  0 siblings, 1 reply; 27+ messages in thread
From: Thomas Gleixner @ 2007-05-20 13:52 UTC (permalink / raw)
  To: Heiko Carstens; +Cc: Anant Nitya, linux-kernel, Ingo Molnar

On Sun, 2007-05-20 at 12:18 +0200, Heiko Carstens wrote:
> > I work out a more complex debug patch and pester you to test once I'm
> > done.
> 
> I've also tons of 'NOHZ: local_softirq_pending 08' that disappear with
> nohz=off. But I'm still running 2.6.21. Are there any patches that should
> fix this?
> Machine is a Lenovo T60p:
> 
> i686 Intel(R) Core(TM)2 CPU T7600  @ 2.33GHz GenuineIntel GNU/Linux

Hmm, that's a different problem than the 0x22 one, which shows up on
hyperthreading-enabled P4 systems. Are you using plip ?

> Besides that I get a lot of clock skews on 'make headers_check', but
> these are unrelated to nohz:
> 
>   CHECK   include/asm/dasd.h
> make[2]: warning:  Clock skew detected.  Your build may be incomplete.
> make[2]: Warning: File `/dev/null' has modification time 5.1e+03 s in the future

Strange.

	tglx




* Re: [BUG] local_softirq_pending storm
  2007-05-20 13:52                 ` Thomas Gleixner
@ 2007-05-20 18:36                   ` Heiko Carstens
       [not found]                     ` <20070520191800.GA14225@osiris.ibm.com>
  0 siblings, 1 reply; 27+ messages in thread
From: Heiko Carstens @ 2007-05-20 18:36 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Anant Nitya, linux-kernel, Ingo Molnar

On Sun, May 20, 2007 at 03:52:21PM +0200, Thomas Gleixner wrote:
> On Sun, 2007-05-20 at 12:18 +0200, Heiko Carstens wrote:
> > > I work out a more complex debug patch and pester you to test once I'm
> > > done.
> > 
> > I've also tons of 'NOHZ: local_softirq_pending 08' that disappear with
> > nohz=off. But I'm still running 2.6.21. Are there any patches that should
> > fix this?
> > Machine is a Lenovo T60p:
> > 
> > i686 Intel(R) Core(TM)2 CPU T7600  @ 2.33GHz GenuineIntel GNU/Linux
> 
> Hmm, that's a different problem than the 0x22 which shows up on
> hyperthreading enabled P4 systems. Are you using plip ?

No; it turned out after all that this is caused by an IBM-internal module.
Just ignore me, sorry for the noise :)


* Re: [BUG] local_softirq_pending storm
  2007-05-19 21:23               ` Anant Nitya
@ 2007-05-20 21:43                 ` Thomas Gleixner
  2007-05-21  6:22                   ` Anant Nitya
  2007-05-22 19:03                   ` Michal Piotrowski
  0 siblings, 2 replies; 27+ messages in thread
From: Thomas Gleixner @ 2007-05-20 21:43 UTC (permalink / raw)
  To: Anant Nitya; +Cc: linux-kernel, Ingo Molnar, Michal Piotrowski

On Sun, 2007-05-20 at 02:53 +0530, Anant Nitya wrote:
> > 1 == TASK_INTERRUPTIBLE, so we know that ksoftirqd was not woken up. At
> > least it is not a scheduler problem.
> >
> > I work out a more complex debug patch and pester you to test once I'm
> > done.
> No problem :)

You asked for it :)

Please patch 2.6.22-rc2 with

http://tglx.de/projects/hrtimers/2.6.22-rc2/patch-2.6.22-rc2-hrt2.patch
and
http://www.tglx.de/private/tglx/ht-debug/tracer.diff

Compile it with the config

http://www.tglx.de/private/tglx/ht-debug/config.debug

You should find something like:

(         swapper-0    |#0): new 67173 us user-latency.

along with the familiar "NOHZ ......" message in your log file.

Once that happened please do:

$ cat /proc/latency_trace >trace.txt

compress it and send it to me along with the full dmesg output or put
both up to some place, where I can download it.

Michal,

IIRC you encountered the same P4/HT related wreckage. Can you do the
same ?

Thanks,

	tglx



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-20 21:43                 ` Thomas Gleixner
@ 2007-05-21  6:22                   ` Anant Nitya
  2007-05-21  9:01                     ` Thomas Gleixner
  2007-05-22 19:03                   ` Michal Piotrowski
  1 sibling, 1 reply; 27+ messages in thread
From: Anant Nitya @ 2007-05-21  6:22 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar, Michal Piotrowski

On Monday 21 May 2007 03:13:08 Thomas Gleixner wrote:
> On Sun, 2007-05-20 at 02:53 +0530, Anant Nitya wrote:
> > > 1 == TASK_INTERRUPTIBLE, so we know that ksoftirqd was not woken up. At
> > > least it is not a scheduler problem.
> > >
> > > I work out a more complex debug patch and pester you to test once I'm
> > > done.
> >
> > No problem :)
>
> You asked for it :)
>
> Please patch 2.6.22-rc2 with
>
> http://tglx.de/projects/hrtimers/2.6.22-rc2/patch-2.6.22-rc2-hrt2.patch
> and
> http://www.tglx.de/private/tglx/ht-debug/tracer.diff
>
> Compile it with the config
>
> http://www.tglx.de/private/tglx/ht-debug/config.debug
>
> You should find something like:
>
> (         swapper-0    |#0): new 67173 us user-latency.
>
> along with the familiar "NOHZ ......" message in your log file.
>
> Once that happened please do:
>
> $ cat /proc/latency_trace >trace.txt
>
> compress it and send it to me along with the full dmesg output or put
> both up to some place, where I can download it.
Hi Thomas

Here are the links...
http://cybertek.info/taitai/dmesg-2.6.22.rc2.hrt2-1.SMP.DN.LINUX.txt
http://cybertek.info/taitai/trace.txt.bz2

Regards
Ananitya

>
> Michal,
>
> IIRC you encountered the same P4/HT related wreckage. Can you do the
> same ?
>
> Thanks,
>
> 	tglx



-- 
Out of many thousands, one may endeavor for perfection, and of
those who have achieved perfection, hardly one knows Me in truth.
				-- Gita Sutra Of Mysticism

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-21  6:22                   ` Anant Nitya
@ 2007-05-21  9:01                     ` Thomas Gleixner
  2007-05-21 10:02                       ` Anant Nitya
  2007-05-21 18:45                       ` Anant Nitya
  0 siblings, 2 replies; 27+ messages in thread
From: Thomas Gleixner @ 2007-05-21  9:01 UTC (permalink / raw)
  To: Anant Nitya; +Cc: linux-kernel, Ingo Molnar, Michal Piotrowski

On Mon, 2007-05-21 at 11:52 +0530, Anant Nitya wrote:
> > You should find something like:
> >
> > (         swapper-0    |#0): new 67173 us user-latency.
> >
> > along with the familiar "NOHZ ......" message in your log file.
> >
> > Once that happened please do:
> >
> > $ cat /proc/latency_trace >trace.txt
> >
> > compress it and send it to me along with the full dmesg output or put
> > both up to some place, where I can download it.
> Hi Thomas
> 
> Here are the links...
> http://cybertek.info/taitai/dmesg-2.6.22.rc2.hrt2-1.SMP.DN.LINUX.txt
> http://cybertek.info/taitai/trace.txt.bz2

Thanks. Sorry, I need more info. I uploaded a new tracer.diff to 

http://www.tglx.de/private/tglx/ht-debug/tracer.diff

Can you please revert the first one and retest with the new one ?

Thanks,

	tglx




^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-21  9:01                     ` Thomas Gleixner
@ 2007-05-21 10:02                       ` Anant Nitya
  2007-05-21 18:45                       ` Anant Nitya
  1 sibling, 0 replies; 27+ messages in thread
From: Anant Nitya @ 2007-05-21 10:02 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar, Michal Piotrowski

On Monday 21 May 2007 14:31:57 Thomas Gleixner wrote:
> On Mon, 2007-05-21 at 11:52 +0530, Anant Nitya wrote:
> > > You should find something like:
> > >
> > > (         swapper-0    |#0): new 67173 us user-latency.
> > >
> > > along with the familiar "NOHZ ......" message in your log file.
> > >
> > > Once that happened please do:
> > >
> > > $ cat /proc/latency_trace >trace.txt
> > >
> > > compress it and send it to me along with the full dmesg output or put
> > > both up to some place, where I can download it.
> >
> > Hi Thomas
> >
> > Here are the links...
> > http://cybertek.info/taitai/dmesg-2.6.22.rc2.hrt2-1.SMP.DN.LINUX.txt
> > http://cybertek.info/taitai/trace.txt.bz2
>
> Thanks. Sorry, I need more info. I uploaded a new tracer.diff to
>
> http://www.tglx.de/private/tglx/ht-debug/tracer.diff
>
> Can you please revert the first one and retest with the new one ?

Okay sure, compiling now.

Regards
Ananitya

>
> Thanks,
>
> 	tglx



-- 
Out of many thousands, one may endeavor for perfection, and of
those who have achieved perfection, hardly one knows Me in truth.
				-- Gita Sutra Of Mysticism

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-21  9:01                     ` Thomas Gleixner
  2007-05-21 10:02                       ` Anant Nitya
@ 2007-05-21 18:45                       ` Anant Nitya
  1 sibling, 0 replies; 27+ messages in thread
From: Anant Nitya @ 2007-05-21 18:45 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar, Michal Piotrowski

On Monday 21 May 2007 14:31:57 Thomas Gleixner wrote:
> On Mon, 2007-05-21 at 11:52 +0530, Anant Nitya wrote:
> > > You should find something like:
> > >
> > > (         swapper-0    |#0): new 67173 us user-latency.
> > >
> > > along with the familiar "NOHZ ......" message in your log file.
> > >
> > > Once that happened please do:
> > >
> > > $ cat /proc/latency_trace >trace.txt
> > >
> > > compress it and send it to me along with the full dmesg output or put
> > > both up to some place, where I can download it.
> >
> > Hi Thomas
> >
> > Here are the links...
> > http://cybertek.info/taitai/dmesg-2.6.22.rc2.hrt2-1.SMP.DN.LINUX.txt
> > http://cybertek.info/taitai/trace.txt.bz2
>
> Thanks. Sorry, I need more info. I uploaded a new tracer.diff to
>
> http://www.tglx.de/private/tglx/ht-debug/tracer.diff
>
> Can you please revert the first one and retest with the new one ?
>
> Thanks,
Sorry for the delay, here is the link for the output from the new tracer.
http://cybertek.info/taitai/trace-new.txt.bz2

Regards
Ananitya

>
> 	tglx



-- 
Out of many thousands, one may endeavor for perfection, and of
those who have achieved perfection, hardly one knows Me in truth.
				-- Gita Sutra Of Mysticism

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-20 21:43                 ` Thomas Gleixner
  2007-05-21  6:22                   ` Anant Nitya
@ 2007-05-22 19:03                   ` Michal Piotrowski
  2007-05-22 19:59                     ` Michal Piotrowski
  2007-05-22 20:10                     ` Ingo Molnar
  1 sibling, 2 replies; 27+ messages in thread
From: Michal Piotrowski @ 2007-05-22 19:03 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Anant Nitya, linux-kernel, Ingo Molnar

Hi Thomas,

On 20/05/07, Thomas Gleixner <tglx@linutronix.de> wrote:
> On Sun, 2007-05-20 at 02:53 +0530, Anant Nitya wrote:
> > > 1 == TASK_INTERRUPTIBLE, so we know that ksoftirqd was not woken up. At
> > > least it is not a scheduler problem.
> > >
> > > I work out a more complex debug patch and pester you to test once I'm
> > > done.
> > No problem :)
>
> You asked for it :)
>
> Please patch 2.6.22-rc2 with
>
> http://tglx.de/projects/hrtimers/2.6.22-rc2/patch-2.6.22-rc2-hrt2.patch
> and
> http://www.tglx.de/private/tglx/ht-debug/tracer.diff
>
> Compile it with the config
>
> http://www.tglx.de/private/tglx/ht-debug/config.debug
>
> You should find something like:
>
> (         swapper-0    |#0): new 67173 us user-latency.
>
> along with the familiar "NOHZ ......" message in your log file.
>
> Once that happened please do:
>
> $ cat /proc/latency_trace >trace.txt
>
> compress it and send it to me along with the full dmesg output or put
> both up to some place, where I can download it.
>
> Michal,
>
> IIRC you encountered the same P4/HT related wreckage. Can you do the
> same ?

Good news - I can't reproduce this bug. It's time to remove
"Subject    : 2.6.21-git4 BUG: soft lockup detected on CPU#1!
References : http://lkml.org/lkml/2007/5/2/511
Submitter  : Michal Piotrowski <michal.k.k.piotrowski@gmail.com>
Handled-By : Thomas Gleixner <tglx@linutronix.de>
Status     : problem is being debugged"
from KR.

Bad news - I hit a bug in 2.6.22-rc2-hrt3. Bug symptoms:
- X hangs (keyboard, mouse, sound etc.)
- only Magic SysRq works

http://www.stardust.webpages.pl/files/tbf/bitis-gabonica/2.6.22-rc2-hrt3/hrt-config
http://www.stardust.webpages.pl/files/tbf/bitis-gabonica/2.6.22-rc2-hrt3/hrt-console.log

Now I'm trying to apply the hrtimers debug patch on top of 2.6.22-rc2-hrt3
http://lkml.org/lkml/2007/3/23/106

Regards,
Michal

-- 
Michal K. K. Piotrowski
Kernel Monkeys
(http://kernel.wikidot.com/start)

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-22 19:03                   ` Michal Piotrowski
@ 2007-05-22 19:59                     ` Michal Piotrowski
  2007-05-22 20:10                     ` Ingo Molnar
  1 sibling, 0 replies; 27+ messages in thread
From: Michal Piotrowski @ 2007-05-22 19:59 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Anant Nitya, linux-kernel, Ingo Molnar

Michal Piotrowski wrote:
> Hi Thomas,
> 
> On 20/05/07, Thomas Gleixner <tglx@linutronix.de> wrote:
>> On Sun, 2007-05-20 at 02:53 +0530, Anant Nitya wrote:
>> > > 1 == TASK_INTERRUPTIBLE, so we know that ksoftirqd was not woken up.
>> > > At least it is not a scheduler problem.
>> > >
>> > > I work out a more complex debug patch and pester you to test once I'm
>> > > done.
>> > No problem :)
>>
>> You asked for it :)
>>
>> Please patch 2.6.22-rc2 with
>>
>> http://tglx.de/projects/hrtimers/2.6.22-rc2/patch-2.6.22-rc2-hrt2.patch
>> and
>> http://www.tglx.de/private/tglx/ht-debug/tracer.diff
>>
>> Compile it with the config
>>
>> http://www.tglx.de/private/tglx/ht-debug/config.debug
>>
>> You should find something like:
>>
>> (         swapper-0    |#0): new 67173 us user-latency.
>>
>> along with the familiar "NOHZ ......" message in your log file.
>>
>> Once that happened please do:
>>
>> $ cat /proc/latency_trace >trace.txt
>>
>> compress it and send it to me along with the full dmesg output or put
>> both up to some place, where I can download it.
>>
>> Michal,
>>
>> IIRC you encountered the same P4/HT related wreckage. Can you do the
>> same ?
> 
> Good news - I can't reproduce this bug. It's time to remove
> "Subject    : 2.6.21-git4 BUG: soft lockup detected on CPU#1!
> References : http://lkml.org/lkml/2007/5/2/511
> Submitter  : Michal Piotrowski <michal.k.k.piotrowski@gmail.com>
> Handled-By : Thomas Gleixner <tglx@linutronix.de>
> Status     : problem is being debugged"
> from KR.
> 
> Bad news - I hit a bug in 2.6.22-rc2-hrt3. Bug symptoms:
> - X hangs (keyboard, mouse, sound etc.)
> - only Magic SysRq works
> 
> http://www.stardust.webpages.pl/files/tbf/bitis-gabonica/2.6.22-rc2-hrt3/hrt-config
> 
> http://www.stardust.webpages.pl/files/tbf/bitis-gabonica/2.6.22-rc2-hrt3/hrt-console.log
> 
> 
> Now I'm trying to apply hrtimers debug patch ontop 2.6.22-rc2-hrt3
> http://lkml.org/lkml/2007/3/23/106
> 

It _almost_ works

http://www.stardust.webpages.pl/files/tbf/bitis-gabonica/2.6.22-rc2-hrt3/hrtimers_debug.patch

Hmmm..

[  135.206505] SysRq : Show Pending Timers
[  135.210476] Timer List Version: v0.3
[  135.214102] HRTIMER_MAX_CLOCK_BASES: 2
[  135.217899] now at 112955954742 nsecs
[  135.221651]
[  135.221662] cpu: 0
[  135.225246]  clock 0:
[  135.227588]   .index:      0
[  135.230545]   .resolution: 1 nsecs
[  135.234010]   .get_time:   ktime_get_real
[  135.238151]   .offset:     1179862680097808665 nsecs
[  135.243207] active timers:
[  135.245992]  clock 1:
[  135.248316]   .index:      1
[  135.251246]   .resolution: 1 nsecs
[  135.254722]   .get_time:   ktime_get
[  135.258473]   .offset:     0 nsecs
[  135.261923] active timers:
[  135.264683]  #0: hardirq_stack, tick_sched_timer, S:01, tick_nohz_restart_sched_tick, swapper/0
[  135.273913]  # expires at 112952000000 nsecs [in 18446744073705596874 nsecs]
[  135.281194]  #1: hardirq_stack, hrtimer_wakeup, S:01, futex_wait, automount/2293
[  135.289057]  # expires at 113902202172 nsecs [in 946247430 nsecs]
[  135.295233]  #2: hardirq_stack, hrtimer_wakeup, S:01, do_nanosleep, crond/2373
[  135.302973]  # expires at 121028637388 nsecs [in 8072682646 nsecs]
[  135.309279]  #3: hardirq_stack, it_real_fn, S:01, do_setitimer, syslogd/2171
[  135.316701]  # expires at 121465401717 nsecs [in 8509446975 nsecs]
[  135.323035]   .expires_next   : 112952000000 nsecs
[  135.327921]   .exp_prev       : 112951000000 nsecs
[  135.332786]  last expires_next stacktrace:
[  135.336944]   update_cpu_base_expires_next
[  135.341188]   hrtimer_interrupt
[  135.344430]   smp_apic_timer_interrupt
[  135.348310]   apic_timer_interrupt
[  135.351845]   __mcount
[  135.354327]   mcount
[  135.356627]   permission
[  135.359298]   vfs_permission
[  135.362305]   __link_path_walk
[  135.365497]   link_path_walk
[  135.368486]   path_walk
[  135.371044]   do_path_lookup
[  135.374086]   __user_walk_fd
[  135.377122]   vfs_stat_fd
[  135.379905]   vfs_stat
[  135.382385]   sys_stat64
[  135.385065]   syscall_call
[  135.387909]   <3>BUG: sleeping function called from invalid context at kernel/mutex.c:86
[  135.396175] in_atomic():1, irqs_disabled():1
[  135.400541] 2 locks held by readahead/2442:
[  135.404783]  #0:  (&serio->lock){++..}, at: [<c02dbf17>] serio_interrupt+0x22/0x79
[  135.412623]  #1:  (sysrq_key_table_lock){+...}, at: [<c027a4af>] __handle_sysrq+0x1b/0x10c
[  135.421208] irq event stamp: 3249062
[  135.424856] hardirqs last  enabled at (3249061): [<c0181b67>] kmem_cache_free+0x97/0xa1
[  135.433050] hardirqs last disabled at (3249062): [<c0104bd4>] common_interrupt+0x24/0x34
[  135.441350] softirqs last  enabled at (3248980): [<c012a418>] __do_softirq+0xe8/0xef
[  135.449261] softirqs last disabled at (3248975): [<c0106df7>] do_softirq+0x69/0xd6
[  135.457049]  [<c010526c>] show_trace_log_lvl+0x35/0x54
[  135.462312]  [<c0106027>] show_trace+0x2c/0x2e
[  135.466876]  [<c01060d0>] dump_stack+0x29/0x2b
[  135.471423]  [<c011de0c>] __might_sleep+0xe0/0xe2
[  135.476221]  [<c0360257>] mutex_lock+0x22/0x2e
[  135.480767]  [<c014eb96>] lookup_module_symbol_name+0x1a/0xc4
[  135.486739]  [<c014f514>] lookup_symbol_name+0x62/0x69
[  135.492021]  [<c013eb45>] print_name_offset+0x29/0x81
[  135.497200]  [<c013f4c5>] timer_list_show+0x657/0xc55
[  135.502368]  [<c013fae2>] sysrq_timer_list_show+0x1f/0x21
[  135.507859]  [<c027a5fd>] sysrq_handle_show_timers+0xd/0xf
[  135.513451]  [<c027a525>] __handle_sysrq+0x91/0x10c
[  135.518507]  [<c027a79e>] handle_sysrq+0x37/0x39
[  135.523296]  [<c0274b00>] kbd_event+0x335/0x58d
[  135.527921]  [<c02df806>] input_event+0x43e/0x460
[  135.532718]  [<c02e3766>] atkbd_interrupt+0x467/0x58d
[  135.537887]  [<c02dbf37>] serio_interrupt+0x42/0x79
[  135.542865]  [<c02dceed>] i8042_interrupt+0x1ff/0x210
[  135.548130]  [<c0160963>] handle_IRQ_event+0x24/0x54
[  135.553187]  [<c0161b2a>] handle_edge_irq+0xd7/0x11f
[  135.558253]  [<c0106f46>] do_IRQ+0xe2/0x10c
[  135.562544]  [<c0104bde>] common_interrupt+0x2e/0x34
[  135.567675]  [<c0116134>] mcount+0x14/0x18
[  135.571858]  [<c019b9ab>] mntput_no_expire+0xd/0x89
[  135.576845]  [<c018cd6a>] path_release+0x2f/0x33
[  135.581645]  [<c01893f4>] vfs_stat_fd+0x4c/0x55
[  135.586276]  [<c01894c9>] vfs_stat+0x25/0x27
[  135.590660]  [<c01894e9>] sys_stat64+0x1e/0x37
[  135.595215]  [<c01041f5>] syscall_call+0x7/0xb
[  135.599752]  =======================
[  135.603365] ---------------------------
[  135.607229] | preempt count: 00010000 ]
[  135.611146] | 0-level deep critical section nesting:
[  135.616202] ----------------------------------------
[  135.621258]
[  135.795930] (       readahead-2442 |#0): new 112512428 us user-latency.
[  135.802591] stopped custom tracer.
[  135.806058] BUG: at kernel/mutex.c:132 __mutex_lock_common()
[  135.811824]  [<c010526c>] show_trace_log_lvl+0x35/0x54
[  135.817035]  [<c0106027>] show_trace+0x2c/0x2e
[  135.821557]  [<c01060d0>] dump_stack+0x29/0x2b
[  135.826078]  [<c035fff2>] __mutex_lock_slowpath+0x64/0x2a7
[  135.831714]  [<c036025e>] mutex_lock+0x29/0x2e
[  135.836277]  [<c014eb96>] lookup_module_symbol_name+0x1a/0xc4
[  135.842121]  [<c014f514>] lookup_symbol_name+0x62/0x69
[  135.847342]  [<c013eb45>] print_name_offset+0x29/0x81
[  135.852478]  [<c013f4c5>] timer_list_show+0x657/0xc55
[  135.857611]  [<c013fae2>] sysrq_timer_list_show+0x1f/0x21
[  135.863128]  [<c027a5fd>] sysrq_handle_show_timers+0xd/0xf
[  135.868744]  [<c027a525>] __handle_sysrq+0x91/0x10c
[  135.873689]  [<c027a79e>] handle_sysrq+0x37/0x39
[  135.878383]  [<c0274b00>] kbd_event+0x335/0x58d
[  135.882991]  [<c02df806>] input_event+0x43e/0x460
[  135.887806]  [<c02e3766>] atkbd_interrupt+0x467/0x58d
[  135.892948]  [<c02dbf37>] serio_interrupt+0x42/0x79
[  135.897918]  [<c02dceed>] i8042_interrupt+0x1ff/0x210
[  135.903053]  [<c0160963>] handle_IRQ_event+0x24/0x54
[  135.908120]  [<c0161b2a>] handle_edge_irq+0xd7/0x11f
[  135.913176]  [<c0106f46>] do_IRQ+0xe2/0x10c
[  135.917449]  [<c0104bde>] common_interrupt+0x2e/0x34
[  135.922502]  [<c0116134>] mcount+0x14/0x18
[  135.926696]  [<c019b9ab>] mntput_no_expire+0xd/0x89
[  135.931657]  [<c018cd6a>] path_release+0x2f/0x33
[  135.936378]  [<c01893f4>] vfs_stat_fd+0x4c/0x55
[  135.941001]  [<c01894c9>] vfs_stat+0x25/0x27
[  135.945376]  [<c01894e9>] sys_stat64+0x1e/0x37
[  135.949931]  [<c01041f5>] syscall_call+0x7/0xb
[  135.954467]  =======================
[  135.958091] ---------------------------
[  135.962015] | preempt count: 00010000 ]
[  135.965905] | 0-level deep critical section nesting:
[  135.970927] ----------------------------------------
[  135.975931]
[  135.977468] <ffffffff>
[  135.979865]
[  135.981396]   .hres_active    : 1
[  135.984757]   .nr_events      : 91663
[  135.988465]   .nohz_mode      : 2
[  135.991820]   .idle_tick      : 112686000000 nsecs
[  135.996651]   .tick_stopped   : 0
[  136.000016]   .idle_jiffies   : 4294779982
[  136.004190]   .idle_calls     : 30792
[  136.007882]   .idle_sleeps    : 3020
[  136.011486]   .idle_entrytime : 112932690671 nsecs
[  136.016335]   .idle_sleeptime : 18007304053 nsecs
[  136.021063]   .last_jiffies   : 4294780229
[  136.025220]   .next_jiffies   : 4294780556
[  136.029352]   .idle_expires   : 112855000000 nsecs
[  136.034167] jiffies: 4294780248
[  136.037331]
[  136.037336] cpu: 1
[  136.040884]  clock 0:
[  136.043182]   .index:      0
[  136.046087]   .resolution: 1 nsecs
[  136.049535]   .get_time:   ktime_get_real
[  136.053739]   .offset:     1179862680097808665 nsecs
[  136.058751] active timers:
[  136.061501]  clock 1:
[  136.063877]   .index:      1
[  136.066781]   .resolution: 1 nsecs
[  136.070229]   .get_time:   ktime_get
[  136.073922]   .offset:     0 nsecs
[  136.077465] active timers:
[  136.080293]  #0: hardirq_stack, tick_sched_timer, S:01, tick_nohz_restart_sched_tick, swapper/0
[  136.089309]  # expires at 113828250000 nsecs [in 872295258 nsecs]
[  136.095654]  #1: hardirq_stack, hrtimer_wakeup, S:01, futex_wait, automount/2294
[  136.103356]  # expires at 113902196736 nsecs [in 946241994 nsecs]
[  136.109485]  #2: hardirq_stack, hrtimer_wakeup, S:01, do_nanosleep, smartd/2640
[  136.117101]  # expires at 1897330250564 nsecs [in 1784374295822 nsecs]
[  136.123723]   .expires_next   : 113871250000 nsecs
[  136.128726]   .exp_prev       : 113875250000 nsecs
[  136.133558]  last expires_next stacktrace:
[  136.137662]   update_cpu_base_expires_next
[  136.141838]   hrtimer_interrupt
[  136.145071]   smp_apic_timer_interrupt
[  136.148935]   apic_timer_interrupt
[  136.152419]   cpu_idle
[  136.154890]   start_secondary
[  136.157967]   <00000000>
[  136.160639]   <ffffffff>
[  136.163267]
[  136.164789]   .hres_active    : 1
[  136.168143]   .nr_events      : 85123
[  136.171981]   .nohz_mode      : 2
[  136.175344]   .idle_tick      : 112682250000 nsecs
[  136.180281]   .tick_stopped   : 0
[  136.183737]   .idle_jiffies   : 4294779978
[  136.187974]   .idle_calls     : 44643
[  136.191708]   .idle_sleeps    : 1384
[  136.195424]   .idle_entrytime : 113942257992 nsecs
[  136.200421]   .idle_sleeptime : 18596870874 nsecs
[  136.205158]   .last_jiffies   : 4294780248
[  136.209316]   .next_jiffies   : 4294780261
[  136.213595]   .idle_expires   : 112778000000 nsecs
[  136.218451] jiffies: 4294780248
[  136.221650]
[  136.223163]
[  136.223168] Tick Device: mode:     1
[  136.228383] Clock Event Device: pit
[  136.231928]  max_delta_ns:   27461866
[  136.235636]  min_delta_ns:   12571
[  136.239172]  mult:           5124677
[  136.242769]  shift:          32
[  136.245965]  mode:           3
[  136.249095]  next_event:     9223372036854775807 nsecs
[  136.254282]  set_next_event: pit_next_event
[  136.258613]  set_mode:       init_pit_timer
[  136.262908]  event_handler:  tick_handle_oneshot_broadcast
[  136.268510] tick_broadcast_mask: 00000000
[  136.272583] tick_broadcast_oneshot_mask: 00000000
[  136.277310]
[  136.278840]
[  136.278845] Tick Device: mode:     1
[  136.284086] Clock Event Device: lapic
[  136.287803]  max_delta_ns:   671411430
[  136.291615]  min_delta_ns:   1200
[  136.294977]  mult:           53661274
[  136.298668]  shift:          32
[  136.301902]  mode:           3
[  136.304971]  next_event:     112952000000 nsecs
[  136.309638]  set_next_event: lapic_next_event
[  136.314177]  set_mode:       lapic_timer_setup
[  136.318680]  event_handler:  hrtimer_interrupt
[  136.323236]
[  136.323241] Tick Device: mode:     1
[  136.328404] Clock Event Device: lapic
[  136.332191]  max_delta_ns:   671411430
[  136.335977]  min_delta_ns:   1200
[  136.339367]  mult:           53661274
[  136.343075]  shift:          32
[  136.346268]  mode:           3
[  136.349375]  next_event:     114098250000 nsecs
[  136.353914]  set_next_event: lapic_next_event
[  136.358357]  set_mode:       lapic_timer_setup
[  136.362973]  event_handler:  hrtimer_interrupt

Regards,
Michal

-- 
Michal K. K. Piotrowski
Kernel Monkeys
(http://kernel.wikidot.com/start)

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-22 19:03                   ` Michal Piotrowski
  2007-05-22 19:59                     ` Michal Piotrowski
@ 2007-05-22 20:10                     ` Ingo Molnar
  2007-05-22 20:23                       ` Michal Piotrowski
  2007-05-23 17:00                       ` Chuck Ebbert
  1 sibling, 2 replies; 27+ messages in thread
From: Ingo Molnar @ 2007-05-22 20:10 UTC (permalink / raw)
  To: Michal Piotrowski; +Cc: Thomas Gleixner, Anant Nitya, linux-kernel


* Michal Piotrowski <michal.k.k.piotrowski@gmail.com> wrote:

> Bad news - I hit a bug in 2.6.22-rc2-hrt3. Bug symptoms:
> - X hangs (keyboard, mouse, sound etc.)
> - only Magic SysRq works

please try the patch below! I think we have nailed this bug.

	Ingo

Index: linux/kernel/sched.c
===================================================================
--- linux.orig/kernel/sched.c
+++ linux/kernel/sched.c
@@ -4212,9 +4212,7 @@ int __sched cond_resched_softirq(void)
 	BUG_ON(!in_softirq());
 
 	if (need_resched() && system_state == SYSTEM_RUNNING) {
-		raw_local_irq_disable();
-		_local_bh_enable();
-		raw_local_irq_enable();
+		local_bh_enable();
 		__cond_resched();
 		local_bh_disable();
 		return 1;

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-22 20:10                     ` Ingo Molnar
@ 2007-05-22 20:23                       ` Michal Piotrowski
  2007-05-23 17:00                       ` Chuck Ebbert
  1 sibling, 0 replies; 27+ messages in thread
From: Michal Piotrowski @ 2007-05-22 20:23 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Thomas Gleixner, Anant Nitya, linux-kernel

On 22/05/07, Ingo Molnar <mingo@elte.hu> wrote:
>
> * Michal Piotrowski <michal.k.k.piotrowski@gmail.com> wrote:
>
> > Bad news - I hit a bug in 2.6.22-rc2-hrt3. Bug symptoms:
> > - X hangs (keyboard, mouse, sound etc.)
> > - only Magic SysRq works
>
> please try the patch below! I think we have nailed this bug.

I picked it from -hrt5 and so far so good.

Regards,
Michal

-- 
Michal K. K. Piotrowski
Kernel Monkeys
(http://kernel.wikidot.com/start)

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-22 20:10                     ` Ingo Molnar
  2007-05-22 20:23                       ` Michal Piotrowski
@ 2007-05-23 17:00                       ` Chuck Ebbert
  2007-05-23 18:38                         ` Chuck Ebbert
  2007-05-24  7:45                         ` Ingo Molnar
  1 sibling, 2 replies; 27+ messages in thread
From: Chuck Ebbert @ 2007-05-23 17:00 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Michal Piotrowski, Thomas Gleixner, Anant Nitya, linux-kernel,
	David Miller

Ingo Molnar wrote:
> * Michal Piotrowski <michal.k.k.piotrowski@gmail.com> wrote:
> 
>> Bad news - I hit a bug in 2.6.22-rc2-hrt3. Bug symptoms:
>> - X hangs (keyboard, mouse, sound etc.)
>> - only Magic SysRq works
> 
> please try the patch below! I think we have nailed this bug.
> 
> 	Ingo
> 
> Index: linux/kernel/sched.c
> ===================================================================
> --- linux.orig/kernel/sched.c
> +++ linux/kernel/sched.c
> @@ -4212,9 +4212,7 @@ int __sched cond_resched_softirq(void)
>  	BUG_ON(!in_softirq());
>  
>  	if (need_resched() && system_state == SYSTEM_RUNNING) {
> -		raw_local_irq_disable();
> -		_local_bh_enable();
> -		raw_local_irq_enable();
> +		local_bh_enable();
>  		__cond_resched();
>  		local_bh_disable();
>  		return 1;

We may have a problem with that:

 BUG: warning at kernel/softirq.c:138/local_bh_enable() (Not tainted)
  [<c042b2ef>] local_bh_enable+0x45/0x92
  [<c06036b7>] cond_resched_softirq+0x2c/0x42
  [<c059d5d0>] release_sock+0x54/0xa3
  [<c05c9428>] tcp_sendmsg+0x91b/0xa0c
  [<c05e1bb9>] inet_sendmsg+0x3b/0x45
  [<c059af34>] sock_aio_write+0xf9/0x105
  [<c0476035>] do_sync_write+0xc7/0x10a
  [<c0437265>] autoremove_wake_function+0x0/0x35
  [<c047688e>] vfs_write+0xbc/0x154
  [<c0476e8c>] sys_write+0x41/0x67
  [<c0404f70>] syscall_call+0x7/0xb

That's:
        WARN_ON_ONCE(irqs_disabled());


And another, tainted but probably valid:

BUG: warning at kernel/softirq.c:138/local_bh_enable() (Tainted: P      )
 [<c042b2ef>] local_bh_enable+0x45/0x92
 [<c06036b7>] cond_resched_softirq+0x2c/0x42
 [<c059d5d0>] release_sock+0x54/0xa3
 [<c05c9428>] tcp_sendmsg+0x91b/0xa0c
 [<c04e8b30>] copy_to_user+0x3c/0x50
 [<c05a1b71>] memcpy_toiovec+0x27/0x4a
 [<c059d58f>] release_sock+0x13/0xa3
 [<c0604ad5>] _spin_unlock_bh+0x5/0xd
 [<c05e1bb9>] inet_sendmsg+0x3b/0x45
 [<c059af34>] sock_aio_write+0xf9/0x105
 [<c0475f31>] do_sync_readv_writev+0xc1/0xfe
 [<c0437265>] autoremove_wake_function+0x0/0x35
 [<c04e88e0>] copy_from_user+0x3a/0x66
 [<c0475dec>] rw_copy_check_uvector+0x5c/0xb0
 [<c047667a>] do_readv_writev+0xbc/0x187
 [<c059ae3b>] sock_aio_write+0x0/0x105
 [<c0476782>] vfs_writev+0x3d/0x48
 [<c0476bee>] sys_writev+0x41/0x95
 [<c0404f70>] syscall_call+0x7/0xb


https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=240982

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-23 17:00                       ` Chuck Ebbert
@ 2007-05-23 18:38                         ` Chuck Ebbert
  2007-05-23 23:24                           ` Anant Nitya
  2007-05-24  7:45                         ` Ingo Molnar
  1 sibling, 1 reply; 27+ messages in thread
From: Chuck Ebbert @ 2007-05-23 18:38 UTC (permalink / raw)
  To: Chuck Ebbert
  Cc: Ingo Molnar, Michal Piotrowski, Thomas Gleixner, Anant Nitya,
	linux-kernel, David Miller, Netdev

Chuck Ebbert wrote:
> 
> https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=240982

Another; these started to appear after the below patch was merged:

> Index: linux/kernel/sched.c
> > ===================================================================
> > --- linux.orig/kernel/sched.c
> > +++ linux/kernel/sched.c
> > @@ -4212,9 +4212,7 @@ int __sched cond_resched_softirq(void)
> >  	BUG_ON(!in_softirq());
> >  
> >  	if (need_resched() && system_state == SYSTEM_RUNNING) {
> > -		raw_local_irq_disable();
> > -		_local_bh_enable();
> > -		raw_local_irq_enable();
> > +		local_bh_enable();
> >  		__cond_resched();
> >  		local_bh_disable();
> >  		return 1;

May 23 19:26:26 localhost kernel: BUG: warning at kernel/softirq.c:138/local_bh_enable() (Not tainted)
May 23 19:26:26 localhost kernel:  [<c042b2ef>] local_bh_enable+0x45/0x92
May 23 19:26:26 localhost kernel:  [<c06036b7>] cond_resched_softirq+0x2c/0x42
May 23 19:26:26 localhost kernel:  [<c059d5d0>] release_sock+0x54/0xa3
May 23 19:26:26 localhost kernel:  [<c04373af>] prepare_to_wait+0x24/0x3f
May 23 19:26:26 localhost kernel:  [<c05e267f>] inet_stream_connect+0x116/0x1ff
May 23 19:26:26 localhost kernel:  [<c0437265>] autoremove_wake_function+0x0/0x35
May 23 19:26:26 localhost kernel:  [<c059c339>] sys_connect+0x82/0xad
May 23 19:26:26 localhost kernel:  [<c059d58f>] release_sock+0x13/0xa3
May 23 19:26:26 localhost kernel:  [<c0604ad5>] _spin_unlock_bh+0x5/0xd
May 23 19:26:26 localhost kernel:  [<c059e714>] sock_setsockopt+0x4a8/0x4b2
May 23 19:26:26 localhost kernel:  [<c059b6b6>] sock_attach_fd+0x70/0xd2
May 23 19:26:26 localhost kernel:  [<c04774a0>] get_empty_filp+0xfc/0x170
May 23 19:26:26 localhost kernel:  [<c059b54f>] sys_setsockopt+0x9b/0xa7
May 23 19:26:26 localhost kernel:  [<c059cb83>] sys_socketcall+0xac/0x261
May 23 19:26:26 localhost kernel:  [<c0404f70>] syscall_call+0x7/0xb


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
       [not found]                         ` <20070521200428.GA9855@osiris.ibm.com>
@ 2007-05-23 20:54                           ` Mikulas Patocka
  0 siblings, 0 replies; 27+ messages in thread
From: Mikulas Patocka @ 2007-05-23 20:54 UTC (permalink / raw)
  To: Heiko Carstens; +Cc: Thomas Gleixner, David Miller, linux-kernel

On Mon, 21 May 2007, Heiko Carstens wrote:

> On Sun, May 20, 2007 at 09:29:49PM +0200, Thomas Gleixner wrote:
>> On Sun, 2007-05-20 at 21:18 +0200, Heiko Carstens wrote:
>>>>> Hmm, that's a different problem than the 0x22 which shows up on
>>>>> hyperthreading enabled P4 systems. Are you using plip ?
>>>>
>>>> No, after all it turned out that is caused by an IBM internal module.
>>>> Just ignore me, sorry for the noise :)
>>>
>>> Just in case you're interested: I just looked into it and it turned
>>> out that this module was calling netif_rx() from process context,
>>> just like plip. Changing it to netif_rx_ni() fixes it.
>>
>> Thanks, that's a good pointer. You should have cc'ed Dave Miller and the
>> guy who found this plip thingy. Care to resend ?
>
> No problem. Here we go, even though I doubt that netif_rx_ni() is what
> people want for plip. Dunno...

netif_rx_ni() calls the softirq synchronously, am I right? That would probably
hurt plip. I can try it if someone is interested in it, but I doubt it is a
good solution for any network interface except tun.

BTW, does in_interrupt() have a high cost on some high-performance
architectures? Would it be possible to just add if (!in_interrupt())
wakeup_softirqd(); to netif_rx()?
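
To make the two options concrete, here is a minimal sketch (an illustration
only: it is not a patch against net/core/dev.c, and wakeup_softirqd() is
internal to kernel/softirq.c, so a wrapper like this would not even link
as-is):

#include <linux/netdevice.h>
#include <linux/interrupt.h>
#include <linux/skbuff.h>

/*
 * Option 1: a driver running in process context calls netif_rx_ni(),
 * which (roughly) wraps netif_rx() and runs any softirq it raised
 * before returning to the caller.
 *
 * Option 2 (the question above): have netif_rx() itself hand the work
 * to ksoftirqd when it is not called from interrupt context, since no
 * irq_exit() will come along to run the pending softirq.  Sketched
 * here as a hypothetical wrapper, not as the real netif_rx():
 */
static int netif_rx_process_ctx(struct sk_buff *skb)
{
        int ret = netif_rx(skb);        /* raises NET_RX_SOFTIRQ */

        if (!in_interrupt())            /* process context: nothing will run it */
                wakeup_softirqd();      /* static in kernel/softirq.c */

        return ret;
}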

Mikulas

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-23 18:38                         ` Chuck Ebbert
@ 2007-05-23 23:24                           ` Anant Nitya
  0 siblings, 0 replies; 27+ messages in thread
From: Anant Nitya @ 2007-05-23 23:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: Chuck Ebbert, Ingo Molnar, Michal Piotrowski, Thomas Gleixner,
	David Miller, Netdev

On Thursday 24 May 2007 00:08:40 Chuck Ebbert wrote:
> Chuck Ebbert wrote:
> > https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=240982
>
> Another; these started to appear after the below patch was merged:
> > Index: linux/kernel/sched.c
> >
> > > ===================================================================
> > > --- linux.orig/kernel/sched.c
> > > +++ linux/kernel/sched.c
> > > @@ -4212,9 +4212,7 @@ int __sched cond_resched_softirq(void)
> > >  	BUG_ON(!in_softirq());
> > >
> > >  	if (need_resched() && system_state == SYSTEM_RUNNING) {
> > > -		raw_local_irq_disable();
> > > -		_local_bh_enable();
> > > -		raw_local_irq_enable();
> > > +		local_bh_enable();
> > >  		__cond_resched();
> > >  		local_bh_disable();
> > >  		return 1;
>
> May 23 19:26:26 localhost kernel: BUG: warning at kernel/softirq.c:138/local_bh_enable() (Not tainted)
> May 23 19:26:26 localhost kernel:  [<c042b2ef>] local_bh_enable+0x45/0x92
> May 23 19:26:26 localhost kernel:  [<c06036b7>] cond_resched_softirq+0x2c/0x42
> May 23 19:26:26 localhost kernel:  [<c059d5d0>] release_sock+0x54/0xa3
> May 23 19:26:26 localhost kernel:  [<c04373af>] prepare_to_wait+0x24/0x3f
> May 23 19:26:26 localhost kernel:  [<c05e267f>] inet_stream_connect+0x116/0x1ff
> May 23 19:26:26 localhost kernel:  [<c0437265>] autoremove_wake_function+0x0/0x35
> May 23 19:26:26 localhost kernel:  [<c059c339>] sys_connect+0x82/0xad
> May 23 19:26:26 localhost kernel:  [<c059d58f>] release_sock+0x13/0xa3
> May 23 19:26:26 localhost kernel:  [<c0604ad5>] _spin_unlock_bh+0x5/0xd
> May 23 19:26:26 localhost kernel:  [<c059e714>] sock_setsockopt+0x4a8/0x4b2
> May 23 19:26:26 localhost kernel:  [<c059b6b6>] sock_attach_fd+0x70/0xd2
> May 23 19:26:26 localhost kernel:  [<c04774a0>] get_empty_filp+0xfc/0x170
> May 23 19:26:26 localhost kernel:  [<c059b54f>] sys_setsockopt+0x9b/0xa7
> May 23 19:26:26 localhost kernel:  [<c059cb83>] sys_socketcall+0xac/0x261
> May 23 19:26:26 localhost kernel:  [<c0404f70>] syscall_call+0x7/0xb

Strange: while applying the patch in question for the first time, I was hand
editing kernel/sched.c and stupidly typed _local_bh_enable() instead of
local_bh_enable(). When I rebooted, as soon as I got into X I was welcomed
with the following messages in the system log.

[  152.692609] BUG: at kernel/softirq.c:122 _local_bh_enable()
[  152.692637]  [<b040624d>] show_trace_log_lvl+0x1a/0x2f
[  152.692658]  [<b0406801>] show_trace+0x12/0x14
[  152.692668]  [<b0406885>] dump_stack+0x16/0x18
[  152.692678]  [<b0428d66>] _local_bh_enable+0x8b/0xc3
[  152.692688]  [<b059e497>] cond_resched_softirq+0x2b/0x40
[  152.692700]  [<b05797b6>] established_get_first+0x19/0xad
[  152.692712]  [<b057a831>] tcp_seq_next+0x76/0x8c
[  152.692722]  [<b04875ef>] seq_read+0x17b/0x264
[  152.692733]  [<b047102a>] vfs_read+0xad/0x161
[  152.692745]  [<b04714b6>] sys_read+0x3d/0x61
[  152.692755]  [<b04050d4>] syscall_call+0x7/0xb
[  152.692765]  =======================
[  152.692770] BUG: at kernel/lockdep.c:1937 trace_softirqs_on()
[  152.692777]  [<b040624d>] show_trace_log_lvl+0x1a/0x2f
[  152.692789]  [<b0406801>] show_trace+0x12/0x14
[  152.692800]  [<b0406885>] dump_stack+0x16/0x18
[  152.692810]  [<b043f5a5>] trace_softirqs_on+0x5f/0xa5
[  152.692822]  [<b0428d8e>] _local_bh_enable+0xb3/0xc3
[  152.692831]  [<b059e497>] cond_resched_softirq+0x2b/0x40
[  152.692842]  [<b05797b6>] established_get_first+0x19/0xad
[  152.692852]  [<b057a831>] tcp_seq_next+0x76/0x8c
[  152.692862]  [<b04875ef>] seq_read+0x17b/0x264
[  152.692870]  [<b047102a>] vfs_read+0xad/0x161
[  152.692879]  [<b04714b6>] sys_read+0x3d/0x61
[  152.692889]  [<b04050d4>] syscall_call+0x7/0xb
[  152.692899]  =======================
[  159.257890] NOHZ: local_softirq_pending 22
[  159.266009] NOHZ: local_softirq_pending 22
[  159.273965] NOHZ: local_softirq_pending 22
[  159.281884] NOHZ: local_softirq_pending 22
[  160.712828] NOHZ: local_softirq_pending 22
[  162.609377] NOHZ: local_softirq_pending 22
[  162.609804] NOHZ: local_softirq_pending 22
[  162.610054] NOHZ: local_softirq_pending 22
[  162.610279] NOHZ: local_softirq_pending 22
[  162.610502] NOHZ: local_softirq_pending 22

After realizing my mistake, I changed it to local_bh_enable() as in the patch,
and since then not a single BUG or local_softirq_pending message has appeared
in the system log; maybe my system is waiting for that condition to happen :).
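
A rough sketch of the difference between the two helpers shows why the typo
bites; this is simplified rather than copied from kernel/softirq.c, and the
real functions do more bookkeeping:

#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/hardirq.h>
#include <linux/preempt.h>

/* Simplified illustration only, not the actual implementation. */
static inline void sketch_local_bh_enable(void)
{
        WARN_ON_ONCE(irqs_disabled());          /* the softirq.c:138 warning */
        sub_preempt_count(SOFTIRQ_OFFSET);
        if (!in_interrupt() && local_softirq_pending())
                do_softirq();                   /* pending softirqs run here */
}

static inline void sketch__local_bh_enable(void)
{
        WARN_ON_ONCE(!irqs_disabled());         /* the softirq.c:122 BUG above */
        sub_preempt_count(SOFTIRQ_OFFSET);
        /*
         * No do_softirq(): whatever is pending stays pending, which is
         * why the "NOHZ: local_softirq_pending 22" messages came back.
         */
}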


-- 
Out of many thousands, one may endeavor for perfection, and of
those who have achieved perfection, hardly one knows Me in truth.
				-- Gita Sutra Of Mysticism

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [BUG] local_softirq_pending storm
  2007-05-23 17:00                       ` Chuck Ebbert
  2007-05-23 18:38                         ` Chuck Ebbert
@ 2007-05-24  7:45                         ` Ingo Molnar
  1 sibling, 0 replies; 27+ messages in thread
From: Ingo Molnar @ 2007-05-24  7:45 UTC (permalink / raw)
  To: Chuck Ebbert
  Cc: Michal Piotrowski, Thomas Gleixner, Anant Nitya, linux-kernel,
	David Miller, Andrew Morton


* Chuck Ebbert <cebbert@redhat.com> wrote:

> >  	if (need_resched() && system_state == SYSTEM_RUNNING) {
> > -		raw_local_irq_disable();
> > -		_local_bh_enable();
> > -		raw_local_irq_enable();
> > +		local_bh_enable();
> >  		__cond_resched();
> >  		local_bh_disable();
> >  		return 1;
> 
> We may have a problem with that:
> 
>  BUG: warning at kernel/softirq.c:138/local_bh_enable() (Not tainted)
>   [<c042b2ef>] local_bh_enable+0x45/0x92
>   [<c06036b7>] cond_resched_softirq+0x2c/0x42
>   [<c059d5d0>] release_sock+0x54/0xa3
>   [<c05c9428>] tcp_sendmsg+0x91b/0xa0c
>   [<c05e1bb9>] inet_sendmsg+0x3b/0x45
>   [<c059af34>] sock_aio_write+0xf9/0x105
>   [<c0476035>] do_sync_write+0xc7/0x10a
>   [<c0437265>] autoremove_wake_function+0x0/0x35
>   [<c047688e>] vfs_write+0xbc/0x154
>   [<c0476e8c>] sys_write+0x41/0x67
>   [<c0404f70>] syscall_call+0x7/0xb

Hm, this place really shouldn't call cond_resched_softirq() with hardirqs
disabled.

perhaps a buggy ->sk_backlog_rcv() handler disabled interrupts without 
restoring them?

could you enable CONFIG_PROVE_LOCKING and apply the patch below - which 
location is printed as having last disabled hardirqs?

	Ingo

--------------------->
Subject: [patch] softirqs: print out irq-trace events
From: Ingo Molnar <mingo@elte.hu>

some code is fiddling with softirqs but hardirqs are disabled, so try to 
figure out who disabled hardirqs.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/softirq.c |   10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

Index: linux/kernel/softirq.c
===================================================================
--- linux.orig/kernel/softirq.c
+++ linux/kernel/softirq.c
@@ -135,7 +135,15 @@ void local_bh_enable(void)
 
 	WARN_ON_ONCE(in_irq());
 #endif
-	WARN_ON_ONCE(irqs_disabled());
+	if (irqs_disabled()) {
+		static int once = 1;
+
+		if (once) {
+			once = 0;
+			print_irqtrace_events(current);
+			WARN_ON(1);
+		}
+	}
 
 #ifdef CONFIG_TRACE_IRQFLAGS
 	local_irq_save(flags);

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2007-05-24  7:46 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2007-05-09 14:12 [BUG] local_softirq_pending storm Anant Nitya
2007-05-09 16:31 ` Thomas Gleixner
2007-05-10 11:53   ` Anant Nitya
2007-05-10 21:58     ` Thomas Gleixner
2007-05-17  6:41       ` Anant Nitya
2007-05-18 13:01         ` Thomas Gleixner
2007-05-19  4:30           ` Anant Nitya
2007-05-19  9:55           ` Anant Nitya
2007-05-19 19:11             ` Thomas Gleixner
2007-05-19 21:23               ` Anant Nitya
2007-05-20 21:43                 ` Thomas Gleixner
2007-05-21  6:22                   ` Anant Nitya
2007-05-21  9:01                     ` Thomas Gleixner
2007-05-21 10:02                       ` Anant Nitya
2007-05-21 18:45                       ` Anant Nitya
2007-05-22 19:03                   ` Michal Piotrowski
2007-05-22 19:59                     ` Michal Piotrowski
2007-05-22 20:10                     ` Ingo Molnar
2007-05-22 20:23                       ` Michal Piotrowski
2007-05-23 17:00                       ` Chuck Ebbert
2007-05-23 18:38                         ` Chuck Ebbert
2007-05-23 23:24                           ` Anant Nitya
2007-05-24  7:45                         ` Ingo Molnar
2007-05-20 10:18               ` Heiko Carstens
2007-05-20 13:52                 ` Thomas Gleixner
2007-05-20 18:36                   ` Heiko Carstens
     [not found]                     ` <20070520191800.GA14225@osiris.ibm.com>
     [not found]                       ` <1179689389.6570.1.camel@chaos>
     [not found]                         ` <20070521200428.GA9855@osiris.ibm.com>
2007-05-23 20:54                           ` Mikulas Patocka
