From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jens Axboe
Subject: Re: CFQ timer precision
To: Jan Kara, LKML
Cc: Jeff Moyer
Date: Mon, 16 Nov 2015 09:16:02 -0700
Message-ID: <564A0142.2090605@kernel.dk>
In-Reply-To: <20151116151159.GE3443@quack.suse.cz>
References: <20151116151159.GE3443@quack.suse.cz>
List-ID: linux-kernel@vger.kernel.org

On 11/16/2015 08:11 AM, Jan Kara wrote:
> Hello,
>
> lately I was looking into a big performance hit we take when the blkio
> controller is enabled and the jbd2 thread ends up in a different cgroup
> than the user process. E.g. dbench4 throughput drops from ~140 MB/s to
> ~20 MB/s. However artificial dbench4 may be, this kind of drop will
> likely be clearly visible in real-life workloads as well. With the
> unified cgroup hierarchy, the above cgroup split between jbd2 and user
> processes is unavoidable once you enable the blkio controller, so IMO
> we should accommodate it better.
>
> I have a couple of CFQ idling improvements / fixes which I'll post
> later this week once I complete a round of benchmarking. They improve
> the throughput to ~40 MB/s, which helps, but clearly there is still a
> lot of room for improvement. The reason for the performance drop is
> essentially the idling we do to avoid starvation of CFQ queues.
> Now when idling in this context, the current default of an 8 ms idle
> window is far too large - we start the timer after the final request
> is completed, so we effectively give the process 8 ms of CPU time to
> submit the next IO request, which I think is usually far too much. The
> problem is that more fine-grained idling is actually problematic
> because e.g. SUSE distro kernels have HZ=250 and thus 1 jiffy is 4 ms.
> Hence my proposal: do you think it would be OK to convert CFQ to use
> highres timers and do all the accounting in microseconds? Then we
> could tune the idle time to, say, 1 ms, or even autotune it based on
> the process' think time, both of which I expect would get us much
> closer to the original throughput (a 4 ms idle window gets us to
> ~70 MB/s with my patches, and disabling idling gets us to the original
> throughput as expected).

Converting to a non-jiffies timer base should be quite fine. We didn't 
have hrtimers when CFQ was written :-)

-- 
Jens Axboe