From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <531067ED.7000404@ac.upc.edu>
Date: Fri, 28 Feb 2014 11:41:49 +0100
From: Joaquim Barrera
MIME-Version: 1.0
References: <52FDE495.4050004@ac.upc.edu> <52FDF6F0.4090405@redhat.com> <20140224151654.GB23185@stefanha-thinkpad.hitronhub.home> <392369891.8578031.1393280788249.JavaMail.zimbra@redhat.com>
In-Reply-To: <392369891.8578031.1393280788249.JavaMail.zimbra@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [libvirt-users] Adjust disk image migration (NBD)
To: Paolo Bonzini, Stefan Hajnoczi
Cc: Libvirt Users, Michal Privoznik, Jeff Cody, qemu-devel

On 24/02/14 23:26, Paolo Bonzini wrote:
>> Thanks for raising this.
>>
>> I noticed that mirror_run() does not throttle the first loop where it
>> populates the dirty bitmap using bdrv_is_allocated_above().
> This is on purpose. Does it cause a noticeable stall in the guest?
>
>> The main
>> copy loop does take the speed limit into account but perhaps that's
>> broken too.
> Yeah, it looks broken. Each iteration of the loop can write much more
> than sectors_per_chunk sectors, but here:
>
>     if (s->common.speed) {
>         delay_ns = ratelimit_calculate_delay(&s->limit, sectors_per_chunk);
>     } else {
>         delay_ns = 0;
>     }
>
> the second argument is fixed. :/
>
> Paolo

Thanks for the answer. One thing is still not clear to me: are we looking at a bug (that is, something that could be fixed), or is this behaviour expected for some reason?

The more tests I run, the more consistently I see the same throughput chart: unlimited bandwidth while synchronizing the disk, and a smooth bandwidth limit while migrating the RAM.

Joaquim
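
P.S. For what it's worth, a minimal sketch of the accounting fix Paolo
describes, assuming the copy loop tracks how many sectors it actually
submitted in the current iteration (called 'nb_sectors' here; the name
is only illustrative, and this is a sketch rather than a tested patch):

    /* Charge the limiter with the sectors actually written in this
     * iteration, instead of the fixed sectors_per_chunk. */
    if (s->common.speed) {
        delay_ns = ratelimit_calculate_delay(&s->limit, nb_sectors);
    } else {
        delay_ns = 0;
    }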
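
And in case it helps anyone reading along, a self-contained sketch (not
QEMU's actual implementation) of the slice-based rate-limiting pattern
under discussion: the limiter is charged 'n' units per call and tells
the caller how long to sleep once the per-slice quota is exceeded.
Charging it fewer units than were really written, as happens with the
fixed sectors_per_chunk above, simply raises the effective limit:

    #include <stdint.h>

    typedef struct {
        uint64_t speed;         /* allowed units per second */
        uint64_t dispatched;    /* units charged in the current slice */
        int64_t  slice_end_ns;  /* end of the current 100 ms slice */
    } RateLimit;

    /* Return how many ns the caller should sleep before issuing a
     * request of 'n' units; 'now_ns' is the current monotonic time. */
    static int64_t ratelimit_delay(RateLimit *rl, int64_t now_ns, uint64_t n)
    {
        const int64_t slice_ns = 100 * 1000 * 1000;  /* 100 ms slices */
        uint64_t quota = rl->speed / 10;             /* per-slice quota */

        if (now_ns > rl->slice_end_ns) {
            /* A new slice has begun: reset the accounting. */
            rl->slice_end_ns = now_ns + slice_ns;
            rl->dispatched = 0;
        }
        rl->dispatched += n;
        if (rl->dispatched <= quota) {
            return 0;                     /* within quota, no delay */
        }
        return rl->slice_end_ns - now_ns; /* wait out the slice */
    }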