Date: Wed, 04 Sep 2013 16:24:20 +0100
From: Alex Bligh
Subject: [Qemu-devel] When does live migration give up?
To: qemu-devel@nongnu.org
Cc: Alex Bligh

We have seen a situation when migrating about 50 VMs at once where some of
them fail. I think this is because they are dirtying pages faster than they
can be transmitted.

What algorithm controls when migration fails in this way, and is it tunable?

I am fully aware one answer to this question is "do not attempt to migrate
50 busy VMs through a single 1GB/s NIC".

-- 
Alex Bligh
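The hypothesis above (guests dirtying pages faster than they can be sent) can be sketched numerically. This is a simplification, not QEMU's actual migration code: pre-copy migration resends whatever was dirtied during the previous pass, and can only finish when the remaining dirty set fits inside the allowed downtime window. The function and parameter names here are invented for illustration; the real knobs in QEMU of this era were the `migrate_set_speed` and `migrate_set_downtime` monitor commands.

```python
# Rough sketch (NOT QEMU's real algorithm) of pre-copy migration convergence.
# Each iteration transmits the pages dirtied during the previous iteration;
# migration completes only when the remainder can be sent within max_downtime.

def simulate_precopy(ram_mb, dirty_mb_per_s, bandwidth_mb_per_s,
                     max_downtime_s=0.03, max_iterations=100):
    """Return the iteration at which migration converges, or None if it never does."""
    remaining_mb = ram_mb  # the first pass sends all of guest RAM
    for iteration in range(1, max_iterations + 1):
        transfer_time_s = remaining_mb / bandwidth_mb_per_s
        # Converged: the leftover dirty pages fit into the downtime window,
        # so the guest can be paused and the rest sent in one go.
        if transfer_time_s <= max_downtime_s:
            return iteration
        # Pages the guest dirtied while we were transmitting must be resent.
        remaining_mb = dirty_mb_per_s * transfer_time_s
    return None  # dirty rate outpaces bandwidth: migration never converges

# A guest whose dirty rate is well below its share of the link converges;
# one dirtying faster than its effective bandwidth does not.
print(simulate_precopy(ram_mb=4096, dirty_mb_per_s=50, bandwidth_mb_per_s=1000))
print(simulate_precopy(ram_mb=4096, dirty_mb_per_s=50, bandwidth_mb_per_s=20))
```

This also suggests why migrating 50 VMs at once makes things worse: each VM's effective bandwidth is the NIC divided 50 ways, so guests whose dirty rate was previously harmless can drop below the convergence threshold.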