From: Juan Quintela
Reply-To: quintela@redhat.com
To: "Daniel P. Berrange"
Cc: qemu-devel@nongnu.org, amit.shah@redhat.com, dgilbert@redhat.com
Subject: Re: [Qemu-devel] [PATCH 07/16] migration: Create x-multifd-group parameter
Date: Mon, 13 Mar 2017 17:49:59 +0100
Message-ID: <87o9x5unns.fsf@secure.mitica>
In-Reply-To: <20170313163429.GK4799@redhat.com> (Daniel P. Berrange's message of "Mon, 13 Mar 2017 16:34:39 +0000")
References: <20170313124434.1043-1-quintela@redhat.com> <20170313124434.1043-8-quintela@redhat.com> <20170313163429.GK4799@redhat.com>

"Daniel P. Berrange" wrote:
> On Mon, Mar 13, 2017 at 01:44:25PM +0100, Juan Quintela wrote:
>> Indicates how many pages we are going to send in each batch to a
>> multifd thread.
>
>> diff --git a/qapi-schema.json b/qapi-schema.json
>> index b7cb26d..33a6267 100644
>> --- a/qapi-schema.json
>> +++ b/qapi-schema.json
>> @@ -988,6 +988,9 @@
>>  # @x-multifd-threads: Number of threads used to migrate data in parallel
>>  #                     The default value is 2 (since 2.9)
>>  #
>> +# @x-multifd-group: Number of pages sent together to a thread
>> +#                   The default value is 16 (since 2.9)
>> +#
>>  # Since: 2.4
>>  ##
>>  { 'enum': 'MigrationParameter',
>> @@ -995,7 +998,7 @@
>>             'cpu-throttle-initial', 'cpu-throttle-increment',
>>             'tls-creds', 'tls-hostname', 'max-bandwidth',
>>             'downtime-limit', 'x-checkpoint-delay',
>> -           'x-multifd-threads'] }
>> +           'x-multifd-threads', 'x-multifd-group'] }
>>
>>  ##
>>  # @migrate-set-parameters:
>> @@ -1062,6 +1065,9 @@
>>  # @x-multifd-threads: Number of threads used to migrate data in parallel
>>  #                     The default value is 2 (since 2.9)
>>  #
>> +# @x-multifd-group: Number of pages sent together in a bunch
>> +#                   The default value is 16 (since 2.9)
>> +#
>
> How is this parameter supposed to be used? Or, to put it another way,
> what are the benefits / effects of changing it from its default value,
> and can an application usefully decide what value to set? I'm loath to
> see us expose another "black magic" parameter where you can't easily
> determine what values to set without predicting future guest workloads.

We have multiple threads, and we can hand each thread the pages it has to
send one by one, two by two, or n by n. The bigger the group, the less
locking we do, and therefore the less contention. But if the group is too
big, we could end up distributing the work too unevenly across the
threads. The reason to add this parameter is that if we send page by
page, we end up spending too much time on locking.

Later, Juan.