kvm.vger.kernel.org archive mirror
From: Xiao Guangrong <guangrong.xiao@gmail.com>
To: Peter Xu <peterx@redhat.com>
Cc: kvm@vger.kernel.org, mst@redhat.com, mtosatti@redhat.com,
	Xiao Guangrong <xiaoguangrong@tencent.com>,
	dgilbert@redhat.com, qemu-devel@nongnu.org, wei.w.wang@intel.com,
	jiang.biao2@zte.com.cn, pbonzini@redhat.com
Subject: Re: [PATCH v2 1/8] migration: do not wait for free thread
Date: Mon, 23 Jul 2018 15:16:35 +0800	[thread overview]
Message-ID: <be6cec90-2938-5eda-ea7d-ee06ea9483fc@gmail.com> (raw)
In-Reply-To: <20180723032509.GA2491@xz-mi>



On 07/23/2018 11:25 AM, Peter Xu wrote:
> On Thu, Jul 19, 2018 at 08:15:13PM +0800, guangrong.xiao@gmail.com wrote:
>> @@ -3113,6 +3132,8 @@ static Property migration_properties[] = {
>>       DEFINE_PROP_UINT8("x-compress-threads", MigrationState,
>>                         parameters.compress_threads,
>>                         DEFAULT_MIGRATE_COMPRESS_THREAD_COUNT),
>> +    DEFINE_PROP_BOOL("x-compress-wait-thread", MigrationState,
>> +                      parameters.compress_wait_thread, false),
> 
> This performance feature bit makes sense to me, but I would still
> think it should be true by default to keep the old behavior:
> 
> - it might change the behavior drastically: we might be in a state
>    between "normal" migration and "compressed" migration since we'll
>    contain both of the pages.  Old compression users might not expect
>    that.
> 
> - it might still even perform worse - an extreme case is that when
>    network bandwidth is very very limited but instead we have plenty of
>    CPU resources. [1]
> 
> So it's really a good tunable for me; people need to understand
> what it is before turning it on.

That looks good to me.

> 
>>       DEFINE_PROP_UINT8("x-decompress-threads", MigrationState,
>>                         parameters.decompress_threads,
>>                         DEFAULT_MIGRATE_DECOMPRESS_THREAD_COUNT),
>> diff --git a/migration/migration.h b/migration/migration.h
>> index 64a7b33735..a46b9e6c8d 100644
>> --- a/migration/migration.h
>> +++ b/migration/migration.h
>> @@ -271,6 +271,7 @@ bool migrate_use_return_path(void);
>>   bool migrate_use_compression(void);
>>   int migrate_compress_level(void);
>>   int migrate_compress_threads(void);
>> +int migrate_compress_wait_thread(void);
>>   int migrate_decompress_threads(void);
>>   bool migrate_use_events(void);
>>   bool migrate_postcopy_blocktime(void);
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 52dd678092..0ad234c692 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -1889,30 +1889,34 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
>>                                              ram_addr_t offset)
>>   {
>>       int idx, thread_count, bytes_xmit = -1, pages = -1;
>> +    bool wait = migrate_compress_wait_thread();
>>   
>>       thread_count = migrate_compress_threads();
>>       qemu_mutex_lock(&comp_done_lock);
>> -    while (true) {
>> -        for (idx = 0; idx < thread_count; idx++) {
>> -            if (comp_param[idx].done) {
>> -                comp_param[idx].done = false;
>> -                bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
>> -                qemu_mutex_lock(&comp_param[idx].mutex);
>> -                set_compress_params(&comp_param[idx], block, offset);
>> -                qemu_cond_signal(&comp_param[idx].cond);
>> -                qemu_mutex_unlock(&comp_param[idx].mutex);
>> -                pages = 1;
>> -                ram_counters.normal++;
>> -                ram_counters.transferred += bytes_xmit;
>> -                break;
>> -            }
>> -        }
>> -        if (pages > 0) {
>> +retry:
>> +    for (idx = 0; idx < thread_count; idx++) {
>> +        if (comp_param[idx].done) {
>> +            comp_param[idx].done = false;
>> +            bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
>> +            qemu_mutex_lock(&comp_param[idx].mutex);
>> +            set_compress_params(&comp_param[idx], block, offset);
>> +            qemu_cond_signal(&comp_param[idx].cond);
>> +            qemu_mutex_unlock(&comp_param[idx].mutex);
>> +            pages = 1;
>> +            ram_counters.normal++;
>> +            ram_counters.transferred += bytes_xmit;
>>               break;
>> -        } else {
>> -            qemu_cond_wait(&comp_done_cond, &comp_done_lock);
>>           }
>>       }
>> +
>> +    /*
>> +     * if there is no thread is free to compress the data and the user
>> +     * really expects the slowdown, wait it.
> 
> Considering [1] above, IMHO it might not really be a slowdown; it
> depends.  Maybe only mention the fact that we're sending a normal
> page instead of a compressed page when wait is not specified.
> 

Okay, will update the comments based on your suggestion.

>> +     */
>> +    if (pages < 0 && wait) {
>> +        qemu_cond_wait(&comp_done_cond, &comp_done_lock);
>> +        goto retry;
>> +    }
>>       qemu_mutex_unlock(&comp_done_lock);
>>   
>>       return pages;
>> @@ -2226,7 +2230,10 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>>        * CPU resource.
>>        */
>>       if (block == rs->last_sent_block && save_page_use_compression(rs)) {
>> -        return compress_page_with_multi_thread(rs, block, offset);
>> +        res = compress_page_with_multi_thread(rs, block, offset);
>> +        if (res > 0) {
>> +            return res;
>> +        }
>>       } else if (migrate_use_multifd()) {
>>           return ram_save_multifd_page(rs, block, offset);
>>       }
>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index 186e8a7303..b4f394844b 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -462,6 +462,11 @@
>>   # @compress-threads: Set compression thread count to be used in live migration,
>>   #          the compression thread count is an integer between 1 and 255.
>>   #
>> +# @compress-wait-thread: Wait if no thread is free to compress the memory page
>> +#          if it's enabled, otherwise, the page will be posted out immediately
>> +#          in the main thread without compression. It's off on default.
>> +#          (Since: 3.0)
>> +#
> 
> Should be "Since 3.1" in all the places.
> 

Oh... things are moving faster than I realized :)

> We'll need to touch up the "by default" part depending on whether we'd
> need to change it according to above comment.
> 
> Otherwise it looks good to me.
> 

Okay, thank you, Peter.


Thread overview: 37+ messages
2018-07-19 12:15 [PATCH v2 0/8] migration: compression optimization guangrong.xiao
2018-07-19 12:15 ` [PATCH v2 1/8] migration: do not wait for free thread guangrong.xiao
2018-07-23  3:25   ` Peter Xu
2018-07-23  7:16     ` Xiao Guangrong [this message]
2018-07-23 18:36   ` Eric Blake
2018-07-24  7:40     ` Xiao Guangrong
2018-07-19 12:15 ` [PATCH v2 2/8] migration: fix counting normal page for compression guangrong.xiao
2018-07-23  3:33   ` Peter Xu
2018-07-19 12:15 ` [PATCH v2 3/8] migration: show the statistics of compression guangrong.xiao
2018-07-23  4:36   ` Peter Xu
2018-07-23  7:39     ` Xiao Guangrong
2018-07-23  8:05       ` Peter Xu
2018-07-23  8:40         ` Xiao Guangrong
2018-07-23  9:15           ` Peter Xu
2018-07-24  7:37             ` Xiao Guangrong
2018-07-25 16:44         ` Dr. David Alan Gilbert
2018-07-26  5:29           ` Peter Xu
2018-07-19 12:15 ` [PATCH v2 4/8] migration: introduce save_zero_page_to_file guangrong.xiao
2018-07-23  4:40   ` Peter Xu
2018-07-19 12:15 ` [PATCH v2 5/8] migration: drop the return value of do_compress_ram_page guangrong.xiao
2018-07-23  4:48   ` Peter Xu
2018-07-19 12:15 ` [PATCH v2 6/8] migration: move handle of zero page to the thread guangrong.xiao
2018-07-23  5:03   ` Peter Xu
2018-07-23  7:56     ` Xiao Guangrong
2018-07-23  8:28       ` Peter Xu
2018-07-23  8:44         ` Xiao Guangrong
2018-07-23  9:40           ` Peter Xu
2018-07-24  7:39             ` Xiao Guangrong
2018-07-19 12:15 ` [PATCH v2 7/8] migration: hold the lock only if it is really needed guangrong.xiao
2018-07-23  5:36   ` Peter Xu
2018-07-19 12:15 ` [PATCH v2 8/8] migration: do not flush_compressed_data at the end of each iteration guangrong.xiao
2018-07-23  5:49   ` Peter Xu
2018-07-23  8:05     ` Xiao Guangrong
2018-07-23  8:35       ` Peter Xu
2018-07-23  8:53         ` Xiao Guangrong
2018-07-23  9:01           ` Peter Xu
2018-07-24  7:29             ` Xiao Guangrong
