From: Xiao Guangrong <guangrong.xiao@gmail.com>
To: Peter Xu <peterx@redhat.com>
Cc: kvm@vger.kernel.org, mst@redhat.com, mtosatti@redhat.com,
	Xiao Guangrong <xiaoguangrong@tencent.com>,
	dgilbert@redhat.com, qemu-devel@nongnu.org, wei.w.wang@intel.com,
	jiang.biao2@zte.com.cn, pbonzini@redhat.com
Subject: Re: [PATCH v2 8/8] migration: do not flush_compressed_data at the end of each iteration
Date: Mon, 23 Jul 2018 16:05:21 +0800	[thread overview]
Message-ID: <5447275f-fe48-ec8f-399c-1a5530417f65@gmail.com> (raw)
In-Reply-To: <20180723054903.GH2491@xz-mi>



On 07/23/2018 01:49 PM, Peter Xu wrote:
> On Thu, Jul 19, 2018 at 08:15:20PM +0800, guangrong.xiao@gmail.com wrote:
>> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>>
>> flush_compressed_data() needs to wait for all compression threads to
>> finish their work; after that, the threads sit idle until the
>> migration feeds new requests to them. Reducing its calls can improve
>> throughput and use CPU resources more effectively.
>>
>> We do not need to flush all threads at the end of each iteration; the
>> data can be kept locally until the memory block changes or memory
>> migration starts over, in which case we will meet a dirtied page
>> that may still exist in a compression thread's ring.
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
>> ---
>>   migration/ram.c | 15 ++++++++++++++-
>>   1 file changed, 14 insertions(+), 1 deletion(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 89305c7af5..fdab13821d 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -315,6 +315,8 @@ struct RAMState {
>>       uint64_t iterations;
>>       /* number of dirty bits in the bitmap */
>>       uint64_t migration_dirty_pages;
>> +    /* last dirty_sync_count we have seen */
>> +    uint64_t dirty_sync_count;
> 
> Better to suffix it with "_prev" as well?  So that we can quickly
> identify that it's only a cache and may differ from the one in
> ram_counters.

Indeed, will update it.

> 
>>       /* protects modification of the bitmap */
>>       QemuMutex bitmap_mutex;
>>       /* The RAMBlock used in the last src_page_requests */
>> @@ -2532,6 +2534,7 @@ static void ram_save_cleanup(void *opaque)
>>       }
>>   
>>       xbzrle_cleanup();
>> +    flush_compressed_data(*rsp);
> 
> Could I ask why do we need this considering that we have
> compress_threads_save_cleanup() right down there?

Dave asked about it too. :(

"This is for the error condition: if any error occurs during live migration,
there is no chance to call ram_save_complete(). After switching to the lockless
multithreaded model, we assert that all requests have been handled before
destroying the work threads."

That makes sure there is nothing left in the threads before
compress_threads_save_cleanup() runs, matching the current behavior. For the
lockless multithreaded model, we check that all requests are free before
destroying them.

> 
>>       compress_threads_save_cleanup();
>>       ram_state_cleanup(rsp);
>>   }
>> @@ -3203,6 +3206,17 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>>   
>>       ram_control_before_iterate(f, RAM_CONTROL_ROUND);
>>   
>> +    /*
>> +     * If memory migration starts over, we will meet a dirtied page which
>> +     * may still exist in a compression thread's ring, so we should flush
>> +     * the compressed data to make sure the new page is not overwritten by
>> +     * the old one on the destination.
>> +     */
>> +    if (ram_counters.dirty_sync_count != rs->dirty_sync_count) {
>> +        rs->dirty_sync_count = ram_counters.dirty_sync_count;
>> +        flush_compressed_data(rs);
>> +    }
>> +
>>       t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
>>       i = 0;
>>       while ((ret = qemu_file_rate_limit(f)) == 0 ||
>> @@ -3235,7 +3249,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>>           }
>>           i++;
>>       }
>> -    flush_compressed_data(rs);
> 
> This looks sane to me, but I'd like to see what other people think
> about it too...

Thank you a lot, Peter! :)

