From: Xiao Guangrong
Subject: Re: [PATCH v2 8/8] migration: do not flush_compressed_data at the end of each iteration
Date: Mon, 23 Jul 2018 16:53:11 +0800
To: Peter Xu
Cc: kvm@vger.kernel.org, mst@redhat.com, mtosatti@redhat.com,
    Xiao Guangrong, dgilbert@redhat.com, qemu-devel@nongnu.org,
    wei.w.wang@intel.com, jiang.biao2@zte.com.cn, pbonzini@redhat.com
References: <20180719121520.30026-1-xiaoguangrong@tencent.com>
    <20180719121520.30026-9-xiaoguangrong@tencent.com>
    <20180723054903.GH2491@xz-mi>
    <5447275f-fe48-ec8f-399c-1a5530417f65@gmail.com>
    <20180723083551.GK2491@xz-mi>
In-Reply-To: <20180723083551.GK2491@xz-mi>

On 07/23/2018 04:35 PM, Peter Xu wrote:
> On Mon, Jul 23, 2018 at 04:05:21PM +0800, Xiao Guangrong wrote:
>>
>> On 07/23/2018 01:49 PM, Peter Xu wrote:
>>> On Thu, Jul 19, 2018 at 08:15:20PM +0800, guangrong.xiao@gmail.com wrote:
>>>> From: Xiao Guangrong
>>>>
>>>> flush_compressed_data() needs to wait for all compression threads to
>>>> finish their work; after that, all threads sit idle until the
>>>> migration feeds new requests to them. Reducing the number of calls to
>>>> it can improve throughput and use CPU resources more effectively.
>>>>
>>>> We do not need to flush all threads at the end of each iteration; the
>>>> data can be kept locally until the memory block changes or memory
>>>> migration starts over, in which case we will meet a dirtied page that
>>>> may still exist in a compression thread's ring.
>>>>
>>>> Signed-off-by: Xiao Guangrong
>>>> ---
>>>>  migration/ram.c | 15 ++++++++++++++-
>>>>  1 file changed, 14 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/migration/ram.c b/migration/ram.c
>>>> index 89305c7af5..fdab13821d 100644
>>>> --- a/migration/ram.c
>>>> +++ b/migration/ram.c
>>>> @@ -315,6 +315,8 @@ struct RAMState {
>>>>      uint64_t iterations;
>>>>      /* number of dirty bits in the bitmap */
>>>>      uint64_t migration_dirty_pages;
>>>> +    /* last dirty_sync_count we have seen */
>>>> +    uint64_t dirty_sync_count;
>>>
>>> Better to suffix it with "_prev" as well? So that we can quickly
>>> identify that it's only a cache and it can be different from the one
>>> in ram_counters.
>>
>> Indeed, will update it.
>>
>>>
>>>>      /* protects modification of the bitmap */
>>>>      QemuMutex bitmap_mutex;
>>>>      /* The RAMBlock used in the last src_page_requests */
>>>> @@ -2532,6 +2534,7 @@ static void ram_save_cleanup(void *opaque)
>>>>      }
>>>>
>>>>      xbzrle_cleanup();
>>>> +    flush_compressed_data(*rsp);
>>>
>>> Could I ask why we need this, considering that we have
>>> compress_threads_save_cleanup() right down there?
>>
>> Dave asked it too. :(
>>
>> "This is for the error condition: if any error occurs during live migration,
>> there is no chance to call ram_save_complete(). After switching to the
>> lockless multithread model, we assert that all requests have been handled
>> before destroying the work threads."
>>
>> That makes sure there is nothing left in the threads before doing
>> compress_threads_save_cleanup(), matching the current behavior. For the
>> lockless multithread model, we check that all requests are free before
>> destroying them.
>
> But why do we need to explicitly flush it here? Now in
> compress_threads_save_cleanup() we have qemu_fclose() on the buffers,
> which logically will flush the data and clean up everything too.
> Would that suffice?

Yes, it's sufficient for the current thread model. I will drop the flush
for now and add it back when the lockless multithread model is applied. :)
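
Just to make the intention of the commit message concrete, here is a toy
sketch of the flushing policy it describes: only flush the compression
threads when the block changes or a new dirty-sync round starts, instead
of at the end of every iteration. This is a standalone illustration with
made-up stand-ins (ToyRAMState, toy_save_page(), toy_flush_compressed_data()),
not the real migration/ram.c code:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the state this thread discusses on RAMState: the last
 * block handed to the compression threads and a cached copy of the
 * dirty sync count (the "_prev" cache suggested above). */
typedef struct {
    const char *last_sent_block;
    uint64_t dirty_sync_count_prev;
} ToyRAMState;

/* Stand-in for flush_compressed_data(): wait for every compression
 * thread to drain its ring before continuing. */
static void toy_flush_compressed_data(ToyRAMState *rs)
{
    printf("flush: waiting for compression threads to drain\n");
}

/* Flush only when the block changed or a new round of dirty bitmap
 * sync has started; otherwise keep data buffered in the threads. */
static void toy_save_page(ToyRAMState *rs, const char *block,
                          uint64_t dirty_sync_count)
{
    if (block != rs->last_sent_block ||
        dirty_sync_count != rs->dirty_sync_count_prev) {
        toy_flush_compressed_data(rs);
        rs->last_sent_block = block;
        rs->dirty_sync_count_prev = dirty_sync_count;
    }
    /* ... hand the page to a compression thread here ... */
    printf("queue page of %s (sync round %" PRIu64 ")\n",
           block, dirty_sync_count);
}

int main(void)
{
    ToyRAMState rs = { NULL, 0 };
    const char *node0 = "ram-node0";
    const char *node1 = "ram-node1";

    toy_save_page(&rs, node0, 1);  /* block changed  -> flush    */
    toy_save_page(&rs, node0, 1);  /* same block, same round -> no flush */
    toy_save_page(&rs, node1, 1);  /* block changed  -> flush    */
    toy_save_page(&rs, node1, 2);  /* new sync round -> flush    */
    return 0;
}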