* [Qemu-devel] Abnormal observation during migration: too many "write-not-dirty" pages
@ 2017-11-12 9:26 Chunguang Li
2017-11-15 9:45 ` Juan Quintela
2017-11-15 10:11 ` Dr. David Alan Gilbert
From: Chunguang Li @ 2017-11-12 9:26 UTC
To: qemu-devel; +Cc: dgilbert, quintela, amit.shah, pbonzini, stefanha
Hi all!
I made a very abnormal observation during VM migration: many pages marked as dirty during migration are "not really dirty", that is, their content is the same as the old version.
I did the migration experiment like this:
During the setup phase of migration, I first suspended the VM. Then I copied all the pages in the guest physical address space to a memory buffer as large as the guest memory. After that, dirty tracking began and I resumed the VM. In addition, at the end
of each iteration, I suspended the VM temporarily. During the suspension, I compared the content of every page marked as dirty in that iteration byte-by-byte with its former copy in the buffer. If the content of a page was the same as its former copy, I recorded it as a "write-not-dirty" page (the page was written with exactly the same content as the old version). Otherwise, I replaced the page in the buffer with the new content, for possible comparison in later iterations. After the dirty bitmap was reset, I resumed the VM. Thus, I obtained the proportion of write-not-dirty pages among all the pages marked as dirty in each pre-copy iteration.
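A minimal sketch of the per-iteration check, with hypothetical helper names (guest_page() and the shadow buffer are stand-ins, not the actual patch):

    #include <stdint.h>
    #include <string.h>

    /* shadow[] holds a copy of every guest page, taken when dirty
     * tracking started; guest_page() is a hypothetical accessor for
     * the current content of guest page frame 'pfn'. */
    static void count_write_not_dirty(const unsigned long *dirty_bitmap,
                                      uint8_t *shadow, size_t nr_pages,
                                      size_t page_size,
                                      uint64_t *dirty, uint64_t *not_dirty)
    {
        for (size_t pfn = 0; pfn < nr_pages; pfn++) {
            if (!test_bit(pfn, dirty_bitmap)) {  /* as in qemu/bitmap.h */
                continue;
            }
            const uint8_t *cur = guest_page(pfn);
            uint8_t *old = shadow + pfn * page_size;
            (*dirty)++;
            if (memcmp(cur, old, page_size) == 0) {
                (*not_dirty)++;       /* marked dirty, content unchanged */
            } else {
                memcpy(old, cur, page_size); /* keep shadow up to date */
            }
        }
    }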
I repeated this experiment with 15 workloads: 11 CPU2006 benchmarks, a Memcached server, a kernel compilation, video playback, and an idle VM. The CPU2006 benchmarks and Memcached are write-intensive workloads, so almost all of them did not converge to stop-copy.
Startlingly, the proportions of write-not-dirty pages are quite high. Memcached and three CPU2006 benchmarks (zeusmp, mcf and bzip2) have the highest proportions: 45%-80% of all pages marked dirty. The proportions for the other workloads are about 5%-20%, which is also abnormal. Intuitively, the proportion of write-not-dirty pages should be far lower than these numbers; it should be quite a rare case for a page to be written with exactly the same content as its former data.
Besides, zero pages are not counted in any of the results, because code like memset() may write large ranges of pages with zeros when they were already zero pages before.
I excluded possible unknown hardware causes by repeating the experiments on two different sets of machines. Then I guessed it might be related to the huge page feature; however, the result was the same when I turned huge pages off in the OS.
Now I can see only two possible reasons.
First, there may be a bug in the KVM dirty tracking mechanism: it may mark pages that never receive a write as dirty.
Second, there may be a bug in the OS running inside the VM: it may issue unnecessary writes.
What do you think about this abnormal phenomenon? Any advice, possible reasons, or even guesses? I would appreciate any response, because this has confused me for a long time. Thank you.
--
Chunguang Li, Ph.D. Candidate
Wuhan National Laboratory for Optoelectronics (WNLO)
Huazhong University of Science & Technology (HUST)
Wuhan, Hubei Prov., China
* Re: [Qemu-devel] Abnormal observation during migration: too many "write-not-dirty" pages
@ 2017-11-15 6:24 Chunguang Li
From: Chunguang Li @ 2017-11-15 6:24 UTC
To: lichunguang; +Cc: qemu-devel, dgilbert, quintela, amit.shah, pbonzini, stefanha
Some more details about this experiment:
The host is running Ubuntu 16.04 with a 4.4.0 Linux kernel and QEMU 2.5.1; the
guest is running Ubuntu 12.04, except for Memcached, which runs on Ubuntu 16.04.
The exact proportions of write-not-dirty pages for the first
two pre-copy iterations (0.445 means 44.5%):
Memcached: 0.445, 0.478
zeusmp: 0.670, 0.727
mcf: 0.808, 0.793
bzip2: 0.464, 0.447
milc: 0.341, 0.037
cactusADM: 0.280, 0.248
lbm: 0.090, 0.037
GemsFDTD: 0.226, 0.172
bwaves: 0.069, 0.003
astar: 0.113, 0.039
xalancbmk: 0.082, 0.041
wrf: 0.141, 0.073
Any advice? Looking forward to any response. Thank you.
Chunguang
* Re: [Qemu-devel] Abnormal observation during migration: too many "write-not-dirty" pages
2017-11-12 9:26 Chunguang Li
@ 2017-11-15 9:45 ` Juan Quintela
2017-11-15 14:22 ` Chunguang Li
2017-11-15 10:11 ` Dr. David Alan Gilbert
From: Juan Quintela @ 2017-11-15 9:45 UTC
To: Chunguang Li; +Cc: qemu-devel, dgilbert, amit.shah, pbonzini, stefanha
"Chunguang Li" <lichunguang@hust.edu.cn> wrote:
> Hi all!
Hi
Sorry for the delay, I was on vacation and still getting up to speed.
> I made a very abnormal observation during VM migration: many pages marked as dirty during
> migration are "not really dirty", that is, their content is the same as the old version.
I think your test is quite good, and I am also ashamed that 80% of
"false" dirty pages is really a lot.
> I did the migration experiment like this:
>
> [...]
vhost and friends could make a small difference here, but in general,
this approach should be ok.
> I repeated this experiment with 15 workloads: 11 CPU2006 benchmarks, a Memcached server,
> a kernel compilation, video playback, and an idle VM. The CPU2006 benchmarks and Memcached
> are write-intensive workloads, so almost all of them did not converge to stop-copy.
That is the impressive part, 15 workloads. Thanks for taking the effort.
BTW, do you have your qemu changes handy, just to be able to test
locally and "review" how you measure things?
> Startlingly, the proportions of write-not-dirty pages are quite high. Memcached and three CPU2006
> benchmarks (zeusmp, mcf and bzip2) have the highest proportions: 45%-80% of all pages
> marked dirty.
Or the workload does really stupid things like:
a = 0;
a = 1;
a = 0;
This makes no sense at all.
Just in case, could you try to test this with xbzrle? It should go well
with this use case (but you need to get a big enough buffer to cache
enough memory).
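For reference, enabling it from the HMP monitor would look roughly like this
(capability set on both source and destination; the 1G cache size is only an
example and needs to approach the working set size):

    (qemu) migrate_set_capability xbzrle on
    (qemu) migrate_set_cache_size 1G
    (qemu) migrate -d tcp:destination:4444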
> The proportions for the other workloads are about
> 5%-20%, which is also abnormal. Intuitively, the proportion of write-not-dirty pages should be
> far lower than these numbers; it should be quite a rare case for a page to be written with exactly
> the same content as its former data.
I agree with that.
> Besides, zero pages are not counted in any of the results, because code like memset() may write
> large ranges of pages with zeros when they were already zero pages before.
>
> I excluded possible unknown hardware causes by repeating the experiments
> on two different sets of machines. Then I guessed it might be related to the huge page feature;
> however, the result was the same when I turned huge pages off in the OS.
Huge pages could have caused that. Remember that we have transparent
huge pages. I have to look at that code.
> Now I can see only two possible reasons.
>
> First, there may be a bug in the KVM dirty tracking mechanism: it may mark pages that never
> receive a write as dirty.
That is a possibility.
> Second, there may be a bug in the OS running inside the VM: it may issue unnecessary
> writes.
>
> What do you think about this abnormal phenomenon? Any advice, possible reasons, or even guesses? I
> would appreciate any response, because this has confused me for a long time. Thank you.
I would like to reproduce this.
Thanks for bringing this to our attention.
Later, Juan.
* Re: [Qemu-devel] Abnormal observation during migration: too many "write-not-dirty" pages
2017-11-12 9:26 Chunguang Li
2017-11-15 9:45 ` Juan Quintela
@ 2017-11-15 10:11 ` Dr. David Alan Gilbert
2017-11-15 13:41 ` Chunguang Li
From: Dr. David Alan Gilbert @ 2017-11-15 10:11 UTC
To: Chunguang Li; +Cc: qemu-devel, quintela, amit.shah, pbonzini, stefanha
* Chunguang Li (lichunguang@hust.edu.cn) wrote:
> [...]
>
> Now I can see only two possible reasons.
>
> First, there may be a bug in the KVM dirty tracking mechanism: it may mark pages that never receive a write as dirty.
>
> Second, there may be a bug in the OS running inside the VM: it may issue unnecessary writes.
>
> What do you think about this abnormal phenomenon? Any advice, possible reasons, or even guesses? I would appreciate any response, because this has confused me for a long time. Thank you.
Wasn't it you who pointed out last year the other possibility? - The
problem of false positives due to sync'ing the whole of memory and then
writing the data out, but some of the dirty pages were already written?
Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] Abnormal observation during migration: too many "write-not-dirty" pages
2017-11-15 10:11 ` Dr. David Alan Gilbert
@ 2017-11-15 13:41 ` Chunguang Li
2017-11-15 14:23 ` Dr. David Alan Gilbert
From: Chunguang Li @ 2017-11-15 13:41 UTC
To: Dr. David Alan Gilbert
Cc: qemu-devel, quintela, amit.shah, pbonzini, stefanha
> -----Original Message-----
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> Sent: 2017-11-15 18:11:37 (Wednesday)
> To: "Chunguang Li" <lichunguang@hust.edu.cn>
> Cc: qemu-devel@nongnu.org, quintela@redhat.com, amit.shah@redhat.com, pbonzini@redhat.com, stefanha@redhat.com
> Subject: Re: [Qemu-devel] Abnormal observation during migration: too many "write-not-dirty" pages
>
> * Chunguang Li (lichunguang@hust.edu.cn) wrote:
> > [...]
>
> Wasn't it you who pointed out last year the other possibility? - The
> problem of false positives due to sync'ing the whole of memory and then
> writing the data out, but some of the dirty pages were already written?
>
> Dave
Yes, you remember that! It was me. After that, I did more analysis and experiments. I found that, in fact, both reasons contribute to the "fake dirty" pages (dirty pages that do not need to be resent, because their contents are the same as those on the target node). One is what I pointed out last year, which you have mentioned. The other reason is what I am talking about now, the "write-not-dirty" phenomenon.
In fact, according to my experimental results, "write-not-dirty" is the main source of the "fake dirty" pages, while sync'ing the whole of memory contributes less.
Chunguang
* Re: [Qemu-devel] Abnormal observation during migration: too many "write-not-dirty" pages
2017-11-15 9:45 ` Juan Quintela
@ 2017-11-15 14:22 ` Chunguang Li
From: Chunguang Li @ 2017-11-15 14:22 UTC
To: quintela; +Cc: qemu-devel, dgilbert, amit.shah, pbonzini, stefanha
> -----Original Message-----
> From: "Juan Quintela" <quintela@redhat.com>
> Sent: 2017-11-15 17:45:44 (Wednesday)
> To: "Chunguang Li" <lichunguang@hust.edu.cn>
> Cc: qemu-devel@nongnu.org, dgilbert@redhat.com, amit.shah@redhat.com, pbonzini@redhat.com, stefanha@redhat.com
> Subject: Re: Abnormal observation during migration: too many "write-not-dirty" pages
>
> "Chunguang Li" <lichunguang@hust.edu.cn> wrote:
> > Hi all!
>
> Hi
>
> Sorry for the delay, I was on vacation and still getting up to speed.
Hi, Juan, thanks for your reply.
>
> > I made a very abnormal observation during VM migration: many pages marked as dirty during
> > migration are "not really dirty", that is, their content is the same as the old version.
>
> I think your test is quite good, and I am also ashamed that 80% of
> "false" dirty pages is really a lot.
>
> > I did the migration experiment like this:
> >
> > [...]
>
>
> vhost and friends could make a small difference here, but in general,
> this approach should be ok.
>
> > I repeated this experiment with 15 workloads: 11 CPU2006 benchmarks, a Memcached server,
> > a kernel compilation, video playback, and an idle VM. The CPU2006 benchmarks and Memcached
> > are write-intensive workloads, so almost all of them did not converge to stop-copy.
>
> That is the impressive part, 15 workloads. Thanks for taking the effort.
>
> BTW, do you have your qemu changes handy, just to be able to test
> locally and "review" how you measure things?
Sorry, I do not have my changes handy. But don't worry, I will send them to you tomorrow morning. It's night here.
>
>
> > Startlingly, the proportions of write-not-dirty pages are quite high. Memcached and three CPU2006
> > benchmarks (zeusmp, mcf and bzip2) have the highest proportions: 45%-80% of all pages
> > marked dirty.
>
> Or the workload does really stupid things like:
>
> a = 0;
> a = 1;
> a = 0;
>
> This makes no sense at all.
>
> Just in case, could you try to test this with xbzrle? It should go well
> with this use case (but you need to get a big enough buffer to cache
> enough memory).
In fact, I have tested these workloads (the 45%-80% ones) with xbzrle, and when the buffer is big enough they really do go well. While they did not converge to stop-copy before, with xbzrle they finish migration quickly.
>
>
> > The proportions for the other workloads are about
> > 5%-20%, which is also abnormal. Intuitively, the proportion of write-not-dirty pages should be
> > far lower than these numbers; it should be quite a rare case for a page to be written with exactly
> > the same content as its former data.
>
> I agree with that.
>
> > Besides, zero pages are not counted in any of the results, because code like memset() may write
> > large ranges of pages with zeros when they were already zero pages before.
> >
> > I excluded possible unknown hardware causes by repeating the experiments
> > on two different sets of machines. Then I guessed it might be related to the huge page feature;
> > however, the result was the same when I turned huge pages off in the OS.
>
> Huge pages could have caused that. Remember that we have transparent
> huge pages. I have to look at that code.
In fact, the results are the same whether I turn transparent huge pages on or off in the OS.
Later, Chunguang.
> [...]
--
Chunguang Li, Ph.D. Candidate
Wuhan National Laboratory for Optoelectronics (WNLO)
Huazhong University of Science & Technology (HUST)
Wuhan, Hubei Prov., China
* Re: [Qemu-devel] Abnormal observation during migration: too many "write-not-dirty" pages
2017-11-15 13:41 ` Chunguang Li
@ 2017-11-15 14:23 ` Dr. David Alan Gilbert
2017-11-16 3:01 ` Chunguang Li
From: Dr. David Alan Gilbert @ 2017-11-15 14:23 UTC
To: Chunguang Li; +Cc: qemu-devel, quintela, amit.shah, pbonzini, stefanha
* Chunguang Li (lichunguang@hust.edu.cn) wrote:
>
> > [...]
> >
> > Wasn't it you who pointed out last year the other possibility? - The
> > problem of false positives due to sync'ing the whole of memory and then
> > writing the data out, but some of the dirty pages were already written?
> >
> > Dave
>
> Yes, you remember that!
Yes, I remember that, and my TODO list told me it was you :-)
> It was me. After that, I did more analysis and experiments. I found that, in fact, both reasons contribute to the "fake dirty" pages (dirty pages that do not need to be resent, because their contents are the same as those on the target node). One is what I pointed out last year, which you have mentioned. The other reason is what I am talking about now, the "write-not-dirty" phenomenon.
> In fact, according to my experimental results, "write-not-dirty" is the main source of the "fake dirty" pages, while sync'ing the whole of memory contributes less.
How do you differentiate between "fake dirty" and the syncing?
The cases where values change back to what they used to be seem
the most likely to me (e.g. locks/counts that decrement back) - but
that seems a high %.
I wonder if there's any difference between page write protection
based dirtying and PML (that I think can be used on some newer chips).
One way to debug it I guess would be to keep the write protection
and watch the progression of data values within a page - do they
actually change and then change back or do the values never
really change.
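Something along these lines could do it, as a sketch with hypothetical
per-page storage (hist[]) filled in at every bitmap sync:

    #include <stdint.h>
    #include <stddef.h>

    /* Simple FNV-1a page hash, good enough for this experiment. */
    static uint64_t page_hash(const uint8_t *p, size_t len)
    {
        uint64_t h = 0xcbf29ce484222325ull;
        for (size_t i = 0; i < len; i++) {
            h = (h ^ p[i]) * 0x100000001b3ull;
        }
        return h;
    }

    /* hist[pfn] keeps the last two hashes of page 'pfn'.  Called for
     * every page marked dirty when the bitmap is synced. */
    static void track_page(size_t pfn, const uint8_t *cur, size_t page_size,
                           uint64_t (*hist)[2], uint64_t *reverted,
                           uint64_t *unchanged)
    {
        uint64_t h = page_hash(cur, page_size);
        if (h == hist[pfn][1]) {
            (*unchanged)++;        /* marked dirty but value never changed */
        } else if (h == hist[pfn][0]) {
            (*reverted)++;         /* changed and then changed back */
        }
        hist[pfn][0] = hist[pfn][1];
        hist[pfn][1] = h;
    }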
Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] Abnormal observation during migration: too many "write-not-dirty" pages
2017-11-15 14:23 ` Dr. David Alan Gilbert
@ 2017-11-16 3:01 ` Chunguang Li
From: Chunguang Li @ 2017-11-16 3:01 UTC
To: Dr. David Alan Gilbert
Cc: qemu-devel, quintela, amit.shah, pbonzini, stefanha
> -----Original Message-----
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> Sent: 2017-11-15 22:23:52 (Wednesday)
> To: "Chunguang Li" <lichunguang@hust.edu.cn>
> Cc: qemu-devel@nongnu.org, quintela@redhat.com, amit.shah@redhat.com, pbonzini@redhat.com, stefanha@redhat.com
> Subject: Re: [Qemu-devel] Abnormal observation during migration: too many "write-not-dirty" pages
>
> * Chunguang Li (lichunguang@hust.edu.cn) wrote:
> > [...]
> > Yes, you remember that!
>
> Yes, I remember that, and my TODO list told me it was you :-)
>
> > It was me. After that, I did more analysis and experiments. I found that, in fact, both reasons contribute to the "fake dirty" pages (dirty pages that do not need to be resent, because their contents are the same as those on the target node). One is what I pointed out last year, which you have mentioned. The other reason is what I am talking about now, the "write-not-dirty" phenomenon.
> > In fact, according to my experimental results, "write-not-dirty" is the main source of the "fake dirty" pages, while sync'ing the whole of memory contributes less.
>
> How do you differentiate between "fake dirty" and the syncing?
I record the following two kinds of proportions:
(1) The proportion of "write-not-dirty" pages. This is measured by the method I have described: I copied the whole VM memory during migration setup, and at the end of the first iteration compared the pages marked as dirty with their copies in the buffer. That gives the proportion of "write-not-dirty" pages for the first iteration.
(2) Besides, I also measured the "fake dirty" proportion. In the first iteration, when each page was sent to the target, I copied it into a second buffer. Note that the contents of the pages in this buffer may differ from those in (1): the buffer in (1) records page contents from before migration, while the buffer in (2) records the contents actually sent to the target. Then, during the second iteration, when each dirty page was sent to the target, I compared it with its copy in this buffer. That gives the proportion of "fake dirty" pages sent in the second iteration.
The number in (2) is bigger than in (1); I attribute the difference to the syncing.
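In pseudo-C, the bookkeeping for (2) looks like this (hypothetical names; the
point is that the reference copy is what was last sent, not what was seen at
sync time):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* sent_copy[] is the second buffer: each page as it was last put
     * on the wire, i.e. the content the target is known to have. */
    static bool is_fake_dirty(size_t pfn, const uint8_t *cur,
                              uint8_t *sent_copy, size_t page_size)
    {
        uint8_t *prev = sent_copy + pfn * page_size;
        if (memcmp(cur, prev, page_size) == 0) {
            return true;              /* target already has this content */
        }
        memcpy(prev, cur, page_size); /* remember what is sent now */
        return false;
    }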
>
> The cases where values change back to what they used to be seem
> the most likely to me (e.g. locks/counts that decrement back) - but
> that seems a high %.
> I wonder if there's any difference between page write protection
> based dirtying and PML (that I think can be used on some newer chips).
I think PML may only reduce the overhead of dirty tracking; which pages are marked dirty should be the same as with page write protection. However, I am not sure about this.
Chunguang
>
> One way to debug it I guess would be to keep the write protection
> and watch the progression of data values within a page - do they
> actually change and then change back or do the values never
> really change.
>
> Dave
--
Chunguang Li, Ph.D. Candidate
Wuhan National Laboratory for Optoelectronics (WNLO)
Huazhong University of Science & Technology (HUST)
Wuhan, Hubei Prov., China