From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Junyan He <junyan.he@gmx.com>
Cc: Stefan Hajnoczi <stefanha@gmail.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
Haozhong Zhang <haozhong.zhang@intel.com>,
"xiaoguangrong.eric@gmail.com" <xiaoguangrong.eric@gmail.com>,
"crosthwaite.peter@gmail.com" <crosthwaite.peter@gmail.com>,
"mst@redhat.com" <mst@redhat.com>,
"ehabkost@redhat.com" <ehabkost@redhat.com>,
"quintela@redhat.com" <quintela@redhat.com>,
Junyan He <junyan.he@intel.com>,
"stefanha@redhat.com" <stefanha@redhat.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"imammedo@redhat.com" <imammedo@redhat.com>,
"rth@twiddle.net" <rth@twiddle.net>
Subject: Re: [Qemu-devel] [PATCH V5 0/9] nvdimm: guarantee persistence of QEMU writes to persistent memory
Date: Thu, 31 May 2018 15:42:19 +0100
Message-ID: <20180531144217.GE2755@work-vm>
In-Reply-To: <HK2PR06MB05315C48C2D953449EC5A0A7EB630@HK2PR06MB0531.apcprd06.prod.outlook.com>
* Junyan He (junyan.he@gmx.com) wrote:
> > Also, there was a discussion about leaving the code unchanged but adding
> > an nvdimm_flush() call at the very end of migration. I think someone
> > benchmarked it but can't find the email. Please post a link or
> > summarize the results, because that approach would be much less
> > invasive. Thanks!
>
>
> And the previous comments:
>
>
> > > > 2. The migration/ram code is invasive. Is it really necessary to
> > > > persist data each time pages are loaded from a migration stream? It
> > > > seems simpler to migrate as normal and call pmem_persist() just once
> > > > after RAM has been migrated but before the migration completes.
> > >
> > > The concern is about the overhead of the cache flushes.
> > >
> > > In this patch series, if possible, QEMU will use pmem_mem{set,cpy}_nodrain
> > > APIs to copy NVDIMM blocks. Those APIs use movnt (if it's available) and
> > > can avoid the subsequent cache flush.
> > >
> > > Anyway, I'll run some microbenchmarks to check which one is better.
>
> > The problem is not just the overhead; the problem is the code
> > complexity; this series makes all the paths through the migration code
> > more complex in places we wouldn't expect to change.
>
> I already ran 'info migrate' and listed the results in the mail I sent just
> after this patch set:
>
> I disabled all of Haozhong's pmem_drain and pmem_memset_nodrain style calls
> and made the cleanup function do the flush job like this:
>
> static int ram_load_cleanup(void *opaque)
> {
>     RAMBlock *rb;
>
>     /* Flush every pmem-backed RAMBlock once, after all pages are loaded. */
>     RAMBLOCK_FOREACH(rb) {
>         if (ramblock_is_pmem(rb)) {
>             pmem_persist(rb->host, rb->used_length);
>         }
>     }
>
>     xbzrle_load_cleanup();
>     compress_threads_load_cleanup();
>
>     RAMBLOCK_FOREACH(rb) {
>         g_free(rb->receivedmap);
>         rb->receivedmap = NULL;
>     }
>     return 0;
> }
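>
> (For comparison, a minimal sketch of the per-page path in Haozhong's
> manner, using the real libpmem pmem_memcpy_nodrain/pmem_drain APIs but
> with illustrative function names of my own:)
>
> #include <libpmem.h>
>
> /* Copy one incoming page into a pmem-backed RAMBlock.  The nodrain
>  * variant flushes as it copies (movnt stores where available) but
>  * skips the final drain/fence. */
> static void load_page_to_pmem(void *host_dst, const void *src, size_t n)
> {
>     pmem_memcpy_nodrain(host_dst, src, n);
> }
>
> /* Called once when loading finishes: drain the store buffers so that
>  * all the preceding nodrain copies become persistent. */
> static void pmem_load_complete(void)
> {
>     pmem_drain();
> }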
>
>
> The 'info migrate' results are:
>
> Haozhong's manner:
>
> (qemu) migrate -d tcp:localhost:4444
> (qemu) info migrate
> globals:
> store-global-state: on
> only-migratable: off
> send-configuration: on
> send-section-footer: on
> capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off events: off postcopy-ram: off x-colo: off release-ram: off block: off return-path: off pause-before-switchover: off x-multifd: off dirty-bitmaps: off postcopy-blocktime: off
> Migration status: completed
> total time: 333668 milliseconds
> downtime: 17 milliseconds
> setup: 50 milliseconds
> transferred ram: 10938039 kbytes
> throughput: 268.55 mbps
> remaining ram: 0 kbytes
> total ram: 11027272 kbytes
> duplicate: 35533 pages
> skipped: 0 pages
> normal: 2729095 pages
> normal bytes: 10916380 kbytes
> dirty sync count: 4
> page size: 4 kbytes
> (qemu)
>
>
> Flush before complete:
>
> QEMU 2.12.50 monitor - type 'help' for more information
> (qemu) migrate -d tcp:localhost:4444
> (qemu) info migrate
> globals:
> store-global-state: on
> only-migratable: off
> send-configuration: on
> send-section-footer: on
> capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off events: off postcopy-ram: off x-colo: off release-ram: off block: off return-path: off pause-before-switchover: off x-multifd: off dirty-bitmaps: off postcopy-blocktime: off
> Migration status: completed
> total time: 334836 milliseconds
> downtime: 17 milliseconds
> setup: 49 milliseconds
> transferred ram: 10978886 kbytes
> throughput: 268.62 mbps
> remaining ram: 0 kbytes
> total ram: 11027272 kbytes
> duplicate: 23149 pages
> skipped: 0 pages
> normal: 2739314 pages
> normal bytes: 10957256 kbytes
> dirty sync count: 4
> page size: 4 kbytes
> (qemu)
>
>
> So Haozhong's manner seems to be a little faster (total time 333668 ms vs
> 334836 ms), and I chose to keep it.
>
> If you want to choose the flush-before-complete manner instead, the code
> will be cleaner and there will be no need for:
>
> > typedef struct {
> >     void (*memset)(void *s, int c, size_t n);
> >     void (*memcpy)(void *dest, const void *src, size_t n);
> > } MemoryOperations;
>
>
> The performance is close, and I am a little new to QEMU :), so both options
> are OK for me.
>
> Which one do you prefer?
The one with the least impact; the migration code is getting more and
more complex, so having to do the 'if (is_pmem)' check everywhere isn't
nice, and passing an 'ops' pointer in is better.  However, if you can do
the 'flush before complete' instead then the amount of code change is a
LOT smaller.
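
As a rough sketch (illustrative names only, not code from the series;
the function pointer types follow the actual libc/libpmem signatures),
the ops approach could look like:

#include <string.h>     /* memcpy, memset */
#include <libpmem.h>    /* pmem_memcpy_nodrain, pmem_memset_nodrain */

typedef struct {
    void *(*memcpy)(void *dest, const void *src, size_t n);
    void *(*memset)(void *s, int c, size_t n);
} MemoryOperations;

/* pmem-backed RAMBlocks: nodrain variants, drained once at the end */
static const MemoryOperations pmem_ops = {
    .memcpy = pmem_memcpy_nodrain,
    .memset = pmem_memset_nodrain,
};

/* ordinary RAM: the plain libc routines */
static const MemoryOperations ram_ops = {
    .memcpy = memcpy,
    .memset = memset,
};

The load paths would then just call something like
block->mem_ops->memcpy(host, src, len) with no 'if (is_pmem)' branch
at each call site.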
The only other question is whether, from your pmem point of view,
flush-before-complete causes any problems; in the worst case, how long
could the flush take?
Dave
> ________________________________
> From: Stefan Hajnoczi <stefanha@gmail.com>
> Sent: Thursday, May 31, 2018 1:18:58 PM
> To: junyan.he@gmx.com
> Cc: qemu-devel@nongnu.org; Haozhong Zhang; xiaoguangrong.eric@gmail.com; crosthwaite.peter@gmail.com; mst@redhat.com; dgilbert@redhat.com; ehabkost@redhat.com; quintela@redhat.com; Junyan He; stefanha@redhat.com; pbonzini@redhat.com; imammedo@redhat.com; rth@twiddle.net
> Subject: Re: [Qemu-devel] [PATCH V5 0/9] nvdimm: guarantee persistence of QEMU writes to persistent memory
>
> David Gilbert previously suggested a memory access interface. I guess
> it would look something like this:
>
> typedef struct {
>     void (*memset)(void *s, int c, size_t n);
>     void (*memcpy)(void *dest, const void *src, size_t n);
> } MemoryOperations;
>
> That way the code doesn't need 'if (pmem) A else B'; it can just do
> mem_ops->foo(). Have you looked into this idea?
>
> Also, there was a discussion about leaving the code unchanged but adding
> an nvdimm_flush() call at the very end of migration. I think someone
> benchmarked it but can't find the email. Please post a link or
> summarize the results, because that approach would be much less
> invasive. Thanks!
>
> Stefan
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK