From: Hollis Blanchard <hollis_blanchard@mentor.com>
To: pbonzini@redhat.com
Cc: qemu-devel@nongnu.org, stefanha@redhat.com
Subject: Re: [Qemu-devel] [PATCH 2/2] trace: separate MMIO tracepoints from TB-access tracepoints
Date: Wed, 23 Mar 2016 09:47:05 -0700 [thread overview]
Message-ID: <1458751625.2947.19.camel@mentor.com> (raw)
In-Reply-To: <1455744555-22101-2-git-send-email-hollis_blanchard@mentor.com>
Paolo, is it true that only TB-invalidating writes go through the
io_mem_notdirty path? I'm looking at the live migration code now, and
it seems like every memory write will go through that path when global
dirty memory logging is enabled.
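For context, here's my rough mental model of that path, as a simplified
sketch paraphrased from exec.c (the helper names below are from memory,
so treat signatures and details as approximate rather than verbatim):

    static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
                                   uint64_t val, unsigned size)
    {
        /* If the page still holds translated code, invalidate its TBs. */
        if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
            tb_invalidate_phys_page_fast(ram_addr, size);
        }
        /* Perform the guest's write against the RAM backing. */
        switch (size) {
        case 1: stb_p(qemu_get_ram_ptr(ram_addr), val); break;
        case 2: stw_p(qemu_get_ram_ptr(ram_addr), val); break;
        case 4: stl_p(qemu_get_ram_ptr(ram_addr), val); break;
        default: abort();
        }
        /* Mark the page dirty for the non-code clients (VGA, migration). */
        cpu_physical_memory_set_dirty_range(ram_addr, size,
                                            DIRTY_CLIENTS_NOCODE);
        /* Restore the fast (non-trapping) TLB path only once no client
         * still considers the page clean. While migration's dirty log
         * keeps re-cleaning pages, that effectively never happens, so
         * ordinary RAM writes keep trapping back into this handler. */
        if (!cpu_physical_memory_is_clean(ram_addr)) {
            tlb_set_dirty(current_cpu, current_cpu->mem_io_vaddr);
        }
    }

If that reading is right, TB invalidation is only one of several reasons
a write lands in io_mem_notdirty, which is what prompted the question
above.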
--
Hollis Blanchard <hollis_blanchard@mentor.com>
Mentor Graphics Emulation Division

On Wed, 2016-02-17 at 13:29 -0800, Hollis Blanchard wrote:
> Memory accesses to code which has previously been translated into a TB
> show up in the MMIO path, so that they may invalidate the TB. It's
> extremely confusing to mix those in with device MMIOs, so split them
> into their own tracepoint.
>
> Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
>
> ---
> It took many hours to figure out why some RAM accesses were coming
> through the MMIO path instead of being handled inline in the TBs.
>
> On IRC, Paolo expressed some concern about performance, but ultimately
> agreed that adding one conditional to an already heavy codepath
> wouldn't have much impact.
> ---
>  memory.c     | 25 +++++++++++++++++++++++++
>  trace-events |  2 ++
>  2 files changed, 27 insertions(+)
>
> diff --git a/memory.c b/memory.c
> index 6ae7bae..3d125c9 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -403,6 +403,11 @@ static MemTxResult memory_region_oldmmio_read_accessor(MemoryRegion *mr,
>      tmp = mr->ops->old_mmio.read[ctz32(size)](mr->opaque, addr);
>      if (mr->subpage) {
>          trace_memory_region_subpage_read(cpu_index, mr, addr, tmp, size);
> +    } else if (mr == &io_mem_notdirty) {
> +        /* Accesses to code which has previously been translated into a TB
> +         * show up in the MMIO path, as accesses to the io_mem_notdirty
> +         * MemoryRegion. */
> +        trace_memory_region_ops_tb_read(cpu_index, addr, tmp, size);
>      } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
>          hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
>          trace_memory_region_ops_read(cpu_index, mr, abs_addr, tmp, size);
> @@ -428,6 +433,11 @@ static MemTxResult memory_region_read_accessor(MemoryRegion *mr,
>      tmp = mr->ops->read(mr->opaque, addr, size);
>      if (mr->subpage) {
>          trace_memory_region_subpage_read(cpu_index, mr, addr, tmp, size);
> +    } else if (mr == &io_mem_notdirty) {
> +        /* Accesses to code which has previously been translated into a TB
> +         * show up in the MMIO path, as accesses to the io_mem_notdirty
> +         * MemoryRegion. */
> +        trace_memory_region_ops_tb_read(cpu_index, addr, tmp, size);
>      } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
>          hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
>          trace_memory_region_ops_read(cpu_index, mr, abs_addr, tmp, size);
> @@ -454,6 +464,11 @@ static MemTxResult memory_region_read_with_attrs_accessor(MemoryRegion *mr,
>      r = mr->ops->read_with_attrs(mr->opaque, addr, &tmp, size, attrs);
>      if (mr->subpage) {
>          trace_memory_region_subpage_read(cpu_index, mr, addr, tmp, size);
> +    } else if (mr == &io_mem_notdirty) {
> +        /* Accesses to code which has previously been translated into a TB
> +         * show up in the MMIO path, as accesses to the io_mem_notdirty
> +         * MemoryRegion. */
> +        trace_memory_region_ops_tb_read(cpu_index, addr, tmp, size);
>      } else if (TRACE_MEMORY_REGION_OPS_READ_ENABLED) {
>          hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
>          trace_memory_region_ops_read(cpu_index, mr, abs_addr, tmp, size);
> @@ -479,6 +494,11 @@ static MemTxResult memory_region_oldmmio_write_accessor(MemoryRegion *mr,
>      tmp = (*value >> shift) & mask;
>      if (mr->subpage) {
>          trace_memory_region_subpage_write(cpu_index, mr, addr, tmp, size);
> +    } else if (mr == &io_mem_notdirty) {
> +        /* Accesses to code which has previously been translated into a TB
> +         * show up in the MMIO path, as accesses to the io_mem_notdirty
> +         * MemoryRegion. */
> +        trace_memory_region_ops_tb_write(cpu_index, addr, tmp, size);
>      } else if (TRACE_MEMORY_REGION_OPS_WRITE_ENABLED) {
>          hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
>          trace_memory_region_ops_write(cpu_index, mr, abs_addr, tmp, size);
> @@ -504,6 +524,11 @@ static MemTxResult memory_region_write_accessor(MemoryRegion *mr,
>      tmp = (*value >> shift) & mask;
>      if (mr->subpage) {
>          trace_memory_region_subpage_write(cpu_index, mr, addr, tmp, size);
> +    } else if (mr == &io_mem_notdirty) {
> +        /* Accesses to code which has previously been translated into a TB
> +         * show up in the MMIO path, as accesses to the io_mem_notdirty
> +         * MemoryRegion. */
> +        trace_memory_region_ops_tb_write(cpu_index, addr, tmp, size);
>      } else if (TRACE_MEMORY_REGION_OPS_WRITE_ENABLED) {
>          hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
>          trace_memory_region_ops_write(cpu_index, mr, abs_addr, tmp, size);
> diff --git a/trace-events b/trace-events
> index 756ce86..7994420 100644
> --- a/trace-events
> +++ b/trace-events
> @@ -1630,6 +1630,8 @@ memory_region_ops_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, u
>  memory_region_ops_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr %#"PRIx64" value %#"PRIx64" size %u"
>  memory_region_subpage_read(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset %#"PRIx64" value %#"PRIx64" size %u"
>  memory_region_subpage_write(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset %#"PRIx64" value %#"PRIx64" size %u"
> +memory_region_ops_tb_read(int cpu_index, uint64_t addr, uint64_t value, unsigned size) "cpu %d addr %#"PRIx64" value %#"PRIx64" size %u"
> +memory_region_ops_tb_write(int cpu_index, uint64_t addr, uint64_t value, unsigned size) "cpu %d addr %#"PRIx64" value %#"PRIx64" size %u"
>
>  # qom/object.c
>  object_dynamic_cast_assert(const char *type, const char *target, const char *file, int line, const char *func) "%s->%s (%s:%d:%s)"
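
For anyone who wants to watch the new tracepoints: with the simple trace
backend compiled in, something like the following should work (untested
sketch; substitute your own target binary and trace file name):

    $ cat > /tmp/events <<EOF
    memory_region_ops_tb_read
    memory_region_ops_tb_write
    EOF
    $ qemu-system-arm -trace events=/tmp/events [...]
    $ scripts/simpletrace.py trace-events trace-<pid>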
Thread overview: 10+ messages
2016-02-17 21:29 [Qemu-devel] [PATCH 1/2] trace: include CPU index in trace_memory_region_ops_*() Hollis Blanchard
2016-02-17 21:29 ` [Qemu-devel] [PATCH 2/2] trace: separate MMIO tracepoints from TB-access tracepoints Hollis Blanchard
2016-02-18 18:37 ` Hollis Blanchard
2016-02-24 14:13 ` Stefan Hajnoczi
2016-03-23 16:47 ` Hollis Blanchard [this message]
2016-03-23 16:53 ` Paolo Bonzini
2016-03-23 17:16 ` Hollis Blanchard
2016-03-24 19:30 ` [Qemu-devel] io_mem_notdirty and live migration Hollis Blanchard
2016-03-24 22:01 ` Paolo Bonzini
2016-02-24 14:12 ` [Qemu-devel] [PATCH 1/2] trace: include CPU index in trace_memory_region_ops_*() Stefan Hajnoczi