From: Paolo Bonzini <pbonzini@redhat.com>
To: "Alex Bennée" <alex.bennee@linaro.org>
Cc: mttcg@greensocs.com, Mark Burton <mark@greensocs.com>,
Paolo Bonzini <bonzini@gnu.org>,
alvise rigo <a.rigo@virtualopensystems.com>,
QEMU Developers <qemu-devel@nongnu.org>,
KONRAD Frederic <fred.konrad@greensocs.com>
Subject: Re: [Qemu-devel] MTTCG sync-up call today?
Date: Tue, 12 Jan 2016 17:19:44 +0100
Message-ID: <569527A0.5080502@redhat.com>
In-Reply-To: <87twmidbdb.fsf@linaro.org>
On 12/01/2016 17:13, Alex Bennée wrote:
> #4 0x00005555556e5b06 in tb_invalidate_phys_range (start=start@entry=0, end=end@entry=4096) at /home/alex/lsrc/qemu/qemu.git/translate-all.c:1303
> #5 0x00005555556dbe42 in invalidate_and_set_dirty (mr=mr@entry=0x555556571800, addr=0, length=length@entry=4096) at /home/alex/lsrc/qemu/qemu.git/exec.c:2420
> #6 0x00005555556e1890 in address_space_unmap (as=as@entry=0x555555ff7000 <address_space_memory>, buffer=<optimised out>, len=<optimised out>,
> is_write=is_write@entry=1, access_len=access_len@entry=4096) at /home/alex/lsrc/qemu/qemu.git/exec.c:2933
> #7 0x00005555556e19bf in cpu_physical_memory_unmap (buffer=<optimised out>, len=<optimised out>, is_write=is_write@entry=1, access_len=access_len@entry=4096)
> at /home/alex/lsrc/qemu/qemu.git/exec.c:2962
> #8 0x000055555578219c in virtqueue_unmap_sg (elem=elem@entry=0x7ffe782c7cf0, len=len@entry=4097, vq=0x555556e6f020)
> at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:257
> #9 0x0000555555782ac0 in virtqueue_fill (vq=vq@entry=0x555556e6f020, elem=elem@entry=0x7ffe782c7cf0, len=4097, idx=idx@entry=0)
> at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:282
> #10 0x0000555555782ccf in virtqueue_push (vq=0x555556e6f020, elem=elem@entry=0x7ffe782c7cf0, len=<optimised out>)
> at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:308
> #11 0x000055555573451a in virtio_blk_complete_request (req=0x7ffe782c7ce0, status=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:58
> #12 0x0000555555734a13 in virtio_blk_req_complete (status=0 '\000', req=0x7ffe782c7ce0) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:64
> #13 virtio_blk_rw_complete (opaque=<optimised out>, ret=0) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:122
> #14 0x0000555555a2d822 in bdrv_co_complete (acb=0x7ffe780189c0) at block/io.c:2122
> #15 0x0000555555a87a7a in coroutine_trampoline (i0=<optimised out>, i1=<optimised out>) at util/coroutine-ucontext.c:80
> #16 0x00007ffff0afc8b0 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #17 0x00007fff8f5aa6e0 in ?? ()
> #18 0x0000000000000000 in ?? ()
>
> I guess the tb_lock could just be grabbed, but there is stuff in that
> path that assumes current_cpu is valid, so I thought the thing to do
> was to defer the operation until a "real" vCPU can deal with it.
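
For illustration, a minimal sketch of that deferral, using QEMU's
async_run_on_cpu() so the invalidation runs on a vCPU thread, where
current_cpu is valid. The TBInvalidateJob struct and the two helpers are
hypothetical names for this sketch, and the run_on_cpu_data form of the
API is the later one (in the 2016 tree the data argument was a plain
void *):

struct TBInvalidateJob {                      /* hypothetical */
    tb_page_addr_t start, end;
};

static void do_tb_invalidate(CPUState *cpu, run_on_cpu_data data)
{
    struct TBInvalidateJob *job = data.host_ptr;

    /* Runs on the vCPU thread, so current_cpu is valid here. */
    tb_invalidate_phys_range(job->start, job->end);
    g_free(job);
}

static void defer_tb_invalidate(CPUState *cpu, tb_page_addr_t start,
                                tb_page_addr_t end)
{
    struct TBInvalidateJob *job = g_new(struct TBInvalidateJob, 1);

    job->start = start;
    job->end = end;
    async_run_on_cpu(cpu, do_tb_invalidate, RUN_ON_CPU_HOST_PTR(job));
}
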
I need to look at the branch... The latest version I have here does
not require tb_lock to be taken in tb_invalidate_phys_range:

/*
 * Invalidate all TBs which intersect with the target physical address range
 * [start;end[. NOTE: start and end may refer to *different* physical pages.
 * 'is_cpu_write_access' should be true if called from a real cpu write
 * access: the virtual CPU will exit the current TB if code is modified inside
 * this TB.
 *
 * Called with mmap_lock held for user-mode emulation
 */
void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
{
    while (start < end) {
        tb_invalidate_phys_page_range(start, end, 0);
        start &= TARGET_PAGE_MASK;
        start += TARGET_PAGE_SIZE;
    }
}

/*
 * Invalidate all TBs which intersect with the target physical address range
 * [start;end[. NOTE: start and end must refer to the *same* physical page.
 * 'is_cpu_write_access' should be true if called from a real cpu write
 * access: the virtual CPU will exit the current TB if code is modified inside
 * this TB.
 *
 * Called with mmap_lock held for user-mode emulation
 * If called from generated code, iothread mutex must not be held.
 */
void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
                                   int is_cpu_write_access)
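
As a self-contained illustration of how that loop walks a multi-page
range, here is a small standalone program; the 4 KiB page size is an
assumption for the demo (TARGET_PAGE_SIZE is target-dependent in QEMU),
and the range values are made up:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_SIZE 4096                 /* assumed for the demo */
#define TARGET_PAGE_MASK (~(uint64_t)(TARGET_PAGE_SIZE - 1))

int main(void)
{
    /* A half-open range [start;end[ that straddles three pages. */
    uint64_t start = 0x0fff, end = 0x2001;

    while (start < end) {
        /* One iteration per page, mirroring one
         * tb_invalidate_phys_page_range() call in the loop above. */
        printf("invalidate page 0x%" PRIx64 "\n", start & TARGET_PAGE_MASK);
        start &= TARGET_PAGE_MASK;
        start += TARGET_PAGE_SIZE;
    }
    return 0;
}

This prints pages 0x0, 0x1000 and 0x2000, matching the NOTE above: the
range may span *different* physical pages, and each page is handed to
tb_invalidate_phys_page_range() in turn.
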
Paolo